Notes on Diffy Qs

Differential Equations for Engineers

by Jiří Lebl

May 12, 2009

Typeset in LaTeX.

Copyright © 2008–2009 Jiří Lebl

This work is licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/us/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.

Contents

Introduction
0.1 Notes about these notes
0.2 Introduction to differential equations

1 First order ODEs
1.1 Integrals as solutions
1.2 Slope fields
1.3 Separable equations
1.4 Linear equations and the integrating factor
1.5 Substitution
1.6 Autonomous equations
1.7 Numerical methods: Euler's method

2 Higher order linear ODEs
2.1 Second order linear ODEs
2.2 Constant coefficient second order linear ODEs
2.3 Higher order linear ODEs
2.4 Mechanical vibrations
2.5 Nonhomogeneous equations
2.6 Forced oscillations and resonance

3 Systems of ODEs
3.1 Introduction to systems of ODEs
3.2 Matrices and linear systems
3.3 Linear systems of ODEs
3.4 Eigenvalue method
3.5 Two dimensional systems and their vector fields
3.6 Second order systems and applications
3.7 Multiple eigenvalues
3.8 Matrix exponentials
3.9 Nonhomogeneous systems

4 Fourier series and PDEs
4.1 Boundary value problems
4.2 The trigonometric series
4.3 More on the Fourier series
4.4 Sine and cosine series
4.5 Applications of Fourier series
4.6 PDEs, separation of variables, and the heat equation
4.7 One dimensional wave equation
4.8 D'Alembert solution of the wave equation
4.9 Steady state temperature

5 Eigenvalue problems
5.1 Sturm-Liouville problems
5.2 Application of eigenfunction series
5.3 Steady periodic solutions

6 The Laplace transform
6.1 The Laplace transform
6.2 Transforms of derivatives and ODEs
6.3 Convolution

Further Reading

Index

Introduction

0.1 Notes about these notes

These are class notes from teaching Math 286, differential equations, at the University of Illinois at Urbana-Champaign in fall 2008 and spring 2009. These originated from my lecture notes. There is usually a little more "padding" material than I can cover in the time allotted. There are still not enough exercises throughout. Some of the exercises in the notes are things I do explicitly in class depending on time, or let the students work out in class themselves. The book used for the class is Edwards and Penney, Differential Equations and Boundary Value Problems [EP], fourth edition, from now on referenced just as EP. The structure of the notes, therefore, reflects the structure of this book, at least as far as the chapters that are covered in the course. Many examples and applications are taken more or less from this book, though they also appear in many other sources, of course. Other books I have used as sources of information and inspiration are E.L. Ince's classic (and inexpensive) Ordinary Differential Equations [I], and also my undergraduate textbooks, Stanley Farlow's Differential Equations and Their Applications [F], which is now available from Dover, and Berg and McGregor's Elementary Partial Differential Equations [BM]. See the Further Reading section at the end of these notes.

I taught the course with the IODE software (http://www.math.uiuc.edu/iode/). IODE is a free software package that is used either with Matlab (proprietary) or Octave (free software). Projects and labs from the IODE website are referenced throughout the notes. They need not be used for this course, but I think it is better to use them. The graphs in the notes were made with the Genius software (see http://www.jirka.org/genius.html). I have used Genius in class to show essentially these and similar graphs.

I would like to acknowledge Rick Laugesen. I have used his handwritten class notes on the first go through the course. My organization of these present notes, and the choice of the exact material covered, is heavily influenced by his class notes. Many examples and computations are taken from his notes.

The organization of these notes to some degree requires that they be done in order; later chapters, however, can be dropped. The dependence of the material covered is roughly given in the following diagram:

Introduction
  → Chapter 1
      → Chapter 2
          → Chapter 6
          → Chapter 3 (dashed arrow: Chapter 3 is only a weak dependence of Chapter 4)
          → Chapter 4
              → Chapter 5

There are some references in chapters 4 and 5 to material from chapter 3 (some linear algebra), but these references are not absolutely essential and can be skimmed over, so chapter 3 can safely be dropped while still covering chapters 4 and 5. The notes are done for two types of courses. Either at 4 hours a week for a semester (Math 286 at UIUC):

Introduction, chapter 1 (plus the two IODE labs), chapter 2, chapter 3, chapter 4, chapter 5.

Or a shorter version (Math 285 at UIUC) of the course at 3 hours a week for a semester:

Introduction, chapter 1 (plus the two IODE labs), chapter 2, chapter 4.

For the shorter version some additional material should be covered. IODE need not be used for either version. If IODE is not used, some additional material should be covered instead.

There is a short introductory chapter on the Laplace transform (chapter 6) that could be used as additional material. The length of the Laplace chapter is about the same as the Sturm-Liouville chapter (chapter 5). While the Laplace transform is not normally covered at UIUC in 285/286, I think it is essential that any notes for differential equations at least mention Laplace and/or Fourier transforms.

0.2 Introduction to differential equations

Note: more than 1 lecture, §1.1 in EP

0.2.1 Differential equations

The laws of physics are generally written down as differential equations. Therefore, all of science and engineering use differential equations to some degree. Understanding differential equations is essential to understanding almost anything you will study in your science and engineering classes. You can think of mathematics as the language of science, and differential equations are one of the most important parts of this language as far as science and engineering are concerned. As an analogy, suppose that all your classes from now on were given in Swahili. Then it would be important to first learn Swahili, or else you would have a very tough time getting a good grade in your other classes.

You have already seen many differential equations, perhaps without knowing about it. And you have even solved simple differential equations when you were taking calculus. Let us see an example you may not have seen.

\frac{dx}{dt} + x = 2 \cos t. \tag{1}

Here x is the dependent variable and t is the independent variable. Equation (1) is a basic example of a differential equation. In fact, it is an example of a first order differential equation, since it involves only the first derivative of the dependent variable. This equation arises from Newton's law of cooling where the ambient temperature oscillates with time.

0.2.2 Solutions of differential equations

Solving the differential equation means finding x in terms of t. That is, we want to find a function of t, which we will call x, such that when we plug x, t, and dx/dt into (1), the equation holds. It is the same idea as it would be for a normal (algebraic) equation of just x and t. In this case we claim that

x = x(t) = \cos t + \sin t

is a solution. How do we check? Just plug it back in! First you need to compute dx/dt. We find that dx/dt = -\sin t + \cos t. Now let us compute the left hand side of (1):

\frac{dx}{dt} + x = (-\sin t + \cos t) + (\cos t + \sin t) = 2 \cos t.

Yay! We got precisely the right hand side. There is more! We claim x = \cos t + \sin t + e^{-t} is also a solution. Let us try:

\frac{dx}{dt} = -\sin t + \cos t - e^{-t}.

Again plugging into the left hand side of (1):

\frac{dx}{dt} + x = (-\sin t + \cos t - e^{-t}) + (\cos t + \sin t + e^{-t}) = 2 \cos t.

And it works yet again! So there can be many different solutions. In fact, for this equation all solutions can be written in the form

x = \cos t + \sin t + C e^{-t}

for some constant C. See Figure 1 for the graph of a few of these solutions. We will see how we can find these solutions a few lectures from now.

[Figure 1: A few solutions of dx/dt + x = 2 cos t.]

It turns out that solving differential equations can be quite hard. There is no general method that solves any given differential equation. We will generally focus on how to get exact formulas for solutions of differential equations, but we will also spend a little bit of time on getting approximate solutions.

For most of the course we will look at ordinary differential equations or ODEs, by which we mean that there is only one independent variable and derivatives are only with respect to this one variable. If there are several independent variables, we will get partial differential equations or PDEs. We will briefly see these near the end of the course.

Even for ODEs, which are very well understood, it is not a simple question of turning a crank to get answers. It is important to know when it is easy to find solutions and how to do this. Even if you leave much of the actual calculations to computers in real life, you need to understand what they are doing. For example, it is often necessary to simplify or transform your equations into something that a computer can actually understand and solve. You may need to make certain assumptions and changes in your model to achieve this.

To be a successful engineer or scientist, you will be required to solve problems in your job which you have never seen before. It is important to learn problem solving techniques, so that you may apply those techniques to new problems. A common mistake is to expect to learn some prescription for solving all the problems you will encounter in your later career. This course is no exception.

0.2.3 Differential equations in practice

So how do we use differential equations in science and engineering? You have some real world problem that you want to understand. You make some simplifying assumptions and create a mathematical model. That is, you translate your real world situation into a set of differential equations. Then you apply mathematics to get some sort of mathematical solution. There is still something left to do. You have to interpret the results. You have to figure out what the mathematical solution says about the real world problem you started with.

[Diagram: real world problem --abstract--> mathematical model --solve--> mathematical solution --interpret--> back to the real world problem.]

Learning how to formulate the mathematical model and how to interpret the results is essentially what your physics and engineering classes do. In this course we will mostly focus on the mathematical analysis. Sometimes we will work with simple real world examples so that we have some intuition and motivation about what we are doing.

Let us look at an example of this process. One of the most basic differential equations is the standard exponential growth model. Let P denote the population of some bacteria on a petri dish. Let us suppose that there is enough food and enough space. Then the rate of growth of bacteria is proportional to the population. That is, a larger population grows more quickly. Let t denote time (say in seconds). Hence our model is

\frac{dP}{dt} = kP

for some constant k > 0.

Example 0.2.1: Suppose there are 100 bacteria at time 0 and 200 bacteria at time 10 s. How many bacteria will there be 1 minute from time 0 (in 60 seconds)?

First we have to solve the equation. We claim that a solution is given by

P(t) = C e^{kt},

where C is a constant. Let us try:

\frac{dP}{dt} = C k e^{kt} = kP.

And it really is a solution.

OK, so what now? We do not know C and we do not know k. Well, we know something: we know that P(0) = 100, and we also know that P(10) = 200. Let us plug these in and see what happens.

100 = P(0) = C e^{k \cdot 0} = C,

200 = P(10) = 100 \, e^{10k}.

Therefore, 2 = e^{10k}, or \frac{\ln 2}{10} = k \approx 0.069. So we know that

P(t) = 100 \, e^{(\ln 2) t / 10} \approx 100 \, e^{0.069 t}.

At one minute, t = 60, the population is P(60) = 6400. See Figure 2.

OK, let us talk about the interpretation of the results. Does this mean that there must be exactly 6400 bacteria on the plate at 60 s? No! We have made assumptions that might not be true. But if our assumptions are reasonable, then there will be about 6400 bacteria. Also note that in real life P is a discrete quantity, not a real number; but our model has no problem saying that, for example, at 61 seconds, P(61) ≈ 6859.35.
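As a quick sanity check on this example, here is a small computational sketch (ours, not part of the original notes); plain Python suffices:

    import math

    P0, P10 = 100.0, 200.0         # measured populations at t = 0 s and t = 10 s
    k = math.log(P10 / P0) / 10    # from 200 = 100 e^{10k}, so k = ln(2)/10

    def P(t):
        # general solution P(t) = C e^{kt} with C = P(0)
        return P0 * math.exp(k * t)

    print(round(k, 3))      # 0.069
    print(round(P(60)))     # 6400: the population doubles six times in 60 s
    print(round(P(61), 2))  # about 6859.35

The point of the check is not the code but the habit: always plug your solution back into the model and the data.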

[Figure 2: Bacteria growth in the first 60 seconds.]

Normally, the k in P' = kP will be known, and you will want to solve the equation for different initial conditions. What does that mean? Suppose k = 1 for simplicity. So we want to solve dP/dt = P subject to P(0) = 1000 (the initial condition). Then the solution turns out to be (exercise)

P(t) = 1000 \, e^{t}.

We will call P(t) = C e^{t} the general solution, as every solution of the equation can be written in this form for some constant C. You will then need an initial condition to find out what C is, in order to find the particular solution we are looking for. Generally, when we say "particular solution" we just mean some solution.

Let us get to what we will call the four fundamental equations. These appear very often and it is useful to just memorize what their solutions are. These solutions are reasonably easy to guess by recalling properties of exponentials, sines, and cosines. They are also simple to check, which is something that you should always do. There is then no need to wonder if you have remembered the solution correctly.

The first such equation is

\frac{dy}{dx} = ky,

for some constant k > 0. Here y is the dependent and x the independent variable. The general solution for this equation is

y(x) = C e^{kx}.

We have already seen that this function is a solution above, with different variable names.

Next,

\frac{dy}{dx} = -ky,

for some constant k > 0. The general solution for this equation is

y(x) = C e^{-kx}.

Exercise 0.2.1: Check that the y given is really a solution to the equation.

Next, take the second order differential equation

\frac{d^2 y}{dx^2} = -k^2 y,

for some constant k > 0. The general solution for this equation is

y(x) = C_1 \cos(kx) + C_2 \sin(kx).

Note that because we have a second order differential equation, we have two constants in our general solution.

Exercise 0.2.2: Check that the y given is really a solution to the equation.

And finally, take the second order differential equation

\frac{d^2 y}{dx^2} = k^2 y,

for some constant k > 0. The general solution for this equation is

y(x) = C_1 e^{kx} + C_2 e^{-kx},

or

y(x) = D_1 \cosh(kx) + D_2 \sinh(kx).

For those that do not know, cosh and sinh are defined by

\cosh x = \frac{e^x + e^{-x}}{2}, \qquad \sinh x = \frac{e^x - e^{-x}}{2}.

These functions are sometimes easier to work with than exponentials. They have some nice familiar properties such as \cosh 0 = 1, \sinh 0 = 0, and \frac{d}{dx} \cosh x = \sinh x (no, that is not a typo) and \frac{d}{dx} \sinh x = \cosh x.

Exercise 0.2.3: Check that both forms of the y given are really solutions to the equation.

An interesting note about cosh: The graph of cosh is the exact shape a hanging chain makes, and that shape is called a catenary. Contrary to popular belief, this is not a parabola. If you invert the graph of cosh, it is also the ideal arch for supporting its own weight. For example, the Gateway Arch in Saint Louis is an inverted graph of cosh (if it were just a parabola it might fall down). This formula is actually inscribed inside the arch:

y = -127.7 \text{ ft} \cdot \cosh(x / 127.7 \text{ ft}) + 757.7 \text{ ft}.

0.2.4 Exercises

Exercise 0.2.4: Show that x = e^{4t} is a solution to x''' - 12x'' + 48x' - 64x = 0.

Exercise 0.2.5: Show that x = e^{t} is not a solution to x''' - 12x'' + 48x' - 64x = 0.

Exercise 0.2.6: Is y = \sin t a solution to \left( \frac{dy}{dt} \right)^2 = 1 - y^2? Justify.

Exercise 0.2.7: Let y'' + 2y' - 8y = 0. Now try a solution of the form y = e^{rx}. Is this a solution for some r? If so, find all such r.

Exercise 0.2.8: Verify that x = C e^{-2t} is a solution to x' = -2x. Find C to solve the initial condition x(0) = 100.

Exercise 0.2.9: Verify that x = C_1 e^{-t} + C_2 e^{2t} is a solution to x'' - x' - 2x = 0. Find C_1 and C_2 to solve the initial condition x(0) = 10.

Exercise 0.2.10: Using properties of derivatives of functions that you know, try to find a solution to (x')^2 + x^2 = 4.

Chapter 1

First order ODEs

1.1 Integrals as solutions

Note: 1 lecture, §1.2 in EP

A first order ODE is an equation of the form

\frac{dy}{dx} = f(x, y),

or just

y' = f(x, y).

In general, there is no simple formula or procedure one can follow to find solutions. In the next few lectures we will look at special cases where solutions are not difficult to obtain. In this section, let us assume that f is a function of x alone, that is, the equation is

y' = f(x). \tag{1.1}

We could just integrate (antidifferentiate) both sides with respect to x:

\int y'(x) \, dx = \int f(x) \, dx + C,

that is,

y(x) = \int f(x) \, dx + C.

This y(x) is actually the general solution. So to solve (1.1), find some antiderivative of f(x) and then add an arbitrary constant to get the general solution.

Now is a good time to discuss a point about calculus notation and terminology. Calculus textbooks muddy the waters by talking about the integral as primarily the so-called indefinite integral. The indefinite integral is really the antiderivative (in fact, the whole one-parameter family of antiderivatives). There really exists only one integral and that is the definite integral. The only reason for the indefinite integral notation is that you can always write an antiderivative as a (definite) integral. That is, by the fundamental theorem of calculus you can always write \int f(x) \, dx + C as

\int_{x_0}^{x} f(t) \, dt + C.

Hence the terminology "to integrate" when you may really mean "to antidifferentiate." Integration is just one way to compute the antiderivative (and it is a way that always works; see the following examples). Integration is defined as the area under the graph; it only happens to also compute antiderivatives. For the sake of consistency, we will keep using the indefinite integral notation when we want an antiderivative, and you should always think of the definite integral.

Example 1.1.1: Find the general solution of y' = 3x^2.

We see that the general solution must be y = x^3 + C. Let us check: y' = 3x^2. We have gotten precisely our equation back.

Normally, we also have an initial condition such as y(x_0) = y_0 for some two numbers x_0 and y_0 (x_0 is usually 0, but not always). We can write the solution as a definite integral in a nice way. Suppose our problem is y' = f(x), y(x_0) = y_0. Then the solution is

y(x) = \int_{x_0}^{x} f(s) \, ds + y_0. \tag{1.2}

Let us check! y' = f(x) (by the fundamental theorem of calculus), and by Jupiter, this is a solution. Is it the one satisfying the initial condition? Well, y(x_0) = \int_{x_0}^{x_0} f(x) \, dx + y_0 = y_0. And it is!

Do note that the definite integral and the indefinite integral (antidifferentiation) are completely different beasts. The definite integral always evaluates to a number. Therefore, (1.2) is a formula you can plug into a calculator or a computer and it will be happy to calculate specific values for you. You will easily be able to plot the solution and work with it just like with any other function. It is not so crucial to find a closed form for the antiderivative.

Example 1.1.2: Solve

y' = e^{-x^2}, \quad y(0) = 1.

By the preceding discussion, the solution must be

y(x) = \int_0^x e^{-s^2} \, ds + 1.

Here is a good way to make fun of your friends taking second semester calculus. Tell them to find the closed form solution. Ha ha ha (bad math joke). It is not possible (in closed form). There is absolutely nothing wrong with writing the solution as a definite integral. This particular integral is in fact very important in statistics.
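This is exactly the kind of formula a machine evaluates happily. As a minimal sketch (ours; we assume SciPy's quad routine for numerical integration):

    import math
    from scipy.integrate import quad

    def y(x):
        # y(x) = (integral from 0 to x of e^{-s^2} ds) + 1
        value, _error = quad(lambda s: math.exp(-s**2), 0, x)
        return value + 1

    print(y(1.0))  # about 1.7468, with no closed form required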

We can also solve equations of the form

y' = f(y)

using this method. Let us write the equation in Leibniz notation:

\frac{dy}{dx} = f(y).

Now use the inverse function theorem to switch the roles of x and y:

\frac{dx}{dy} = \frac{1}{f(y)}.

What we are doing seems like algebra with dx and dy. It is tempting to just do algebra with dx and dy as if they were numbers. And in this case it does work. Be careful, however, as this sort of hand-waving calculation can lead to trouble, especially when more than one independent variable is involved. Now we can just integrate:

x(y) = \int \frac{1}{f(y)} \, dy + C.

Next, we try to solve for y.

Example 1.1.3: We guessed that y' = ky has the solution C e^{kx}. We can actually derive it now. First note that y = 0 is a solution. Henceforth, assume y ≠ 0. We write

\frac{dx}{dy} = \frac{1}{ky}.

Now integrate to get

x(y) = x = \frac{1}{k} \ln |ky| + C_0.

We solve for y:

\frac{1}{k} e^{-k C_0} e^{kx} = |y|.

If we replace \frac{1}{k} e^{-k C_0} with an arbitrary constant C, we can get rid of the absolute value bars. In this way we also incorporate the solution y = 0, and we get the same general solution as we guessed before, y = C e^{kx}.

Example 1.1.4: Find the general solution of y' = y^2.

First note that y = 0 is a solution. We can now assume that y ≠ 0. Write

\frac{dx}{dy} = \frac{1}{y^2}.

Now integrate to get

x = \frac{-1}{y} + C.

We solve for y to get

y = \frac{1}{C - x}.

So the general solution is

y = \frac{1}{C - x} \quad \text{or} \quad y = 0.

Note the singularities of the solution. If, for example, C = 1, then the solution blows up as we approach x = 1. It is sometimes hard to tell just by looking at the equation itself how a solution is going to behave. The equation y' = y^2 is very nice and defined everywhere, but the solution is only defined on some interval (-\infty, C) or (C, \infty).

Classical problems leading to differential equations solvable by integration are problems dealing with velocity, acceleration, and distance. You have surely seen these problems before in your calculus class.

Example 1.1.5: Suppose a car drives at a speed of e^{t/2} meters per second, where t is time in seconds. How far did the car get in 2 seconds? How far in 10 seconds?

Let x denote the distance the car travelled. The equation is

x' = e^{t/2}.

We can just integrate this equation to get

x(t) = 2 e^{t/2} + C.

We still need to figure out C. We know that when t = 0, then x = 0; that is, x(0) = 0, so

0 = x(0) = 2 e^{0/2} + C = 2 + C.

So C = -2, and hence

x(t) = 2 e^{t/2} - 2.

Now we just plug in to get that at 2 seconds (and at 10 seconds) the car has travelled

x(2) = 2 e^{2/2} - 2 \approx 3.44 \text{ meters}, \qquad x(10) = 2 e^{10/2} - 2 \approx 294 \text{ meters}.

Example 1.1.6: Suppose that the car accelerates at a rate of t^2 m/s^2. At time t = 0 the car is at the 1 meter mark and is travelling at 10 m/s. Where is the car at time t = 10?

Well, this is actually a second order problem. If x is the distance travelled, then x' is the velocity and x'' is the acceleration. The equation with initial conditions is

x'' = t^2, \quad x(0) = 1, \quad x'(0) = 10.

Well, what if we call x' = v? Then we have the problem

v' = t^2, \quad v(0) = 10.

Once we solve for v, we can integrate to find x.

Exercise 1.1.1: Solve for v, and then solve for x.

1.1.1 Exercises

Exercise 1.1.2: Solve \frac{dy}{dx} = x^2 + x for y(1) = 3.

Exercise 1.1.3: Solve \frac{dy}{dx} = \sin 5x for y(0) = 2.

Exercise 1.1.4: Solve \frac{dy}{dx} = \frac{1}{x^2 - 1} for y(0) = 0.

Exercise 1.1.5: Solve y' = y^3 for y(0) = 1.

Exercise 1.1.6: Solve y' = (y - 1)(y + 1) for y(0) = 3.

Exercise 1.1.7: Solve \frac{dy}{dx} = \frac{1}{y^2 + 1} for y(0) = 0.

Exercise 1.1.8: Solve y'' = \sin x for y(0) = 0.

1.2 Slope fields

Note: 1 lecture, §1.3 in EP

At this point it may be good to first try Lab I and/or Project I from the IODE website: http://www.math.uiuc.edu/iode/.

As we said, the general first order equation we are studying looks like

y' = f(x, y).

In general, we cannot really just solve these kinds of equations explicitly. It would be good if we could at least figure out the shape and behavior of the solutions, or even find approximate solutions for any equation.

1.2.1 Slope fields

As you have seen in IODE Lab I (if you did it), this means that at each point in the (x, y)-plane we get a slope. We can plot the slope at lots of points as a short line with this given slope. See Figure 1.1.

[Figure 1.1: Slope field of y' = xy.]

[Figure 1.2: Slope field of y' = xy with a graph of solutions satisfying y(0) = 0.2, y(0) = 0, and y(0) = -0.2.]

We call this the slope field of the equation. Then if we are given a specific initial condition y(x_0) = y_0, we can really just look at the location (x_0, y_0) and follow the slopes. See Figure 1.2.

By looking at the slope field we can find out a lot about the behavior of solutions. For example, in Figure 1.2 we can see what the solutions do when the initial conditions are y(0) > 0, y(0) = 0, and y(0) < 0. Note that a small change in the initial condition causes quite different behavior. On the other hand, plotting a few solutions of the equation y' = -y, we see that no matter where we start, all solutions tend to zero as x tends to infinity. See Figure 1.3.

[Figure 1.3: Slope field of y' = -y with a graph of a few solutions.]
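The notes use the Genius software for these plots, but any plotting library will do. Here is a rough equivalent sketch (ours) in Python with NumPy and Matplotlib:

    import numpy as np
    import matplotlib.pyplot as plt

    # Slope field of y' = xy on the square [-3, 3] x [-3, 3].
    X, Y = np.meshgrid(np.linspace(-3, 3, 25), np.linspace(-3, 3, 25))
    S = X * Y                # the slope f(x, y) = xy at each grid point
    L = np.sqrt(1 + S**2)    # normalize so every segment has the same length

    # Headless arrows of direction (1, slope) make the short line segments.
    plt.quiver(X, Y, 1/L, S/L, angles='xy', pivot='middle',
               headwidth=1, headlength=0, headaxislength=0)
    plt.title("Slope field of y' = xy")
    plt.show()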

1.2.2 Existence and uniqueness

We wish to ask two fundamental questions about the problem

y' = f(x, y), \quad y(x_0) = y_0.

(i) Does a solution exist?

(ii) Is the solution unique (if it exists)?

What do you think is the answer? The answer seems to be yes to both, does it not? Well, pretty much. But there are cases when the answer to either question can be no.

Since the equations we encounter generally come from real life situations, it seems logical that a solution exists. It also has to be unique if we believe our universe is deterministic. If the solution does not exist, or if it is not unique, we have probably not devised the correct model. Hence, it is good to know when things go wrong and why.

Example 1.2.1: Attempt to solve:

y' = \frac{1}{x}, \quad y(0) = 0.

Integrate to find the general solution y = \ln |x| + C. Note that the solution does not exist at x = 0. See Figure 1.4.

[Figure 1.4: Slope field of y' = 1/x.]

[Figure 1.5: Slope field of y' = 2\sqrt{|y|} with two solutions satisfying y(0) = 0.]

Example 1.2.2: Solve:

y' = 2\sqrt{|y|}, \quad y(0) = 0.

Note that y = x^2 is a solution and y = 0 is a solution (but note that x^2 is a solution only for x > 0). See Figure 1.5.

It is actually hard to tell from the slope field that the solution will not be unique. Is there any hope? Of course there is. It turns out that the following theorem is true. It is known as Picard's theorem*.

Theorem 1.2.1 (Picard's theorem on existence and uniqueness). If f(x, y) is continuous (as a function of two variables) and \frac{\partial f}{\partial y} exists and is continuous near some (x_0, y_0), then a solution to

y' = f(x, y), \quad y(x_0) = y_0,

exists (at least for some small interval of x's) and is unique.

Note that the problems y' = 1/x, y(0) = 0 and y' = 2\sqrt{|y|}, y(0) = 0 do not satisfy the hypotheses of the theorem. But we ought to be careful about this existence business. It is quite possible that the solution only exists for a short while.

Example 1.2.3:

y' = y^2, \quad y(0) = A,

for some constant A.

*Named after the French mathematician Charles Émile Picard (1856 – 1941).

We know how to solve this equation. First assume that A ≠ 0, so y is not equal to zero at least for some x near 0. So x' = 1/y^2, so x = -1/y + C, so y = \frac{1}{C - x}. If y(0) = A, then C = 1/A, so

y = \frac{1}{\frac{1}{A} - x}.

If A = 0, then y = 0 is a solution.

For example, when A = 1 the solution "blows up" at x = 1. Hence, the solution does not exist for all x even if the equation is nice everywhere. The equation y' = y^2 certainly looks nice.

For most of this course we will be interested in equations where existence and uniqueness holds, and in fact where it holds "globally," unlike for y' = y^2.

1.2.3 Exercises

Exercise 1.2.1: Sketch the direction field for y' = e^{x-y}. How do the solutions behave as x grows? Can you guess a particular solution by looking at the direction field?

Exercise 1.2.2: Sketch the direction field for y' = x^2.

Exercise 1.2.3: Sketch the direction field for y' = y^2.

Exercise 1.2.4: Is it possible to solve the equation y' = \frac{xy}{\cos x} for y(0) = 1? Justify.

1.3 Separable equations

Note: 1 lecture, §1.4 in EP

When the equation is of the form y' = f(x), we can just integrate: y = \int f(x) \, dx + C. Unfortunately this method no longer works for the general form of the equation y' = f(x, y). Integrating both sides yields

y = \int f(x, y) \, dx + C.

Notice the dependence on y in the integral.

1.3.1 Separable equations

On the other hand, what if the equation is separable, that is, if it looks like

y' = f(x) \, g(y)

for some functions f(x) and g(y)? Let us write the equation in Leibniz notation

\frac{dy}{dx} = f(x) \, g(y).

Then we rewrite the equation as

\frac{dy}{g(y)} = f(x) \, dx.

Now both sides look like something we can integrate. We obtain

\int \frac{dy}{g(y)} = \int f(x) \, dx + C.

If we can explicitly solve these integrals, we can maybe solve for y.

Example 1.3.1: Take the equation

y' = xy.

First note that y = 0 is a solution, so assume y ≠ 0 from now on. Write the equation as \frac{dy}{dx} = xy. Then

\int \frac{dy}{y} = \int x \, dx + C.

We compute the antiderivatives to get

\ln |y| = \frac{x^2}{2} + C.

Or

|y| = e^{\frac{x^2}{2} + C} = e^{\frac{x^2}{2}} e^{C} = D e^{\frac{x^2}{2}},

where D > 0 is some constant. Because y = 0 is a solution and because of the absolute value we can actually write

y = D e^{\frac{x^2}{2}}

for any number D (including zero or negative). We check:

y' = D x e^{\frac{x^2}{2}} = x \left( D e^{\frac{x^2}{2}} \right) = xy.

Yay!
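A computer algebra system can confirm such a general solution. A small sketch (ours), assuming SymPy is available:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # Ask for the general solution of y' = x y.
    sol = sp.dsolve(sp.Eq(y(x).diff(x), x * y(x)), y(x))
    print(sol)  # Eq(y(x), C1*exp(x**2/2)), the same D e^{x^2/2} found by hand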

We should be a little bit more careful with this method, since we were integrating in two different variables: we seemed to be doing a different operation to each side. Let us work out the method more rigorously. Start with

\frac{dy}{dx} = f(x) \, g(y).

We rewrite the equation as follows. Note that y = y(x) is a function of x, and so is \frac{dy}{dx}!

\frac{1}{g(y)} \frac{dy}{dx} = f(x).

We integrate both sides with respect to x:

\int \frac{1}{g(y)} \frac{dy}{dx} \, dx = \int f(x) \, dx + C.

We can use the change of variables formula:

\int \frac{1}{g(y)} \, dy = \int f(x) \, dx + C.

And we are done.

1.3.2 Implicit solutions

It is clear that we might sometimes get stuck even if we can do the integration. For example, take the separable equation

y' = \frac{xy}{y^2 + 1}.

We separate variables:

\frac{y^2 + 1}{y} \, dy = \left( y + \frac{1}{y} \right) dy = x \, dx.

Now we integrate to get

\frac{y^2}{2} + \ln |y| = \frac{x^2}{2} + C,

or perhaps the easier looking expression

y^2 + 2 \ln |y| = x^2 + C.

It is not easy to find the solution explicitly as it is hard to solve for y. We will, therefore, call this solution an implicit solution. It is easy to check that implicit solutions still satisfy the differential equation. In this case, we differentiate to get

y' \left( 2y + \frac{2}{y} \right) = 2x.

It is simple to see that the differential equation holds. If you want to compute values for y, you might have to be tricky. For example, you can graph x as a function of y, and then flip your paper. Computers are also good at some of these tricks, but you have to be careful.

We note above that the equation also has the solution y = 0. In this case, it turns out that the general solution is y^2 + 2 \ln |y| = x^2 + C together with y = 0. These outlying solutions such as y = 0 are sometimes called singular solutions.

1.3.3 Examples

Example 1.3.2: Solve x^2 y' = 1 - x^2 + y^2 - x^2 y^2, y(1) = 0.

First factor the right hand side to obtain

x^2 y' = (1 - x^2)(1 + y^2).

Now we separate variables, integrate, and solve for y:

\frac{y'}{1 + y^2} = \frac{1 - x^2}{x^2},

\frac{y'}{1 + y^2} = \frac{1}{x^2} - 1,

\arctan(y) = \frac{-1}{x} - x + C,

y = \tan\left( \frac{-1}{x} - x + C \right).

Now solve for the initial condition: 0 = \tan(-2 + C), to get C = 2 (or 2 + \pi, etc.). The solution we are seeking is, therefore,

y = \tan\left( \frac{-1}{x} - x + 2 \right).

Example 1.3.3: Suppose Bob made a cup of coffee, and the water was boiling (100 degrees Celsius) at time t = 0. Suppose Bob likes to drink his coffee at 70 degrees. Let the ambient (room) temperature be 26 degrees. Furthermore, suppose Bob measured the temperature of the coffee at 1 minute and found that it had dropped to 95 degrees. When should Bob start drinking?

Let T be the temperature of the coffee and let A be the ambient (room) temperature. Then for some k the temperature of the coffee satisfies

\frac{dT}{dt} = k(A - T).

For our setup A = 26, T(0) = 100, and T(1) = 95, with t measured in minutes. We separate variables and integrate (C and D will denote arbitrary constants):

\frac{1}{A - T} \frac{dT}{dt} = k,

\ln |A - T| = -kt + C,

A - T = D e^{-kt},

T = A - D e^{-kt}.

That is, T = 26 - D e^{-kt}. We plug in the first condition: 100 = T(0) = 26 - D, and hence D = -74. So T = 26 + 74 e^{-kt}. We plug in 95 = T(1) = 26 + 74 e^{-k}. Solving for k we get k = -\ln \frac{95 - 26}{74} \approx 0.07. Now we solve for the time t that gives us a temperature of 70 degrees. That is, we solve 70 = 26 + 74 e^{-0.07 t} to get t = -\frac{\ln \frac{70 - 26}{74}}{0.07} \approx 7.43 minutes. So Bob can begin to drink the coffee at about 7 and a half minutes from the time Bob made it. Probably about the amount of time it took us to calculate how long it would take.
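The arithmetic at the end is easy to mistype, so here is the computation as a tiny sketch (ours; plain Python, with t in minutes):

    import math

    A, T0, T1 = 26.0, 100.0, 95.0        # ambient, initial, and one-minute temperatures
    k = -math.log((T1 - A) / (T0 - A))   # from 95 = 26 + 74 e^{-k}
    t_drink = -math.log((70 - A) / (T0 - A)) / k

    print(round(k, 3))        # about 0.07
    print(round(t_drink, 2))  # about 7.43 minutes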

Example 1.3.4: Solve y' = \frac{-x y^2}{3}.

First note that y = 0 is a solution (a singular solution). So assume that y ≠ 0 and write

\frac{-3}{y^2} \, y' = x,

\frac{3}{y} = \frac{x^2}{2} + C,

y = \frac{3}{\frac{x^2}{2} + C}.

1.3.4 Exercises

Exercise 1.3.1: Solve y' = \frac{x}{y}.

Exercise 1.3.2: Solve y' = x^2 y.

Exercise 1.3.3: Solve \frac{dx}{dt} = (x^2 - 1) \, t, for x(0) = 0.

Exercise 1.3.4: Solve \frac{dx}{dt} = x \sin(t), for x(0) = 1.

Exercise 1.3.5: Solve \frac{dy}{dx} = xy + x + y + 1. Hint: Factor the right hand side.

Exercise 1.3.6: Find an implicit solution to x y' = y + 2 x^2 y, where y(1) = 1.

Exercise 1.3.7: Solve x \frac{dy}{dx} - y = 2 x^2 y, for y(0) = 10.

1.4 Linear equations and the integrating factor

Note: more than 1 lecture, §1.5 in EP

One of the most important types of equations we will learn how to solve is the so-called linear equation. In fact, the majority of this course will focus on linear equations. In this lecture we focus on the first order linear equation. That is, a first order equation is linear if we can put it into the following form:

y' + p(x) y = f(x). \tag{1.3}

The word "linear" here means linear in y. The dependence on x can be more complicated.

Solutions of linear equations have nice properties. For example, the solution exists wherever p(x) and f(x) are defined, and has the same regularity (read: it is just as nice). But most importantly for us right now, there is a method for solving linear first order equations.

What we will do is multiply both sides of (1.3) by some function r(x) such that

r(x) y' + r(x) p(x) y = \frac{d}{dx} \left[ r(x) y \right].

We can then integrate both sides of

\frac{d}{dx} \left[ r(x) y \right] = r(x) f(x).

Note that the right hand side does not depend on y, and the left hand side is written as a derivative of a function. We can then solve for y. The function r(x) is called the integrating factor and the method is called the integrating factor method.

So we are looking for a function r(x) such that if we differentiate it, we get the same function back multiplied by p(x). That seems like a job for the exponential function! Set

r(x) = e^{\int p(x) \, dx}.

Let us do the calculation.

y' + p(x) y = f(x),

e^{\int p(x)dx} y' + e^{\int p(x)dx} p(x) y = e^{\int p(x)dx} f(x),

\frac{d}{dx} \left[ e^{\int p(x)dx} y \right] = e^{\int p(x)dx} f(x),

e^{\int p(x)dx} y = \int e^{\int p(x)dx} f(x) \, dx + C,

y = e^{-\int p(x)dx} \left( \int e^{\int p(x)dx} f(x) \, dx + C \right).

Of course, to get a closed form formula for y, we need to be able to find a closed form formula for the two integrals.

Example 1.4.1: Solve

y' + 2xy = e^{x - x^2}, \quad y(0) = -1.

First note that p(x) = 2x and f(x) = e^{x - x^2}. The integrating factor is r(x) = e^{\int p(x) \, dx} = e^{x^2}. We multiply both sides of the equation by r(x) to get

e^{x^2} y' + 2x e^{x^2} y = e^{x - x^2} e^{x^2},

\frac{d}{dx} \left[ e^{x^2} y \right] = e^{x}.

We integrate:

e^{x^2} y = e^x + C,

y = e^{x - x^2} + C e^{-x^2}.

Next, we solve for the initial condition: -1 = y(0) = 1 + C, so C = -2. The solution is

y = e^{x - x^2} - 2 e^{-x^2}.
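It is a good habit to verify the answer by plugging it back in. A quick symbolic check (our sketch, assuming SymPy):

    import sympy as sp

    x = sp.symbols('x')
    y = sp.exp(x - x**2) - 2 * sp.exp(-x**2)

    # Both printed values should be 0 and -1 if y solves the problem.
    print(sp.simplify(y.diff(x) + 2*x*y - sp.exp(x - x**2)))  # 0
    print(y.subs(x, 0))                                       # -1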

Note that we do not care which antiderivative we take when computing e^{\int p(x)dx}. You can always add a constant of integration, but those constants will not matter in the end.

Exercise 1.4.1: Try it! Add a constant of integration to the integral in the integrating factor and show that the solution you get in the end is the same as what we got above.

Some advice: Do not try to remember the formula itself; that is way too hard. It is easier to remember the process and repeat it.

Since we cannot always evaluate the integrals in closed form, it is useful to know how to write the solution in definite integral form. A definite integral is something that you can plug into a computer or a calculator. Suppose we are given

y' + p(x) y = f(x), \quad y(x_0) = y_0.

Look at the solution and write the integrals as definite integrals:

y(x) = e^{-\int_{x_0}^{x} p(s) \, ds} \left( \int_{x_0}^{x} e^{\int_{x_0}^{t} p(s) \, ds} f(t) \, dt + y_0 \right). \tag{1.4}

You should be careful to properly use dummy variables here. If you now plug that into a computer or a calculator, it will be happy to give you numerical answers.
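To make "plug it into a computer" concrete, here is a rough sketch (ours) of evaluating formula (1.4) with SciPy's quad; the function names are our own:

    import math
    from scipy.integrate import quad

    def solve_linear(p, f, x0, y0, x):
        # Formula (1.4): y(x) = e^{-P(x)} ( int_{x0}^{x} e^{P(t)} f(t) dt + y0 ),
        # where P(t) is the integral of p from x0 to t.
        P = lambda t: quad(p, x0, t)[0]
        inner = quad(lambda t: math.exp(P(t)) * f(t), x0, x)[0]
        return math.exp(-P(x)) * (inner + y0)

    # Sanity check on y' + y = 0, y(0) = 1, whose solution is e^{-x}.
    print(solve_linear(lambda s: 1.0, lambda t: 0.0, 0.0, 1.0, 1.0))  # about 0.3679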

Exercise 1.4.2: Check that y(x_0) = y_0 in formula (1.4).

Exercise 1.4.3: Write the solution of the following problem as a definite integral, but try to simplify as far as you can. You will not be able to find the solution in closed form.

y' + y = e^{x^2 - x}, \quad y(0) = 10.

Example 1.4.2: The following is a simple application of linear equations, and this type of problem is used often in real life. For example, linear equations are used in figuring out the concentration of chemicals in bodies of water.

A 100 liter tank contains 10 kilograms of salt dissolved in 60 liters of water. A solution of water and salt (brine) with concentration of 0.1 kg/liter is flowing in at the rate of 5 liters a minute. The solution in the tank is well stirred and flows out at a rate of 3 liters a minute. How much salt is in the tank when the tank is full?

Let us come up with the equation. Let x denote the kilograms of salt in the tank, and let t denote the time in minutes. Then for a small change \Delta t in time, the change in x (denoted \Delta x) is approximately

\Delta x \approx (\text{rate in} \times \text{concentration in}) \Delta t - (\text{rate out} \times \text{concentration out}) \Delta t.

Taking the limit \Delta t \to 0, we see that

\frac{dx}{dt} = (\text{rate in} \times \text{concentration in}) - (\text{rate out} \times \text{concentration out}).

We have

rate in = 5,
concentration in = 0.1,
rate out = 3,
concentration out = \frac{x}{\text{volume}} = \frac{x}{60 + (5 - 3)t}.

Our equation is, therefore,

\frac{dx}{dt} = (5 \times 0.1) - \left( 3 \cdot \frac{x}{60 + 2t} \right).

Or, in the form (1.3),

\frac{dx}{dt} + \frac{3}{60 + 2t} \, x = 0.5.

Let us solve. The integrating factor is

r(t) = \exp\left( \int \frac{3}{60 + 2t} \, dt \right) = \exp\left( \frac{3}{2} \ln(60 + 2t) \right) = (60 + 2t)^{3/2}.

We multiply both sides of the equation by r(t) to get

(60 + 2t)^{3/2} \frac{dx}{dt} + (60 + 2t)^{3/2} \frac{3}{60 + 2t} \, x = 0.5 \, (60 + 2t)^{3/2},

\frac{d}{dt} \left[ (60 + 2t)^{3/2} x \right] = 0.5 \, (60 + 2t)^{3/2},

(60 + 2t)^{3/2} x = \int 0.5 \, (60 + 2t)^{3/2} \, dt + C,

x = (60 + 2t)^{-3/2} \int 0.5 \, (60 + 2t)^{3/2} \, dt + C (60 + 2t)^{-3/2},

x = 0.5 \, (60 + 2t)^{-3/2} \, \frac{1}{5} (60 + 2t)^{5/2} + C (60 + 2t)^{-3/2},

x = \frac{60 + 2t}{10} + C (60 + 2t)^{-3/2}.

Now to figure out C. We know that at t = 0, x = 10. So

10 = x(0) = \frac{60}{10} + C (60)^{-3/2} = 6 + C (60)^{-3/2},

or

C = 4 (60^{3/2}) \approx 1859.03.

We are interested in x when the tank is full. The tank is full when 60 + 2t = 100, that is, when t = 20. So

x(20) = \frac{60 + 40}{10} + C (60 + 40)^{-3/2} \approx 10 + 1859.03 \, (100)^{-3/2} \approx 11.86.

The concentration at the end is approximately 0.1186 kg/liter, and we started with 1/6, or about 0.167 kg/liter.
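Because the arithmetic above is easy to slip on, a numerical check is worthwhile. A sketch (ours), assuming SciPy's solve_ivp initial value solver:

    from scipy.integrate import solve_ivp

    # dx/dt = 0.5 - 3x/(60 + 2t), x(0) = 10; the tank is full at t = 20.
    sol = solve_ivp(lambda t, x: [0.5 - 3*x[0]/(60 + 2*t)], (0, 20), [10.0], rtol=1e-9)
    print(sol.y[0, -1])  # about 11.86 kg of salt, matching the closed form answer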

1.4.1 Exercises

In the exercises, feel free to leave your answer as a definite integral if a closed form solution cannot be found. If you can find a closed form solution, you should give that.

Exercise 1.4.4: Solve y' + xy = x.

Exercise 1.4.5: Solve y' + 6y = e^x.

Exercise 1.4.6: Solve y' + 3x^2 y = \sin(x) \, e^{-x^3}, with y(0) = 1.

Exercise 1.4.7: Solve y' + \cos(x) \, y = \cos(x).

Exercise 1.4.8: Solve \frac{1}{x^2 + 1} \, y' + xy = 3, with y(0) = 0.

Exercise 1.4.9: Suppose there are two lakes. The output of the first lake flows into the second. The in and out flow from each lake is 500 liters per hour. The first lake contains 100 thousand liters of water and the second lake contains 200 thousand liters of water. A truck with 500 kg of toxic substance crashes into the first lake. Assume that the water is being continually mixed perfectly by the stream. a) Find the concentration of toxic substance as a function of time (in seconds) in both lakes. b) When will the concentration in the first lake be below 0.001 kg per liter? c) When will the concentration in the second lake be maximal?

Exercise 1.4.10: Newton's law of cooling states that \frac{dx}{dt} = -k(x - A), where x is the temperature, t is time, A is the ambient temperature, and k > 0 is a constant. Suppose that A = A_0 \cos(\omega t) for some constants A_0 and \omega. That is, the ambient temperature oscillates (for example, night and day temperatures). a) Find the general solution. b) In the long term, will the initial conditions make much of a difference? Why or why not?

1.5 Substitution

Note: 1 lecture, §1.6 in EP

Just like when solving integrals, one method is to try to change variables to end up with a simpler equation that can be solved.

1.5.1 Substitution

The equation

y' = (x - y + 1)^2

is neither separable nor linear. What can we do? How about trying to change variables, so that in the new variables the equation is simpler. We will use another variable v, which we will treat as a function of x. Let us try

v = x - y + 1.

We need to figure out y' in terms of v', v, and x. We differentiate (in x) to obtain v' = 1 - y'. So y' = 1 - v'. We plug this into the equation to get

1 - v' = v^2.

In other words, v' = 1 - v^2. Such an equation we know how to solve by separating variables:

\frac{1}{1 - v^2} \, dv = dx.

So

\frac{1}{2} \ln \left| \frac{v + 1}{v - 1} \right| = x + C,

\left| \frac{v + 1}{v - 1} \right| = e^{2x + 2C},

or \frac{v + 1}{v - 1} = D e^{2x} for some constant D. Note that v = 1 and v = -1 are also solutions.

Now we need to "unsubstitute":

\frac{x - y + 2}{x - y} = D e^{2x},

and also the two solutions x - y + 1 = 1 (that is, y = x) and x - y + 1 = -1 (that is, y = x + 2). We also solve the first equation for y:

x - y + 2 = (x - y) D e^{2x},

x - y + 2 = D x e^{2x} - y D e^{2x},

-y + y D e^{2x} = D x e^{2x} - x - 2,

y (-1 + D e^{2x}) = D x e^{2x} - x - 2,

y = \frac{D x e^{2x} - x - 2}{D e^{2x} - 1}.

Note that D = 0 gives y = x + 2, but no value of D gives the solution y = x.

Substitution in differential equations is applied in much the same way that it is applied in calculus. You guess. Several different substitutions might work. There are some general things to look for. We summarize a few of these in a table:

    When you see    Try substituting
    y y'            v = y^2
    y^2 y'          v = y^3
    (cos y) y'      v = sin y
    (sin y) y'      v = cos y
    y' e^y          v = e^y

Usually you try to substitute in the "most complicated" part of the equation with the hopes of simplifying it. The above table is just a rule of thumb. You might have to modify your guesses. If a substitution does not work (it does not make the equation any simpler), try a different one.

1.5.2 Bernoulli equations

There are some forms of equations where there is a general rule for substitution that always works. One such is the so-called Bernoulli equation†:

y' + p(x) y = q(x) y^n.

This equation looks a lot like a linear equation except for the y^n. If n = 0 or n = 1, then the equation is linear and we can solve it. Otherwise, the change of coordinates v = y^{1-n} transforms the Bernoulli equation into a linear equation. Note that n need not be an integer.

Example 1.5.1: Solve

x y' + y(x + 1) + x y^5 = 0, \quad y(1) = 1.

First we note this is a Bernoulli equation (p(x) = (x + 1)/x and q(x) = -1). We substitute

v = y^{1-5} = y^{-4}, \quad v' = -4 y^{-5} y'.

In other words, \frac{-y^5}{4} \, v' = y'. So

x y' + y(x + 1) + x y^5 = 0,

\frac{-x y^5}{4} v' + y(x + 1) + x y^5 = 0,

\frac{-x}{4} v' + y^{-4}(x + 1) + x = 0,

\frac{-x}{4} v' + v(x + 1) + x = 0,

†There are several things called Bernoulli equations; this is just one of them. The Bernoullis were a prominent Swiss family of mathematicians. These particular equations are named for Jacob Bernoulli (1654 – 1705).

and finally

v' - \frac{4(x + 1)}{x} \, v = 4.

The equation is now linear, so we use the integrating factor method. Let us assume that x > 0, so |x| = x. This assumption is OK because our initial condition is at x = 1.

r(x) = \exp\left( \int \frac{-4(x + 1)}{x} \, dx \right) = e^{-4x - 4\ln(x)} = e^{-4x} x^{-4} = \frac{e^{-4x}}{x^4}.

Now

\frac{d}{dx}\left[ \frac{e^{-4x}}{x^4} \, v \right] = 4 \, \frac{e^{-4x}}{x^4},

\frac{e^{-4x}}{x^4} \, v = \int_1^x 4 \, \frac{e^{-4s}}{s^4} \, ds + e^{-4},

where the constant e^{-4} comes from the initial condition: at x = 1 the left hand side is e^{-4} v(1) = e^{-4} y(1)^{-4} = e^{-4}. Hence

v = e^{4x} x^4 \left( 4 \int_1^x \frac{e^{-4s}}{s^4} \, ds + e^{-4} \right).

Note that the integral in this expression is not possible to find in closed form. But again, as we said before, it is perfectly fine to have a definite integral in our solution. Now unsubstitute:

y^{-4} = e^{4x} x^4 \left( 4 \int_1^x \frac{e^{-4s}}{s^4} \, ds + e^{-4} \right),

y = \frac{e^{-x}}{x \left( 4 \int_1^x \frac{e^{-4s}}{s^4} \, ds + e^{-4} \right)^{1/4}}.

1.5.3 Homogeneous equations

Another type of equation we can solve by substitution is the so-called homogeneous equation. Suppose that we can write the differential equation as

y' = F\left( \frac{y}{x} \right).

Here we try the substitution

v = \frac{y}{x} \quad \text{and therefore} \quad y' = v + x v'.

We note that the equation is transformed into

v + x v' = F(v), \quad \text{or} \quad x v' = F(v) - v, \quad \text{or} \quad \frac{v'}{F(v) - v} = \frac{1}{x}.

Hence an implicit solution is

\int \frac{1}{F(v) - v} \, dv = \ln |x| + C.

Example 1.5.2: Solve

x^2 y' = y^2 + xy, \quad y(1) = 1.

First we transform the equation into the form y' = \left( \frac{y}{x} \right)^2 + \frac{y}{x}. Now we substitute v = \frac{y}{x} to get the separable equation

x v' = v^2 + v - v = v^2,

which has the solution

\int \frac{1}{v^2} \, dv = \ln |x| + C,

\frac{-1}{v} = \ln |x| + C,

v = \frac{-1}{\ln |x| + C}.

We unsubstitute:

\frac{y}{x} = \frac{-1}{\ln |x| + C},

y = \frac{-x}{\ln |x| + C}.

We want y(1) = 1, so

1 = y(1) = \frac{-1}{\ln |1| + C} = \frac{-1}{C}.

Thus C = -1, and the solution we are looking for is

y = \frac{-x}{\ln |x| - 1}.

1.5.4 Exercises

Exercise 1.5.1: Solve x y' + y(x + 1) + x y^5 = 0, with y(1) = 1.

Exercise 1.5.2: Solve 2 y y' + 1 = y^2 + x, with y(0) = 1.

Exercise 1.5.3: Solve y' + xy = y^4, with y(0) = 1.

Exercise 1.5.4: Solve y y' + x = \sqrt{x^2 + y^2}.

Exercise 1.5.5: Solve y' = (x + y - 1)^2.

Exercise 1.5.6: Solve y' = \frac{x + y^2}{y \sqrt{y^2 + 1}}, with y(0) = 1.

1.6 Autonomous equations

Note: 1 lecture, §2.2 in EP

Let us consider problems of the form

\frac{dx}{dt} = f(x),

where the derivative of solutions depends only on x (the dependent variable). Such equations are called autonomous equations. If we think of t as time, the naming comes from the fact that the equation is independent of time.

Let us come back to the cooling coffee problem. Newton's law of cooling says that

\frac{dx}{dt} = -k(x - A),

where x is the temperature, t is time, k is some positive constant, and A is the ambient temperature. See Figure 1.6 for an example.

Note the solution x = A (in the example A = 5). We call these types of solutions equilibrium solutions. The points on the x axis where f(x) = 0 are called critical points. The point x = A is a critical point. In fact, each critical point corresponds to an equilibrium solution. Note also, by looking at the graph, that the solution x = A is "stable" in that small perturbations in x do not lead to substantially different solutions as t grows. If we change the initial condition a little bit, then as t → ∞ we get x → A. We call such a critical point stable. In this simple example it turns out that all solutions in fact go to A as t → ∞. If a critical point is not stable, we say it is unstable.

[Figure 1.6: Slope field and some solutions of x' = -0.3(x - 5).]

[Figure 1.7: Slope field and some solutions of x' = 0.1x(5 - x).]


Let us consider the logistic equation

\frac{dx}{dt} = kx(M - x),

for some positive k and M. This equation is commonly used to model population if we know the limiting population M, that is, the maximum sustainable population. This scenario leads to less catastrophic predictions on world population. Note that in the real world there is no such thing as negative population, but we will still consider negative x for the purposes of the math.

See Figure 1.7 for an example. Note the two critical points, x = 0 and x = 5. The critical point at x = 5 is stable. On the other hand, the critical point at x = 0 is unstable.

It is not really necessary to find the exact solutions to talk about the long term behavior of the solutions. For example, from the above we can easily see that

\lim_{t \to \infty} x(t) = \begin{cases} 5 & \text{if } x(0) > 0, \\ 0 & \text{if } x(0) = 0, \\ \text{DNE or } -\infty & \text{if } x(0) < 0, \end{cases}

where DNE means "does not exist." From just looking at the slope field we cannot quite decide what happens if x(0) < 0. It could be that the solution does not exist for t all the way to \infty. Think of the equation y' = y^2; we have seen that it only exists for some finite period of time. The same can happen here. In our example equation above, it will actually turn out that the solution does not exist for all time, but to see that we would have to solve the equation. In any case the solution does go to -\infty, but it may get there rather quickly.

Many times we are interested only in the long term behavior of the solution, and hence we would just be doing way too much work if we tried to solve the equation exactly. It is easier to just look at the phase diagram or phase portrait, which is a simple way to visualize the behavior of autonomous equations. In this case there is one dependent variable x. Draw the x axis, mark all the critical points, and then draw arrows in between: up where f(x) is positive and down where it is negative.

[Phase diagram for x' = 0.1x(5 - x): critical points at x = 0 and x = 5; arrows point down below x = 0, up between x = 0 and x = 5, and down above x = 5.]

Armed with the phase diagram, it is easy to approximately sketch how the solutions are going to look.

Exercise 1.6.1: Try sketching a few solutions. Check with the graph above if you are getting the same answers.

Once we draw the phase diagram, we can easily classify critical points as stable or unstable.

[Diagram: arrows pointing away from an unstable critical point; arrows pointing toward a stable one.]

Since any mathematical model we cook up will only be an approximation to the real world, unstable points are generally bad news.

Let us think about the logistic equation with harvesting. Logistic equations are commonly used for modelling population. Suppose an alien race really likes to eat humans. They keep a planet with humans on it and harvest the humans at a rate of h million humans per year. Suppose x is the number of humans in millions on the planet and t is time in years. Let M be the limiting population when no harvesting is done, and let k > 0 be some constant depending on how fast humans multiply. Our equation becomes

\frac{dx}{dt} = kx(M - x) - h.

We multiply out and solve for critical points:

\frac{dx}{dt} = -kx^2 + kMx - h.

The critical points A and B are

A = \frac{kM + \sqrt{(kM)^2 - 4hk}}{2k}, \qquad B = \frac{kM - \sqrt{(kM)^2 - 4hk}}{2k}.

Exercise 1.6.2: Draw the phase diagram for the different possibilities. Note that these possibilities are A > B, or A = B, or A and B both complex (i.e., no real solutions).

It turns out that when h = 1, then A and B are distinct and positive. The graph we will get is given in Figure 1.8. As long as the population stays above B, which is approximately 1.55 million, the population will not die out. If the population ever drops below B, humans will die out, and the fast food restaurant serving them will go out of business.

When h = 1.6, then A = B = 4. There is only one critical point, and it is unstable. When the population starts above 4 million it will tend towards 4 million. If it ever drops below 4 million, humans will die out on the planet. This scenario is not one that we (as the human fast food proprietor) want to be in. A small perturbation of the equilibrium state and we are out of business. There is no room for error. See Figure 1.9.
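For the curious, here is the arithmetic behind these thresholds as a small sketch (ours), with the parameter values from the figures (k = 0.1, M = 8):

    import math

    def critical_points(k, M, h):
        # Real roots of -k x^2 + k M x - h = 0, by the quadratic formula.
        disc = (k * M)**2 - 4 * h * k
        if disc < 0:
            return None  # no real critical points
        root = math.sqrt(disc)
        return ((k*M + root) / (2*k), (k*M - root) / (2*k))

    print(critical_points(0.1, 8, 1))    # about (6.45, 1.55): A and B when h = 1
    print(critical_points(0.1, 8, 1.6))  # (4.0, 4.0): the single critical point
    print(critical_points(0.1, 8, 2))    # None: harvesting 2 million per year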

Finally, if we are harvesting at 2 million humans per year, the population will always plummet towards zero, no matter how well stocked the planet starts. See Figure 1.10.

[Figure 1.8: Slope field and some solutions of x' = 0.1x(8 - x) - 1.]

[Figure 1.9: Slope field and some solutions of x' = 0.1x(8 - x) - 1.6.]

[Figure 1.10: Slope field and some solutions of x' = 0.1x(8 - x) - 2.]

1.6.1 Exercises

Exercise 1.6.3: Let x' = x^2. a) Draw the phase diagram, find the critical points, and mark them stable or unstable. b) Sketch typical solutions of the equation. c) Find \lim_{t\to\infty} x(t) for the solution with the initial condition x(0) = -1.

Exercise 1.6.4: Let x' = \sin x. a) Draw the phase diagram for -4\pi \le x \le 4\pi. On this interval mark the critical points stable or unstable. b) Sketch typical solutions of the equation. c) Find \lim_{t\to\infty} x(t) for the solution with the initial condition x(0) = 1.

Exercise 1.6.5: Suppose f(x) is positive for 0 < x < 1 and negative otherwise. a) Draw the phase diagram for x' = f(x), find the critical points, and mark them stable or unstable. b) Sketch typical solutions of the equation. c) Find \lim_{t\to\infty} x(t) for the solution with the initial condition x(0) = 0.5.

Exercise 1.6.6: Start with the logistic equation \frac{dx}{dt} = kx(M - x). Suppose that we modify our harvesting. That is, we will only harvest an amount proportional to the current population; that is, we harvest hx for some h > 0. a) Construct the differential equation. b) Show that if kM > h, then the equation is still logistic. c) What happens when kM < h?


1.7 Numerical methods: Euler's method

Note: 1 lecture, §2.4 in EP

At this point it may be good to first try Lab II and/or Project II from the IODE website: http://www.math.uiuc.edu/iode/.

The first thing to note is that, as we said before, it is generally very hard, if not impossible, to get a nice formula for the solution of the problem

y' = f(x, y),    y(x0) = y0.

What if we want to find the value of the solution at some particular x? Or perhaps we want to produce a graph of the solution to inspect its behavior?

Euler's method‡: We take x0 and compute the slope k = f(x0, y0). The slope is the change in y per unit change in x. We follow the line for an interval of length h. Hence if y = y0 at x0, then we will say that y1 (the approximate value of y at x1 = x0 + h) is y1 = y0 + hk. Rinse, repeat! That is, compute x2 and y2 using x1 and y1. For an example of the first two steps of the method see Figure 1.11.

Figure 1.11: First two steps of Euler's method with h = 1 for the equation y' = y^2/3 with initial conditions y(0) = 1.

More abstractly we compute

xi+1 = xi + h,    yi+1 = yi + h f(xi, yi).

By connecting the dots we get an approximate graph of the solution. Do note that this is not exactly the solution. See Figure 1.12 for the plot of the real solution.

‡Named after the Swiss mathematician Leonhard Paul Euler (1707-1783). Do note that the correct pronunciation of the name sounds more like "oiler."
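In code, the method is just a loop. Here is a minimal Python sketch (ours, not from IODE) of the update rule above, applied to the example discussed below:

    def euler(f, x0, y0, h, steps):
        # repeatedly follow the slope k = f(x, y) for an interval of length h
        x, y = x0, y0
        for _ in range(steps):
            y = y + h * f(x, y)
            x = x + h
        return y

    # y' = y^2/3 with y(0) = 1; two steps of size h = 1 approximate y(2)
    print(euler(lambda x, y: y**2 / 3, 0.0, 1.0, 1.0, 2))  # 1.9259...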


Figure 1.12: Two steps of Euler's method (step size 1) and the exact solution for the equation y' = y^2/3 with initial conditions y(0) = 1.

Let us see what happens with the equation y' = y^2/3, y(0) = 1. Let us try to approximate y(2) using Euler's method. In Figures 1.11 and 1.12 we have essentially graphically approximated y(2) with step size 1. With step size 1 we have y(2) ≈ 1.926. The real answer is 3. So we are approximately 1.074 off. Let us halve the step size. If you do the computation, you will find that y(2) ≈ 2.209, an error of about 0.791. Table 1.1 gives the values computed for various step sizes.

Exercise 1.7.1: Solve this equation exactly and show that y(2) = 3.

The difference between the actual solution and the approximate solution is what we call the error. We usually talk about just the size of the error, and we do not care much about its sign. The main point is that we usually do not know the real solution, so we only have a vague understanding of the error. If we knew the error exactly, what would be the point of doing the approximation?

We notice that, except for the first few times, every time we halved the interval the error approximately halved. This halving of the error is a general feature of Euler's method, as it is a first order method. In the IODE Project II you are asked to implement a second order method. A second order method reduces the error to approximately one quarter every time you halve the interval.

Note that to get the error to be within 0.1 of the answer we already had to do 64 steps. To get it to within 0.01 we would have to halve another three or four times, meaning doing 512 to 1024 steps. That is quite a bit to do by hand. The improved Euler method should quarter the error every time you halve the interval, so you would have to do approximately half as many "halvings" to get the same error. This reduction can be a big deal. With 10 halvings (starting at h = 1) you have 1024 steps, whereas with 5 halvings you only have to do 32 steps, assuming that the error was comparable to start with. A computer may not care about this difference for a problem this simple, but suppose each step takes a second to compute (the function may be substantially more difficult to compute than y^2/3). Then the difference is 32 seconds versus about 17 minutes. Note: We are not being altogether fair; a second order method would probably double the time to do each step. Even so, it is 1 minute versus 17 minutes. Next, suppose that you have to repeat such a calculation for different parameters a thousand times. You get the idea.


h           Approximate y(2)   Error            Error / Previous error
1           1.92592592593      1.07407407407
0.5         2.20861152999      0.791388470013   0.736809954840
0.25        2.47249414666      0.527505853335   0.666557415634
0.125       2.68033658758      0.319663412423   0.605990266083
0.0625      2.82040079550      0.179599204497   0.561838476090
0.03125     2.90412106479      0.095878935207   0.533849442573
0.015625    2.95035498158      0.049645018422   0.517788587396
0.0078125   2.97472419486      0.025275805142   0.509130743538

Table 1.1: Euler's method approximation of y(2), where y' = y^2/3, y(0) = 1.


Note that in practice we do not know the error! So how do you know what is the right step size? Essentially you keep halving the interval, and if you are lucky you can estimate the error from a few of these calculations and the assumption that the error goes down by a factor of one half each time (if you are using standard Euler).
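As an illustration (our own, not from the text), here is that estimate carried out on the last two rows of Table 1.1; Exercise 1.7.2 below asks you to carry out the same reasoning by hand:

    y_h    = 2.95035498158   # Euler approximation of y(2) with h = 0.015625
    y_half = 2.97472419486   # approximation with h = 0.0078125

    # If the error halves with h, then y_true - y_half is about y_half - y_h.
    print(y_half - y_h)      # about 0.0244; the table lists the true error as 0.0253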

Exercise 1.7.2: In the table above, suppose you do not know the error. Take the approximate values of the function in the last two lines and assume that the error goes down by a factor of 2. Can you estimate the error in the last line from this? Does it agree with the table? Now do it for the first two rows. Does this agree with the table?

Let us talk a little bit more about the example y' = y^2/3, y(0) = 1. Suppose that instead of y(2) we wish to find y(3). The results of this effort are listed in Table 1.2 for successive halvings of h. What is going on here? Well, you should solve the equation exactly, and you will notice that the solution does not exist at x = 3. In fact, the solution blows up.

Another case where things can go bad is when the solution oscillates wildly near some point. Such an example is given in IODE Project II. In this case, the solution may exist at all points, but even a better approximation method than Euler would need an insanely small step size to compute the solution with reasonable precision. And computers might not be able to handle such a small step size anyway.

In real applications you would not use a simple method such as Euler's. The simplest method that would probably be used in a real application is the standard Runge-Kutta method (we will not describe it here). That is a fourth order method, which means that if you halve the interval, the error generally goes down by a factor of 16.


h           Approximate y(3)
1           3.16232281664
0.5         4.54328915766
0.25        6.86078752222
0.125       10.8032064113
0.0625      17.5989264104
0.03125     29.4600446195
0.015625    50.4012144477
0.0078125   87.7576927770

Table 1.2: Attempts to use Euler's method to approximate y(3), where y' = y^2/3, y(0) = 1.

Choosing the right method to use and the right step size can be very tricky. There are several competing factors to consider.

• Computational time: Each step takes computer time. Even if the function f is simple to compute, you do it many times over. A large step size means faster computation, but perhaps not the required precision.

• Roundoff errors: Computers only compute with a certain number of significant digits. Errors introduced by rounding numbers off during your computations become noticeable when the step size becomes too small relative to the quantities you are working with. So reducing the step size may in fact make the errors worse.

• Stability: Certain equations may be numerically unstable. Small errors lead to large errors down the line. In the worst case, the numerical computations might be giving you bogus numbers that look like a correct answer. Just because the numbers have stabilized after successive halving does not mean that you must have the right answer. It may also happen that the numbers never stabilize, no matter how many times you halve the interval.

You have seen just the beginnings of the challenges that appear in real applications. There is ongoing active research by engineers and mathematicians on how to do numerical approximation in the best way. For example, the general purpose method used for the ODE solver in Matlab and Octave (as of this writing) appeared in the literature only in the 1980s.

1.7.1 Exercises

Exercise 1.7.3: Consider dx/dt = (2t − x)^2, x(0) = 2. Use Euler's method with step size h = 0.5 to approximate x(1).


Exercise 1.7.4: Consider dx/dt = t − x, x(0) = 1. a) Use Euler's method with step sizes h = 1, 1/2, 1/4, 1/8 to approximate x(1). b) Solve the equation exactly. c) Describe what happens to the errors for each h you used. That is, find the factor by which the error changed each time you halved the interval.


Chapter 2

Higher order linear ODEs

2.1 Second order linear ODEs

Note: less than 1 lecture, first part of §3.1 in EP

Let us consider the general second order linear differential equation

A(x)y'' + B(x)y' + C(x)y = F(x).

We usually divide through by A to get

y'' + p(x)y' + q(x)y = f(x),    (2.1)

where p = B/A, q = C/A, and f = F/A. The word linear means that the equation contains no powers nor functions of y, y', and y''.

In the special case when f(x) = 0, we have a homogeneous equation:

y'' + p(x)y' + q(x)y = 0.    (2.2)

We have already seen some second order linear homogeneous equations.

y'' + k^2y = 0    Two solutions are: y1 = cos kx, y2 = sin kx.
y'' − k^2y = 0    Two solutions are: y1 = e^{kx}, y2 = e^{−kx}.

If we know two solutions to a linear homogeneous equation, we know many more of them.

Theorem 2.1.1 (Superposition). Suppose y1 and y2 are two solutions of the homogeneous equation (2.2). Then

y(x) = C1y1(x) + C2y2(x)

also solves (2.2) for arbitrary constants C1 and C2.


That is, we can add solutions together and multiply them by constants to obtain new, different solutions. We will prove this theorem because the proof is very enlightening and illustrates how linear equations work.

Proof: Let y = C1y1 + C2y2. Then

y'' + py' + qy = (C1y1 + C2y2)'' + p(C1y1 + C2y2)' + q(C1y1 + C2y2)
              = C1y1'' + C2y2'' + C1py1' + C2py2' + C1qy1 + C2qy2
              = C1(y1'' + py1' + qy1) + C2(y2'' + py2' + qy2)
              = C1 · 0 + C2 · 0 = 0.

The proof becomes even simpler to state if we use operator notation. An operator is an object that eats functions and spits out functions (kind of like what a function is, but a function eats numbers and spits out numbers). Define the operator L by

Ly = y'' + py' + qy.

L being linear means that L(C1y1 + C2y2) = C1Ly1 + C2Ly2. Hence the proof simply becomes

Ly = L(C1y1 + C2y2) = C1Ly1 + C2Ly2 = C1 · 0 + C2 · 0 = 0.

Two other solutions to the second equation, y'' − k^2y = 0, are y1 = cosh kx and y2 = sinh kx. Let us remind ourselves of the definitions: cosh x = (e^x + e^{−x})/2 and sinh x = (e^x − e^{−x})/2. Therefore, these are solutions by superposition, as they are linear combinations of the two exponential solutions.

As sinh and cosh are sometimes more convenient to use than the exponential, let us review some of their properties.

cosh 0 = 1,    sinh 0 = 0,

d/dx cosh x = sinh x,    d/dx sinh x = cosh x,

cosh^2 x − sinh^2 x = 1.

Exercise 2.1.1: Derive these properties from the definitions of sinh and cosh in terms of exponentials.

Linear equations have nice and simple answers to the existence and uniqueness question.

Theorem 2.1.2 (Existence and uniqueness). Suppose p, q, f are continuous functions and a, b0, b1 are constants. The equation

y'' + p(x)y' + q(x)y = f(x)

has exactly one solution y(x) satisfying the initial conditions

y(a) = b0,    y'(a) = b1.


For example, the equation y'' + y = 0 with y(0) = b0 and y'(0) = b1 has the solution

y(x) = b0 cos x + b1 sin x.

Or the equation y'' − y = 0 with y(0) = b0 and y'(0) = b1 has the solution

y(x) = b0 cosh x + b1 sinh x.

Note that using cosh and sinh here allows us to solve for the initial conditions much more easily than if we had used the exponentials.

The initial condition for a second order ODE consists of two equations. So if we have two arbitrary constants, we should be able to solve for the constants and find a solution satisfying the initial conditions.

Question: Suppose we find two different solutions y1 and y2 to the homogeneous equation (2.2). Can every solution be written (using superposition) in the form y = C1y1 + C2y2?

The answer is affirmative, provided that y1 and y2 are different enough in the following sense. We will say y1 and y2 are linearly independent if one is not a constant multiple of the other. If you find two linearly independent solutions, then every other solution can be written in the form

y = C1y1 + C2y2.

In this case, y = C1y1 + C2y2 is the general solution.

For example, we found the solutions y1 = sin x and y2 = cos x for the equation y'' + y = 0. It is obvious that sin and cos are not multiples of each other. Indeed, if sin x = A cos x for some constant A, then letting x = 0 would imply A = 0, and then sin x would be zero for all x, which is preposterous. So y1 and y2 are linearly independent, and hence

y = C1 cos x + C2 sin x

is the general solution to y'' + y = 0.

2.1.1 Exercises

Exercise 2.1.2: Show that y = e^x and y = e^{2x} are linearly independent.

Exercise 2.1.3: Take y'' + 5y = 10x + 5. Can you guess a solution?

Exercise 2.1.4: Prove the superposition principle for nonhomogeneous equations. Suppose that y1 is a solution to Ly1 = f(x) and y2 is a solution to Ly2 = g(x) (same linear operator L). Show that y = y1 + y2 solves Ly = f(x) + g(x).

Exercise 2.1.5: For the equation x^2y'' − xy' = 0, find two solutions, show that they are linearly independent, and find the general solution. Hint: Try y = x^r.


Note that equations of the form ax^2y'' + bxy' + cy = 0 are called Euler's equations or Cauchy-Euler equations. They are solved by trying y = x^r and solving for r (we can assume that x ≥ 0 for simplicity).

Exercise 2.1.6: Suppose that (b − a)^2 − 4ac > 0. a) Find a formula for the general solution of ax^2y'' + bxy' + cy = 0. Hint: Try y = x^r and find a formula for r. b) What happens when (b − a)^2 − 4ac = 0 or (b − a)^2 − 4ac < 0?

We will revisit the case when (b − a)^2 − 4ac < 0 later.

Exercise 2.1.7: Suppose that (b − a)^2 − 4ac = 0. Find a formula for the general solution of ax^2y'' + bxy' + cy = 0. Hint: Try y = x^r ln x for the second solution.

If you have one solution to a second order linear homogeneous equation, you can find another one. This is the reduction of order method.

Exercise 2.1.8: Suppose y1 is a solution to y'' + p(x)y' + q(x)y = 0. Show that

y2(x) = y1(x) ∫ ( e^{−∫ p(x) dx} / (y1(x))^2 ) dx

is also a solution.

Let us solve some famous equations.

Exercise 2.1.9 (Chebyshev's equation of order 1): Take (1 − x^2)y'' − xy' + y = 0. a) Show that y = x is a solution. b) Use reduction of order to find a second linearly independent solution. c) Write down the general solution.

Exercise 2.1.10 (Hermite's equation of order 2): Take y'' − 2xy' + 4y = 0. a) Show that y = 1 − 2x^2 is a solution. b) Use reduction of order to find a second linearly independent solution. c) Write down the general solution.


2.2 Constant coefficient second order linear ODEs

Note: more than 1 lecture, second part of §3.1 in EP

Suppose we have the problem

y'' − 6y' + 8y = 0,    y(0) = −2,    y'(0) = 6.

This is a second order linear homogeneous equation with constant coefficients. Constant coefficients means that the functions in front of y'', y', and y are constants, not depending on x.

Think about a function that stays essentially the same when you differentiate it, so that we can take the function and its derivatives, add these together, and end up with zero.

Let us try a solution of the form y = e^{rx}. Then y' = re^{rx} and y'' = r^2e^{rx}. Plug in to get

y'' − 6y' + 8y = 0,
r^2e^{rx} − 6re^{rx} + 8e^{rx} = 0,
r^2 − 6r + 8 = 0    (divide through by e^{rx}),
(r − 2)(r − 4) = 0.

So if r = 2 or r = 4, then e^{rx} is a solution. Let y1 = e^{2x} and y2 = e^{4x}.

Exercise 2.2.1: Check that y1 and y2 are solutions.

The functions e^{2x} and e^{4x} are linearly independent. If they were not, we could write e^{4x} = Ce^{2x}, which would imply that e^{2x} = C, which is clearly not possible. Hence, we can write the general solution as

y = C1e^{2x} + C2e^{4x}.

We need to solve for C1 and C2. To apply the initial conditions, we first find y' = 2C1e^{2x} + 4C2e^{4x}. We plug in x = 0 and solve.

−2 = y(0) = C1 + C2,
6 = y'(0) = 2C1 + 4C2.

Either apply some matrix algebra or just solve these by high school algebra. For example, divide the second equation by 2 to obtain 3 = C1 + 2C2, and subtract the two equations to get 5 = C2. Then C1 = −7, as −2 = C1 + 5. Hence, the solution we are looking for is

y = −7e^{2x} + 5e^{4x}.
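The little 2-by-2 system above is exactly the kind of thing a computer solves instantly; a quick sketch of ours, assuming numpy is available:

    import numpy as np

    # rows encode C1 + C2 = -2 and 2 C1 + 4 C2 = 6
    A = np.array([[1.0, 1.0], [2.0, 4.0]])
    b = np.array([-2.0, 6.0])
    print(np.linalg.solve(A, b))   # [-7.  5.], matching C1 = -7, C2 = 5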

Let us generalize this example into a method. Suppose that we have an equation

ay'' + by' + cy = 0,    (2.3)


where a, b, c are constants. Try the solution y = e^{rx} to obtain

ar^2e^{rx} + bre^{rx} + ce^{rx} = 0,
ar^2 + br + c = 0.

The equation ar^2 + br + c = 0 is called the characteristic equation of the ODE. Solve for r by using the quadratic formula:

r1, r2 = (−b ± √(b^2 − 4ac)) / (2a).

Therefore, we have e^{r1x} and e^{r2x} as solutions. There is still a difficulty if r1 = r2, but it is not hard to overcome.

Theorem 2.2.1. Suppose that r1 and r2 are the roots of the characteristic equation.

(i) If r1 and r2 are distinct and real (b^2 − 4ac > 0), then (2.3) has the general solution

y = C1e^{r1x} + C2e^{r2x}.

(ii) If r1 = r2 (b^2 − 4ac = 0), then (2.3) has the general solution

y = (C1 + C2x)e^{r1x}.

For another example of the first case, consider the equation y'' − k^2y = 0. Here the characteristic equation is r^2 − k^2 = 0, or (r − k)(r + k) = 0, and hence e^{−kx} and e^{kx} are the two linearly independent solutions.

Example 2.2.1: Find the general solution of

y'' − 8y' + 16y = 0.

The characteristic equation is r^2 − 8r + 16 = (r − 4)^2 = 0. Hence there is a double root r1 = r2 = 4. The general solution is, therefore,

y = (C1 + C2x)e^{4x} = C1e^{4x} + C2xe^{4x}.

Exercise 2.2.2: Check that e^{4x} and xe^{4x} are linearly independent.

That e^{4x} solves the equation is clear. If xe^{4x} solves the equation, then we know we are done. Let us compute: y' = e^{4x} + 4xe^{4x} and y'' = 8e^{4x} + 16xe^{4x}. Plug in:

y'' − 8y' + 16y = 8e^{4x} + 16xe^{4x} − 8(e^{4x} + 4xe^{4x}) + 16xe^{4x} = 0.

We should note that in practice, a doubled root rarely happens. If you pick your coefficients truly randomly, you are very unlikely to get a doubled root.

Let us give a short "proof" of why the solution xe^{rx} works when the root is doubled. This case is really a limiting case of the situation where the two roots are distinct and very close. Note that (e^{r2x} − e^{r1x})/(r2 − r1) is a solution when the roots are distinct. As r2 goes to r1 in the limit, this is like taking the derivative of e^{rx} using r as the variable. This limit is xe^{rx}, and hence this is also a solution in the doubled root case.
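A computer algebra system can confirm this limit; a sketch of ours, assuming sympy is installed:

    import sympy as sp

    x, r1, r2 = sp.symbols('x r1 r2')
    print(sp.limit((sp.exp(r2*x) - sp.exp(r1*x)) / (r2 - r1), r2, r1))
    # x*exp(r1*x), the second solution for a doubled root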


2.2.1 Complex numbers and Euler's formula

It may happen that a polynomial has some complex roots. For example, the equation r^2 + 1 = 0 has no real roots, but it does have two complex roots. Here we review some properties of complex numbers.

Complex numbers may seem a strange concept, especially because of the terminology. There is nothing imaginary or really complicated about complex numbers. A complex number is really just a pair of real numbers, (a, b). We can think of a complex number as a point in the plane. We add complex numbers in the straightforward way, and we define multiplication by

(a, b) × (c, d) := (ac − bd, ad + bc).

It turns out that with this multiplication rule, all the standard properties of arithmetic hold. Further, and most importantly, (0, 1) × (0, 1) = (−1, 0).

Generally we just write (a, b) as a + ib, and we treat i as if it were an unknown. We do arithmetic with complex numbers just as we would with polynomials. The property we just mentioned becomes i^2 = −1. So whenever you see i^2, you can replace it by −1. Also, for example, i and −i are roots of r^2 + 1 = 0.

Note that engineers often use the letter j instead of i for the square root of −1. We will use the mathematicians' convention and use i.

Exercise 2.2.3: Make sure you understand (that you can justify) the following identities:

• i^2 = −1, i^3 = −i, i^4 = 1,

• 1/i = −i,

• (3 − 7i)(−2 − 9i) = · · · = −69 − 13i,

• (3 − 2i)(3 + 2i) = 3^2 − (2i)^2 = 3^2 + 2^2 = 13,

• 1/(3 − 2i) = (1/(3 − 2i)) · ((3 + 2i)/(3 + 2i)) = (3 + 2i)/13 = 3/13 + (2/13)i.

We can also define the exponential e^{a+ib} of a complex number, by writing down the Taylor series and plugging in the complex number. Because most properties of the exponential can be proved by looking at the Taylor series, these properties still hold for the complex exponential. For example, e^{x+y} = e^xe^y. This means that e^{a+ib} = e^ae^{ib}, and hence if we can compute e^{ib} easily, we can compute e^{a+ib}. Here we use the so-called Euler's formula.

Theorem 2.2.2 (Euler's formula).

e^{iθ} = cos θ + i sin θ    and    e^{−iθ} = cos θ − i sin θ.
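The formula is easy to test numerically; for instance, in Python (our own quick check, using only the standard library):

    import cmath, math

    theta = 0.7   # any angle will do
    lhs = cmath.exp(1j * theta)
    rhs = complex(math.cos(theta), math.sin(theta))
    print(abs(lhs - rhs))   # essentially 0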


Exercise 2.2.4: Using Euler's formula, check the identities:

cos θ = (e^{iθ} + e^{−iθ})/2    and    sin θ = (e^{iθ} − e^{−iθ})/(2i).

Exercise 2.2.5: Double angle identities: Start with e^{i(2θ)} = (e^{iθ})^2. Use Euler's formula on each side and deduce:

cos 2θ = cos^2 θ − sin^2 θ    and    sin 2θ = 2 sin θ cos θ.

We will also need some notation. For a complex number a + ib, we call a the real part and b the imaginary part of the number. In notation this is

Re(a + ib) = a    and    Im(a + ib) = b.

2.2.2 Complex roots

So now suppose that the equation ay'' + by' + cy = 0 has the characteristic equation ar^2 + br + c = 0 with complex roots. By the quadratic formula the roots are (−b ± √(b^2 − 4ac))/(2a). These are complex if b^2 − 4ac < 0. In this case we can see that the roots are

r1, r2 = −b/(2a) ± i √(4ac − b^2)/(2a).

As you can see, we always get a pair of roots of the form α ± iβ. In this case we can still write the solution as

y = C1e^{(α+iβ)x} + C2e^{(α−iβ)x}.

However, the exponential is now complex valued. We would need to choose C1 and C2 to be complex numbers to obtain a real valued solution (which is what we are after). While there is nothing particularly wrong with this, it can make calculations harder, and it would be nice to find two real valued solutions.

Here we can use Euler's formula. First let

y1 = e^{(α+iβ)x}    and    y2 = e^{(α−iβ)x}.

Then note that

y1 = e^{αx} cos βx + ie^{αx} sin βx,
y2 = e^{αx} cos βx − ie^{αx} sin βx.

Linear combinations of solutions are also solutions. Hence

y3 = (y1 + y2)/2 = e^{αx} cos βx,
y4 = (y1 − y2)/(2i) = e^{αx} sin βx

are also solutions. Furthermore, they are real valued. It is not hard to see that they are linearly independent (not multiples of each other). Therefore, we have the following theorem.


Theorem 2.2.3. Take the equation

ay'' + by' + cy = 0.

If the characteristic equation has the roots α ± iβ, then the general solution is

y = C1e^{αx} cos βx + C2e^{αx} sin βx.

Example 2.2.2: Find the general solution of y'' + k^2y = 0 for a constant k > 0.

The characteristic equation is r^2 + k^2 = 0. Therefore, the roots are r = ±ik, and by the theorem we have the general solution

y = C1 cos kx + C2 sin kx.

Example 2.2.3: Find the solution of y'' − 6y' + 13y = 0, y(0) = 0, y'(0) = 10.

The characteristic equation is r^2 − 6r + 13 = 0. By completing the square we get (r − 3)^2 + 2^2 = 0, and hence the roots are r = 3 ± 2i. By the theorem we have the general solution

y = C1e^{3x} cos 2x + C2e^{3x} sin 2x.

To find the solution satisfying the initial conditions, we first plug in zero to get

0 = y(0) = C1e^0 cos 0 + C2e^0 sin 0 = C1.

Hence C1 = 0, and so y = C2e^{3x} sin 2x. We differentiate:

y' = 3C2e^{3x} sin 2x + 2C2e^{3x} cos 2x.

We again plug in the initial condition and obtain 10 = y'(0) = 2C2, or C2 = 5. Hence the solution we are seeking is

y = 5e^{3x} sin 2x.

2.2.3 Exercises

Exercise 2.2.6: Find the general solution of 2y'' + 2y' − 4y = 0.

Exercise 2.2.7: Find the general solution of y'' + 9y' − 10y = 0.

Exercise 2.2.8: Solve y'' − 8y' + 16y = 0 for y(0) = 2, y'(0) = 0.

Exercise 2.2.9: Solve y'' + 9y' = 0 for y(0) = 1, y'(0) = 1.

Exercise 2.2.10: Find the general solution of 2y'' + 50y = 0.

Exercise 2.2.11: Find the general solution of y'' + 6y' + 13y = 0.


Exercise 2.2.12: Find the general solution of y'' = 0 using the methods of this section.

Exercise 2.2.13: The method of this section applies to equations of orders other than two. We will see higher orders later. Try to solve the first order equation 2y' + 3y = 0 using the methods of this section.

Exercise 2.2.14: Let us revisit Euler's equations of Exercise 2.1.6. Suppose now that (b − a)^2 − 4ac < 0. Find a formula for the general solution of ax^2y'' + bxy' + cy = 0. Hint: Note that x^r = e^{r ln x}.


2.3 Higher order linear ODEs

Note: 2 lectures, §3.2 and §3.3 in EP

In general, most equations that appear in applications tend to be second order. Higher order equations do appear from time to time, but it is a general assumption of modern physics that the world is "second order."

The basic results about linear ODEs of higher order are essentially the same as for second order equations, with 2 replaced by n. The important new concept here is linear independence. This concept is used in many other areas of mathematics, and even elsewhere in this course, so it is useful to understand it in detail.

For constant coefficient ODEs, the methods are slightly harder, but we will not dwell on them. You can always use the methods for systems of linear equations, which we will learn later in the course, to solve higher order constant coefficient equations.

So let us start with a general homogeneous linear equation:

y^(n) + p_{n−1}(x)y^(n−1) + · · · + p1(x)y' + p0(x)y = 0.    (2.4)

Theorem 2.3.1 (Superposition). Suppose y1, y2, . . . , yn are solutions of the homogeneous equation (2.4). Then

y(x) = C1y1(x) + C2y2(x) + · · · + Cnyn(x)

also solves (2.4) for arbitrary constants C1, . . . , Cn.

We also have the existence and uniqueness theorem for nonhomogeneous linear equations.

Theorem 2.3.2 (Existence and uniqueness). Suppose p0 through p_{n−1} and f are continuous functions and a, b0, b1, . . . , b_{n−1} are constants. The equation

y^(n) + p_{n−1}(x)y^(n−1) + · · · + p1(x)y' + p0(x)y = f(x)

has exactly one solution y(x) satisfying the initial conditions

y(a) = b0,    y'(a) = b1,    . . . ,    y^(n−1)(a) = b_{n−1}.

2.3.1 Linear independence

When we had two functions y1 and y2, we said they were linearly independent if one was not a multiple of the other. The same idea holds for n functions, in which case it is easier to state it as follows. The functions y1, y2, . . . , yn are linearly independent if

c1y1 + c2y2 + · · · + cnyn = 0

has only the trivial solution c1 = c2 = · · · = cn = 0. If we can write the equation with some nonzero constant, say c1 ≠ 0, then we can solve for y1 as a linear combination of the others. If the functions are not linearly independent, we say they are linearly dependent.


Example 2.3.1: Show that e^x, e^{2x}, e^{3x} are linearly independent.

Let us give several ways to show this. Most textbooks (including [EP] and [F]) introduce Wronskians, but they are really not necessary here.

Let us write down

c1e^x + c2e^{2x} + c3e^{3x} = 0.

Use rules of exponentials and write z = e^x. Then we have

c1z + c2z^2 + c3z^3 = 0.

The left hand side is a polynomial in z of degree at most three. It can either be identically zero or have at most 3 zeros. But it is zero for every z = e^x, that is, for all z > 0, so it must be identically zero. Hence c1 = c2 = c3 = 0 and the functions are linearly independent.

Let us try another way. Write

c1e^x + c2e^{2x} + c3e^{3x} = 0.

This equation has to hold for all x. Divide through by e^{3x} to get

c1e^{−2x} + c2e^{−x} + c3 = 0.

This is true for all x; therefore, let x → ∞. After taking the limit we see that c3 = 0. Hence our equation becomes

c1e^x + c2e^{2x} = 0.

Rinse, repeat!

How about yet another way. Write

c1e^x + c2e^{2x} + c3e^{3x} = 0.

We could evaluate at several different x to get equations for c1, c2 and c3. That might be a lot of computation. We can also take derivatives of both sides and then evaluate. Let us first divide by e^x for simplicity:

c1 + c2e^x + c3e^{2x} = 0.

Set x = 0 to get the equation c1 + c2 + c3 = 0. Now differentiate both sides:

c2e^x + 2c3e^{2x} = 0,

and set x = 0 to get c2 + 2c3 = 0. Finally, divide by e^x again and differentiate to get 2c3e^x = 0. It is clear that c3 is zero. Then c2 must be zero, as c2 = −2c3, and c1 must be zero because c1 + c2 + c3 = 0.

There is no one good way to do it. All of these methods are perfectly valid.

Example 2.3.2: On the other hand, the functions e^x, e^{−x}, and cosh x are linearly dependent. Simply apply the definition of the hyperbolic cosine:

cosh x = (e^x + e^{−x})/2,    or equivalently    2 cosh x − e^x − e^{−x} = 0.


2.3.2 Constant coefficient higher order ODEs

When we have a higher order constant coefficient homogeneous linear equation, the song and dance is exactly the same as it was for second order. We just need to find more solutions. If the equation is nth order, we need to find n linearly independent solutions. It is best seen by example.

Example 2.3.3: Find the general solution to

y''' − 3y'' − y' + 3y = 0.    (2.5)

Try: y = e^{rx}. We plug in and get

r^3e^{rx} − 3r^2e^{rx} − re^{rx} + 3e^{rx} = 0.

We divide out by e^{rx}. Then

r^3 − 3r^2 − r + 3 = 0.

The trick now is to find the roots. There is a formula for the roots of degree 3 and 4 polynomials, but it is very complicated. There is no formula for higher degree polynomials. That does not mean that the roots do not exist: there are always n roots for an nth degree polynomial, though they might be repeated and they might be complex. Computers are pretty good at finding roots approximately for reasonable size polynomials.

The best place to start is to plot the polynomial and check where it is zero. You can also try plugging in numbers. Sometimes it is a good idea to just start plugging in r = −2, −1, 0, 1, 2, . . . and see if you get a hit. There are also signs that you might have missed a root. For example, if you plug −2 into our polynomial you get −15, and if you plug in 0 you get 3. That means there is a root between −2 and 0, because the sign changed.

A good strategy at first is to look for the roots −1, 1, or 0; these are easy to see. Checking our polynomial, we note that r1 = −1 and r2 = 1 are roots. The last root is then reasonably easy to find: the constant term of a monic polynomial is the product of the negations of all its roots, because r^3 − 3r^2 − r + 3 = (r − r1)(r − r2)(r − r3). In our case we see that

3 = (−r1)(−r2)(−r3) = (1)(−1)(−r3) = r3.

You should check that r3 = 3 really is a root. Hence we know that e^{−x}, e^x and e^{3x} are solutions to (2.5). They are linearly independent, as can easily be checked, and there are 3 of them, which happens to be exactly the number we need. Hence the general solution is

y = C1e^{−x} + C2e^x + C3e^{3x}.

Suppose we were given the initial conditions y(0) = 1, y'(0) = 2, and y''(0) = 3. This leads to

1 = y(0) = C1 + C2 + C3,
2 = y'(0) = −C1 + C2 + 3C3,
3 = y''(0) = C1 + C2 + 9C3.


It is possible to find the solution by high school algebra, but it would be a pain. The only sensible way to solve a system of equations such as this is to use matrix algebra; see §3.2. For now we note that the solution is C1 = −1/4, C2 = 1, and C3 = 1/4. With this, the specific solution is

y = −(1/4)e^{−x} + e^x + (1/4)e^{3x}.
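Both steps of this example, finding the roots and solving for the constants, are easy to hand to a computer. A sketch of ours, assuming numpy is available:

    import numpy as np

    # roots of the characteristic polynomial r^3 - 3r^2 - r + 3
    print(np.roots([1, -3, -1, 3]))        # [ 3.  1. -1.]

    # constants from y(0) = 1, y'(0) = 2, y''(0) = 3
    A = np.array([[ 1.0, 1.0, 1.0],        # y(0):   C1 + C2 + C3
                  [-1.0, 1.0, 3.0],        # y'(0): -C1 + C2 + 3C3
                  [ 1.0, 1.0, 9.0]])       # y''(0): C1 + C2 + 9C3
    b = np.array([1.0, 2.0, 3.0])
    print(np.linalg.solve(A, b))           # [-0.25  1.    0.25]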

Next, suppose that we have real roots, but they are repeated. Let us say we have a root r repeated k times. In the spirit of the second order solution, we note the solutions

e^{rx}, xe^{rx}, x^2e^{rx}, . . . , x^{k−1}e^{rx}.

We take a linear combination of these solutions to find the general solution.

Example 2.3.4: Solve

y^(4) − 3y''' + 3y'' − y' = 0.

We note that the characteristic equation is

r^4 − 3r^3 + 3r^2 − r = 0.

By inspection we note that r^4 − 3r^3 + 3r^2 − r = r(r − 1)^3. Hence the roots given with multiplicity are r = 0, 1, 1, 1. Thus the general solution is

y = (c0 + c1x + c2x^2)e^x + c3,

where the terms in parentheses come from the triple root r = 1 and the constant term comes from r = 0.

Similarly to the second order case, we can handle complex roots; really, we only need to discuss repeated complex roots. Complex roots always come in pairs r = α ± iβ. If the pair is repeated k times, the corresponding solution is

(c0 + c1x + · · · + c_{k−1}x^{k−1})e^{αx} cos βx + (d0 + d1x + · · · + d_{k−1}x^{k−1})e^{αx} sin βx,

where c0, . . . , c_{k−1}, d0, . . . , d_{k−1} are arbitrary constants.

Example 2.3.5: Solve

y^(4) − 4y''' + 8y'' − 8y' + 4y = 0.

The characteristic equation is

r^4 − 4r^3 + 8r^2 − 8r + 4 = 0,
(r^2 − 2r + 2)^2 = 0,
((r − 1)^2 + 1)^2 = 0.

Hence the roots are 1 ± i, each with multiplicity 2. Hence the general solution is

y = (c0 + c1x)e^x cos x + (d0 + d1x)e^x sin x.

The way we solved the characteristic equation above is really by guessing or by inspection. That is not so easy in general. We could also have asked a computer or an advanced calculator for the roots.


2.3.3 Exercises

Exercise 2.3.1: Find the general solution for y''' − y'' + y' − y = 0.

Exercise 2.3.2: Find the general solution for y^(4) − 5y''' + 6y'' = 0.

Exercise 2.3.3: Find the general solution for y''' + 2y'' + 2y' = 0.

Exercise 2.3.4: Suppose that the characteristic equation for an equation is (r − 1)^2(r − 2)^2 = 0. a) Find such an equation. b) Find its general solution.

Exercise 2.3.5: Suppose that a fourth order equation has the solution y = 2xe^{4x} cos x. a) Find such an equation. b) Find the initial conditions that the given solution satisfies.

Exercise 2.3.6: Find the general solution for the equation of Exercise 2.3.5.

Exercise 2.3.7: Let f(x) = e^x − cos x, g(x) = e^x + cos x, and h(x) = cos x. Are f(x), g(x), and h(x) linearly independent? If so, show it; if not, find a linear combination that works.

Exercise 2.3.8: Let f(x) = 0, g(x) = cos x, and h(x) = sin x. Are f(x), g(x), and h(x) linearly independent? If so, show it; if not, find a linear combination that works.

Exercise 2.3.9: Are x, x^2, and x^4 linearly independent? If so, show it; if not, find a linear combination that works.

Exercise 2.3.10: Are e^x, xe^x, and x^2e^x linearly independent? If so, show it; if not, find a linear combination that works.


2.4 Mechanical vibrations

Note: 2 lectures, §3.4 in EP

We want to look at some applications of linear second order constant coefficient equations.

2.4.1 Some examples

Our first example is a mass on a spring. Suppose we have a mass m > 0 (in kilograms, for instance) connected by a spring with spring constant k > 0 (in newtons per meter, perhaps) to a fixed wall. Furthermore, there is some external force F(t) acting on the mass, and there is some friction in the system, measured by a constant c ≥ 0.

Let x be the displacement of the mass (x = 0 is the rest position), with x growing to the right (away from the wall). By Hooke's law, the force exerted by the spring is proportional to the compression of the spring; it is kx in the negative direction. Similarly, the amount of force exerted by friction is proportional to the velocity of the mass. By Newton's second law we know that force equals mass times acceleration, and hence

mx'' + cx' + kx = F(t).

This is a linear second order constant coefficient ODE. Let us set up some terminology about this equation. We say the motion is

(i) forced, if F ≢ 0 (F is not identically zero),

(ii) unforced or free, if F ≡ 0,

(iii) damped, if c > 0, and

(iv) undamped, if c = 0.
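Although we will solve these equations by hand in what follows, the model is also easy to explore numerically. Here is a hedged Python sketch (assuming scipy is available; parameter values and names are our own) that integrates the free damped case by rewriting the equation as a first order system:

    import numpy as np
    from scipy.integrate import solve_ivp

    m, c, k = 1.0, 0.5, 4.0            # sample mass, damping, spring constant

    def rhs(t, u):
        x, v = u                       # u = (position, velocity)
        F = 0.0                        # free motion; replace with a function of t to force it
        return [v, (F - c*v - k*x) / m]

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], max_step=0.01)
    print(sol.y[0, -1])                # displacement at t = 10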

This system appears in lots of applications, even if it does not at first seem like it. Many real world scenarios can be simplified to a mass on a spring. For example, a bungee jump setup is essentially a spring and mass system (you are the mass). It would be good if someone did the math before you jump off, right? Let us give 2 other examples.

Here is an example for electrical engineers. Suppose that you have the pictured RLC circuit: a resistor with a resistance of R ohms, an inductor with an inductance of L henries, and a capacitor with a capacitance of C farads, together with an electric source (such as a battery) giving a voltage of E(t) volts at time t (measured in seconds). Let Q(t) be the charge in coulombs on the capacitor and I(t) be the current in the circuit. The relation between the two is


Q' = I. Furthermore, by elementary principles we have that LI' + RI + Q/C = E. Differentiating, we get

LI''(t) + RI'(t) + (1/C)I(t) = E'(t).

This is a nonhomogeneous second order constant coefficient linear equation. Further, as L, R, and C are all positive, this system behaves just like the mass and spring system: the position of the mass is replaced by the current, mass is replaced by the inductance, damping is replaced by the resistance, and the spring constant is replaced by one over the capacitance. The change in voltage becomes the forcing function, and hence for constant voltage this is an unforced motion.

Our next example behaves like a mass and spring system only approximately. Suppose we have a mass m on a pendulum of length L. We wish to find an equation for the angle θ(t). Let g be the acceleration due to gravity. Elementary physics mandates that the equation is of the form

θ'' + (g/L) sin θ = 0.

This equation can be derived using Newton's second law, where force equals mass times acceleration. The acceleration is Lθ'' and the mass is m, so mLθ'' has to equal the tangential component of the force of gravity, which is mg sin θ in the opposite direction. The m curiously cancels from the equation.

Now we make our approximation. For small θ we have approximately sin θ ≈ θ. This can be seen by looking at the graph: in Figure 2.1 we can see that for approximately −0.5 < θ < 0.5 (in radians) the graphs of sin θ and θ are almost the same.

Figure 2.1: The graphs of sin θ and θ (in radians).


Therefore, when the swings are small, θ is always small and we can model the behavior by the simpler linear equation

θ'' + (g/L)θ = 0.

Note that the errors that we get from the approximation build up, so over a very long time the behavior might change more substantially. Also, we will see that in a mass-spring system the period does not depend on the amplitude; this is not true for a pendulum. But for reasonably short periods of time and small swings (for example, if the length of the pendulum is very large), the behavior is reasonably close.

In real world problems it is very often necessary to make these types of simplifications. It is therefore good to understand both the mathematics and the physics of the situation, to see if the simplification is valid in the context of the questions we are trying to answer.

2.4.2 Free undamped motion

In this section we will only consider free or unforced motion, as we cannot yet solve nonhomogeneous equations. Let us first start with undamped motion, where c = 0, so we have the equation

mx'' + kx = 0.

If we divide out by m and let ω0 be the number such that ω0^2 = k/m, we can write the equation as

x'' + ω0^2 x = 0.

The general solution to this equation is

x(t) = A cos ω0t + B sin ω0t.

First we notice that, by a trigonometric identity, for two other constants C and γ we have

A cos ω0t + B sin ω0t = C cos(ω0t − γ).

It is not hard to compute that C = √(A^2 + B^2) and tan γ = B/A. Therefore, we can write x(t) = C cos(ω0t − γ), and let C and γ be our arbitrary constants.

Exercise 2.4.1: Justify this identity and verify the equations for C and γ.

While it is generally easier to use the first form, with A and B, to find these constants given the initial conditions, the second form is much more natural. The constants C and γ have a very nice interpretation. Looking at the form of the solution

x(t) = C cos(ω0t − γ),


we can see that the amplitude is C, ω0 is the (angular) frequency, and γ is the so-called phase shift, which just shifts the graph left or right. We call ω0 the natural (angular) frequency. This kind of motion is usually called simple harmonic motion.

A note about the word angular before the frequency: ω0 is given in radians per unit time, not in cycles per unit time, which is the usual measure of frequency. Because one cycle is 2π radians, the usual frequency is given by ω0/(2π). It is simply a matter of where we put the constant 2π, and that is a matter of taste.

The period of the motion is one over the frequency (in cycles per unit time) and hence 2π/ω0. That is the amount of time it takes to complete one full oscillation.

Example 2.4.1: Suppose that m = 2 kg and k = 8 N/m. The whole setup is on a truck that was travelling at 1 m/s when it suddenly crashes and stops. The mass was rigged 0.5 meters forward from the rest position, and gets loose in the crash and starts oscillating. What is the frequency of the resulting oscillation, and what is the amplitude? The units are the mks units (meters-kilograms-seconds).

The setup means that the mass was at half a meter in the positive direction during the crash and, relative to the wall the spring is mounted to, the mass was moving forward (in the positive direction) at 1 m/s. This gives us the initial conditions.

So the equation with initial conditions is

2x'' + 8x = 0,    x(0) = 0.5,    x'(0) = 1.

We can directly compute ω0 = √(k/m) = √4 = 2. Hence the angular frequency is 2. The usual frequency in hertz (cycles per second) is 2/(2π) = 1/π ≈ 0.318.

The general solution is

x(t) = A cos 2t + B sin 2t.

Letting x(0) = 0.5 means A = 0.5. Then x'(t) = −2(0.5) sin 2t + 2B cos 2t. Letting x'(0) = 1 we get B = 0.5. Therefore, the amplitude is C = √(A^2 + B^2) = √0.5 ≈ 0.707. The solution is

x(t) = 0.5 cos 2t + 0.5 sin 2t.

A plot is shown in Figure 2.2.

For the free undamped motion, a solution of the form

x(t) = A cos ω0t + B sin ω0t

corresponds to the initial conditions x(0) = A and x'(0) = ω0B. So it is much easier to figure out A and B from the initial conditions than the amplitude and phase shift. In the example, we have already found the amplitude C. Let us compute the phase shift. We know that tan γ = B/A = 1. We take the arctangent of 1 and get approximately 0.785. Unfortunately, if you remember, we still


Figure 2.2: Simple undamped oscillation.

need to check whether this γ is in the right quadrant. Since both B and A are positive, γ should be in the first quadrant, and 0.785 radians really is in the first quadrant.

Note: Many calculators and computer software have not only the atan function for arctangent, but also what is sometimes called atan2. This function takes two arguments, B and A, and returns a γ in the correct quadrant for you.
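For instance, in Python (a small sketch of ours, using the numbers from the example above):

    import math

    A, B = 0.5, 0.5
    C = math.hypot(A, B)        # amplitude sqrt(A^2 + B^2), about 0.707
    gamma = math.atan2(B, A)    # phase shift in the correct quadrant, about 0.785
    print(C, gamma)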

2.4.3 Free damped motion

Let us now focus on damped motion. We rewrite the equation

mx'' + cx' + kx = 0

as

x'' + 2px' + ω0^2 x = 0,

where

ω0 = √(k/m),    p = c/(2m).

The characteristic equation is

r^2 + 2pr + ω0^2 = 0.

Using the quadratic formula, we get that the roots are

r = −p ± √(p^2 − ω0^2).

The form of the solution depends on whether we get complex or real roots, and this depends on the sign of

p^2 − ω0^2 = (c/(2m))^2 − k/m = (c^2 − 4km)/(4m^2).


The sign of p^2 − ω0^2 is the same as the sign of c^2 − 4km.

Overdamping

When c^2 − 4km > 0, we say the system is overdamped. In this case, there are two distinct real roots r1 and r2. Notice that both roots are negative: √(p^2 − ω0^2) is always less than p, so −p ± √(p^2 − ω0^2) is always negative.

Hence the solution is

x(t) = C1e^{r1t} + C2e^{r2t}.

Figure 2.3: Overdamped motion for several different initial conditions.

Note that since r1 and r2 are negative, x(t) → 0 as t → ∞. This means that the mass will just tend towards the rest position as time goes to infinity. For a few sample plots for different initial conditions, see Figure 2.3.

Do note that no oscillation happens. In fact, the graph will cross the x axis at most once. To see this fact, we try to solve 0 = C1e^{r1t} + C2e^{r2t}. Hence C1e^{r1t} = −C2e^{r2t}, and so

−C1/C2 = e^{(r2−r1)t}.

This equation has at most one solution t ≥ 0.

Example 2.4.2: Suppose the mass is released from rest. That is, x(0) = x0 and x'(0) = 0. Then

x(t) = (x0/(r1 − r2)) (r1e^{r2t} − r2e^{r1t}).

It is not hard to see that this satisfies the initial conditions.

Critical damping

When c^2 − 4km = 0, we say the system is critically damped. In this case, there is one root of multiplicity 2, and this root is −p. Therefore, our solution is

x(t) = C1e^{−pt} + C2te^{−pt}.

The behavior of a critically damped system is very similar to that of an overdamped system; after all, a critically damped system is in some sense a limit of overdamped systems. Since these equations are really only an approximation to the real world, in reality we are never critically damped; it is a state you can only reach in theory. You are always a little bit underdamped or a little bit overdamped. It is better not to dwell on critical damping.


Underdamping

When c^2 − 4km < 0, we say the system is underdamped. In this case, the roots are complex:

r = −p ± √(p^2 − ω0^2)
  = −p ± √(−1) √(ω0^2 − p^2)
  = −p ± iω1,

where ω1 = √(ω0^2 − p^2). Our solution is

x(t) = e^{−pt}(A cos ω1t + B sin ω1t),

or

x(t) = Ce^{−pt} cos(ω1t − γ).

An example plot is given in Figure 2.4. Note that we still have x(t) → 0 as t → ∞.

Figure 2.4: Underdamped motion with the envelope curves shown.

In the figure we also show the envelope curves Ce^{−pt} and −Ce^{−pt}. The solution is the oscillating curve between the two envelope curves. The envelope curves give the maximum amplitude of the oscillation at any given point in time. For example, if you are bungee jumping, you are really interested in computing the envelope curve, so that you do not hit the concrete with your head.

The phase shift γ just shifts the graph left or right, but within the envelope curves (the envelope curves, of course, do not change if γ changes).

Finally note that the angular pseudo-frequency ω1 (we do not call it a frequency, since the solution is not really a periodic function) becomes smaller when the damping c (and hence p) becomes larger. This makes sense: if we keep increasing c, at some point the solution should start looking like the solution for critical damping or overdamping, which do not oscillate at all. When we change the damping just a little bit, we do not expect the behavior to change dramatically.

On the other hand, when c becomes smaller, ω1 approaches ω0 (it is always smaller) and the solution looks more and more like the steady periodic motion of the undamped case. The envelope curves become flatter and flatter as p goes to 0.

2.4.4 Exercises

Exercise 2.4.2: Consider a mass and spring system with mass m = 2, spring constant k = 3, and damping constant c = 1. a) Set up and find the general solution of the system. b) Is the system underdamped, overdamped or critically damped? c) If the system is not critically damped, find a c that makes the system critically damped.

Exercise 2.4.3: Do Exercise 2.4.2 for m = 3, k = 12, and c = 12.

Exercise 2.4.4: Using the mks units (meters-kilograms-seconds), suppose you have a spring with spring constant 4 N/m. You want to use it to weigh items. Assume no friction. You place the mass on the spring and put it in motion. a) You count and find that the frequency is 0.8 Hz (cycles per second). What is the mass? b) Find a formula for the mass m given the frequency ω in Hz.

Exercise 2.4.5: Suppose we add possible friction to Exercise 2.4.4. Further, suppose you do not know the spring constant, but you have two reference weights, 1 kg and 2 kg, to calibrate your setup. You put each in motion on your spring and measure the frequency. For the 1 kg weight you measured 0.8 Hz, for the 2 kg weight you measured 0.39 Hz. a) Find k (spring constant) and c (damping constant). b) Find a formula for the mass in terms of the frequency in Hz. c) For an unknown mass you measured 0.2 Hz. What is the mass?


2.5 Nonhomogeneous equations

Note: 2 lectures, §3.5 in EP

2.5.1 Solving nonhomogeneous equations

You have seen how to solve linear constant coefficient homogeneous equations. Now suppose that we drop the requirement of homogeneity. This usually corresponds to some outside input to the system we are trying to model, like the forcing function for the mechanical vibrations of the last section. That is, we have an equation such as

y'' + 5y' + 6y = 2x + 1.    (2.6)

Note that we still say this is a constant coefficient equation. We only require constants in front of y'', y', and y.

We will generally write Ly = 2x + 1 when the particular operator is not important. We solve (2.6) as follows. First, we find the general solution yc to the associated homogeneous equation

y'' + 5y' + 6y = 0.    (2.7)

We also find a single particular solution yp to (2.6) in some way, and then we know that

y = yc + yp

is the general solution to (2.6). We call yc the complementary solution.

Note that yp can be any solution. Suppose you find a different particular solution ỹp. Then write the difference w = ỹp − yp, plug w into the left hand side of the equation, and get

the di↵erence w = yp � yp. Then plug w into the left hand side of the equation and get

w'' + 5w' + 6w = (ỹp'' + 5ỹp' + 6ỹp) − (yp'' + 5yp' + 6yp) = (2x + 1) − (2x + 1) = 0.

In other words, using operator notation the calculation becomes simpler. As L is a linear operator, we can just write

Lw = L(ỹp − yp) = Lỹp − Lyp = (2x + 1) − (2x + 1) = 0.

So w = ỹp − yp is a solution to (2.7). Any two solutions of (2.6) differ by a solution to the homogeneous equation (2.7). The solution y = yc + yp includes all solutions to (2.6), since yc is the general solution to the homogeneous equation.

The moral of the story is that you can find the particular solution in any old way. You might find a different one by a different method (or by guessing) and still get the right general solution to the whole problem, even if it looks different and the constants you have to choose given the initial conditions are different.


2.5.2 Undetermined coefficients

The trick is to somehow, in a smart way, guess a solution to (2.6). Note that 2x + 1 is a polynomial, and the left hand side of the equation will be a polynomial if we let y be a polynomial of the same degree. So we will try

y = Ax + B.

We plug in:

y'' + 5y' + 6y = (Ax + B)'' + 5(Ax + B)' + 6(Ax + B) = 0 + 5A + 6Ax + 6B = 6Ax + (5A + 6B).

So 6Ax + (5A + 6B) = 2x + 1. Hence A = 1/3 and B = −1/9. That means yp = (1/3)x − 1/9 = (3x − 1)/9. Solving the complementary problem we get (Exercise!)

yc = C1e^{−2x} + C2e^{−3x}.

Hence the general solution to (2.6) is

y = C1e^{−2x} + C2e^{−3x} + (3x − 1)/9.

Now suppose we are further given the initial conditions y(0) = 0 and y'(0) = 1/3. First find y' = −2C1e^{−2x} − 3C2e^{−3x} + 1/3. Then

0 = y(0) = C1 + C2 − 1/9,
1/3 = y'(0) = −2C1 − 3C2 + 1/3.

We solve to get C1 = 1/3 and C2 = −2/9. Hence our solution is

y(x) = (1/3)e^{−2x} − (2/9)e^{−3x} + (3x − 1)/9 = (3e^{−2x} − 2e^{−3x} + 3x − 1)/9.

Exercise 2.5.1: Check that y really solves the equation.

Note: A common mistake is to solve for the constants using the initial conditions with yc alone and only adding the particular solution yp after that. That will not work. You need to first compute y = yc + yp and only then solve for the constants using the initial conditions.

Similarly, a right hand side consisting of exponentials or sines and cosines can be handled. For example:

y'' + 2y' + 2y = cos 2x.

Let us just find yp in this case. We notice that we may have to also guess sin 2x, since derivatives of cosine are sines. So we guess

y = A cos 2x + B sin 2x.


Plug in to the equation and we get

−4A cos 2x − 4B sin 2x − 4A sin 2x + 4B cos 2x + 2A cos 2x + 2B sin 2x = cos 2x.

The left hand side must equal the right hand side, so we group terms and get −4A + 4B + 2A = 1 and −4B − 4A + 2B = 0. That is, −2A + 4B = 1 and 2A + B = 0, and hence A = −1/10 and B = 1/5. So

yp = (−cos 2x + 2 sin 2x)/10.

In a similar way, if the right hand side contains exponentials, we guess exponentials. For example, if the equation is (where L is a linear constant coefficient operator)

Ly = e^{3x},

we will guess y = Ae^{3x}. We note also that using the product rule for differentiation gives us a way to combine these guesses. Really, if you can guess a form for y such that Ly has all the terms needed to form the right hand side, that is a good place to start. For example, for

Ly = (1 + 3x^2)e^{−x} cos πx

we will guess

y = (A + Bx + Cx^2)e^{−x} cos πx + (D + Ex + Fx^2)e^{−x} sin πx.

We plug in and then hopefully get equations that we can solve for A, B, C, D, E, F. As you can see, this can make for a very long and tedious calculation very quickly. C'est la vie!

There is one hiccup in all this. It could be that our guess actually solves the associated homogeneous equation. That is, suppose we have

y'' − 9y = e^{3x}.

We would love to guess y = Ae^{3x}, but if we plug this into the left hand side of the equation we get

y'' − 9y = 9Ae^{3x} − 9Ae^{3x} = 0 ≠ e^{3x}.

There is no way we can choose A to make the left hand side be e^{3x}. The trick in this case is to multiply our guess by x until we get rid of the duplication with the complementary solution. That is, first we compute yc (the solution to Ly = 0),

yc = C1e^{−3x} + C2e^{3x},

and we note that the e^{3x} term is a duplicate of our desired guess. We modify our guess to y = Axe^{3x} and notice there is no more duplication. Now we can go forward and try it. Note that y' = Ae^{3x} + 3Axe^{3x} and y'' = 6Ae^{3x} + 9Axe^{3x}. So

y'' − 9y = 6Ae^{3x} + 9Axe^{3x} − 9Axe^{3x} = 6Ae^{3x}.


We note that this is supposed to equal e^{3x}, and hence we find that 6A = 1, so A = 1/6. Thus we can now write the general solution as

y = yc + yp = C1e^{−3x} + C2e^{3x} + (1/6)xe^{3x}.
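A quick symbolic check that this particular solution works (a sketch of ours, assuming sympy is installed):

    import sympy as sp

    x = sp.symbols('x')
    yp = x * sp.exp(3*x) / 6                        # the particular solution found above
    residual = sp.diff(yp, x, 2) - 9*yp - sp.exp(3*x)
    print(sp.simplify(residual))                    # 0, so yp solves y'' - 9y = e^{3x}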

Now what about the case when multiplying by x does not get rid of duplication? For example,

y'' − 6y' + 9y = e^{3x}.

Note that yc = C1e^{3x} + C2xe^{3x}, so guessing y = Axe^{3x} would not get us anywhere. In this case we want to guess y = Ax^2e^{3x}. Basically, you want to multiply your guess by x until all duplication is gone. But no more! Multiplying too many times will also make the process not work.

Finally, what if the right hand side consists of several terms, such as

Ly = e^{2x} + cos x?

In this case find a u that solves Lu = e^{2x} and a v that solves Lv = cos x (do each term separately). Then note that if y = u + v, then Ly = e^{2x} + cos x. This is because L is linear; it is just superposition again: Ly = L(u + v) = Lu + Lv = e^{2x} + cos x.

See Edwards and Penney [EP] for more detailed and complete information on undetermined coefficients.

2.5.3 Variation of parameters

It turns out that undetermined coefficients works for many basic problems that crop up, but it does not work all the time. It only works when the right hand side of the equation Ly = f(x) has finitely many linearly independent derivatives, so that you can write a guess that consists of them all. Some equations are a bit tougher. Consider

y'' + y = tan x.

Note that each new derivative of tan x looks completely different and cannot be written as a linear combination of the previous derivatives. We get sec^2 x, 2 sec^2 x tan x, etc.

This equation calls for a different method. We present the method of variation of parameters, which handles any equation of the form Ly = f(x), provided we can solve certain integrals. For simplicity we restrict ourselves to second order equations, but the method works for higher order equations just as well (the computations become more tedious).

Let us try to solve our example:

Ly = y'' + y = tan x.


First we find the complementary solution to Ly = 0. This is reasonably simple: we get y_c = C₁y₁ + C₂y₂, where y₁ = cos x and y₂ = sin x. Now, to try to find a solution to the nonhomogeneous equation, we will try

y_p = y = u₁y₁ + u₂y₂,

where u₁ and u₂ are functions and not constants. We are trying to satisfy Ly = tan x. That gives us one condition on the functions u₁ and u₂. First compute (note the product rule!)

y' = (u₁'y₁ + u₂'y₂) + (u₁y₁' + u₂y₂').

Since we can still impose one more condition at our will to simplify computations (we have two unknown functions, so we are allowed two conditions), we impose that u₁'y₁ + u₂'y₂ = 0. This makes computing the second derivative easier.

y' = u₁y₁' + u₂y₂',
y'' = (u₁'y₁' + u₂'y₂') + (u₁y₁'' + u₂y₂'').

Now since y₁ and y₂ are solutions to y'' + y = 0, we know that y₁'' = −y₁ and y₂'' = −y₂. (Note: If the equation were instead y'' + ay' + by = 0, we would have yᵢ'' = −ayᵢ' − byᵢ.) So

y'' = (u₁'y₁' + u₂'y₂') − (u₁y₁ + u₂y₂).

Now note that

y'' = (u₁'y₁' + u₂'y₂') − y,

and hence

y'' + y = Ly = u₁'y₁' + u₂'y₂'.

For y to satisfy Ly = f(x) we must have f(x) = u₁'y₁' + u₂'y₂'. So what we need to solve are the two equations (conditions) we imposed on u₁ and u₂:

u₁'y₁ + u₂'y₂ = 0,
u₁'y₁' + u₂'y₂' = f(x).

We can now solve for u₁' and u₂' in terms of f(x), y₁ and y₂. You will always get these formulas for any Ly = f(x). There is a general formula for the solution you can just plug into, but it is better to just repeat what we do below. In our case the two equations become

u₁' cos x + u₂' sin x = 0,
−u₁' sin x + u₂' cos x = tan x.

Hence

u₁' cos x sin x + u₂' sin² x = 0,
−u₁' sin x cos x + u₂' cos² x = tan x cos x = sin x.


And thus

u₂'(sin² x + cos² x) = sin x,
u₂' = sin x,
u₁' = −(sin² x / cos x) = −tan x sin x.

Now we need to integrate u₁' and u₂' to get u₁ and u₂:

u₁ = ∫ u₁' dx = ∫ −tan x sin x dx = (1/2) ln |(sin x − 1)/(sin x + 1)| + sin x,

u₂ = ∫ u₂' dx = ∫ sin x dx = −cos x.

So our particular solution is

y_p = u₁y₁ + u₂y₂ = (1/2) cos x ln |(sin x − 1)/(sin x + 1)| + cos x sin x − cos x sin x
    = (1/2) cos x ln |(sin x − 1)/(sin x + 1)|.

The general solution to y'' + y = tan x is

y = C₁ cos x + C₂ sin x + (1/2) cos x ln |(sin x − 1)/(sin x + 1)|.

2.5.4 Exercises

Exercise 2.5.2: Find a particular solution of y'' − y' − 6y = e^{2x}.

Exercise 2.5.3: Find a particular solution of y'' − 4y' + 4y = e^{2x}.

Exercise 2.5.4: Solve the initial value problem y'' + 9y = cos 3x + sin 3x for y(0) = 2, y'(0) = 1.

Exercise 2.5.5: Set up the form of the particular solution but do not solve for the coefficients for y⁽⁴⁾ − 2y''' + y'' = e^x.

Exercise 2.5.6: Set up the form of the particular solution but do not solve for the coefficients for y⁽⁴⁾ − 2y''' + y'' = e^x + x + sin x.

Exercise 2.5.7: a) Using variation of parameters find a particular solution of y'' − 2y' + y = e^x. b) Find a particular solution using undetermined coefficients. c) Are the two solutions you found the same? What is going on?

Exercise 2.5.8: Find a particular solution of y'' − 2y' + y = sin(x²). It is OK to leave the answer as a definite integral.


2.6 Forced oscillations and resonance

Note: 2 lectures, §3.6 in EP

Before reading the lecture, it may be good to first try Project III from the IODE website: http://www.math.uiuc.edu/iode/.

Let us return back to the mass on a spring example. We will now consider the case of forced oscillations. That is, we will consider the equation

mx'' + cx' + kx = F(t)

for some nonzero F(t). In the mass on a spring example, the setup is again: m is mass, c is friction, k is the spring constant, and F(t) is an external force acting on the mass.

Usually what we are interested in is some periodic forcing, such as noncentered rotating parts, or perhaps even loud sounds or other sources of periodic force. Once we learn about Fourier series, we will see that we essentially cover every type of periodic function by considering F(t) = F₀ cos ωt (or sine instead of cosine; the calculations will be essentially the same).

2.6.1 Undamped forced motion and resonance

First let us consider undamped (c = 0) motion, as this is simpler. We have the equation

mx'' + kx = F₀ cos ωt.

This has the complementary solution (solution to the associated homogeneous equation)

x_c = C₁ cos ω₀t + C₂ sin ω₀t,

where ω₀ = √(k/m). ω₀ is said to be the natural (angular) frequency. It is essentially the frequency at which the system "wants to oscillate" without external interference.

Let us suppose that ω₀ ≠ ω. Now try the solution x_p = A cos ωt and solve for A. Note that we need not have a sine in our trial solution, as on the left-hand side we will only get cosines anyway. If you include a sine it is fine; you will find that its coefficient will be zero (I cannot find a rhyme).

So we solve as in the method of undetermined coefficients with the guess above, and we find that

x_p = (F₀ / (m(ω₀² − ω²))) cos ωt.

We leave it as an exercise to do the algebra required here. The general solution is

x = C₁ cos ω₀t + C₂ sin ω₀t + (F₀ / (m(ω₀² − ω²))) cos ωt,


or, written another way,

x = C cos(ω₀t − γ) + (F₀ / (m(ω₀² − ω²))) cos ωt.

Hence it is a superposition of two cosine waves at different frequencies.

Example 2.6.1: Suppose

0.5x'' + 8x = 10 cos πt,

and let us suppose that x(0) = 0 and x'(0) = 0. Well, let us compute. First we read off the parameters: ω = π, ω₀ = √(8/0.5) = 4, F₀ = 10, m = 0.5. So the general solution is

x = C₁ cos 4t + C₂ sin 4t + (20 / (16 − π²)) cos πt.

Now solve for C₁ and C₂ using the initial conditions. It is easy to see that C₂ = 0 and C₁ = −20/(16 − π²). Hence

x = (20 / (16 − π²)) (cos πt − cos 4t).

Notice the “beating” behavior in Figure 2.5.

Figure 2.5: Graph of (20/(16 − π²))(cos πt − cos 4t).
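It is instructive to confirm such a closed form numerically. A minimal sketch using numpy and scipy (assumed available; this is not part of the IODE toolkit):

```python
# Integrate 0.5 x'' + 8 x = 10 cos(pi t), x(0) = x'(0) = 0, as a first order
# system and compare with (20/(16 - pi^2))(cos(pi t) - cos(4t)).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, u):
    x, v = u                      # u = (x, x')
    return [v, (10*np.cos(np.pi*t) - 8*x) / 0.5]

t = np.linspace(0, 20, 2001)
num = solve_ivp(rhs, (0, 20), [0.0, 0.0], t_eval=t, rtol=1e-9, atol=1e-9)
exact = 20/(16 - np.pi**2) * (np.cos(np.pi*t) - np.cos(4*t))
print(np.max(np.abs(num.y[0] - exact)))   # should be very small
```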

First use the trigonometric identity

2 sin((A − B)/2) sin((A + B)/2) = cos B − cos A

to get

x = (20 / (16 − π²)) (2 sin(((4 − π)/2) t) sin(((4 + π)/2) t)).

Notice that x is now a high frequency wave modulated by a low frequency wave.

Now suppose that ω₀ = ω. Obviously in this case we cannot try the solution A cos ωt and use undetermined coefficients. In this case we see that cos ωt solves the homogeneous equation. Therefore, we need to try x_p = At cos ωt + Bt sin ωt. This time we need the sine term, since two derivatives of t cos ωt do contain sines. We write the equation

x'' + ω²x = (F₀/m) cos ωt.

Then plugging into the left-hand side we get

2Bω cos ωt − 2Aω sin ωt = (F₀/m) cos ωt.


Hence A = 0 and B = F₀/(2mω). Our particular solution is (F₀/(2mω)) t sin ωt, and our general solution is

x = C₁ cos ωt + C₂ sin ωt + (F₀/(2mω)) t sin ωt.

Figure 2.6: Graph of (1/π) t sin πt.

The important term is the last one (the particular solution we found). We can see that this term grows without bound as t → ∞. In fact it oscillates between F₀t/(2mω) and −F₀t/(2mω). The first two terms only oscillate between ±√(C₁² + C₂²), which becomes smaller and smaller in proportion to the oscillations of the last term as t gets larger. In Figure 2.6 we see the graph with C₁ = C₂ = 0, F₀ = 2, m = 1, ω = π.

By forcing the system in just the right frequency we produce very wild oscillations. This kind of behavior is called resonance or sometimes pure resonance, and it is sometimes desired. For example, remember when as a kid you could start swinging by just moving back and forth on the swing seat in the correct "frequency"? You were trying to achieve resonance. The force of each one of your moves was small, but after a while it produced large swings.

On the other hand, resonance can be destructive. After an earthquake some buildings collapse and others may be relatively undamaged. This is due to different buildings having different resonance frequencies. So figuring out the resonance frequency can be very important.

A common (but wrong) example of the destructive force of resonance is the Tacoma Narrows bridge failure. It turns out there was an altogether different phenomenon at play there∗.

2.6.2 Damped forced motion and practical resonance

Of course in real life things are not as simple as they were above. There is of course some damping. That is, our equation becomes

mx'' + cx' + kx = F₀ cos ωt,   (2.8)

for some c > 0. We have solved the homogeneous problem before. We let

p = c/(2m),   ω₀ = √(k/m).

∗K. Billah and R. Scanlan, Resonance, Tacoma Narrows Bridge Failure, and Undergraduate Physics Textbooks, American Journal of Physics, 59(2), 1991, 118–124, http://www.ketchum.org/billah/Billah-Scanlan.pdf


We replace equation (2.8) with

x'' + 2px' + ω₀²x = (F₀/m) cos ωt.

We find that the roots of the characteristic equation of the associated homogeneous problem are r₁, r₂ = −p ± √(p² − ω₀²). The form of the general solution of the associated homogeneous equation depends on the sign of p² − ω₀², or equivalently on the sign of c² − 4km, as we have seen before. That is,

x_c = C₁e^{r₁t} + C₂e^{r₂t}                if c² > 4km,
x_c = C₁e^{−pt} + C₂te^{−pt}              if c² = 4km,
x_c = e^{−pt}(C₁ cos ω₁t + C₂ sin ω₁t)    if c² < 4km.

Here ω₁ = √(ω₀² − p²). In any case, we can see that x_c(t) → 0 as t → ∞. Furthermore, there can be no conflicts when trying to solve for the undetermined coefficients by trying x_p = A cos ωt + B sin ωt. Let us plug in and solve for A and B. We get (the tedious details are left to the reader)

((ω₀² − ω²)B − 2ωpA) sin ωt + ((ω₀² − ω²)A + 2ωpB) cos ωt = (F₀/m) cos ωt.

We get that

A = (ω₀² − ω²)F₀ / (m(2ωp)² + m(ω₀² − ω²)²),
B = 2ωpF₀ / (m(2ωp)² + m(ω₀² − ω²)²).

We also compute C = √(A² + B²) to be

C = F₀ / (m√((2ωp)² + (ω₀² − ω²)²)).

Thus our particular solution is

x_p = ((ω₀² − ω²)F₀ / (m(2ωp)² + m(ω₀² − ω²)²)) cos ωt + (2ωpF₀ / (m(2ωp)² + m(ω₀² − ω²)²)) sin ωt.

Or, in the other notation, we have amplitude C and phase shift γ where (if ω ≠ ω₀)

tan γ = B/A = 2ωp / (ω₀² − ω²).


Hence we have

x_p = (F₀ / (m√((2ωp)² + (ω₀² − ω²)²))) cos(ωt − γ).

If ω = ω₀, we see that A = 0, B = C = F₀/(2mωp), and γ = π/2.

The exact formula is not as important as the idea. You should not memorize the above formula; you should remember the ideas involved. Even if you change the right-hand side a little bit you will get a different formula with different behavior. So there is no point in memorizing this specific formula. You can always recompute it later or look it up if you really need it.

For reasons we will explain in a moment, we will call x_c the transient solution and denote it by x_tr, and we will call the x_p we found above the steady periodic solution and denote it by x_sp. The general solution to our problem is

x = x_c + x_p = x_tr + x_sp.

Figure 2.7: Solutions with different initial conditions for parameters k = 1, m = 1, F₀ = 1, c = 0.7, and ω = 1.1.

We note that x_c = x_tr goes to zero as t → ∞, as all the terms involve an exponential with a negative exponent. Hence for large t, the effect of x_tr is negligible and we will essentially only see x_sp. Notice that x_sp involves no arbitrary constants, and the initial conditions will only affect x_tr. This means that the effect of the initial conditions will be negligible after some period of time. Hence the name transient. Because of this behavior, we might as well focus on the steady periodic solution and ignore the transient solution. See Figure 2.7 for a graph of different initial conditions.

Notice that the speed at which x_tr goes to zero depends on p (and hence c). The bigger p is (the bigger c is), the "faster" x_tr becomes negligible. So the smaller the damping, the longer the "transient region." This agrees with the observation that when c = 0, the initial conditions affect the behavior for all time (i.e. an infinite "transient region").

Let us describe what we mean by resonance when damping is present. Since there were no conflicts when solving with undetermined coefficients, there is no term that goes to infinity. What we will look at, however, is the maximum value of the amplitude of the steady periodic solution. Let C be the amplitude of x_sp. If we plot C as a function of ω (with all other parameters fixed), we can find its maximum. This maximum is said to be practical resonance (we call the ω that achieves this maximum the practical resonance frequency). A sample plot for three different values of c


is given in Figure 2.8. As you can see, the practical resonance amplitude grows as damping gets smaller, and any practical resonance can disappear when damping is large.

Figure 2.8: Graph of C(ω) showing practical resonance with parameters k = 1, m = 1, F₀ = 1. The top line is with c = 0.4, the middle line with c = 0.8, and the bottom line with c = 1.6.

To find the maximum, it turns out we need to find the derivative C'(ω). This is easily computed to be

C'(ω) = −4ω(2p² + ω² − ω₀²)F₀ / (m((2ωp)² + (ω₀² − ω²)²)^{3/2}).

This is zero either when ω = 0 or when 2p² + ω² − ω₀² = 0. In other words, when

ω = √(ω₀² − 2p²)   or   ω = 0.

It can be shown that if ω₀² − 2p² is positive, then √(ω₀² − 2p²) is the practical resonance frequency (that is, the point where C(ω) is maximal; note that in this case C'(ω) > 0 for small ω). If ω = 0 is the maximum, then essentially there is no practical resonance since we assume that ω > 0 in our system. In this case the amplitude gets larger as the forcing frequency gets smaller.

If practical resonance occurs, the frequency is smaller than ω₀. As the damping c (and hence p) becomes smaller, the practical resonance frequency comes closer to ω₀. So when damping is very small, ω₀ is a good estimate of the resonance frequency. This behavior agrees with the observation that when c = 0, ω₀ is the resonance frequency.
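The maximum of C(ω) is also easy to locate numerically, which makes a nice check of the formula above. A sketch assuming numpy is available:

```python
# Locate the practical resonance frequency numerically and compare with
# sqrt(omega_0^2 - 2 p^2). Parameters match the top curve of Figure 2.8.
import numpy as np

m, k, F0, c = 1.0, 1.0, 1.0, 0.4
p, w0 = c/(2*m), np.sqrt(k/m)

w = np.linspace(1e-3, 3, 200000)
C = F0 / (m * np.sqrt((2*w*p)**2 + (w0**2 - w**2)**2))

print(w[np.argmax(C)])            # numerical maximizer of C(omega)
print(np.sqrt(w0**2 - 2*p**2))    # the formula; the two should agree
```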

The behavior will be more complicated if the forcing function is not an exact cosine wave, but for example a square wave. It will be good to come back to this section once you have learned about the Fourier series.


2.6.3 Exercises

Exercise 2.6.1: Derive a formula for x_sp if the equation is mx'' + cx' + kx = F₀ sin ωt. Assume c > 0.

Exercise 2.6.2: Derive a formula for x_sp if the equation is mx'' + cx' + kx = F₀ cos ωt + F₁ cos 3ωt. Assume c > 0.

Exercise 2.6.3: Take mx'' + cx' + kx = F₀ cos ωt. Fix m > 0 and k > 0. Now think of the function C(ω). For what values of c (solve in terms of m, k, and F₀) will there be no practical resonance (that is, for what values of c is there no maximum of C(ω) for ω > 0)?

Exercise 2.6.4: Take mx'' + cx' + kx = F₀ cos ωt. Fix c > 0 and k > 0. Now think of the function C(ω). For what values of m (solve in terms of c, k, and F₀) will there be no practical resonance (that is, for what values of m is there no maximum of C(ω) for ω > 0)?

Exercise 2.6.5: Suppose a water tower in an earthquake acts as a mass-spring system. Assume that the container on top is full and the water does not move around. The container then acts as a mass and the support acts as the spring, where the induced vibrations are horizontal. Suppose that the container with water has a mass of m = 10,000 kg. It takes a force of 1000 newtons to displace the container 1 meter. For simplicity assume no friction.

Suppose that an earthquake induces an external force F(t) = mAω² cos ωt. a) What is the natural frequency of the water tower? b) If ω is not the natural frequency, find a formula for the amplitude of the resulting oscillations of the water container. c) Suppose A = 1 and an earthquake with frequency 0.5 cycles per second comes. What is the amplitude of the oscillations? Suppose that if the water tower moves more than 1.5 meters, the tower collapses. Will the tower collapse?


Chapter 3

Systems of ODEs

3.1 Introduction to systems of ODEs

Note: 1 lecture, §4.1 in EP

Often we do not have just one dependent variable and one equation. And as we will see, we may end up with systems of several equations and several dependent variables even if we start with a single equation.

If we have several dependent variables, suppose y₁, y₂, . . . , yₙ, we can have a differential equation involving all of them and their derivatives. For example, y₁'' = f(y₁', y₂', y₁, y₂, x). Usually, when we have two dependent variables we would have two equations such as

y₁'' = f₁(y₁', y₂', y₁, y₂, x),
y₂'' = f₂(y₁', y₂', y₁, y₂, x),

for some functions f₁ and f₂. We call the above a system of differential equations. More precisely, it is a second order system. Sometimes a system is easy to solve by solving for one variable and then for the second variable.

Example 3.1.1: Take the first order system

y₁' = y₁,
y₂' = y₁ − y₂,

with initial conditions of the form y₁(0) = 1, y₂(0) = 2.

We note that y₁ = C₁e^x is the general solution of the first equation. We can then plug this y₁ into the second equation and get the equation y₂' = C₁e^x − y₂, which is a linear first order equation that is easily solved for y₂. By the method of integrating factor we get

e^x y₂ = (C₁/2) e^{2x} + C₂,


or y₂ = (C₁/2) e^x + C₂e^{−x}. The general solution to the system is, therefore,

y₁ = C₁e^x,
y₂ = (C₁/2) e^x + C₂e^{−x}.

We can now solve for C₁ and C₂ given the initial conditions. We substitute x = 0 and find that C₁ = 1 and C₂ = 3/2.

Generally, we will not be so lucky to be able to solve like in the first example, and we will have to solve for all variables at once.

As an example application, let us think of mass and spring systems again. Suppose we have one spring with constant k but two masses m₁ and m₂. We can think of the masses as carts, and we will suppose that they ride along with no friction. Let x₁ be the displacement of the first cart and x₂ be the displacement of the second cart. That is, we put the two carts somewhere with no tension on the spring, and we mark the position of the first and second cart and call those the zero position. That is, x₁ = 0 is a different position on the floor than the position corresponding to x₂ = 0. The force exerted by the spring on the first cart is k(x₂ − x₁), since x₂ − x₁ is how far the spring is stretched (or compressed) from the rest position. The force exerted on the second cart is the opposite, thus the same thing with a negative sign. Using Newton's second law, we note that force equals mass times acceleration:

m₁x₁'' = k(x₂ − x₁),
m₂x₂'' = −k(x₂ − x₁).

In this system we cannot solve for the x₁ variable separately. That we must solve for both x₁ and x₂ at once is intuitively obvious, since where the first cart goes depends exactly on where the second cart goes and vice versa.

Before we talk about how to handle systems, let us note that in some sense we need only consider first order systems. Take an nth order differential equation

y⁽ⁿ⁾ = F(y⁽ⁿ⁻¹⁾, . . . , y', y, x).

Define new variables u₁, . . . , uₙ and write the system

u₁' = u₂,
u₂' = u₃,
⋮
u_{n−1}' = uₙ,
uₙ' = F(uₙ, u_{n−1}, . . . , u₂, u₁, x).


Now try to solve this system for u₁, u₂, . . . , uₙ. Once you have solved for the u's, you can discard u₂ through uₙ and let y = u₁. We note that this y solves the original equation.

A similar process can be done for a system of higher order differential equations. For example, a system of k differential equations in k unknowns, all of order n, can be transformed into a first order system of n × k equations and n × k unknowns.

Example 3.1.2: Sometimes we can use this idea in reverse as well. Let us take the system

x' = 2y − x,   y' = x,

where the independent variable is t. We wish to solve for the initial conditions x(0) = 1, y(0) = 0.

We first notice that if we differentiate the second equation once we get y'' = x', and now we know what x' is in terms of x and y:

y'' = x' = 2y − x = 2y − y'.

So we now have the equation y'' + y' − 2y = 0. We know how to solve this equation and we find that y = C₁e^{−2t} + C₂e^t. Once we have y we can plug in to get x:

x = y' = −2C₁e^{−2t} + C₂e^t.

We solve for the initial conditions 1 = x(0) = −2C₁ + C₂ and 0 = y(0) = C₁ + C₂. Hence, C₁ = −C₂ and 1 = 3C₂. So C₁ = −1/3 and C₂ = 1/3. Our solution is:

x = (2e^{−2t} + e^t)/3,   y = (−e^{−2t} + e^t)/3.

Exercise 3.1.1: Plug in and check that this really is the solution.

It is useful to go back and forth between systems and higher order equations for other reasons. For example, the ODE approximation methods are generally only given as solutions for first order systems. It is not very hard to adapt the code for the Euler method for a first order equation to first order systems. We essentially just treat the dependent variable not as a number but as a vector. In many mathematical computer languages there is almost no distinction in syntax.
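For instance, here is a minimal sketch of Euler's method for systems in Python (assuming numpy is available; the scalar version differs only in that x is now an array):

```python
import numpy as np

def euler_system(f, x0, t0, t1, n):
    """Euler's method for the system x' = f(t, x), with x a vector."""
    h = (t1 - t0) / n
    t, x = t0, np.asarray(x0, dtype=float)
    for _ in range(n):
        x = x + h * f(t, x)       # exactly the scalar update, vectorized
        t = t + h
    return x

# The system x' = 2y - x, y' = x from Example 3.1.2 with x(0) = 1, y(0) = 0.
f = lambda t, u: np.array([2*u[1] - u[0], u[0]])
print(euler_system(f, [1.0, 0.0], 0.0, 2.0, 100000))
# exact values: x(2) = (2e^{-4} + e^2)/3 ≈ 2.475, y(2) = (-e^{-4} + e^2)/3 ≈ 2.457
```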

In fact, this is what IODE was doing when you had it solve a second order equation numerically in the IODE Project III, if you have done that project.

The above example was what we will call a linear first order system, as none of the dependent variables appear in any functions or with any higher powers than one. It is also autonomous, as the equations do not depend on the independent variable t.

For autonomous systems we can easily draw the so-called direction field or vector field. That is, a plot similar to a slope field, but instead of giving a slope at each point, we give a direction (and a magnitude). The previous example, x' = 2y − x, y' = x, says that at the point (x, y) the direction in which we should travel to satisfy the equations should be the direction of the vector (2y − x, x), with the speed equal to the magnitude of this vector. So we draw the vector (2y − x, x) based at the


point (x, y), and we do this for many points on the xy-plane. We may want to scale down the size of our vectors to fit many of them on the same direction field. See Figure 3.1.

We can now draw a path of the solution in the plane. That is, suppose the solution is given by x = f(t), y = g(t). Then we can pick an interval of t (say 0 ≤ t ≤ 2 for our example) and plot all the points (f(t), g(t)) for t in the selected range. The resulting picture is usually called the phase portrait (or phase plane portrait). The particular curve obtained we call the trajectory or solution curve. An example plot is given in Figure 3.2. In this figure the line starts at (1, 0) and travels along the vector field for a distance of 2 units of t. Since we solved this system precisely, we can compute x(2) and y(2). We get that x(2) ≈ 2.475 and y(2) ≈ 2.457. This point corresponds to the top right end of the plotted solution curve in the figure.

Figure 3.1: The direction field for x' = 2y − x, y' = x.

Figure 3.2: The direction field for x' = 2y − x, y' = x with the trajectory of the solution starting at (1, 0) for 0 ≤ t ≤ 2.

Notice the similarity to the diagrams we drew for autonomous systems in one dimension. But now note how much more complicated things become if we allow just one more dimension.

Also note that we can draw phase portraits and trajectories in the xy-plane even if the system is not autonomous. In this case, however, we cannot draw the direction field, since the field changes as t changes. For each t we would get a different direction field.
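Direction fields like Figure 3.1 are easy to produce with software. A sketch assuming numpy and matplotlib are available:

```python
# Direction field of x' = 2y - x, y' = x, with the trajectory through (1, 0).
import numpy as np
import matplotlib.pyplot as plt

X, Y = np.meshgrid(np.linspace(-1, 3, 15), np.linspace(-1, 3, 15))
plt.quiver(X, Y, 2*Y - X, X)          # the vector (2y - x, x) at each point

t = np.linspace(0, 2, 200)            # the solution found in Example 3.1.2
plt.plot((2*np.exp(-2*t) + np.exp(t))/3, (-np.exp(-2*t) + np.exp(t))/3)
plt.xlabel('x'); plt.ylabel('y')
plt.show()
```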

3.1.1 Exercises

Exercise 3.1.2: Find the general solution of x₁' = x₂ − x₁ + t, x₂' = x₂.

Exercise 3.1.3: Find the general solution of x₁' = 3x₁ − x₂ + e^t, x₂' = x₁.

Exercise 3.1.4: Write ay'' + by' + cy = f(x) as a first order system of ODEs.

Exercise 3.1.5: Write x'' + y²y' − x³ = sin(t), y'' + (x' + y')² − x = 0 as a first order system of ODEs.


3.2 Matrices and linear systems

Note: 1 and a half lectures, first part of §5.1 in EP

3.2.1 Matrices and vectors

Before we can start talking about linear systems of ODEs, we will need to talk about matrices, so let us review these briefly. A matrix is an m × n array of numbers (m rows and n columns). For example, we denote a 3 × 5 matrix as follows:

A = [ a₁₁  a₁₂  a₁₃  a₁₄  a₁₅ ]
    [ a₂₁  a₂₂  a₂₃  a₂₄  a₂₅ ]
    [ a₃₁  a₃₂  a₃₃  a₃₄  a₃₅ ]

By a vector we will usually mean a column vector, which is an n × 1 matrix. If we mean a row vector we will explicitly say so (a row vector is a 1 × n matrix). We will usually denote matrices by upper case letters and vectors by lower case letters with an arrow, such as ~x or ~b. By ~0 we will mean the vector of all zeros.

It is easy to define some operations on matrices. Note that we will want 1 × 1 matrices to really act like numbers, so our operations will have to be compatible with this viewpoint.

First, we can multiply by a scalar (a number). This means just multiplying each entry by the same number. For example,

2 [ 1 2 3 ]  =  [ 2  4  6 ]
  [ 4 5 6 ]     [ 8 10 12 ]

Matrix addition is also easy. We add matrices element by element. For example,

[ 1 2 3 ]  +  [ 1 1 −1 ]  =  [ 2 3  2 ]
[ 4 5 6 ]     [ 0 2  4 ]     [ 4 7 10 ]

If the sizes do not match, then addition is not defined. If we denote by 0 the matrix with all zero entries, by c, d some scalars, and by A, B, C some matrices, we have the following familiar rules:

A + 0 = A = 0 + A,
A + B = B + A,
(A + B) + C = A + (B + C),
c(A + B) = cA + cB,
(c + d)A = cA + dA.


Another operation which is useful for matrices is the so-called transpose. This operation just swaps the rows and columns of a matrix. The transpose of A is denoted by Aᵀ. Example:

[ 1 2 3 ]ᵀ  =  [ 1 4 ]
[ 4 5 6 ]      [ 2 5 ]
               [ 3 6 ]

3.2.2 Matrix multiplication

Next let us define matrix multiplication. First we define the so-called dot product (or inner product) of two vectors. Usually this will be a row vector multiplied with a column vector of the same size. For the dot product we multiply each pair of entries from the first and the second vector and we sum these products. The result is a single number. For example,

[ a₁ a₂ a₃ ] · [ b₁ ]  =  a₁b₁ + a₂b₂ + a₃b₃.
               [ b₂ ]
               [ b₃ ]

And similarly for larger (or smaller) vectors.

Armed with the dot product we can define the product of matrices. First let us denote by rowᵢ(A) the ith row of A and by columnⱼ(A) the jth column of A. Now for an m × n matrix A and an n × p matrix B we can define the product AB. We let AB be an m × p matrix whose ijth entry is

rowᵢ(A) · columnⱼ(B).

Note that the sizes must match. Example:

[ 1 2 3 ]   [ 1 0 −1 ]
[ 4 5 6 ] · [ 1 1  1 ]
            [ 1 0  0 ]

  =  [ 1·1 + 2·1 + 3·1   1·0 + 2·1 + 3·0   1·(−1) + 2·1 + 3·0 ]  =  [ 6  2 1 ]
     [ 4·1 + 5·1 + 6·1   4·0 + 5·1 + 6·0   4·(−1) + 5·1 + 6·0 ]     [ 15 5 1 ]

For multiplication we will want an analogue of a 1. Here we use the so-called identity matrix. The identity matrix is a square matrix with 1s on the main diagonal and zeros everywhere else. It is usually denoted by I. For each size we have a different identity matrix, and so sometimes we may denote the size as a subscript. For example, I₃ would be the 3 × 3 identity matrix

I = I₃ = [ 1 0 0 ]
         [ 0 1 0 ]
         [ 0 0 1 ]


We have the following rules for matrix multiplication. Suppose that A, B, C are matrices of the correct sizes so that the following make sense, and that c is some scalar (number):

A(BC) = (AB)C,
A(B + C) = AB + AC,
(B + C)A = BA + CA,
c(AB) = (cA)B = A(cB),
IA = A = AI.

A few warnings are in order, however.

(i) AB ≠ BA in general (it may be true by fluke sometimes). That is, matrices do not commute.

(ii) AB = AC does not necessarily imply B = C even if A is not 0.

(iii) AB = 0 does not necessarily mean that A = 0 or B = 0.

For the last two items to hold we would need to essentially "divide" by a matrix. This is where the matrix inverse comes in. Suppose that A is an n × n matrix and that there exists another n × n matrix B such that

AB = I = BA.

Then we call B the inverse of A and we denote B by A⁻¹. If the inverse of A exists, then we call A invertible. If A is not invertible we say A is singular, or just say it is not invertible.

If A is invertible, then AB = AC does imply that B = C (in particular the inverse is unique). We just multiply both sides by A⁻¹ to get A⁻¹AB = A⁻¹AC, or IB = IC, or B = C. It is also not hard to see that (A⁻¹)⁻¹ = A.

3.2.3 The determinant

We can now talk about determinants of square matrices. We define the determinant of a 1 × 1 matrix as the value of its own entry. For a 2 × 2 matrix we define

det [ a b ]  =  ad − bc.
    [ c d ]

Before trying to compute the determinant for larger matrices, let us first note the meaning of the determinant. Consider an n × n matrix as a mapping of Rⁿ to Rⁿ. For example, a 2 × 2 matrix A is a mapping of the plane where ~x gets sent to A~x. The determinant of A is then the factor by which the volume of objects gets changed. For example, if we take the unit square (square of sides 1) in the plane, then A takes the square to a parallelogram of area |det(A)|. The sign of det(A) denotes a change of orientation (whether the axes got flipped). For example, take

A = [ 1  1 ]
    [ −1 1 ]


Then det(A) = 1 + 1 = 2. Now let us see where the square with vertices (0, 0), (1, 0), (0, 1), and (1, 1) gets sent. Obviously (0, 0) gets sent to (0, 0). Now

[ 1  1 ] [ 1 ] = [ 1  ],    [ 1  1 ] [ 0 ] = [ 1 ],    [ 1  1 ] [ 1 ] = [ 2 ].
[ −1 1 ] [ 0 ]   [ −1 ]     [ −1 1 ] [ 1 ]   [ 1 ]     [ −1 1 ] [ 1 ]   [ 0 ]

So it turns out that the image of the square is another square. This one has a side of length √2 and is therefore of area 2.

If you think back to high school geometry, you may have seen a formula for computing the area of a parallelogram with vertices (0, 0), (a, c), (b, d) and (a + b, c + d). And it is precisely

| det [ a b ] |
|     [ c d ] |

The vertical lines here mean absolute value. The matrix [ a b; c d ] carries the unit square to the given parallelogram.

Now we can define the determinant for larger matrices. We define Aᵢⱼ as the matrix A with the ith row and the jth column deleted. To compute the determinant of a matrix, pick one row, say the ith row, and compute

det(A) = Σⱼ₌₁ⁿ (−1)^{i+j} aᵢⱼ det(Aᵢⱼ).

For example, for the first row we get

det(A) = a₁₁ det(A₁₁) − a₁₂ det(A₁₂) + a₁₃ det(A₁₃) − · · · ± a₁ₙ det(A₁ₙ),

where the last term has a plus sign if n is odd and a minus sign if n is even.

We alternately add and subtract the determinants of the submatrices Aᵢⱼ for a fixed i and all j. For example, for a 3 × 3 matrix, picking the first row, we would get det(A) = a₁₁ det(A₁₁) − a₁₂ det(A₁₂) + a₁₃ det(A₁₃). For example,

det [ 1 2 3 ]  =  1 · det [ 5 6 ]  −  2 · det [ 4 6 ]  +  3 · det [ 4 5 ]
    [ 4 5 6 ]             [ 8 9 ]             [ 7 9 ]             [ 7 8 ]
    [ 7 8 9 ]

  =  1(5 · 9 − 6 · 8) − 2(4 · 9 − 6 · 7) + 3(4 · 8 − 5 · 7)  =  0.

The numbers (−1)^{i+j} det(Aᵢⱼ) are called cofactors of the matrix, and this way of computing the determinant is called the cofactor expansion. It is also possible to compute the determinant by expanding along columns (picking a column instead of a row above).
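The cofactor expansion translates directly into a recursive program. A sketch in Python (a pedagogical illustration only; it takes on the order of n! operations, so determinants are never computed this way in practice):

```python
def det(A):
    """Determinant by cofactor expansion along the first row (A as list of lists)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in A[1:]]  # delete row 1, column j+1
        total += (-1)**j * A[0][j] * det(minor)
    return total

print(det([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # expect 0, as computed above
```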

Note that a common notation for the determinant is a pair of vertical lines:

| a b |  =  det [ a b ]
| c d |         [ c d ]


I personally find this notation confusing, since vertical lines usually mean a positive quantity to me, while determinants can be negative. So I will not ever use this notation in these notes.

One of the most important properties of determinants (in the context of this course) is the following theorem.

Theorem 3.2.1. An n × n matrix A is invertible if and only if det(A) ≠ 0.

In fact, we have a formula for the inverse of a 2 × 2 matrix:

[ a b ]⁻¹  =  (1/(ad − bc)) [ d  −b ]
[ c d ]                     [ −c  a ]

Notice the determinant of the matrix in the denominator of the fraction. The formula only works if the determinant is nonzero; otherwise we are dividing by zero.

3.2.4 Solving linear systems

One application of matrices we will need is to solve systems of linear equations. This may be best shown by example. Suppose that we have the following system of linear equations:

2x₁ + 2x₂ + 2x₃ = 2,
x₁ + x₂ + 3x₃ = 5,
x₁ + 4x₂ + x₃ = 10.

Without changing the solution, we note that we could swap equations in this system, we could multiply any of the equations by a nonzero number, and we could add a multiple of one equation to another equation. It turns out these operations always suffice to find a solution.

It is easier to write this as a matrix equation. Note that the system can be written as

[ 2 2 2 ] [ x₁ ]   [ 2  ]
[ 1 1 3 ] [ x₂ ] = [ 5  ]
[ 1 4 1 ] [ x₃ ]   [ 10 ]

To solve the system we put the coefficient matrix (the matrix on the left-hand side of the equation) together with the vector on the right-hand side and get the so-called augmented matrix

[ 2 2 2 | 2  ]
[ 1 1 3 | 5  ]
[ 1 4 1 | 10 ]

We then apply the following three elementary operations.

(i) Swap two rows.


(ii) Multiply a row by a nonzero number.

(iii) Add a multiple of one row to another row.

We will keep doing these operations until we get into a state where it is easy to read off the answer, or until we get into a contradiction indicating no solution, for example if we come up with an equation such as 0 = 1.

Let us work through the example. First multiply the first row by 1/2:

[ 1 1 1 | 1  ]
[ 1 1 3 | 5  ]
[ 1 4 1 | 10 ]

Now subtract the first row from the second and third rows:

[ 1 1 1 | 1 ]
[ 0 0 2 | 4 ]
[ 0 3 0 | 9 ]

Multiply the last row by 1/3 and the second row by 1/2:

[ 1 1 1 | 1 ]
[ 0 0 1 | 2 ]
[ 0 1 0 | 3 ]

Swap rows 2 and 3:

[ 1 1 1 | 1 ]
[ 0 1 0 | 3 ]
[ 0 0 1 | 2 ]

Subtract the last row from the first, then subtract the second row from the first:

[ 1 0 0 | −4 ]
[ 0 1 0 | 3  ]
[ 0 0 1 | 2  ]

If we think about what equations this augmented matrix represents, we see that x₁ = −4, x₂ = 3, and x₃ = 2. We try these and, voilà, it works.

Exercise 3.2.1: Check that this solution works.
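One way to check (other than by hand) is numerically. A sketch in Python (numpy assumed), where the whole row reduction is a single call:

```python
import numpy as np

A = np.array([[2, 2, 2], [1, 1, 3], [1, 4, 1]], dtype=float)
b = np.array([2, 5, 10], dtype=float)
print(np.linalg.solve(A, b))   # expect [-4., 3., 2.]
```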

If we write this equation in matrix notation as

A~x = ~b,


where A is the matrix

[ 2 2 2 ]
[ 1 1 3 ]
[ 1 4 1 ]

and ~b is the vector [ 2; 5; 10 ], then the solution can also be computed with the inverse:

~x = A⁻¹A~x = A⁻¹~b.

One last note to make about linear systems of equations is that it is possible that the solution is not unique (or that no solution exists). It is easy to tell if a solution does not exist. If during the row reduction you come up with a row where all the entries except the last one are zero (the last entry in a row corresponds to the right-hand side of the equation), the system is inconsistent and has no solution. For example, if for a system of 3 equations and 3 unknowns you find a row such as [ 0 0 0 | 1 ] in the augmented matrix, you know the system is inconsistent.

You generally try to use row operations until the following conditions are satisfied. The first nonzero entry in each row is called the leading entry.

(i) There is only one leading entry in each column.

(ii) All the entries above and below a leading entry are zero.

(iii) All leading entries are 1.

Such a matrix is said to be in reduced row echelon form. The variables corresponding to columns with no leading entries are said to be free variables. Free variables mean that we can pick those variables to be anything we want and then solve for the rest of the unknowns.

Example 3.2.1: The following augmented matrix is in reduced row echelon form.

[ 1 2 0 | 3 ]
[ 0 0 1 | 1 ]
[ 0 0 0 | 0 ]

If the variables are named x₁, x₂, and x₃, then x₂ is the free variable, x₁ = 3 − 2x₂, and x₃ = 1.

On the other hand if during the row reduction process you come up with the matrix

[ 1 2 13 | 3 ]
[ 0 0 1  | 1 ]
[ 0 0 0  | 3 ]

there is no need to go further. The last row corresponds to the equation 0x₁ + 0x₂ + 0x₃ = 3, which is preposterous. Hence, no solution exists.


3.2.5 Computing the inverse

If the coefficient matrix is square and there exists a unique solution ~x to A~x = ~b for any ~b, then A is invertible. In fact, by multiplying both sides by A⁻¹ you can see that ~x = A⁻¹~b. So it is useful to compute the inverse if you want to solve the equation for many different right-hand sides ~b.

The 2 × 2 inverse is basically given by a formula, but it is not hard to also compute inverses of larger matrices. While we will not have too much occasion to compute inverses for matrices larger than 2 × 2 by hand, let us touch on how to do it. Finding the inverse of A is actually just solving a bunch of linear equations. If you can solve A~xₖ = ~eₖ, where ~eₖ is the vector with all zeros except a 1 at the kth position, then the inverse is the matrix with the columns ~xₖ for k = 1, . . . , n (exercise: why?). Therefore, to find the inverse we can write a larger n × 2n augmented matrix [ A | I ], where I is the identity. If you do row reduction and put the matrix in reduced row echelon form, then the matrix will be of the form [ I | A⁻¹ ] if and only if A is invertible, so you can just read off the inverse A⁻¹.

3.2.6 Exercises

Exercise 3.2.2: Solve [ 1 2; 3 4 ] ~x = [ 5; 6 ] by using matrix inverse.

Exercise 3.2.3: Compute the determinant of

[ 9  −2 −6 ]
[ −8  3  6 ]
[ 10 −2 −6 ]

Exercise 3.2.4: Compute the determinant of

[ 1 2 3  1 ]
[ 4 0 5  0 ]
[ 6 0 7  0 ]
[ 8 0 10 1 ]

Hint: expand along the proper row or column to make the calculations simpler.

Exercise 3.2.5: Compute the inverse of

[ 1 2 3 ]
[ 1 1 1 ]
[ 0 1 0 ]

Exercise 3.2.6: For which h is

[ 1 2 3 ]
[ 4 5 6 ]
[ 7 8 h ]

not invertible? Is there only one such h? Are there several? Infinitely many?

Exercise 3.2.7: For which h is

[ h 1 1 ]
[ 0 h 0 ]
[ 1 1 h ]

not invertible? Find all such h.

Exercise 3.2.8: Solve

[ 9  −2 −6 ]        [ 1 ]
[ −8  3  6 ]  ~x =  [ 2 ]
[ 10 −2 −6 ]        [ 3 ]

Exercise 3.2.9: Solve

[ 5 3 7 ]        [ 2 ]
[ 8 4 4 ]  ~x =  [ 0 ]
[ 6 3 3 ]        [ 0 ]

Exercise 3.2.10: Solve

[ 3 2 3 0 ]        [ 2 ]
[ 3 3 3 3 ]  ~x =  [ 0 ]
[ 0 2 4 2 ]        [ 4 ]
[ 2 3 4 3 ]        [ 1 ]


3.3 Linear systems of ODEs

Note: less than 1 lecture, second part of §5.1 in EP

First let us talk about matrix or vector valued functions. Such a function is essentially just a matrix whose entries depend on some variable. Let us say the independent variable is t. Then a vector valued function ~x(t) is really something like

~x(t) = [ x₁(t) ]
        [ x₂(t) ]
        [ ⋮     ]
        [ xₙ(t) ]

Similarly a matrix valued function is something such as

A(t) = [ a₁₁(t)  a₁₂(t)  · · ·  a₁ₙ(t) ]
       [ a₂₁(t)  a₂₂(t)  · · ·  a₂ₙ(t) ]
       [ ⋮       ⋮       ⋱     ⋮      ]
       [ aₙ₁(t)  aₙ₂(t)  · · ·  aₙₙ(t) ]

We can talk about the derivative A'(t) or dA/dt, and this is just the matrix valued function whose ijth entry is aᵢⱼ'(t).

Similar differentiation rules apply here. Let A and B be matrix valued functions, let c be a scalar, and let C be a constant matrix. Then

(A + B)' = A' + B',
(AB)' = A'B + AB',
(cA)' = cA',
(CA)' = CA',
(AC)' = A'C.

Do note the order in the last two expressions.

A first order linear system of ODEs is a system which can be written as

~x'(t) = P(t)~x(t) + ~f(t),

where P is a matrix valued function, and ~x and ~f are vector valued functions. We will often suppress the dependence on t and only write ~x' = P~x + ~f. A solution is of course a vector valued function ~x satisfying the equation.

For example, the equations

x₁' = 2tx₁ + e^t x₂ + t²,
x₂' = (x₁/t) − x₂ + e^t,


can be written as

~x' = [ 2t   e^t ] ~x + [ t²  ]
      [ 1/t  −1  ]      [ e^t ]

We will mostly concentrate on equations that are not just linear, but are in fact constant coefficient equations. That is, the matrix P will be constant and will not depend on t.

When ~f = ~0 (the zero vector), we say the system is homogeneous. For homogeneous linear systems we still have the principle of superposition, just like for single homogeneous equations.

Theorem 3.3.1 (Superposition). Let ~x' = P~x be a linear homogeneous system of ODEs. Suppose that ~x₁, . . . , ~xₙ are n solutions of the equation. Then

~x = c1~x1 + c2~x2 + · · · + cn~xn, (3.1)

is also a solution. Furthermore, if this is a system of n equations (P is n × n) and ~x₁, . . . , ~xₙ are linearly independent, then every solution can be written as (3.1).

Linear independence for vector valued functions is essentially the same as for ordinary functions: ~x₁, . . . , ~xₙ are linearly independent if and only if

c₁~x₁ + c₂~x₂ + · · · + cₙ~xₙ = ~0

has only the solution c₁ = c₂ = · · · = cₙ = 0.

The linear combination c₁~x₁ + c₂~x₂ + · · · + cₙ~xₙ can always be written as

X(t)~c,

where X(t) is the matrix with columns ~x₁, . . . , ~xₙ, and ~c is the column vector with entries c₁, . . . , cₙ. X(t) is called the fundamental matrix, or the fundamental matrix solution.

To solve nonhomogeneous first order linear systems, we apply the same technique as we did before.

Theorem 3.3.2. Let ~x' = P~x + ~f be a linear system of ODEs. Suppose ~x_p is one particular solution. Then every solution can be written as

~x = ~xc + ~xp,

where ~x_c is a solution to the associated homogeneous equation (~x' = P~x).

So the procedure will be exactly the same. We find a particular solution to the nonhomogeneous equation, then we find the general solution to the associated homogeneous equation, and we add the two.

Alright, suppose you have found the general solution of ~x' = P~x + ~f. Now you are given an initial condition of the form ~x(t₀) = ~b for some constant vector ~b. Now suppose that X(t) is


the fundamental matrix solution of the associated homogeneous equation (i.e. the columns of X are solutions). The general solution is written as

~x(t) = X(t)~c + ~x_p(t).

Then we are seeking a vector ~c such that

~b = ~x(t₀) = X(t₀)~c + ~x_p(t₀).

In other words, we are solving the nonhomogeneous system of linear equations

X(t₀)~c = ~b − ~x_p(t₀)

for ~c.

Example 3.3.1: In §3.1 we solved the system

x₁' = x₁,
x₂' = x₁ − x₂,

with initial conditions x₁(0) = 1, x₂(0) = 2.

This is a homogeneous system, so ~f = ~0. We write the system as

~x' = [ 1  0 ] ~x,    ~x(0) = [ 1 ]
      [ 1 −1 ]                [ 2 ]

We found that the general solution is x₁ = c₁e^t and x₂ = (c₁/2)e^t + c₂e^{−t}. Hence in matrix notation, the fundamental matrix solution is

X(t) = [ e^t        0      ]
       [ (1/2)e^t   e^{−t} ]

It is not hard to see that the columns of this matrix are linearly independent. To see this, just plug in t = 0 and note that the two constant vectors are already linearly independent there.

Hence to solve the initial problem we solve the equation

X(0)~c = ~b,

or in other words,

[ 1    0 ] ~c = [ 1 ]
[ 1/2  1 ]      [ 2 ]

After a single elementary row operation we find that ~c = [ 1; 3/2 ]. Hence our solution is

~x(t) = X(t)~c = [ e^t        0      ] [ 1   ]  =  [ e^t                     ]
                 [ (1/2)e^t   e^{−t} ] [ 3/2 ]     [ (1/2)e^t + (3/2)e^{−t} ]

This agrees with our previous solution.
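The same computation in floating point is a one-line linear solve. A sketch (numpy assumed):

```python
# Solve X(0) c = b for the example above, then evaluate x(t) = X(t) c.
import numpy as np

X0 = np.array([[1.0, 0.0], [0.5, 1.0]])      # X(t) at t = 0
b = np.array([1.0, 2.0])
c = np.linalg.solve(X0, b)
print(c)                                      # expect [1., 1.5]

t = 2.0
Xt = np.array([[np.exp(t), 0.0], [0.5*np.exp(t), np.exp(-t)]])
print(Xt @ c)                                 # the solution at t = 2
```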


3.3.1 Exercises

Exercise 3.3.1: Write the system x₁' = 2x₁ − 3tx₂ + sin t, x₂' = e^t x₁ + 3x₂ + cos t in the form ~x' = P(t)~x + ~f(t).

Exercise 3.3.2: a) Verify that the system ~x' = [ 1 3; 3 1 ]~x has the two solutions [ 1; 1 ]e^{4t} and [ 1; −1 ]e^{−2t}. b) Write down the general solution. c) Write down the general solution in the form x₁ = ?, x₂ = ? (i.e. write down a formula for each element of the solution).

Exercise 3.3.3: Verify that [ 1; 1 ]e^t and [ 1; −1 ]e^t are linearly independent. Hint: Just plug in t = 0.

Exercise 3.3.4: Verify that [ 1; 1; 0 ]e^t and [ 1; −1; 1 ]e^t and [ 1; −1; 1 ]e^{2t} are linearly independent. Hint: You must be a bit more tricky than in the previous exercise.

Exercise 3.3.5: Verify that [ t; t² ] and [ t³; t⁴ ] are linearly independent.


3.4 Eigenvalue method

Note: 2 lectures, §5.2 in EP

In this section we will learn how to solve linear homogeneous constant coefficient systems of ODEs by the eigenvalue method. Suppose we have a linear constant coefficient homogeneous system

~x' = P~x.

Now suppose we try to adapt the method for single constant coefficient equations by trying the function e^{λt}. However, ~x is a vector, so we try ~v e^{λt}, where ~v is an arbitrary constant vector. We plug into the equation to get

λ~v e^{λt} = P~v e^{λt}.

We divide by e^{λt} and notice that we are looking for a λ and ~v that satisfy the equation

λ~v = P~v.

To solve this equation we need a little bit more linear algebra which we review now.

3.4.1 Eigenvalues and eigenvectors of a matrix

Let A be a square constant matrix. Suppose there is a scalar λ and a nonzero vector ~v such that

A~v = λ~v.

We then call λ an eigenvalue of A and ~v is called a corresponding eigenvector.

Example 3.4.1: The matrix [ 2 1; 0 1 ] has an eigenvalue λ = 2 with a corresponding eigenvector [ 1; 0 ] because

[ 2 1 ] [ 1 ]  =  [ 2 ]  =  2 [ 1 ]
[ 0 1 ] [ 0 ]     [ 0 ]       [ 0 ]

If we rewrite the equation for an eigenvalue as

(A − λI)~v = ~0,

we notice that this equation has a nonzero solution ~v only if A − λI is not invertible. Were it invertible, we could write (A − λI)⁻¹(A − λI)~v = (A − λI)⁻¹~0, which implies ~v = ~0. Therefore, A has the eigenvalue λ if and only if λ solves the equation

det(A − λI) = 0.

Note that this means that we will be able to find an eigenvalue without finding a corresponding eigenvector. The eigenvector will have to be found later, once λ is known.


Example 3.4.2: Find all eigenvalues of

[ 2 1 1 ]
[ 1 2 0 ]
[ 0 0 2 ]

We write

det( [ 2 1 1 ]      [ 1 0 0 ] )        [ 2−λ  1    1   ]
   ( [ 1 2 0 ]  − λ [ 0 1 0 ] )  = det [ 1    2−λ  0   ]
   ( [ 0 0 2 ]      [ 0 0 1 ] )        [ 0    0    2−λ ]

  =  (2 − λ)((2 − λ)² − 1)  =  −(λ − 1)(λ − 2)(λ − 3),

and so the eigenvalues are λ = 1, λ = 2, and λ = 3.

Note that for an n × n matrix, the polynomial we get by computing det(A − λI) will be of degree n, and hence we will in general have n eigenvalues.

To find an eigenvector corresponding to λ, we write

(A − λI)~v = ~0,

and solve for a nontrivial (nonzero) vector ~v. If λ is an eigenvalue, this will always be possible.

Example 3.4.3: Find an eigenvector of

[ 2 1 1 ]
[ 1 2 0 ]
[ 0 0 2 ]

corresponding to the eigenvalue λ = 3. We write

(A − λI)~v = ( [ 2 1 1 ]      [ 1 0 0 ] ) [ v₁ ]     [ −1  1   1 ] [ v₁ ]
            ( [ 1 2 0 ]  − 3 [ 0 1 0 ] ) [ v₂ ]  =  [ 1  −1   0 ] [ v₂ ]  =  ~0.
            ( [ 0 0 2 ]      [ 0 0 1 ] ) [ v₃ ]     [ 0   0  −1 ] [ v₃ ]

It is easy to solve this system of linear equations. Write down the augmented matrix

[ −1  1   1 | 0 ]
[ 1  −1   0 | 0 ]
[ 0   0  −1 | 0 ]

and perform row operations (exercise: which ones?) until you get

[ 1 −1 0 | 0 ]
[ 0  0 1 | 0 ]
[ 0  0 0 | 0 ]

The equations the entries of ~v have to satisfy are, therefore, v₁ − v₂ = 0 and v₃ = 0, while v₂ is a free variable. We can pick v₂ to be arbitrary (but nonzero), let v₁ = v₂, and of course v₃ = 0. For example, ~v = [ 1; 1; 0 ]. We try this:

[ 2 1 1 ] [ 1 ]     [ 3 ]       [ 1 ]
[ 1 2 0 ] [ 1 ]  =  [ 3 ]  =  3 [ 1 ]
[ 0 0 2 ] [ 0 ]     [ 0 ]       [ 0 ]

Yay! It worked.
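Eigenvalues and eigenvectors are also a single call in numerical software. A sketch (numpy assumed); note that numerical routines normalize eigenvectors to unit length, so expect a multiple of [ 1; 1; 0 ]:

```python
import numpy as np

A = np.array([[2, 1, 1], [1, 2, 0], [0, 0, 2]], dtype=float)
lam, V = np.linalg.eig(A)      # columns of V are the eigenvectors
print(lam)                     # expect 1, 2, 3 in some order
print(A @ V - V * lam)         # each column of the residual should be ~0
```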


Exercise 3.4.1 (easy): Are the eigenvectors unique? Can you find a different eigenvector for λ = 3 in the example above? How does it relate to the other eigenvector?

Exercise 3.4.2: Note that when the matrix is 2 × 2 you do not need to write down the augmented matrix when computing eigenvectors (if you have computed the eigenvalues correctly). Can you see why? Try it for the matrix [ 2 1; 1 2 ].

3.4.2 The eigenvalue method with distinct real eigenvalues

OK. We have the equation

~x' = P~x.

We find the eigenvalues λ₁, λ₂, . . . , λₙ of the matrix P, and corresponding eigenvectors ~v₁, ~v₂, . . . , ~vₙ. Now we notice that the functions ~v₁e^{λ₁t}, ~v₂e^{λ₂t}, . . . , ~vₙe^{λₙt} are solutions of the equation, and hence ~x = c₁~v₁e^{λ₁t} + c₂~v₂e^{λ₂t} + · · · + cₙ~vₙe^{λₙt} is a solution.

Theorem 3.4.1. Take ~x' = P~x. If P is n × n and has n distinct real eigenvalues λ₁, . . . , λₙ, then there are n linearly independent corresponding eigenvectors ~v₁, . . . , ~vₙ, and the general solution to the ODE can be written as

~x = c₁~v₁e^{λ₁t} + c₂~v₂e^{λ₂t} + · · · + cₙ~vₙe^{λₙt}.

Example 3.4.4: Suppose we take the system

~x' = [ 2 1 1 ]
      [ 1 2 0 ] ~x.
      [ 0 0 2 ]

Find the general solution.

We found the eigenvalues 1, 2, 3 earlier, and we found the eigenvector [ 1; 1; 0 ] for the eigenvalue 3. In similar fashion we find the eigenvector [ 1; −1; 0 ] for the eigenvalue 1 and [ 0; 1; −1 ] for the eigenvalue 2 (exercise: check). Hence our general solution is

~x = c₁ [ 1  ] e^t  +  c₂ [ 0  ] e^{2t}  +  c₃ [ 1 ] e^{3t}   =   [ c₁e^t + c₃e^{3t}            ]
        [ −1 ]            [ 1  ]               [ 1 ]              [ −c₁e^t + c₂e^{2t} + c₃e^{3t} ]
        [ 0  ]            [ −1 ]               [ 0 ]              [ −c₂e^{2t}                   ]

Exercise 3.4.3: Check that this really solves the system.

Note: If you write a homogeneous linear constant coefficient nth order equation as a first order system (as we did in §3.1), then the eigenvalue equation

det(P − λI) = 0

is essentially the same as the characteristic equation we got in §2.2 and §2.3.


3.4.3 Complex eigenvalues

A matrix might very well have complex eigenvalues even if all the entries are real. For example, suppose that we have the system

~x' = [ 1  1 ] ~x.
      [ −1 1 ]

Let us compute the eigenvalues of the matrix P = [ 1 1; −1 1 ]:

det(P − λI) = det [ 1−λ  1   ]  =  (1 − λ)² + 1  =  λ² − 2λ + 2  =  0.
                  [ −1   1−λ ]

From this we note that λ = 1 ± i. The corresponding eigenvectors will also be complex. For λ = 1 − i,

(P − (1 − i)I)~v = ~0,

[ i   1 ] ~v = ~0.
[ −1  i ]

It is obvious that the equations iv₁ + v₂ = 0 and −v₁ + iv₂ = 0 are multiples of each other, so we only need to consider one of them. After picking v₂ = 1, for example, we have the eigenvector ~v = [ i; 1 ]. In similar fashion we find that [ −i; 1 ] is an eigenvector corresponding to the eigenvalue 1 + i. We could write the solution as

~x = c₁ [ i ] e^{(1−i)t}  +  c₂ [ −i ] e^{(1+i)t}   =   [ c₁ie^{(1−i)t} − c₂ie^{(1+i)t} ]
        [ 1 ]                   [ 1  ]                  [ c₁e^{(1−i)t} + c₂e^{(1+i)t}  ]

But then we would need to look for complex values c₁ and c₂ to solve any initial conditions. And even then it is perhaps not completely clear that we get a real solution. We could use Euler's formula here and do the whole song and dance we did before, but we will do something a bit smarter first.

We claim that we did not have to look for the second eigenvector (nor for the second eigenvalue). All complex eigenvalues come in pairs (because the matrix P is real).

First a small side note. The real part of a complex number z can be computed as (z + z̄)/2, where the bar above z denotes the complex conjugate: if z = a + ib, then z̄ = a − ib. Note that for a real number a, ā = a. Similarly we can bar whole vectors or matrices by conjugating every entry. If a matrix P is real, then P̄ = P, and we note that the conjugate of P~x is P̄ x̄ = P x̄. In particular, taking the conjugate of the eigenvalue equation,

(P − λI)~v, conjugated, equals (P − λ̄I)~v̄.

So if ~v is an eigenvector corresponding to the eigenvalue a + ib, then its conjugate ~v̄ is an eigenvector corresponding to the eigenvalue a − ib.

Now suppose that a + ib is a complex eigenvalue of P, and ~v the corresponding eigenvector. Then

~x₁ = ~v e^{(a+ib)t}


is a (complex valued) solution of ~x' = P~x. Then note that the conjugate of e^{(a+ib)t} is e^{(a−ib)t}, and hence

~x₂ = ~x̄₁ = ~v̄ e^{(a−ib)t}

is also a solution. Now take the function

~x₃ = Re ~x₁ = Re ~v e^{(a+ib)t} = (~x₁ + ~x̄₁)/2 = (~x₁ + ~x₂)/2

is also a solution. And it is real valued! Similarly, as Im z = (z − z̄)/(2i) is the imaginary part, we find that

~x₄ = Im ~x₁ = (~x₁ − ~x₂)/(2i)

is also a real valued solution. It turns out that ~x₃ and ~x₄ are linearly independent.

Returning to our problem, we take

~x₁ = [ i ] e^{(1−i)t}  =  [ i ] (e^t cos t − ie^t sin t)  =  [ ie^t cos t + e^t sin t ]
      [ 1 ]                [ 1 ]                              [ e^t cos t − ie^t sin t ]

It is easy to see that

Re ~x₁ = [ e^t sin t ]
         [ e^t cos t ]

Im ~x₁ = [ e^t cos t  ]
         [ −e^t sin t ]

are the solutions we seek.

Exercise 3.4.4: Check that these really are solutions.

The general solution is

~x = c₁ [ e^t sin t ]  +  c₂ [ e^t cos t  ]   =   [ c₁e^t sin t + c₂e^t cos t ]
        [ e^t cos t ]        [ −e^t sin t ]       [ c₁e^t cos t − c₂e^t sin t ]

This solution is real valued for real c₁ and c₂. Now we can solve for any initial conditions that we have.

The process is this. When you have complex eigenvalues, you notice that they always come in pairs. You take one λ = a + ib from the pair and find a corresponding eigenvector ~v. You note that Re ~v e^{(a+ib)t} and Im ~v e^{(a+ib)t} are also solutions to the equation, are real valued, and are linearly independent. You go on to the next eigenvalue, which is either a real eigenvalue or another complex eigenvalue pair. Hence, you will end up with n linearly independent solutions if you had n distinct eigenvalues (real or complex).

You can now find a real valued general solution to any homogeneous system where the matrix has distinct eigenvalues. When you have repeated eigenvalues, matters get a bit more complicated and we will look at that situation in §3.7.
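The recipe above is straightforward to carry out numerically as well. A sketch (numpy assumed) for the matrix of this section:

```python
# Take one eigenpair of P = [[1, 1], [-1, 1]] and form the real solutions
# Re(v e^{lambda t}) and Im(v e^{lambda t}).
import numpy as np

P = np.array([[1.0, 1.0], [-1.0, 1.0]])
lam, V = np.linalg.eig(P)
l, v = lam[0], V[:, 0]               # one member of the conjugate pair

t = np.linspace(0, 2, 5)
z = v[:, None] * np.exp(l * t)       # complex solution v e^{lambda t}
x3, x4 = z.real, z.imag              # two real, linearly independent solutions

# Spot check x3' = P x3 using the exact derivative Re(lambda v e^{lambda t}).
print(np.max(np.abs((l * v[:, None] * np.exp(l * t)).real - P @ x3)))  # ~0
```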


3.4.4 Exercises

Exercise 3.4.5: Let A be a 3 × 3 matrix with an eigenvalue of 3 and a corresponding eigenvector ~v = [ 1; −1; 3 ]. Find A~v.

Exercise 3.4.6: a) Find the general solution of x₁' = 2x₁, x₂' = 3x₂ using the eigenvalue method (first write the system in the form ~x' = A~x). b) Solve the system by solving each equation separately and verify you get the same general solution.

Exercise 3.4.7: Find the general solution of x₁' = 3x₁ + x₂, x₂' = 2x₁ + 4x₂ using the eigenvalue method.

Exercise 3.4.8: Find the general solution of x₁' = x₁ − 2x₂, x₂' = 2x₁ + x₂ using the eigenvalue method. Do not use complex exponentials in your solution.

Exercise 3.4.9: a) Compute the eigenvalues and eigenvectors of

A = [ 9  −2 −6 ]
    [ −8  3  6 ]
    [ 10 −2 −6 ]

b) Find the general solution of ~x' = A~x.

Exercise 3.4.10: Compute the eigenvalues and eigenvectors of

[ −2 −1 −1 ]
[ 3   2  1 ]
[ −3 −1  0 ]


3.5 Two dimensional systems and their vector fields

Note: 1 lecture, should really be in EP §5.2, but is in EP §6.2

Let us take a moment to talk about homogeneous systems in the plane. We want to think about how the vector fields look and how this depends on the eigenvalues. So we have a 2 × 2 matrix P and the system

[ x ]'  =  P [ x ]    (3.2)
[ y ]        [ y ]

We will be able to visually tell how the vector field looks once we find the eigenvalues and eigenvectors of the matrix.

Case 1. Suppose that the eigenvalues are real and positive. Find the two eigenvectors and plot them in the plane. For example, take the matrix [ 1 1; 0 2 ]. The eigenvalues are 1 and 2 and the corresponding eigenvectors are [ 1; 0 ] and [ 1; 1 ]. See Figure 3.3.

Figure 3.3: Eigenvectors of P.

Now suppose that x and y are on the line determined by an eigenvector ~v for an eigenvalue λ. That is, [ x; y ] = a~v for some scalar a. Then

[ x ]'  =  P [ x ]  =  P(a~v)  =  a(P~v)  =  aλ~v.
[ y ]        [ y ]

The derivative is a multiple of ~v and hence points along the line determined by ~v. As λ > 0, the derivative points in the direction of ~v when a is positive and in the opposite direction when a is negative. Let us draw arrows on the lines to indicate the directions. See Figure 3.4 on the following page.

We fill in the rest of the arrows and we also draw a few solutions. See Figure 3.5 on the next page. You will notice that the picture looks like a source with arrows coming out from the origin. Hence we call this type of picture a source or sometimes an unstable node.

Case 2. Suppose both eigenvalues were negative. For example, take the negation of the matrixin case 1,

�1 �10 �2

. The eigenvalues are �1 and �2 and the corresponding eigenvectors are the same,⇥ 1

0⇤

and⇥ 1

1⇤

. The calculation and the picture are almost the same. The only di↵erence is that theeigenvalues are negative and hence all arrows are reversed. We get the picture in Figure 3.6 on thefollowing page. We call this kind of picture a sink or sometimes a stable node.

Case 3. Suppose one eigenvalue is positive and one is negative. For example the matrix $\begin{bmatrix} 1 & 1 \\ 0 & -2 \end{bmatrix}$. The eigenvalues are 1 and −2 and the corresponding eigenvectors are $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -3 \end{bmatrix}$. We reverse the arrows on the one line (corresponding to the negative eigenvalue) and we obtain the picture in Figure 3.7. We call this picture a saddle point.

[Figure 3.4: Eigenvectors of P with directions.]

[Figure 3.5: Example source vector field with eigenvectors and solutions.]

[Figure 3.6: Example sink vector field with eigenvectors and solutions.]

[Figure 3.7: Example saddle vector field with eigenvectors and solutions.]

In the next three cases we will assume the eigenvalues are complex. In this case the eigenvectors are also complex and we cannot just plot them in the plane.

Case 4. Suppose the eigenvalues are purely imaginary. That is, suppose the eigenvalues are ±ib. For example, let $P = \begin{bmatrix} 0 & 1 \\ -4 & 0 \end{bmatrix}$. The eigenvalues turn out to be ±2i and the eigenvectors are $\begin{bmatrix} 1 \\ 2i \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -2i \end{bmatrix}$. We take the eigenvalue 2i and its eigenvector $\begin{bmatrix} 1 \\ 2i \end{bmatrix}$ and note that the real and imaginary parts of $\vec{v} e^{i2t}$ are

$$\operatorname{Re} \begin{bmatrix} 1 \\ 2i \end{bmatrix} e^{i2t} = \begin{bmatrix} \cos 2t \\ -2 \sin 2t \end{bmatrix}, \qquad \operatorname{Im} \begin{bmatrix} 1 \\ 2i \end{bmatrix} e^{i2t} = \begin{bmatrix} \sin 2t \\ 2 \cos 2t \end{bmatrix}.$$

Which combination of them we take just depends on the initial conditions. So we might as well just take the real part. Notice that it is a parametric equation for an ellipse, and the same is true of the imaginary part and in fact of any linear combination of them. It is not difficult to see that this is what happens in general when the eigenvalues are purely imaginary. So when the eigenvalues are purely imaginary, you get ellipses for your solutions. This type of picture is sometimes called a center. See Figure 3.8.

[Figure 3.8: Example center vector field.]

[Figure 3.9: Example spiral source vector field.]

Case 5. Now suppose the complex eigenvalues have positive real part. That is, suppose the eigenvalues are a ± ib for some a > 0. For example, let $P = \begin{bmatrix} 1 & 1 \\ -4 & 1 \end{bmatrix}$. The eigenvalues turn out to be 1 ± 2i and the eigenvectors are $\begin{bmatrix} 1 \\ 2i \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -2i \end{bmatrix}$. We take 1 + 2i and its eigenvector $\begin{bmatrix} 1 \\ 2i \end{bmatrix}$ and find that the real and imaginary parts of $\vec{v} e^{(1+2i)t}$ are

$$\operatorname{Re} \begin{bmatrix} 1 \\ 2i \end{bmatrix} e^{(1+2i)t} = e^t \begin{bmatrix} \cos 2t \\ -2 \sin 2t \end{bmatrix}, \qquad \operatorname{Im} \begin{bmatrix} 1 \\ 2i \end{bmatrix} e^{(1+2i)t} = e^t \begin{bmatrix} \sin 2t \\ 2 \cos 2t \end{bmatrix}.$$

Now note the $e^t$ in front of the solutions. This means that the solutions grow in magnitude while spinning around the origin. Hence we get a spiral source. See Figure 3.9.


Case 6. Finally suppose the complex eigenvalues have negative real part. That is, suppose the eigenvalues are −a ± ib for some a > 0. For example, let $P = \begin{bmatrix} -1 & -1 \\ 4 & -1 \end{bmatrix}$. The eigenvalues turn out to be −1 ± 2i and the eigenvectors are $\begin{bmatrix} 1 \\ -2i \end{bmatrix}$ and $\begin{bmatrix} 1 \\ 2i \end{bmatrix}$. We take −1 − 2i and its eigenvector $\begin{bmatrix} 1 \\ 2i \end{bmatrix}$ and find that the real and imaginary parts of $\vec{v} e^{(-1-2i)t}$ are

$$\operatorname{Re} \begin{bmatrix} 1 \\ 2i \end{bmatrix} e^{(-1-2i)t} = e^{-t} \begin{bmatrix} \cos 2t \\ 2 \sin 2t \end{bmatrix}, \qquad \operatorname{Im} \begin{bmatrix} 1 \\ 2i \end{bmatrix} e^{(-1-2i)t} = e^{-t} \begin{bmatrix} -\sin 2t \\ 2 \cos 2t \end{bmatrix}.$$

Now note the $e^{-t}$ in front of the solutions. This means that the solutions shrink in magnitude while spinning around the origin. Hence we get a spiral sink. See Figure 3.10.

[Figure 3.10: Example spiral sink vector field.]

We summarize the behavior of linear homogeneous two dimensional systems in Table 3.1.

Eigenvalues                          Behavior
real and both positive               source / unstable node
real and both negative               sink / stable node
real and opposite signs              saddle
purely imaginary                     center point / ellipses
complex with positive real part      spiral source
complex with negative real part      spiral sink

Table 3.1: Summary of behavior of linear homogeneous two dimensional systems.


3.5.1 Exercises

Exercise 3.5.1: Take the equation $mx'' + cx' + kx = 0$, with m > 0, c ≥ 0, k > 0, for the mass-spring system. a) Convert this to a system of first order equations. b) Classify for which m, c, k you get which behavior. c) Can you explain from physical intuition why you do not get all the different kinds of behavior here?

Exercise 3.5.2: Can you find what happens in the case when $P = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$? In this case the eigenvalue is repeated and there is only one eigenvector. What picture does this look like?

Exercise 3.5.3: Can you find what happens in the case when $P = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$? Does this look like any of the pictures we have drawn?


3.6 Second order systems and applications

Note: more than 2 lectures, §5.3 in EP

3.6.1 Undamped mass spring systems

While we did say that we will usually only look at first order systems, it is sometimes more convenient to study the system in the way it arises naturally. For example, suppose we have 3 masses connected by springs between two walls. We could pick any higher number, and the math would be essentially the same, but for simplicity we pick 3 right now. And let us assume no friction, that is, the system is undamped. The masses are $m_1$, $m_2$, and $m_3$ and the spring constants are $k_1$, $k_2$, $k_3$, and $k_4$. Let $x_1$ be the displacement from rest position of the first mass, and $x_2$ and $x_3$ the displacements of the second and third mass. We will make, as usual, positive values go right (as $x_1$ grows, mass 1 is moving right). See Figure 3.11.

[Figure 3.11: System of masses and springs.]

This simple system turns up in unexpected places. Note for example that our world really consists of small particles of matter interacting together. When we try this system with many more masses, this is a good approximation to how an elastic material will behave. In fact by somehow taking a limit of the number of masses going to infinity we obtain the continuous one dimensional wave equation. But we digress.

Let us set up the equations for the three mass system. By Hooke's law we have that the force acting on a mass equals the spring compression times the spring constant. By Newton's second law we again have that force is mass times acceleration. So if we sum the forces acting on each mass and put the right sign in front of each depending on the direction in which it is acting, we end up with the system

$$m_1 x_1'' = -k_1 x_1 + k_2(x_2 - x_1) = -(k_1 + k_2)x_1 + k_2 x_2,$$
$$m_2 x_2'' = -k_2(x_2 - x_1) + k_3(x_3 - x_2) = k_2 x_1 - (k_2 + k_3)x_2 + k_3 x_3,$$
$$m_3 x_3'' = -k_3(x_3 - x_2) - k_4 x_3 = k_3 x_2 - (k_3 + k_4)x_3.$$

We define the matrices

$$M = \begin{bmatrix} m_1 & 0 & 0 \\ 0 & m_2 & 0 \\ 0 & 0 & m_3 \end{bmatrix} \qquad \text{and} \qquad K = \begin{bmatrix} -(k_1+k_2) & k_2 & 0 \\ k_2 & -(k_2+k_3) & k_3 \\ 0 & k_3 & -(k_3+k_4) \end{bmatrix}.$$

We write the equation simply as

$$M\vec{x}\,'' = K\vec{x}.$$

At this point we could introduce 3 new variables and write out a system of 6 equations. We claim this simple setup is easier to handle as a second order system. We will call $\vec{x}$ the displacement vector, M the mass matrix, and K the stiffness matrix.

Exercise 3.6.1: Do this setup for 4 masses (find the matrices M and K). Do it for 5 masses. Can you find a prescription to do it for n masses?

As before we will want to "divide by M." In this case this means computing the inverse of M. All the masses are nonzero and it is easy to compute the inverse, as the matrix is diagonal:

$$M^{-1} = \begin{bmatrix} \frac{1}{m_1} & 0 & 0 \\ 0 & \frac{1}{m_2} & 0 \\ 0 & 0 & \frac{1}{m_3} \end{bmatrix}.$$

This fact follows readily by how we multiply diagonal matrices. You should verify that $MM^{-1} = M^{-1}M = I$ as an exercise.

We let $A = M^{-1}K$ and we look at the system $\vec{x}\,'' = M^{-1}K\vec{x}$, or

$$\vec{x}\,'' = A\vec{x}.$$

Many real world systems can be modeled by this equation. For simplicity we will keep the given masses-and-springs setup in mind. We try a solution of the form

$$\vec{x} = \vec{v} e^{\alpha t}.$$

We note that for this guess, $\vec{x}\,'' = \alpha^2 \vec{v} e^{\alpha t}$. We plug into the equation and get

$$\alpha^2 \vec{v} e^{\alpha t} = A \vec{v} e^{\alpha t}.$$

We can divide by $e^{\alpha t}$ to get $\alpha^2 \vec{v} = A\vec{v}$. Hence if $\alpha^2$ is an eigenvalue of A and $\vec{v}$ is the corresponding eigenvector, we have found a solution.

In our example, and in many others, it turns out that A has negative real eigenvalues (and possibly a zero eigenvalue). So we will study only this case here. When an eigenvalue $\lambda$ is negative, it means that $\alpha^2 = \lambda$ is negative. Hence there is some real number $\omega$ such that $-\omega^2 = \lambda$. Then $\alpha = \pm i\omega$. The solution we guessed was

$$\vec{x} = \vec{v}(\cos \omega t + i \sin \omega t).$$

By again taking real and imaginary parts (note that $\vec{v}$ is real), we again find that $\vec{v} \cos \omega t$ and $\vec{v} \sin \omega t$ are linearly independent solutions.

If an eigenvalue is zero, it turns out that $\vec{v}$ and $\vec{v}t$ are solutions if $\vec{v}$ is the corresponding eigenvector.


Exercise 3.6.2: Show that if A has a zero eigenvalue and $\vec{v}$ is the corresponding eigenvector, then $\vec{x} = \vec{v}(a + bt)$ is a solution of $\vec{x}\,'' = A\vec{x}$ for arbitrary constants a and b.

Theorem 3.6.1. Let A be an n × n matrix with n distinct real negative eigenvalues, which we denote by $-\omega_1^2, -\omega_2^2, \ldots, -\omega_n^2$, and corresponding eigenvectors $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$. Then

$$\vec{x}(t) = \sum_{i=1}^{n} \vec{v}_i \left(a_i \cos \omega_i t + b_i \sin \omega_i t\right)$$

is the general solution of

$$\vec{x}\,'' = A\vec{x},$$

for some arbitrary constants $a_i$ and $b_i$. If A has a zero eigenvalue and all other eigenvalues are distinct and negative, that is $\omega_1 = 0$, then the general solution becomes

$$\vec{x}(t) = \vec{v}_1(a_1 + b_1 t) + \sum_{i=2}^{n} \vec{v}_i \left(a_i \cos \omega_i t + b_i \sin \omega_i t\right).$$

Now note that we can use this solution and the setup from the introduction of this section even when some of the masses and springs are missing. Simply, when there are say 2 masses and only 2 springs, take only the equations for the two masses and set all the spring constants that are missing to zero.
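Numerically, the frequencies and mode shapes in the theorem come straight out of an eigenvalue computation; here is a minimal sketch (not from the text), using the two-mass data of the example that follows.

```python
# A sketch (not from the text): natural frequencies omega_i and mode shapes
# v_i of x'' = Ax with A = M^{-1} K, assuming negative real eigenvalues.
import numpy as np

M = np.array([[2.0, 0.0], [0.0, 1.0]])
K = np.array([[-6.0, 2.0], [2.0, -2.0]])
A = np.linalg.solve(M, K)          # A = M^{-1} K = [[-3, 1], [2, -2]]

lams, vecs = np.linalg.eig(A)      # lambda_i = -omega_i^2
omegas = np.sqrt(-lams)
print(omegas)                      # 1 and 2 (in some order), as in Example 3.6.1
print(vecs)                        # columns are scalar multiples of the v_i
```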

3.6.2 Examples

Example 3.6.1: Suppose we have the system in Figure 3.12, with $m_1 = 2$, $m_2 = 1$, $k_1 = 4$, and $k_2 = 2$.

[Figure 3.12: System of masses and springs.]

The equations we write down are

$$\begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix} \vec{x}\,'' = \begin{bmatrix} -(4+2) & 2 \\ 2 & -2 \end{bmatrix} \vec{x},$$

or

$$\vec{x}\,'' = \begin{bmatrix} -3 & 1 \\ 2 & -2 \end{bmatrix} \vec{x}.$$


We find the eigenvalues of A to be $\lambda = -1, -4$ (exercise). We find the corresponding eigenvectors to be $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$ respectively (exercise). We check the theorem and note that $\omega_1 = 1$ and $\omega_2 = 2$. Hence the general solution is

$$\vec{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} (a_1 \cos t + b_1 \sin t) + \begin{bmatrix} 1 \\ -1 \end{bmatrix} (a_2 \cos 2t + b_2 \sin 2t).$$

The two terms in the solution represent the two so-called natural or normal modes of oscillation. And the two (angular) frequencies are the natural frequencies. The two modes are plotted in Figure 3.13.

[Figure 3.13: The two modes of the mass spring system. In the left plot the masses are moving in unison and in the right plot the masses are moving in opposite directions.]

Let us write the solution as

$$\vec{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} c_1 \cos(t - \alpha_1) + \begin{bmatrix} 1 \\ -1 \end{bmatrix} c_2 \cos(2t - \alpha_2).$$

The first term,

$$\vec{x}_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix} c_1 \cos(t - \alpha_1) = \begin{bmatrix} c_1 \cos(t - \alpha_1) \\ 2c_1 \cos(t - \alpha_1) \end{bmatrix},$$

corresponds to the mode where the masses move synchronously in the same direction. On the other hand the second term,

$$\vec{x}_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix} c_2 \cos(2t - \alpha_2) = \begin{bmatrix} c_2 \cos(2t - \alpha_2) \\ -c_2 \cos(2t - \alpha_2) \end{bmatrix},$$

corresponds to the mode where the masses move synchronously but in opposite directions.

The general solution is a combination of the two modes. That is, the initial conditions determine the amplitude and phase shift of each mode.


Example 3.6.2: Let us do another example. In this example we have two toy rail cars. Car 1 of mass 2 kg is travelling at 3 m/s towards the second rail car of mass 1 kg. There is a bumper on the second rail car which engages once the cars hit (it connects the two cars) and does not let go. The bumper acts like a spring of spring constant k = 2 N/m. The second car is 10 meters from a wall. See Figure 3.14.

[Figure 3.14: The crash of two rail cars.]

We want to ask several questions. At what time after the cars link does impact with the wall happen? What is the speed of car 2 when it hits the wall?

OK, let us first set the system up. Let us assume that time t = 0 is the time when the two cars link up. Let $x_1$ be the displacement of the first car from its position at t = 0, and let $x_2$ be the displacement of the second car from its original location. Then the time when $x_2(t) = 10$ is exactly the time when impact with the wall occurs. For this t, $x_2'(t)$ is the speed at impact. This system acts just like the system of the previous example but without $k_1$. Hence the equation is

$$\begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix} \vec{x}\,'' = \begin{bmatrix} -2 & 2 \\ 2 & -2 \end{bmatrix} \vec{x},$$

or

$$\vec{x}\,'' = \begin{bmatrix} -1 & 1 \\ 2 & -2 \end{bmatrix} \vec{x}.$$

We compute the eigenvalues of A. It is not hard to see that the eigenvalues are 0 and −3 (exercise). Furthermore, the eigenvectors are $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -2 \end{bmatrix}$ respectively (exercise). We note that $\omega_2 = \sqrt{3}$ and we use the second part of the theorem to find our general solution to be

$$\vec{x} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} (a_1 + b_1 t) + \begin{bmatrix} 1 \\ -2 \end{bmatrix} \left(a_2 \cos \sqrt{3}\,t + b_2 \sin \sqrt{3}\,t\right) = \begin{bmatrix} a_1 + b_1 t + a_2 \cos \sqrt{3}\,t + b_2 \sin \sqrt{3}\,t \\ a_1 + b_1 t - 2a_2 \cos \sqrt{3}\,t - 2b_2 \sin \sqrt{3}\,t \end{bmatrix}.$$

We now apply the initial conditions. First the cars start at position 0, so $x_1(0) = 0$ and $x_2(0) = 0$. The first car is travelling at 3 m/s, so $x_1'(0) = 3$, and the second car starts at rest, so $x_2'(0) = 0$. The first condition says

$$\vec{0} = \vec{x}(0) = \begin{bmatrix} a_1 + a_2 \\ a_1 - 2a_2 \end{bmatrix}.$$


It is not hard to see that this implies that $a_1 = a_2 = 0$. We plug in $a_1$ and $a_2$ and differentiate to get

$$\vec{x}\,'(t) = \begin{bmatrix} b_1 + \sqrt{3}\, b_2 \cos \sqrt{3}\,t \\ b_1 - 2\sqrt{3}\, b_2 \cos \sqrt{3}\,t \end{bmatrix}.$$

So

$$\begin{bmatrix} 3 \\ 0 \end{bmatrix} = \vec{x}\,'(0) = \begin{bmatrix} b_1 + \sqrt{3}\, b_2 \\ b_1 - 2\sqrt{3}\, b_2 \end{bmatrix}.$$

It is not hard to solve these two equations to find $b_1 = 2$ and $b_2 = \frac{1}{\sqrt{3}}$. Hence the position of our cars is (until the impact with the wall)

$$\vec{x} = \begin{bmatrix} 2t + \frac{1}{\sqrt{3}} \sin \sqrt{3}\,t \\ 2t - \frac{2}{\sqrt{3}} \sin \sqrt{3}\,t \end{bmatrix}.$$

Note how the presence of the zero eigenvalue resulted in a term containing t. This means that the carts will be travelling in the positive direction as time grows, which is what we expect.

What we are really interested in is the second expression, the one for $x_2$. We have $x_2(t) = 2t - \frac{2}{\sqrt{3}} \sin \sqrt{3}\,t$. See Figure 3.15 for the plot of $x_2$ versus time.

[Figure 3.15: Position of the second car in time (ignoring the wall).]

Just from the graph we can see that the time of impact will be a little more than 5 seconds from time zero. For this you have to solve the equation $10 = x_2(t) = 2t - \frac{2}{\sqrt{3}} \sin \sqrt{3}\,t$. Using a computer (or even a graphing calculator) we find that $t_{\text{impact}} \approx 5.22$ seconds.

As for the speed, we note that $x_2' = 2 - 2\cos \sqrt{3}\,t$. At the time of impact (5.22 seconds from t = 0) we get that $x_2'(t_{\text{impact}}) \approx 3.85$.

The maximum speed is the maximum of $2 - 2\cos \sqrt{3}\,t$, which is 4. We are travelling at almost the maximum speed when we hit the wall.


Now suppose that Bob is a tiny person sitting on car 2. Bob has a martini in his hand and would like not to spill it. Let us suppose Bob would not spill his martini when the first car links up with car 2, but if car 2 hits the wall at any speed greater than zero, Bob will spill his drink. Suppose Bob can move car 2 a few meters back and forth from the wall (he cannot go all the way to the wall, nor can he get out of the way of the first car). Is there a "safe" distance for him to be at? A distance such that the impact with the wall is at zero speed?

Actually, the answer is yes. From looking at Figure 3.15, we note the "plateau" between t = 3 and t = 4. There is a point where the speed is zero. We just need to solve $x_2'(t) = 0$. This is when $\cos \sqrt{3}\,t = 1$, or in other words when $t = \frac{2\pi}{\sqrt{3}}, \frac{4\pi}{\sqrt{3}}$, etc. Plugging in the first of these we get $x_2\left(\frac{2\pi}{\sqrt{3}}\right) = \frac{4\pi}{\sqrt{3}} \approx 7.26$. So a "safe" distance is about 7 and a quarter meters from the wall.

Alternatively Bob could move away from the wall towards the incoming car, where another safe distance is $\frac{8\pi}{\sqrt{3}} \approx 14.51$, and so on, using all the different t such that $x_2'(t) = 0$. Of course t = 0 is always a solution here, corresponding to $x_2 = 0$, but that means standing right at the wall.

3.6.3 Forced oscillations

Finally we move to forced oscillations. Suppose that now our system is

$$\vec{x}\,'' = A\vec{x} + \vec{F} \cos \omega t. \tag{3.3}$$

That is, we are adding periodic forcing to the system in the direction of the vector $\vec{F}$.

Just like before, this system just requires us to find one particular solution $\vec{x}_p$, add it to the general solution of the associated homogeneous system $\vec{x}_c$, and we will have the general solution to (3.3). Let us suppose that $\omega$ is not one of the natural frequencies of $\vec{x}\,'' = A\vec{x}$; then we can guess

$$\vec{x}_p = \vec{c} \cos \omega t,$$

where $\vec{c}$ is an unknown constant vector. Note that we do not need to use sine since there are only second derivatives. We solve for $\vec{c}$ to find $\vec{x}_p$. This is really just the method of undetermined coefficients for systems. Let us differentiate $\vec{x}_p$ twice to get

$$\vec{x}_p'' = -\omega^2 \vec{c} \cos \omega t.$$

Now plug into the equation:

$$-\omega^2 \vec{c} \cos \omega t = A\vec{c} \cos \omega t + \vec{F} \cos \omega t.$$

We can cancel the cosine and rearrange to obtain

$$(A + \omega^2 I)\vec{c} = -\vec{F}.$$

So

$$\vec{c} = (A + \omega^2 I)^{-1}(-\vec{F}).$$

Of course, for this to work, $(A + \omega^2 I) = (A - (-\omega^2)I)$ must be invertible. That matrix is invertible if and only if $-\omega^2$ is not an eigenvalue of A. That is true if and only if $\omega$ is not a natural frequency of the system.
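In other words, $\vec{c}$ is a single linear solve; a minimal numerical sketch (not from the text), using the data of the example below:

```python
# A sketch (not from the text): compute c = (A + omega^2 I)^{-1} (-F)
# for the forcing in Example 3.6.3 below.
import numpy as np

A = np.array([[-3.0, 1.0], [2.0, -2.0]])
F = np.array([0.0, 2.0])
omega = 3.0

c = np.linalg.solve(A + omega**2 * np.eye(2), -F)
print(c)   # [0.05, -0.3], i.e. [1/20, -3/10]
```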

Example 3.6.3: Let us take the example in Figure 3.12 with the same parameters as before: $m_1 = 2$, $m_2 = 1$, $k_1 = 4$, and $k_2 = 2$. Now suppose that there is a force $2\cos 3t$ acting on the second cart.

The equation is

$$\vec{x}\,'' = \begin{bmatrix} -3 & 1 \\ 2 & -2 \end{bmatrix} \vec{x} + \begin{bmatrix} 0 \\ 2 \end{bmatrix} \cos 3t.$$

We have solved the associated homogeneous equation before and found the complementary solution to be

$$\vec{x}_c = \begin{bmatrix} 1 \\ 2 \end{bmatrix} (a_1 \cos t + b_1 \sin t) + \begin{bmatrix} 1 \\ -1 \end{bmatrix} (a_2 \cos 2t + b_2 \sin 2t).$$

We note that the natural frequencies were 1 and 2. Hence 3 is not a natural frequency and we can try $\vec{c} \cos 3t$. We invert $(A + 3^2 I)$:

$$\left(\begin{bmatrix} -3 & 1 \\ 2 & -2 \end{bmatrix} + 3^2 I\right)^{-1} = \begin{bmatrix} 6 & 1 \\ 2 & 7 \end{bmatrix}^{-1} = \begin{bmatrix} \frac{7}{40} & \frac{-1}{40} \\ \frac{-1}{20} & \frac{3}{20} \end{bmatrix}.$$

Hence,

$$\vec{c} = (A + \omega^2 I)^{-1}(-\vec{F}) = \begin{bmatrix} \frac{7}{40} & \frac{-1}{40} \\ \frac{-1}{20} & \frac{3}{20} \end{bmatrix} \begin{bmatrix} 0 \\ -2 \end{bmatrix} = \begin{bmatrix} \frac{1}{20} \\ \frac{-3}{10} \end{bmatrix}.$$

Combining with the general solution of the associated homogeneous problem, we get that the general solution to $\vec{x}\,'' = A\vec{x} + \vec{F} \cos \omega t$ is

$$\vec{x} = \vec{x}_c + \vec{x}_p = \begin{bmatrix} 1 \\ 2 \end{bmatrix} (a_1 \cos t + b_1 \sin t) + \begin{bmatrix} 1 \\ -1 \end{bmatrix} (a_2 \cos 2t + b_2 \sin 2t) + \begin{bmatrix} \frac{1}{20} \\ \frac{-3}{10} \end{bmatrix} \cos 3t.$$

The constants $a_1$, $a_2$, $b_1$, and $b_2$ must then be solved for given any initial conditions.

If $\omega$ is a natural frequency of the system, resonance occurs and you will have to try a particular solution of the form

$$\vec{x}_p = \vec{c}\, t \sin \omega t + \vec{d} \cos \omega t.$$

That is assuming that all eigenvalues of the coefficient matrix are distinct. Note that the amplitude of this solution grows without bound as t grows.


3.6.4 Exercises

Exercise 3.6.3: Find a particular solution to

$$\vec{x}\,'' = \begin{bmatrix} -3 & 1 \\ 2 & -2 \end{bmatrix} \vec{x} + \begin{bmatrix} 0 \\ 2 \end{bmatrix} \cos 2t.$$

Exercise 3.6.4: Let us take the example in Figure 3.12 with the same parameters as before: $m_1 = 2$, $k_1 = 4$, and $k_2 = 2$, except for $m_2$, which is unknown. Suppose that there is a force $\cos 5t$ acting on the first mass. Find an $m_2$ such that there exists a particular solution where the first mass does not move.

Note: This idea is called dynamic damping. In practice there will be a small amount of damping, and so any transient solution will disappear and after long enough time the first mass will always come to a stop.

Exercise 3.6.5: Let us take Example 3.6.2, but suppose that at the time of impact, cart 2 is moving to the left at the speed of 3 m/s. a) Find the behavior of the system after linkup. b) Will the second car hit the wall, or will it be moving away from the wall as time goes on? c) At what speed would the first car have to be travelling for the system to essentially stay in place after linkup?

Exercise 3.6.6: Let us take the example in Figure 3.12 with parameters $m_1 = m_2 = 1$, $k_1 = k_2 = 1$. Does there exist a set of initial conditions for which the first cart moves but the second cart does not? If so, find those conditions; if not, argue why not.


3.7 Multiple eigenvalues

Note: 1–2 lectures, §5.4 in EP

It may very well happen that a matrix has some "repeated" eigenvalues. That is, the characteristic equation $\det(A - \lambda I) = 0$ may have repeated roots. As we have said before, this is actually unlikely to happen for a random matrix. If you take a small perturbation of A (you change the entries of A slightly), you will get a matrix with distinct eigenvalues. As any system you will want to solve in practice is an approximation to reality anyway, it is not indispensable to know how to solve these corner cases. But it may happen on occasion that it is easier or desirable to solve such a system directly.

3.7.1 Geometric multiplicity

Take the diagonal matrix

$$A = \begin{bmatrix} 3 & 0 \\ 0 & 3 \end{bmatrix}.$$

A has an eigenvalue 3 of multiplicity 2. We usually call the multiplicity of the eigenvalue in the characteristic equation the algebraic multiplicity. In this case, there exist 2 linearly independent eigenvectors, $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$. This means that the so-called geometric multiplicity of this eigenvalue is 2.

In all the theorems where we required a matrix to have n distinct eigenvalues, we only really needed to have n linearly independent eigenvectors. For example, $\vec{x}\,' = A\vec{x}$ has the general solution

$$\vec{x} = c_1 \begin{bmatrix} 1 \\ 0 \end{bmatrix} e^{3t} + c_2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} e^{3t}.$$

Let us restate the theorem about real eigenvalues. In the following theorem we will repeat eigenvalues according to (algebraic) multiplicity. So for A above we would say that it has eigenvalues 3 and 3.

Theorem 3.7.1. Take $\vec{x}\,' = P\vec{x}$. If P is n × n and has n real eigenvalues (not necessarily distinct), $\lambda_1, \ldots, \lambda_n$, and if there are n linearly independent corresponding eigenvectors $\vec{v}_1, \ldots, \vec{v}_n$, then the general solution to the ODE can be written as

$$\vec{x} = c_1 \vec{v}_1 e^{\lambda_1 t} + c_2 \vec{v}_2 e^{\lambda_2 t} + \cdots + c_n \vec{v}_n e^{\lambda_n t}.$$

The geometric multiplicity of an eigenvalue of algebraic multiplicity n is equal to the number of linearly independent eigenvectors we can find. It is not hard to see that the geometric multiplicity is always less than or equal to the algebraic multiplicity. Above, we therefore handled the case when these two numbers are equal. If the geometric multiplicity is equal to the algebraic multiplicity we say the eigenvalue is complete.


The hypothesis of the theorem could, therefore, be stated as saying that if all the eigenvalues of P are complete, then there are n linearly independent eigenvectors and thus we have the given general solution.

Note that if the geometric multiplicity of an eigenvalue is 2 or greater, then the set of linearly independent eigenvectors is not unique up to multiples as it was before. For example, for the diagonal matrix A above we could also pick eigenvectors $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$, or in fact any pair of two linearly independent vectors.

3.7.2 Defective eigenvalues

If an n × n matrix has less than n linearly independent eigenvectors, it is said to be deficient. Then there is at least one eigenvalue with an algebraic multiplicity that is higher than its geometric multiplicity. We call this eigenvalue defective and the difference between the two multiplicities we call the defect.

Example 3.7.1: The matrix

$$\begin{bmatrix} 3 & 1 \\ 0 & 3 \end{bmatrix}$$

has an eigenvalue 3 of algebraic multiplicity 2. Let us try to compute the eigenvectors:

$$\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \vec{0}.$$

We must have that $v_2 = 0$. Hence any eigenvector is of the form $\begin{bmatrix} v_1 \\ 0 \end{bmatrix}$. Any two such vectors are linearly dependent, and hence the geometric multiplicity of the eigenvalue is 1. Therefore, the defect is 1, and we can no longer apply the eigenvalue method directly to a system of ODEs with such a coefficient matrix.

The key observation we will use here is that if $\lambda$ is an eigenvalue of A of algebraic multiplicity m, then we will be able to find m linearly independent vectors solving the equation $(A - \lambda I)^m \vec{v} = \vec{0}$. We will call these the generalized eigenvectors.

Let us continue with the example $A = \begin{bmatrix} 3 & 1 \\ 0 & 3 \end{bmatrix}$ and the equation $\vec{x}\,' = A\vec{x}$. We have an eigenvalue $\lambda = 3$ of (algebraic) multiplicity 2 and defect 1. We have found one eigenvector $\vec{v}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$. We have the solution

$$\vec{x}_1 = \vec{v}_1 e^{3t}.$$

In this case, let us try (in the spirit of repeated roots of the characteristic equation for a single equation) another solution of the form

$$\vec{x}_2 = (\vec{v}_2 + \vec{v}_1 t)\, e^{3t}.$$


We differentiate to get

$$\vec{x}_2' = \vec{v}_1 e^{3t} + 3(\vec{v}_2 + \vec{v}_1 t)\, e^{3t} = (3\vec{v}_2 + \vec{v}_1)\, e^{3t} + 3\vec{v}_1 t e^{3t}.$$

$\vec{x}_2'$ must equal $A\vec{x}_2$, and

$$A\vec{x}_2 = A(\vec{v}_2 + \vec{v}_1 t)\, e^{3t} = A\vec{v}_2 e^{3t} + A\vec{v}_1 t e^{3t}.$$

By looking at the coefficients of $e^{3t}$ and $te^{3t}$ we see $3\vec{v}_2 + \vec{v}_1 = A\vec{v}_2$ and $3\vec{v}_1 = A\vec{v}_1$. This means that

$$(A - 3I)\vec{v}_1 = \vec{0}, \qquad \text{and} \qquad (A - 3I)\vec{v}_2 = \vec{v}_1.$$

If these two equations are satisfied, then $\vec{x}_2$ is a solution. We know the first of these equations is satisfied because $\vec{v}_1$ is an eigenvector. If we plug the second equation into the first we find that

$$(A - 3I)(A - 3I)\vec{v}_2 = \vec{0}, \qquad \text{or} \qquad (A - 3I)^2 \vec{v}_2 = \vec{0}.$$

If we can, therefore, find a $\vec{v}_2$ which solves $(A - 3I)^2 \vec{v}_2 = \vec{0}$ and such that $(A - 3I)\vec{v}_2 = \vec{v}_1$, we are done. This is just a bunch of linear equations to solve and we are by now very good at that.

We notice that in this simple case $(A - 3I)^2$ is just the zero matrix (exercise). Hence, any vector $\vec{v}_2$ solves $(A - 3I)^2 \vec{v}_2 = \vec{0}$. So we just have to make sure that $(A - 3I)\vec{v}_2 = \vec{v}_1$. Write

$$\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}.$$

By inspection we see that letting a = 0 (a could be anything in fact) and b = 1 does the job. Hence we can take $\vec{v}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$. So our general solution to $\vec{x}\,' = A\vec{x}$ is

$$\vec{x} = c_1 \begin{bmatrix} 1 \\ 0 \end{bmatrix} e^{3t} + c_2 \left(\begin{bmatrix} 0 \\ 1 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix} t\right) e^{3t} = \begin{bmatrix} c_1 e^{3t} + c_2 t e^{3t} \\ c_2 e^{3t} \end{bmatrix}.$$

Let us check that we really do have the solution. First $x_1' = 3c_1 e^{3t} + c_2 e^{3t} + 3c_2 t e^{3t} = 3x_1 + x_2$, good. Now $x_2' = 3c_2 e^{3t} = 3x_2$, good.

Note that the system $\vec{x}\,' = A\vec{x}$ has a simpler solution method in this case, since A is a triangular matrix. In particular, the equation for $x_2$ does not depend on $x_1$.

Exercise 3.7.1: Solve $\vec{x}\,' = \begin{bmatrix} 3 & 1 \\ 0 & 3 \end{bmatrix} \vec{x}$ by first solving for $x_2$ and then for $x_1$ independently. Now check that you got the same solution as we did above.

Let us describe the general algorithm. First, suppose $\lambda$ has multiplicity 2 and defect 1. Find an eigenvector $\vec{v}_1$ of $\lambda$. Then find a vector $\vec{v}_2$ such that

$$(A - \lambda I)^2 \vec{v}_2 = \vec{0},$$
$$(A - \lambda I)\vec{v}_2 = \vec{v}_1.$$


This gives us two linearly independent solutions

$$\vec{x}_1 = \vec{v}_1 e^{\lambda t},$$
$$\vec{x}_2 = \left(\vec{v}_2 + \vec{v}_1 t\right) e^{\lambda t}.$$

This machinery can also be generalized to larger matrices and higher defects. We will not go over the details, but let us just state the ideas. Suppose that A has an eigenvalue $\lambda$ of multiplicity m. We find vectors such that

$$(A - \lambda I)^k \vec{v} = \vec{0}, \qquad \text{but} \qquad (A - \lambda I)^{k-1} \vec{v} \neq \vec{0}.$$

Such vectors are called generalized eigenvectors. For every eigenvector $\vec{v}_1$ we find a chain of generalized eigenvectors $\vec{v}_2$ through $\vec{v}_k$ such that:

$$(A - \lambda I)\vec{v}_1 = \vec{0},$$
$$(A - \lambda I)\vec{v}_2 = \vec{v}_1,$$
$$\vdots$$
$$(A - \lambda I)\vec{v}_k = \vec{v}_{k-1}.$$

We form the linearly independent solutions

$$\vec{x}_1 = \vec{v}_1 e^{\lambda t},$$
$$\vec{x}_2 = (\vec{v}_2 + \vec{v}_1 t)\, e^{\lambda t},$$
$$\vdots$$
$$\vec{x}_k = \left(\vec{v}_k + \vec{v}_{k-1} t + \cdots + \vec{v}_2 \frac{t^{k-2}}{(k-2)!} + \vec{v}_1 \frac{t^{k-1}}{(k-1)!}\right) e^{\lambda t}.$$

We proceed to find chains until we form m linearly independent solutions (m is the multiplicity). You may need to find several chains for every eigenvalue.
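A chain can also be computed numerically; a minimal sketch (not from the text) for the defective matrix of Example 3.7.1, using a least-squares solve since $A - \lambda I$ is singular:

```python
# A sketch (not from the text): compute an eigenvector / generalized
# eigenvector chain for the defective matrix A = [[3, 1], [0, 3]].
import numpy as np

A = np.array([[3.0, 1.0], [0.0, 3.0]])
lam = 3.0
N = A - lam * np.eye(2)                      # singular, so use least squares

v1 = np.array([1.0, 0.0])                    # eigenvector: N v1 = 0
v2 = np.linalg.lstsq(N, v1, rcond=None)[0]   # solve N v2 = v1

assert np.allclose(N @ v1, 0)                # (A - lambda I) v1 = 0
assert np.allclose(N @ v2, v1)               # (A - lambda I) v2 = v1
print(v2)                                    # the minimum-norm choice [0, 1]
```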

3.7.3 Exercises

Exercise 3.7.2: Let $A = \begin{bmatrix} 5 & -3 \\ 3 & -1 \end{bmatrix}$. Solve $\vec{x}\,' = A\vec{x}$.

Exercise 3.7.3: Let $A = \begin{bmatrix} 5 & -4 & 4 \\ 0 & 3 & 0 \\ -2 & 4 & -1 \end{bmatrix}$. a) What are the eigenvalues? b) What is/are the defect(s) of the eigenvalue(s)? c) Solve $\vec{x}\,' = A\vec{x}$.

Exercise 3.7.4: Let $A = \begin{bmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix}$. a) What are the eigenvalues? b) What is/are the defect(s) of the eigenvalue(s)? c) Solve $\vec{x}\,' = A\vec{x}$ in two different ways and verify you get the same answer.


Exercise 3.7.5: Let $A = \begin{bmatrix} 0 & 1 & 2 \\ -1 & -2 & -2 \\ -4 & 4 & 7 \end{bmatrix}$. a) What are the eigenvalues? b) What is/are the defect(s) of the eigenvalue(s)? c) Solve $\vec{x}\,' = A\vec{x}$.

Exercise 3.7.6: Let $A = \begin{bmatrix} 0 & 4 & -2 \\ -1 & -4 & 1 \\ 0 & 0 & -2 \end{bmatrix}$. a) What are the eigenvalues? b) What is/are the defect(s) of the eigenvalue(s)? c) Solve $\vec{x}\,' = A\vec{x}$.

Exercise 3.7.7: Let $A = \begin{bmatrix} 2 & 1 & -1 \\ -1 & 0 & 2 \\ -1 & -2 & 4 \end{bmatrix}$. a) What are the eigenvalues? b) What is/are the defect(s) of the eigenvalue(s)? c) Solve $\vec{x}\,' = A\vec{x}$.

Exercise 3.7.8: Suppose that A is a 2 × 2 matrix with a repeated eigenvalue $\lambda$. Suppose that there are two linearly independent eigenvectors. Show that the matrix is diagonal, in particular $A = \lambda I$.


3.8 Matrix exponentials

Note: 2 lectures, §5.5 in EP

3.8.1 Definition

In this section we present a different way of finding the fundamental matrix solution of a system. Suppose that we have the constant coefficient equation

$$\vec{x}\,' = P\vec{x},$$

as usual. Now suppose that this was one equation (P is a number or a 1 × 1 matrix). Then the solution to this would be

$$\vec{x} = e^{Pt}.$$

It turns out the same computation works for matrices when we define $e^{Pt}$ properly. First let us write down the Taylor series for $e^{at}$ for some number a:

$$e^{at} = 1 + at + \frac{(at)^2}{2} + \frac{(at)^3}{6} + \frac{(at)^4}{24} + \cdots = \sum_{k=0}^{\infty} \frac{(at)^k}{k!}.$$

Recall $k! = 1 \cdot 2 \cdot 3 \cdots k$, and $0! = 1$. Now if we differentiate this series we get

$$a + a^2 t + \frac{a^3 t^2}{2} + \frac{a^4 t^3}{6} + \cdots = a \left(1 + at + \frac{(at)^2}{2} + \frac{(at)^3}{6} + \cdots\right) = a e^{at}.$$

Maybe we can try the same trick here. Suppose that for an n × n matrix A we define the matrix exponential as

$$e^A \stackrel{\text{def}}{=} I + A + \frac{1}{2} A^2 + \frac{1}{6} A^3 + \cdots + \frac{1}{k!} A^k + \cdots$$

Let us not worry about convergence; the series really does always converge. We usually write Pt as tP by convention when P is a matrix. With this small change, and by the exact same calculation as above, we have that

$$\frac{d}{dt} \left(e^{tP}\right) = P e^{tP}.$$

Now P, and hence $e^{tP}$, is an n × n matrix. What we are looking for is a vector. We note that in the 1 × 1 case we would at this point multiply by an arbitrary constant to get the general solution. In the matrix case we multiply by a column vector $\vec{c}$.

Theorem 3.8.1. Let P be an n × n matrix. Then the general solution to $\vec{x}\,' = P\vec{x}$ is

$$\vec{x} = e^{tP} \vec{c},$$

where $\vec{c}$ is an arbitrary constant vector. In fact, $\vec{x}(0) = \vec{c}$.


Let us check:

$$\frac{d}{dt} \vec{x} = \frac{d}{dt} \left(e^{tP} \vec{c}\right) = P e^{tP} \vec{c} = P \vec{x}.$$

Hence $e^{tP}$ is the fundamental matrix solution of the homogeneous system. If we find a way to compute the matrix exponential, we will have another method of solving constant coefficient homogeneous systems. It also makes it easy to solve for initial conditions. To solve $\vec{x}\,' = A\vec{x}$, $\vec{x}(0) = \vec{b}$, we take the solution

$$\vec{x} = e^{tA} \vec{b}.$$

This equation follows because $e^{0A} = I$, so $\vec{x}(0) = e^{0A} \vec{b} = \vec{b}$.

We mention a drawback of matrix exponentials. In general $e^{A+B} \neq e^A e^B$. The trouble is that matrices do not commute, that is, in general $AB \neq BA$. If you try to prove $e^{A+B} = e^A e^B$ using the Taylor series, you will see why the lack of commutativity becomes a problem. However, it is still true that if AB = BA, that is, if A and B commute, then $e^{A+B} = e^A e^B$. We will find this fact useful. Let us restate this as a theorem to make a point.

Theorem 3.8.2. If AB = BA, then $e^{A+B} = e^A e^B$. Otherwise, $e^{A+B} \neq e^A e^B$ in general.

3.8.2 Simple cases

In some instances it may work to just plug into the series definition. Suppose the matrix is diagonal. For example, $D = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix}$. Then

$$D^k = \begin{bmatrix} a^k & 0 \\ 0 & b^k \end{bmatrix},$$

and

$$e^D = I + D + \frac{1}{2} D^2 + \frac{1}{6} D^3 + \cdots = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} + \frac{1}{2} \begin{bmatrix} a^2 & 0 \\ 0 & b^2 \end{bmatrix} + \frac{1}{6} \begin{bmatrix} a^3 & 0 \\ 0 & b^3 \end{bmatrix} + \cdots = \begin{bmatrix} e^a & 0 \\ 0 & e^b \end{bmatrix}.$$

So by this rationale we have that

$$e^I = \begin{bmatrix} e & 0 \\ 0 & e \end{bmatrix} \qquad \text{and} \qquad e^{aI} = \begin{bmatrix} e^a & 0 \\ 0 & e^a \end{bmatrix}.$$

This makes exponentials of certain other matrices easy to compute. Notice for example that the matrix $A = \begin{bmatrix} 5 & -3 \\ 3 & -1 \end{bmatrix}$ can be written as 2I + B where $B = \begin{bmatrix} 3 & -3 \\ 3 & -3 \end{bmatrix}$. Notice that 2I and B commute, and that $B^2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$. So $B^k = 0$ for all $k \geq 2$. Therefore, $e^B = I + B$. Suppose we actually want to compute $e^{tA}$. The matrices 2tI and tB still commute (exercise: check this) and $e^{tB} = I + tB$, since $(tB)^2 = t^2 B^2 = 0$. We write

$$e^{tA} = e^{2tI + tB} = e^{2tI} e^{tB} = \begin{bmatrix} e^{2t} & 0 \\ 0 & e^{2t} \end{bmatrix} (I + tB) = \begin{bmatrix} e^{2t} & 0 \\ 0 & e^{2t} \end{bmatrix} \begin{bmatrix} 1 + 3t & -3t \\ 3t & 1 - 3t \end{bmatrix} = \begin{bmatrix} (1 + 3t)\, e^{2t} & -3t e^{2t} \\ 3t e^{2t} & (1 - 3t)\, e^{2t} \end{bmatrix}.$$

So we have found the fundamental matrix solution for the system $\vec{x}\,' = A\vec{x}$. Note that this matrix has a repeated eigenvalue with a defect; there is only one eigenvector for the eigenvalue 2. So we have found a perhaps easier way to handle this case. In fact, if a matrix A is 2 × 2 and has an eigenvalue $\lambda$ of multiplicity 2, then either it is diagonal, or $A = \lambda I + B$ where $B^2 = 0$. This is a good exercise.

Exercise 3.8.1: Suppose that A is 2 × 2 and $\lambda$ is the only eigenvalue. Show that $(A - \lambda I)^2 = 0$. Then we can write $A = \lambda I + B$, where $B^2 = 0$. Hint: First write down what it means for the eigenvalue to be of multiplicity 2. You will get an equation for the entries. Now compute the square of B.

Matrices B such that $B^k = 0$ for some k are called nilpotent. Computation of the matrix exponential for nilpotent matrices is easy by just writing down the first k terms of the Taylor series.
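The computation above is easy to verify symbolically; a small sketch with sympy (not from the text), using the same splitting $A = 2I + B$:

```python
# A sketch (not from the text): verify e^{tA} for A = [[5, -3], [3, -1]]
# via the nilpotent splitting A = 2I + B.
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[5, -3], [3, -1]])
B = A - 2 * sp.eye(2)                       # B = [[3, -3], [3, -3]]
assert B * B == sp.zeros(2, 2)              # nilpotent, so e^{tB} = I + tB

etA = sp.exp(2 * t) * (sp.eye(2) + t * B)   # e^{tA} = e^{2tI} e^{tB}
# e^{tA} is the unique solution of X' = A X with X(0) = I; check both:
assert sp.simplify(etA.diff(t) - A * etA) == sp.zeros(2, 2)
assert etA.subs(t, 0) == sp.eye(2)
print(etA)
```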

3.8.3 General matrices

In general, the exponential is not as easy to compute as above. We cannot usually write just any matrix as a sum of commuting matrices where the exponential is simple for each one. But fear not, it is still not too difficult provided we can find enough eigenvectors. First we need the following interesting result about matrix exponentials. For any two square matrices A and B, with B invertible, we have

$$e^{BAB^{-1}} = B e^A B^{-1}.$$

This can be seen by writing down the Taylor series. First note that

$$(BAB^{-1})^2 = BAB^{-1}BAB^{-1} = BAIAB^{-1} = BA^2B^{-1}.$$

And hence by the same reasoning $(BAB^{-1})^k = BA^kB^{-1}$. So now write down the Taylor series for $e^{BAB^{-1}}$:

$$e^{BAB^{-1}} = I + BAB^{-1} + \frac{1}{2}(BAB^{-1})^2 + \frac{1}{6}(BAB^{-1})^3 + \cdots = BB^{-1} + BAB^{-1} + \frac{1}{2}BA^2B^{-1} + \frac{1}{6}BA^3B^{-1} + \cdots = B\left(I + A + \frac{1}{2}A^2 + \frac{1}{6}A^3 + \cdots\right)B^{-1} = Be^AB^{-1}.$$

Now we will write a general matrix A as $EDE^{-1}$, where D is diagonal. This procedure is called diagonalization. If we can do that, you can see that the computation of the exponential becomes easy. Adding t into the mix, we see that

$$e^{tA} = E e^{tD} E^{-1}.$$


Now to do this we will need n linearly independent eigenvectors of A. Otherwise this method does not work and we need to be trickier, but we will not get into such details in this course. We let E be the matrix with the eigenvectors as columns. Let $\lambda_1, \ldots, \lambda_n$ be the eigenvalues and let $\vec{v}_1, \ldots, \vec{v}_n$ be the eigenvectors; then $E = [\,\vec{v}_1 \ \vec{v}_2 \ \cdots \ \vec{v}_n\,]$. Let D be the diagonal matrix with the eigenvalues on the main diagonal. That is,

$$D = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}.$$

Now we write

$$AE = A[\,\vec{v}_1 \ \vec{v}_2 \ \cdots \ \vec{v}_n\,] = [\,A\vec{v}_1 \ A\vec{v}_2 \ \cdots \ A\vec{v}_n\,] = [\,\lambda_1\vec{v}_1 \ \lambda_2\vec{v}_2 \ \cdots \ \lambda_n\vec{v}_n\,] = [\,\vec{v}_1 \ \vec{v}_2 \ \cdots \ \vec{v}_n\,]D = ED.$$

Now the columns of E are linearly independent as these are the eigenvectors of A. Hence E is invertible. Since AE = ED, we right multiply by $E^{-1}$ and we get

$$A = EDE^{-1}.$$

This means that $e^A = E e^D E^{-1}$. With t it turns into

$$e^{tA} = E e^{tD} E^{-1} = E \begin{bmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{bmatrix} E^{-1}. \tag{3.4}$$

The formula (3.4), therefore, gives the formula for computing the fundamental matrix solution $e^{tA}$ for the system $\vec{x}\,' = A\vec{x}$, in the case where we have n linearly independent eigenvectors.

Notice that this computation still works when the eigenvalues and eigenvectors are complex, though then you will have to compute with complex numbers. Note that it is clear from the definition that if A is real, then $e^{tA}$ is real. So you will only need complex numbers in the computation, and you may need to apply Euler's formula to simplify the result. If simplified properly, the final matrix will not have any complex numbers in it.

Example 3.8.1: Compute the fundamental matrix solution using the matrix exponential for the system

$$\begin{bmatrix} x \\ y \end{bmatrix}' = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}.$$


Then compute the particular solution for the initial conditions x(0) = 4 and y(0) = 2.

Let A be the coefficient matrix $\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$. We first compute (exercise) that the eigenvalues are 3 and −1 and the corresponding eigenvectors are $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$. Hence we write

$$e^{tA} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} e^{3t} & 0 \\ 0 & e^{-t} \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}^{-1} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} e^{3t} & 0 \\ 0 & e^{-t} \end{bmatrix} \frac{-1}{2} \begin{bmatrix} -1 & -1 \\ -1 & 1 \end{bmatrix}$$
$$= \frac{-1}{2} \begin{bmatrix} e^{3t} & e^{-t} \\ e^{3t} & -e^{-t} \end{bmatrix} \begin{bmatrix} -1 & -1 \\ -1 & 1 \end{bmatrix} = \frac{-1}{2} \begin{bmatrix} -e^{3t} - e^{-t} & -e^{3t} + e^{-t} \\ -e^{3t} + e^{-t} & -e^{3t} - e^{-t} \end{bmatrix} = \begin{bmatrix} \frac{e^{3t} + e^{-t}}{2} & \frac{e^{3t} - e^{-t}}{2} \\ \frac{e^{3t} - e^{-t}}{2} & \frac{e^{3t} + e^{-t}}{2} \end{bmatrix}.$$

The initial conditions are x(0) = 4 and y(0) = 2. Hence, by the property that $e^{0A} = I$, we find that the particular solution we are looking for is $e^{tA} \vec{b}$ where $\vec{b}$ is $\begin{bmatrix} 4 \\ 2 \end{bmatrix}$. Then the particular solution we are looking for is

$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \frac{e^{3t} + e^{-t}}{2} & \frac{e^{3t} - e^{-t}}{2} \\ \frac{e^{3t} - e^{-t}}{2} & \frac{e^{3t} + e^{-t}}{2} \end{bmatrix} \begin{bmatrix} 4 \\ 2 \end{bmatrix} = \begin{bmatrix} 2e^{3t} + 2e^{-t} + e^{3t} - e^{-t} \\ 2e^{3t} - 2e^{-t} + e^{3t} + e^{-t} \end{bmatrix} = \begin{bmatrix} 3e^{3t} + e^{-t} \\ 3e^{3t} - e^{-t} \end{bmatrix}.$$

3.8.4 Fundamental matrix solutions

We note that if you can compute the fundamental matrix solution in a different way, you can use this to find the matrix exponential $e^{tA}$. The fundamental matrix solution of a system of ODEs is not unique. The exponential is the fundamental matrix solution with the property that for t = 0 we get the identity matrix. So we must find the right fundamental matrix solution. Let X be any fundamental matrix solution to $\vec{x}\,' = A\vec{x}$. Then we claim

$$e^{tA} = X(t)\,[X(0)]^{-1}.$$

Obviously, if we plug t = 0 into $X(t)[X(0)]^{-1}$ we get the identity. It is not hard to see that we can multiply a fundamental matrix solution on the right by any constant invertible matrix and we still get a fundamental matrix solution. All we are doing is changing what the arbitrary constants are in the general solution $\vec{x}(t) = X(t)\vec{c}$.

3.8.5 Approximations

If you think about it, the computation of any fundamental matrix solution X using the eigenvalue method is just as difficult as the computation of $e^{tA}$. So perhaps we did not gain much by this new tool. However, the Taylor series expansion actually gives us a very easy way to approximate solutions, which the eigenvalue method did not.


The simplest thing we can do is to just compute the series up to a certain number of terms. There are better ways to approximate the exponential*. In many cases, however, a few terms of the Taylor series give a reasonable approximation for the exponential and may suffice for the application. For example, let us compute the first 4 terms of the series for the matrix $A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$:

$$e^{tA} \approx I + tA + \frac{t^2}{2} A^2 + \frac{t^3}{6} A^3 = I + t \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} + t^2 \begin{bmatrix} \frac{5}{2} & 2 \\ 2 & \frac{5}{2} \end{bmatrix} + t^3 \begin{bmatrix} \frac{13}{6} & \frac{7}{3} \\ \frac{7}{3} & \frac{13}{6} \end{bmatrix} = \begin{bmatrix} 1 + t + \frac{5}{2} t^2 + \frac{13}{6} t^3 & 2t + 2t^2 + \frac{7}{3} t^3 \\ 2t + 2t^2 + \frac{7}{3} t^3 & 1 + t + \frac{5}{2} t^2 + \frac{13}{6} t^3 \end{bmatrix}.$$

Just like the Taylor series approximation for the scalar version, the approximation will be better for small t and worse for larger t. For larger t, you will generally have to compute more terms. Let us see how we stack up against the real solution with t = 0.1. The approximate solution is (rounded to 8 decimal places)

$$e^{0.1A} \approx I + 0.1A + \frac{0.1^2}{2} A^2 + \frac{0.1^3}{6} A^3 = \begin{bmatrix} 1.12716667 & 0.22233333 \\ 0.22233333 & 1.12716667 \end{bmatrix}.$$

And plugging t = 0.1 into the real solution (rounded to 8 decimal places) we get

$$e^{0.1A} = \begin{bmatrix} 1.12734811 & 0.22251069 \\ 0.22251069 & 1.12734811 \end{bmatrix}.$$

This is not bad at all. Although if you take the same approximation for t = 1 you get (using the Taylor series)

$$\begin{bmatrix} 6.66666667 & 6.33333333 \\ 6.33333333 & 6.66666667 \end{bmatrix},$$

while the real value is (again rounded to 8 decimal places)

$$\begin{bmatrix} 10.22670818 & 9.85882874 \\ 9.85882874 & 10.22670818 \end{bmatrix}.$$

So the approximation is not very good once we get up to t = 1. To get a good approximation at t = 1 (say up to 2 decimal places) you would need to go up to the 11th power (exercise).
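This experiment is easy to reproduce; a small sketch (the helper expm_taylor is ours, not from the text) comparing the truncated series with scipy's expm:

```python
# A sketch (helper name is ours, not from the text): truncate the Taylor
# series for e^{tA} and compare against scipy's expm at t = 0.1 and t = 1.
import numpy as np
from scipy.linalg import expm

def expm_taylor(A, t, terms):
    S = np.eye(len(A))
    T = np.eye(len(A))
    for k in range(1, terms):
        T = T @ (t * A) / k    # T is now (tA)^k / k!
        S = S + T
    return S

A = np.array([[1.0, 2.0], [2.0, 1.0]])
for t in (0.1, 1.0):
    err = np.abs(expm_taylor(A, t, 4) - expm(t * A)).max()
    print(t, err)   # small error at t = 0.1, error of about 3.6 at t = 1
```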

3.8.6 Exercises

Exercise 3.8.2: Find a fundamental matrix solution for the system $x' = 3x + y$, $y' = x + 3y$.

Exercise 3.8.3: Find $e^{At}$ for the matrix $A = \begin{bmatrix} 2 & 3 \\ 0 & 2 \end{bmatrix}$.

*C. Moler and C.F. Van Loan, Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later, SIAM Review 45 (1), 2003, 3–49.


Exercise 3.8.4: Find a fundamental matrix solution for the system $x_1' = 7x_1 + 4x_2 + 12x_3$, $x_2' = x_1 + 2x_2 + x_3$, $x_3' = -3x_1 - 2x_2 - 5x_3$. Then find the solution that satisfies $\vec{x}(0) = \begin{bmatrix} 0 \\ 1 \\ -2 \end{bmatrix}$.

Exercise 3.8.5: Compute the matrix exponential $e^A$ for $A = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}$.

Exercise 3.8.6: Suppose AB = BA (matrices commute). Show that $e^{A+B} = e^A e^B$.

Exercise 3.8.7: Use Exercise 3.8.6 to show that $(e^A)^{-1} = e^{-A}$. In particular this means that $e^A$ is invertible even if A is not.

Exercise 3.8.8: Suppose A is a matrix with eigenvalues −1, 1, and corresponding eigenvectors $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$, $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$. a) Find a matrix A with these properties. b) Find the fundamental matrix solution to $\vec{x}\,' = A\vec{x}$. c) Solve the system with initial conditions $\vec{x}(0) = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$.

Exercise 3.8.9: Suppose that A is an n × n matrix with a repeated eigenvalue $\lambda$ of multiplicity n. Suppose that there are n linearly independent eigenvectors. Show that the matrix is diagonal, in particular $A = \lambda I$. Hint: Use diagonalization and the fact that the identity matrix commutes with every other matrix.


3.9 Nonhomogeneous systems

Note: 3 lectures (may have to skip a little), somewhat different from §5.6 in EP

3.9.1 First order constant coefficient

Integrating factor

Let us first focus on the nonhomogeneous first order equation

$$\vec{x}\,'(t) = A\vec{x}(t) + \vec{f}(t),$$

where A is a constant matrix. The first method we will look at is the integrating factor method. For simplicity we rewrite the equation as

$$\vec{x}\,'(t) + P\vec{x}(t) = \vec{f}(t),$$

where P = −A. We multiply both sides of the equation by $e^{tP}$ (being mindful that we are dealing with matrices which may not commute) to obtain

$$e^{tP}\vec{x}\,'(t) + e^{tP}P\vec{x}(t) = e^{tP}\vec{f}(t).$$

We notice that $Pe^{tP} = e^{tP}P$. This fact follows by writing down the series definition of $e^{tP}$:

$$Pe^{tP} = P\left(I + tP + \frac{1}{2}(tP)^2 + \cdots\right) = P + tP^2 + \frac{1}{2}t^2P^3 + \cdots = \left(I + tP + \frac{1}{2}(tP)^2 + \cdots\right)P = e^{tP}P.$$

We have already seen that $\frac{d}{dt}\left(e^{tP}\right) = Pe^{tP}$. Hence,

$$\frac{d}{dt}\left(e^{tP}\vec{x}(t)\right) = e^{tP}\vec{f}(t).$$

We can now integrate. That is, we integrate each component of the vector separately:

$$e^{tP}\vec{x}(t) = \int e^{tP}\vec{f}(t)\,dt + \vec{c}.$$

Recall from Exercise 3.8.7 that $(e^{tP})^{-1} = e^{-tP}$. Therefore, we obtain

$$\vec{x}(t) = e^{-tP}\int e^{tP}\vec{f}(t)\,dt + e^{-tP}\vec{c}.$$


Perhaps it is better understood as a definite integral. In this case it will be easy to also solve for the initial conditions. Suppose we have the equation with initial conditions

$$\vec{x}\,'(t) + P\vec{x}(t) = \vec{f}(t), \qquad \vec{x}(0) = \vec{b}.$$

The solution can then be written as

$$\vec{x}(t) = e^{-tP}\int_0^t e^{sP}\vec{f}(s)\,ds + e^{-tP}\vec{b}. \tag{3.5}$$

Again, the integration means that each component of the vector $e^{sP}\vec{f}(s)$ is integrated separately. It is not hard to see that (3.5) really does satisfy the initial condition $\vec{x}(0) = \vec{b}$:

$$\vec{x}(0) = e^{-0P}\int_0^0 e^{sP}\vec{f}(s)\,ds + e^{-0P}\vec{b} = I\vec{b} = \vec{b}.$$

Example 3.9.1: Suppose that we have the system

$$x_1' + 5x_1 - 3x_2 = e^t,$$
$$x_2' + 3x_1 - x_2 = 0,$$

with initial conditions $x_1(0) = 1$, $x_2(0) = 0$.

Let us write the system as

$$\vec{x}\,' + \begin{bmatrix} 5 & -3 \\ 3 & -1 \end{bmatrix}\vec{x} = \begin{bmatrix} e^t \\ 0 \end{bmatrix}, \qquad \vec{x}(0) = \begin{bmatrix} 1 \\ 0 \end{bmatrix}.$$

We have previously computed $e^{tP}$ for $P = \begin{bmatrix} 5 & -3 \\ 3 & -1 \end{bmatrix}$. We immediately also have $e^{-tP}$, just by negating t:

$$e^{tP} = \begin{bmatrix} (1+3t)\,e^{2t} & -3te^{2t} \\ 3te^{2t} & (1-3t)\,e^{2t} \end{bmatrix}, \qquad e^{-tP} = \begin{bmatrix} (1-3t)\,e^{-2t} & 3te^{-2t} \\ -3te^{-2t} & (1+3t)\,e^{-2t} \end{bmatrix}.$$

Instead of computing the whole formula at once, let us do it in stages. First

$$\int_0^t e^{sP}\vec{f}(s)\,ds = \int_0^t \begin{bmatrix} (1+3s)\,e^{2s} & -3se^{2s} \\ 3se^{2s} & (1-3s)\,e^{2s} \end{bmatrix}\begin{bmatrix} e^s \\ 0 \end{bmatrix}ds = \int_0^t \begin{bmatrix} (1+3s)\,e^{3s} \\ 3se^{3s} \end{bmatrix}ds = \begin{bmatrix} te^{3t} \\ \frac{(3t-1)\,e^{3t}+1}{3} \end{bmatrix}.$$


Then

$$\vec{x}(t) = e^{-tP}\int_0^t e^{sP}\vec{f}(s)\,ds + e^{-tP}\vec{b}$$
$$= \begin{bmatrix} (1-3t)\,e^{-2t} & 3te^{-2t} \\ -3te^{-2t} & (1+3t)\,e^{-2t} \end{bmatrix}\begin{bmatrix} te^{3t} \\ \frac{(3t-1)\,e^{3t}+1}{3} \end{bmatrix} + \begin{bmatrix} (1-3t)\,e^{-2t} & 3te^{-2t} \\ -3te^{-2t} & (1+3t)\,e^{-2t} \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix}$$
$$= \begin{bmatrix} te^{-2t} \\ \frac{-e^t}{3} + \left(\frac{1}{3}+t\right)e^{-2t} \end{bmatrix} + \begin{bmatrix} (1-3t)\,e^{-2t} \\ -3te^{-2t} \end{bmatrix} = \begin{bmatrix} (1-2t)\,e^{-2t} \\ \frac{-e^t}{3} + \left(\frac{1}{3}-2t\right)e^{-2t} \end{bmatrix}.$$

Phew!

Let us check that this really works:

$$x_1' + 5x_1 - 3x_2 = (4te^{-2t} - 4e^{-2t}) + 5(1-2t)\,e^{-2t} + e^t - (1-6t)\,e^{-2t} = e^t.$$

Similarly (exercise) $x_2' + 3x_1 - x_2 = 0$. The initial conditions are also satisfied (exercise).

For systems, the integrating factor method only works if P does not depend on t, that is, if P is constant. The problem is that in general

$$\frac{d}{dt}\,e^{\int P(t)\,dt} \neq P(t)\,e^{\int P(t)\,dt},$$

because matrices generally do not commute.

Eigenvector decomposition

For the next method, we note that the eigenvectors of a matrix give the directions in which the matrix acts like a scalar. If we solve our system along these directions, the solutions will be simpler as we can treat the matrix as a scalar. We can then put those solutions together to get the general solution.

Take the equation

$$\vec{x}\,'(t) = A\vec{x}(t) + \vec{f}(t). \tag{3.6}$$

Assume that A has n linearly independent eigenvectors $\vec{v}_1, \ldots, \vec{v}_n$. Let us write

$$\vec{x}(t) = \vec{v}_1\,\xi_1(t) + \vec{v}_2\,\xi_2(t) + \cdots + \vec{v}_n\,\xi_n(t). \tag{3.7}$$

That is, we wish to write our solution as a linear combination of the eigenvectors of A. If we can solve for the scalar functions $\xi_1$ through $\xi_n$, we have our solution $\vec{x}$. Let us decompose $\vec{f}$ in terms of the eigenvectors as well. Write

$$\vec{f}(t) = \vec{v}_1\,g_1(t) + \vec{v}_2\,g_2(t) + \cdots + \vec{v}_n\,g_n(t). \tag{3.8}$$


That is, we wish to find $g_1$ through $g_n$ that satisfy (3.8). We note that since all the eigenvectors of A are independent, the matrix $E = [\,\vec{v}_1 \ \vec{v}_2 \ \cdots \ \vec{v}_n\,]$ is invertible. We see that (3.8) can be written as $\vec{f} = E\vec{g}$, where the components of $\vec{g}$ are the functions $g_1$ through $g_n$. Then $\vec{g} = E^{-1}\vec{f}$. Hence it is always possible to find $\vec{g}$ when there are n linearly independent eigenvectors.

We plug (3.7) into (3.6), and note that $A\vec{v}_k = \lambda_k\vec{v}_k$:

$$\vec{x}\,' = \vec{v}_1\,\xi_1' + \vec{v}_2\,\xi_2' + \cdots + \vec{v}_n\,\xi_n'$$
$$= A\left(\vec{v}_1\,\xi_1 + \vec{v}_2\,\xi_2 + \cdots + \vec{v}_n\,\xi_n\right) + \vec{v}_1\,g_1 + \vec{v}_2\,g_2 + \cdots + \vec{v}_n\,g_n$$
$$= A\vec{v}_1\,\xi_1 + A\vec{v}_2\,\xi_2 + \cdots + A\vec{v}_n\,\xi_n + \vec{v}_1\,g_1 + \vec{v}_2\,g_2 + \cdots + \vec{v}_n\,g_n$$
$$= \vec{v}_1\,\lambda_1\,\xi_1 + \vec{v}_2\,\lambda_2\,\xi_2 + \cdots + \vec{v}_n\,\lambda_n\,\xi_n + \vec{v}_1\,g_1 + \vec{v}_2\,g_2 + \cdots + \vec{v}_n\,g_n$$
$$= \vec{v}_1\,(\lambda_1\,\xi_1 + g_1) + \vec{v}_2\,(\lambda_2\,\xi_2 + g_2) + \cdots + \vec{v}_n\,(\lambda_n\,\xi_n + g_n).$$

If we identify the coefficients of the vectors $\vec{v}_1$ through $\vec{v}_n$ we get the equations

$$\xi_1' = \lambda_1\,\xi_1 + g_1,$$
$$\xi_2' = \lambda_2\,\xi_2 + g_2,$$
$$\vdots$$
$$\xi_n' = \lambda_n\,\xi_n + g_n.$$

Each one of these equations is independent of the others. They are all linear first order equations and can easily be solved by the standard integrating factor method for single equations. That is, for example for the kth equation we write

$$\xi_k'(t) - \lambda_k\,\xi_k(t) = g_k(t).$$

We use the integrating factor $e^{-\lambda_k t}$ to find that

$$\frac{d}{dt}\left[\xi_k(t)\,e^{-\lambda_k t}\right] = e^{-\lambda_k t} g_k(t).$$

Now we integrate and solve for $\xi_k$ to get

$$\xi_k(t) = e^{\lambda_k t}\int e^{-\lambda_k t} g_k(t)\,dt + C_k e^{\lambda_k t}.$$

Note that if you are looking for just any particular solution, you could set $C_k$ to be zero. If we leave these constants in, we get the general solution. Write $\vec{x}(t) = \vec{v}_1\,\xi_1(t) + \vec{v}_2\,\xi_2(t) + \cdots + \vec{v}_n\,\xi_n(t)$, and we are done.

Again, as always, it is perhaps better to write these integrals as definite integrals. Suppose that we have an initial condition $\vec{x}(0) = \vec{b}$. We take $\vec{a} = E^{-1}\vec{b}$ and note $\vec{b} = \vec{v}_1\,a_1 + \cdots + \vec{v}_n\,a_n$, just like before. Then if we write

$$\xi_k(t) = e^{\lambda_k t}\int_0^t e^{-\lambda_k s} g_k(s)\,ds + a_k e^{\lambda_k t},$$


we actually get the particular solution $\vec{x}(t) = \vec{v}_1\,\xi_1(t) + \vec{v}_2\,\xi_2(t) + \cdots + \vec{v}_n\,\xi_n(t)$ satisfying $\vec{x}(0) = \vec{b}$, because $\xi_k(0) = a_k$.

Example 3.9.2: Let $A = \begin{bmatrix} 1 & 3 \\ 3 & 1 \end{bmatrix}$. Solve $\vec{x}\,' = A\vec{x} + \vec{f}$ where $\vec{f}(t) = \begin{bmatrix} 2e^t \\ 2t \end{bmatrix}$ for $\vec{x}(0) = \begin{bmatrix} 3/16 \\ -5/16 \end{bmatrix}$.

The eigenvalues of A are −2 and 4 and the corresponding eigenvectors are $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ respectively. This calculation is left as an exercise. We write down the matrix E of the eigenvectors and compute its inverse (using the inverse formula for 2 × 2 matrices):

$$E = \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}, \qquad E^{-1} = \frac{1}{2}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}.$$

We are looking for a solution of the form $\vec{x} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}\xi_1 + \begin{bmatrix} 1 \\ 1 \end{bmatrix}\xi_2$. We also wish to write $\vec{f}$ in terms of the eigenvectors. That is, we wish to write $\vec{f} = \begin{bmatrix} 2e^t \\ 2t \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}g_1 + \begin{bmatrix} 1 \\ 1 \end{bmatrix}g_2$. Thus

$$\begin{bmatrix} g_1 \\ g_2 \end{bmatrix} = E^{-1}\begin{bmatrix} 2e^t \\ 2t \end{bmatrix} = \frac{1}{2}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} 2e^t \\ 2t \end{bmatrix} = \begin{bmatrix} e^t - t \\ e^t + t \end{bmatrix}.$$

So $g_1 = e^t - t$ and $g_2 = e^t + t$.

We further want to write $\vec{x}(0)$ in terms of the eigenvectors. That is, we wish to write $\vec{x}(0) = \begin{bmatrix} 3/16 \\ -5/16 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}a_1 + \begin{bmatrix} 1 \\ 1 \end{bmatrix}a_2$. Hence

$$\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = E^{-1}\begin{bmatrix} \frac{3}{16} \\ \frac{-5}{16} \end{bmatrix} = \begin{bmatrix} \frac{1}{4} \\ \frac{-1}{16} \end{bmatrix}.$$

So $a_1 = \frac{1}{4}$ and $a_2 = \frac{-1}{16}$. We plug our $\vec{x}$ into the equation and get

$$\begin{bmatrix} 1 \\ -1 \end{bmatrix}\xi_1' + \begin{bmatrix} 1 \\ 1 \end{bmatrix}\xi_2' = A\begin{bmatrix} 1 \\ -1 \end{bmatrix}\xi_1 + A\begin{bmatrix} 1 \\ 1 \end{bmatrix}\xi_2 + \begin{bmatrix} 1 \\ -1 \end{bmatrix}g_1 + \begin{bmatrix} 1 \\ 1 \end{bmatrix}g_2$$
$$= \begin{bmatrix} 1 \\ -1 \end{bmatrix}(-2\xi_1) + \begin{bmatrix} 1 \\ 1 \end{bmatrix}4\xi_2 + \begin{bmatrix} 1 \\ -1 \end{bmatrix}(e^t - t) + \begin{bmatrix} 1 \\ 1 \end{bmatrix}(e^t + t).$$

We get the two equations

$$\xi_1' = -2\xi_1 + e^t - t, \qquad \text{where } \xi_1(0) = a_1 = \frac{1}{4},$$
$$\xi_2' = 4\xi_2 + e^t + t, \qquad \text{where } \xi_2(0) = a_2 = \frac{-1}{16}.$$

We solve with the integrating factor. Computation of the integral is left as an exercise to the student. Note that you will need integration by parts.

$$\xi_1 = e^{-2t}\int e^{2t}\,(e^t - t)\,dt + C_1 e^{-2t} = \frac{e^t}{3} - \frac{t}{2} + \frac{1}{4} + C_1 e^{-2t}.$$


$C_1$ is the constant of integration. As $\xi_1(0) = \frac{1}{4}$, then $\frac{1}{4} = \frac{1}{3} + \frac{1}{4} + C_1$, and hence $C_1 = -\frac{1}{3}$. Similarly

$$\xi_2 = e^{4t}\int e^{-4t}\,(e^t + t)\,dt + C_2 e^{4t} = -\frac{e^t}{3} - \frac{t}{4} - \frac{1}{16} + C_2 e^{4t}.$$

As $\xi_2(0) = \frac{-1}{16}$ we have that $\frac{-1}{16} = \frac{-1}{3} - \frac{1}{16} + C_2$, and hence $C_2 = \frac{1}{3}$. The solution is

$$\vec{x}(t) = \begin{bmatrix} 1 \\ -1 \end{bmatrix}\left(\frac{e^t - e^{-2t}}{3} + \frac{1 - 2t}{4}\right) + \begin{bmatrix} 1 \\ 1 \end{bmatrix}\left(\frac{e^{4t} - e^t}{3} - \frac{4t + 1}{16}\right) = \begin{bmatrix} \frac{e^{4t} - e^{-2t}}{3} + \frac{3 - 12t}{16} \\ \frac{e^{-2t} + e^{4t} - 2e^t}{3} + \frac{4t - 5}{16} \end{bmatrix}.$$

That is, $x_1 = \frac{e^{4t} - e^{-2t}}{3} + \frac{3 - 12t}{16}$ and $x_2 = \frac{e^{-2t} + e^{4t} - 2e^t}{3} + \frac{4t - 5}{16}$.

Exercise 3.9.1: Check that $x_1$ and $x_2$ solve the problem. Check both that they satisfy the differential equation and that they satisfy the initial conditions.
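A numerical spot check of this example is quick with scipy; a sketch (not from the text):

```python
# A sketch (not from the text): integrate the system of Example 3.9.2
# numerically and compare against the closed-form solution at t = 1.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[1.0, 3.0], [3.0, 1.0]])
f = lambda t: np.array([2 * np.exp(t), 2 * t])
x0 = np.array([3/16, -5/16])

sol = solve_ivp(lambda t, x: A @ x + f(t), (0.0, 1.0), x0,
                rtol=1e-10, atol=1e-12)

t = sol.t[-1]
x1 = (np.exp(4*t) - np.exp(-2*t)) / 3 + (3 - 12*t) / 16
x2 = (np.exp(-2*t) + np.exp(4*t) - 2*np.exp(t)) / 3 + (4*t - 5) / 16
print(sol.y[:, -1], [x1, x2])   # the two should agree to many digits
```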

Undetermined coefficients

The method of undetermined coefficients also still works. The only difference here is that we will have to use unknown vectors rather than just numbers. The same caveats apply to undetermined coefficients for systems as for single equations. This method does not always work; furthermore, if the right hand side is complicated, you will have lots of variables to solve for. In this case you can think of each element of an unknown vector as an unknown number. So in a system of 3 equations, if you have say 4 unknown vectors (this would not be uncommon), then you already have 12 unknowns that you need to solve for. The method can turn into a lot of tedious work. As this method is essentially the same as it is for single equations, let us just do an example.

Example 3.9.3: Let $A = \begin{bmatrix} -1 & 0 \\ -2 & 1 \end{bmatrix}$. Find a particular solution of $\vec{x}\,' = A\vec{x} + \vec{f}$ where $\vec{f}(t) = \begin{bmatrix} e^t \\ t \end{bmatrix}$.

Note that we can solve this system in an easier way (can you see how?), but for the purposes of the example, let us use the eigenvalue method plus undetermined coefficients.

The eigenvalues of A are −1 and 1 and the corresponding eigenvectors are $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ respectively. Hence our complementary solution is

$$\vec{x}_c = \alpha_1 \begin{bmatrix} 1 \\ 1 \end{bmatrix} e^{-t} + \alpha_2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} e^t,$$

for some arbitrary constants $\alpha_1$ and $\alpha_2$.

We would want to guess a particular solution of

$$\vec{x} = \vec{a}e^t + \vec{b}t + \vec{c}.$$

However, something of the form $\vec{a}e^t$ appears in the complementary solution. Because we do not yet know whether the vector $\vec{a}$ is a multiple of $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$, we do not know if a conflict arises. It may very well not, but to be safe we should also try $\vec{b}te^t$. Here we find the crux of the difference for systems. You want to try both $\vec{a}e^t$ and $\vec{b}te^t$ in your solution, not just $\vec{b}te^t$. Therefore, we try

$$\vec{x} = \vec{a}e^t + \vec{b}te^t + \vec{c}t + \vec{d}.$$

Thus we have 8 unknowns. We write $\vec{a} = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix}$, $\vec{b} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$, $\vec{c} = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}$, and $\vec{d} = \begin{bmatrix} d_1 \\ d_2 \end{bmatrix}$. We have to plug this into the equation. First let us compute $\vec{x}\,'$:

$$\vec{x}\,' = \left(\vec{a} + \vec{b}\right)e^t + \vec{b}te^t + \vec{c}.$$

Now $\vec{x}\,'$ must equal $A\vec{x} + \vec{f}$, so

$$A\vec{x} + \vec{f} = A\vec{a}e^t + A\vec{b}te^t + A\vec{c}t + A\vec{d} + \vec{f}$$
$$= \begin{bmatrix} -a_1 \\ -2a_1 + a_2 \end{bmatrix}e^t + \begin{bmatrix} -b_1 \\ -2b_1 + b_2 \end{bmatrix}te^t + \begin{bmatrix} -c_1 \\ -2c_1 + c_2 \end{bmatrix}t + \begin{bmatrix} -d_1 \\ -2d_1 + d_2 \end{bmatrix} + \begin{bmatrix} e^t \\ t \end{bmatrix}.$$

Now we identify the coefficients of $e^t$, $te^t$, t, and any constants:

$$a_1 + b_1 = -a_1 + 1,$$
$$a_2 + b_2 = -2a_1 + a_2,$$
$$b_1 = -b_1,$$
$$b_2 = -2b_1 + b_2,$$
$$0 = -c_1,$$
$$0 = -2c_1 + c_2 + 1,$$
$$c_1 = -d_1,$$
$$c_2 = -2d_1 + d_2.$$

We could write this as an 8 × 9 augmented matrix and start row reduction, but it is easier to just do this in an ad hoc manner. Immediately we see that $b_1 = 0$, $c_1 = 0$, $d_1 = 0$. Plugging these back in we get that $c_2 = -1$ and $d_2 = -1$. The remaining equations that tell us something are

$$a_1 = -a_1 + 1,$$
$$a_2 + b_2 = -2a_1 + a_2.$$

So $a_1 = \frac{1}{2}$ and $b_2 = -1$. $a_2$ can be arbitrary and still satisfy the equation. We are looking for just a single solution, so presumably the simplest one is when $a_2 = 0$. Therefore,

$$\vec{x} = \vec{a}e^t + \vec{b}te^t + \vec{c}t + \vec{d} = \begin{bmatrix} \frac{1}{2} \\ 0 \end{bmatrix}e^t + \begin{bmatrix} 0 \\ -1 \end{bmatrix}te^t + \begin{bmatrix} 0 \\ -1 \end{bmatrix}t + \begin{bmatrix} 0 \\ -1 \end{bmatrix} = \begin{bmatrix} \frac{1}{2}e^t \\ -te^t - t - 1 \end{bmatrix}.$$

That is, $x_1 = \frac{1}{2}e^t$, $x_2 = -te^t - t - 1$. You would add this to the complementary solution to get the general solution of the problem. Notice also that both $\vec{a}e^t$ and $\vec{b}te^t$ really were needed.


Exercise 3.9.2: Check that $x_1$ and $x_2$ solve the problem. Also try setting $a_2 = 1$ and again check these solutions. What is the difference between the two solutions we can obtain in this way?

As you can see, other than the handling of conflicts, undetermined coefficients works exactly the same as it did for single equations. However, the computations can get out of hand pretty quickly for systems. The equation we solved was very simple.

3.9.2 First order variable coefficient

Just as for a single equation, there is the method of variation of parameters. In fact, for constant coefficient systems, this is essentially the same thing as the integrating factor method we discussed earlier. However, this method will work for any linear system, even if it is not constant coefficient, provided you have somehow solved the associated homogeneous problem.

Suppose we have the equation

$$\vec{x}\,' = A(t)\,\vec{x} + \vec{f}(t). \tag{3.9}$$

Further, suppose that you have solved the associated homogeneous equation $\vec{x}\,' = A(t)\,\vec{x}$ and found the fundamental matrix solution X(t). The general solution to the associated homogeneous equation is $X(t)\vec{c}$ for a constant vector $\vec{c}$. Just like for variation of parameters for a single equation, we try the solution to the nonhomogeneous equation of the form

$$\vec{x}_p = X(t)\,\vec{u}(t),$$

where $\vec{u}(t)$ is a vector valued function instead of a constant. Now substitute into (3.9) to obtain

$$\vec{x}_p'(t) = X'(t)\,\vec{u}(t) + X(t)\,\vec{u}\,'(t) = A(t)\,X(t)\,\vec{u}(t) + \vec{f}(t).$$

But X is the fundamental matrix solution to the homogeneous problem, so $X'(t) = A(t)X(t)$, and thus

$$X'(t)\,\vec{u}(t) + X(t)\,\vec{u}\,'(t) = X'(t)\,\vec{u}(t) + \vec{f}(t).$$

Hence $X(t)\,\vec{u}\,'(t) = \vec{f}(t)$. If we compute $[X(t)]^{-1}$, then $\vec{u}\,'(t) = [X(t)]^{-1}\vec{f}(t)$. Now integrate to obtain $\vec{u}$ and we have the particular solution $\vec{x}_p = X(t)\,\vec{u}(t)$. Let us write this as a formula:

$$\vec{x}_p = X(t)\int [X(t)]^{-1}\,\vec{f}(t)\,dt.$$

Note that if A is constant and you let $X(t) = e^{tA}$, then $[X(t)]^{-1} = e^{-tA}$, and hence we get a solution $\vec{x}_p = e^{tA}\int e^{-tA}\vec{f}(t)\,dt$, which is precisely what we got using the integrating factor method.

Example 3.9.4: Find a particular solution to

$$\vec{x}\,' = \frac{1}{t^2 + 1}\begin{bmatrix} t & -1 \\ 1 & t \end{bmatrix}\vec{x} + \begin{bmatrix} t \\ 1 \end{bmatrix}(t^2 + 1). \tag{3.10}$$


Here $A = \frac{1}{t^2+1}\begin{bmatrix} t & -1 \\ 1 & t \end{bmatrix}$ is most definitely not constant. Perhaps by a lucky guess, you find that $X = \begin{bmatrix} 1 & -t \\ t & 1 \end{bmatrix}$ solves $X'(t) = A(t)X(t)$. Once we know the complementary solution, we can easily find a solution to (3.10). First we find
\[ [X(t)]^{-1} = \frac{1}{t^2+1} \begin{bmatrix} 1 & t \\ -t & 1 \end{bmatrix}. \]
Next we know that a particular solution to (3.10) is

\[
\begin{aligned}
\vec{x}_p &= X(t) \int [X(t)]^{-1}\, \vec{f}(t)\, dt \\
&= \begin{bmatrix} 1 & -t \\ t & 1 \end{bmatrix} \int \frac{1}{t^2+1} \begin{bmatrix} 1 & t \\ -t & 1 \end{bmatrix} \begin{bmatrix} t \\ 1 \end{bmatrix} (t^2+1)\, dt \\
&= \begin{bmatrix} 1 & -t \\ t & 1 \end{bmatrix} \int \begin{bmatrix} 2t \\ 1 - t^2 \end{bmatrix} dt \\
&= \begin{bmatrix} 1 & -t \\ t & 1 \end{bmatrix} \begin{bmatrix} t^2 \\ t - \frac{1}{3} t^3 \end{bmatrix}
= \begin{bmatrix} \frac{1}{3} t^4 \\ \frac{2}{3} t^3 + t \end{bmatrix}.
\end{aligned}
\]

Adding the complementary solution, we get the general solution to (3.10):
\[
\vec{x} = \begin{bmatrix} 1 & -t \\ t & 1 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
+ \begin{bmatrix} \frac{1}{3} t^4 \\ \frac{2}{3} t^3 + t \end{bmatrix}
= \begin{bmatrix} c_1 - c_2 t + \frac{1}{3} t^4 \\ c_2 + (c_1 + 1)\, t + \frac{2}{3} t^3 \end{bmatrix}.
\]

Exercise 3.9.3: Check that $x_1 = \frac{1}{3}t^4$ and $x_2 = \frac{2}{3}t^3 + t$ really solve (3.10).

In variation of parameters, just like in the integrating factor method, we can obtain the general solution by adding in constants of integration. That is, we add $X(t)\vec{c}$ for a vector of arbitrary constants. But that is precisely the complementary solution.

3.9.3 Second order constant coefficients

Undetermined coefficients

We have already done a simple example of the method of undetermined coefficients for second order systems in § 3.6. This method is essentially the same as undetermined coefficients for first order systems. There are, however, some simplifications we can make, as we did in § 3.6. Let the equation be
\[ \vec{x}\,'' = A\vec{x} + \vec{F}(t), \]
where $A$ is a constant matrix. If $\vec{F}(t)$ is of the form $\vec{F}_0 \cos \omega t$, then you can try a solution of the form
\[ \vec{x}_p = \vec{c} \cos \omega t, \]


and you do not need to introduce sines.

If $\vec{F}$ is a sum of cosines, note that we still have the superposition principle. If $\vec{F}(t) = \vec{F}_0 \cos \omega_0 t + \vec{F}_1 \cos \omega_1 t$, then you would try $\vec{a} \cos \omega_0 t$ for the problem $\vec{x}\,'' = A\vec{x} + \vec{F}_0 \cos \omega_0 t$, and you would try $\vec{b} \cos \omega_1 t$ for the problem $\vec{x}\,'' = A\vec{x} + \vec{F}_1 \cos \omega_1 t$. Then sum the solutions.

However, if there is duplication with the complementary solution, or if the equation is of the form $\vec{x}\,'' = A\vec{x}\,' + B\vec{x} + \vec{F}(t)$, then you need to do the same thing as you do for first order systems.

You will actually never go wrong by putting more terms than needed into your guess; you will just find that the extra coefficients turn out to be zero. But trimming the guess saves some time and effort.

Eigenvector decomposition

If we have the system
\[ \vec{x}\,'' = A\vec{x} + \vec{F}(t), \]
we can do eigenvector decomposition, just like for first order systems. Let $\lambda_1, \ldots, \lambda_n$ be the eigenvalues and $\vec{v}_1, \ldots, \vec{v}_n$ be the eigenvectors. Again form the matrix $E = [\,\vec{v}_1 \ \cdots \ \vec{v}_n\,]$. Write
\[ \vec{x}(t) = \vec{v}_1\, \xi_1(t) + \vec{v}_2\, \xi_2(t) + \cdots + \vec{v}_n\, \xi_n(t). \]
Decompose $\vec{F}$ in terms of the eigenvectors:
\[ \vec{F}(t) = \vec{v}_1\, g_1(t) + \vec{v}_2\, g_2(t) + \cdots + \vec{v}_n\, g_n(t). \]
And again $\vec{g} = E^{-1}\vec{F}$. Now plug in and do the same thing as before:

\[
\begin{aligned}
\vec{x}\,'' &= \vec{v}_1\, \xi_1'' + \vec{v}_2\, \xi_2'' + \cdots + \vec{v}_n\, \xi_n'' \\
&= A\bigl( \vec{v}_1\, \xi_1 + \vec{v}_2\, \xi_2 + \cdots + \vec{v}_n\, \xi_n \bigr) + \vec{v}_1\, g_1 + \vec{v}_2\, g_2 + \cdots + \vec{v}_n\, g_n \\
&= A\vec{v}_1\, \xi_1 + A\vec{v}_2\, \xi_2 + \cdots + A\vec{v}_n\, \xi_n + \vec{v}_1\, g_1 + \cdots + \vec{v}_n\, g_n \\
&= \vec{v}_1\, \lambda_1 \xi_1 + \vec{v}_2\, \lambda_2 \xi_2 + \cdots + \vec{v}_n\, \lambda_n \xi_n + \vec{v}_1\, g_1 + \cdots + \vec{v}_n\, g_n \\
&= \vec{v}_1 (\lambda_1 \xi_1 + g_1) + \vec{v}_2 (\lambda_2 \xi_2 + g_2) + \cdots + \vec{v}_n (\lambda_n \xi_n + g_n).
\end{aligned}
\]

Identify the coefficients of the eigenvectors to get the equations
\[
\xi_1'' = \lambda_1 \xi_1 + g_1, \qquad
\xi_2'' = \lambda_2 \xi_2 + g_2, \qquad \ldots, \qquad
\xi_n'' = \lambda_n \xi_n + g_n.
\]

Each of these equations is independent of the others. Solve each one using the methods of chapter 2. Then write $\vec{x}(t) = \vec{v}_1\, \xi_1(t) + \cdots + \vec{v}_n\, \xi_n(t)$, and we are done; we have a particular solution. If you have found the general solution for $\xi_1$ through $\xi_n$, then $\vec{x}(t) = \vec{v}_1\, \xi_1(t) + \cdots + \vec{v}_n\, \xi_n(t)$ is again the general solution.


Example 3.9.5: Let us do the example from § 3.6 using this method. The equation is

~x 00 ="

�3 12 �2

#

~x +"

02

#

cos 3t.

The eigenvalues were $-1$ and $-4$, with eigenvectors $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$. So $E = \begin{bmatrix} 1 & 1 \\ 2 & -1 \end{bmatrix}$ and $E^{-1} = \frac{1}{3}\begin{bmatrix} 1 & 1 \\ 2 & -1 \end{bmatrix}$. Therefore,
\[
\begin{bmatrix} g_1 \\ g_2 \end{bmatrix} = E^{-1}\, \vec{F}(t)
= \frac{1}{3} \begin{bmatrix} 1 & 1 \\ 2 & -1 \end{bmatrix} \begin{bmatrix} 0 \\ 2\cos 3t \end{bmatrix}
= \begin{bmatrix} \frac{2}{3}\cos 3t \\ -\frac{2}{3}\cos 3t \end{bmatrix}.
\]

So after the whole song and dance of plugging in, the equations we get are

\[
\xi_1'' = -\xi_1 + \tfrac{2}{3} \cos 3t, \qquad
\xi_2'' = -4\,\xi_2 - \tfrac{2}{3} \cos 3t.
\]

For each we can use the method of undetermined coefficients: we try $C_1 \cos 3t$ for the first equation and $C_2 \cos 3t$ for the second. We plug in:

\[
-9 C_1 \cos 3t = -C_1 \cos 3t + \tfrac{2}{3} \cos 3t, \qquad
-9 C_2 \cos 3t = -4 C_2 \cos 3t - \tfrac{2}{3} \cos 3t.
\]

We solve each of these separately: we get $-9C_1 = -C_1 + \frac{2}{3}$ and $-9C_2 = -4C_2 - \frac{2}{3}$, and hence $C_1 = \frac{-1}{12}$ and $C_2 = \frac{2}{15}$. So our particular solution is

~x ="

12

#

�112

cos 3t!

+

"

1�1

#

215

cos 3t!

=

" 120�310

#

cos 3t.

This matches what we got previously in § 3.6.
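A quick numerical spot check of this answer (a sketch with numpy, not part of the text's method) confirms that the particular solution satisfies the equation at a few sample times:

import numpy as np

A = np.array([[-3.0, 1.0], [2.0, -2.0]])
F0 = np.array([0.0, 2.0])
c = np.array([1.0 / 20.0, -3.0 / 10.0])   # coefficient vector of the particular solution

for t in (0.0, 0.7, 2.3):
    x = c * np.cos(3 * t)
    xpp = -9.0 * c * np.cos(3 * t)        # second derivative of c*cos(3t)
    print(np.allclose(xpp, A @ x + F0 * np.cos(3 * t)))  # True at each t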

3.9.4 Exercises

Exercise 3.9.4: Find a particular solution to $x' = x + 2y + 2t$, $y' = 3x + 2y - 4$: a) using the integrating factor method, b) using eigenvector decomposition, c) using undetermined coefficients.

Exercise 3.9.5: Find the general solution to $x' = 4x + y - 1$, $y' = x + 4y - e^t$: a) using the integrating factor method, b) using eigenvector decomposition, c) using undetermined coefficients.

Exercise 3.9.6: Find the general solution to $x_1'' = -6x_1 + 3x_2 + \cos t$, $x_2'' = 2x_1 - 7x_2 + 3\cos t$: a) using eigenvector decomposition, b) using undetermined coefficients.


Exercise 3.9.7: Find the general solution to $x_1'' = -6x_1 + 3x_2 + \cos 2t$, $x_2'' = 2x_1 - 7x_2 + 3\cos 2t$: a) using eigenvector decomposition, b) using undetermined coefficients.

Exercise 3.9.8: Take the equation

~x 0 ="1

t �11 1

t

#

~x +"

t2

�t

#

.

a) Check that

\[
\vec{x}_c = c_1 \begin{bmatrix} t \sin t \\ -t \cos t \end{bmatrix} + c_2 \begin{bmatrix} t \cos t \\ t \sin t \end{bmatrix}
\]

is the complementary solution. b) Use variation of parameters to find a particular solution.


Chapter 4

Fourier series and PDEs

4.1 Boundary value problems

Note: 2 lectures, similar to §3.8 in EP

4.1.1 Boundary value problems

Before we tackle the Fourier series, we need to study the so-called boundary value problems (or endpoint problems). For example, suppose we have
\[ x'' + \lambda x = 0, \quad x(a) = 0, \quad x(b) = 0, \]
for some constant $\lambda$, where $x(t)$ is defined for $t$ in the interval $[a, b]$. Unlike before, when we specified the value of the solution and its derivative at a single point, we now specify the value of the solution at two different points. Note that $x = 0$ is a solution to this equation, so existence of solutions is not an issue here. Uniqueness is another issue. The general solution to $x'' + \lambda x = 0$ has two arbitrary constants, so it is natural to think that requiring two conditions will guarantee a unique solution.

Example 4.1.1: However, take $\lambda = 1$, $a = 0$, $b = \pi$. That is,
\[ x'' + x = 0, \quad x(0) = 0, \quad x(\pi) = 0. \]
Then $x = \sin t$ is another solution satisfying both boundary conditions. In fact, write down the general solution of the differential equation, which is $x = A\cos t + B\sin t$. The condition $x(0) = 0$ forces $A = 0$. But letting $x(\pi) = 0$ does not give us any more information, as $x = B\sin t$ already satisfies both conditions. Hence there are infinitely many solutions $x = B\sin t$ for an arbitrary constant $B$.


Example 4.1.2: On the other hand, change to $\lambda = 2$:
\[ x'' + 2x = 0, \quad x(0) = 0, \quad x(\pi) = 0. \]
Then the general solution is $x = A\cos\sqrt{2}\,t + B\sin\sqrt{2}\,t$. Letting $x(0) = 0$ still forces $A = 0$. But now letting $0 = x(\pi) = B\sin\sqrt{2}\,\pi$; since $\sin\sqrt{2}\,\pi \neq 0$, we get $B = 0$. So $x = 0$ is the unique solution to this problem.

So what is going on? We will be interested in classifying which constants $\lambda$ give a nonzero solution, and we will be interested in finding those solutions. This problem is an analogue of finding eigenvalues and eigenvectors of matrices.

4.1.2 Eigenvalue problems

In general we will consider more equations, but we will postpone this until chapter 5. For the basic Fourier series theory we need only the following three cases:
\[ x'' + \lambda x = 0, \quad x(a) = 0, \quad x(b) = 0, \tag{4.1} \]
\[ x'' + \lambda x = 0, \quad x'(a) = 0, \quad x'(b) = 0, \tag{4.2} \]
and
\[ x'' + \lambda x = 0, \quad x(a) = x(b), \quad x'(a) = x'(b). \tag{4.3} \]
A number $\lambda$ is an eigenvalue of (4.1) (resp. (4.2) or (4.3)) if and only if there exists a nonzero solution to (4.1) (resp. (4.2) or (4.3)) for that specific $\lambda$. A nonzero solution is then called a corresponding eigenfunction.

Note the similarity to eigenvalues and eigenvectors of matrices. The similarity is not just coincidental. If we think of the equations as differential operators, then we are doing the exact same thing. For example, let $L = -\frac{d^2}{dt^2}$; then we are looking for eigenfunctions $f$ satisfying certain endpoint conditions that solve $(L - \lambda)f = 0$. A lot of the formalism from linear algebra can still apply here, though we will not pursue this line of reasoning too far.

Example 4.1.3: Let us find the eigenvalues and eigenfunctions of

\[ x'' + \lambda x = 0, \quad x(0) = 0, \quad x(\pi) = 0. \]
We have to handle the cases $\lambda > 0$, $\lambda = 0$, $\lambda < 0$ separately. First suppose that $\lambda > 0$; then the general solution to $x'' + \lambda x = 0$ is
\[ x = A\cos\sqrt{\lambda}\,t + B\sin\sqrt{\lambda}\,t. \]
The condition $x(0) = 0$ implies immediately $A = 0$. Next,
\[ 0 = x(\pi) = B\sin\sqrt{\lambda}\,\pi. \]


If $B$ is zero, then $x$ is not a nonzero solution. So to get a nonzero solution we must have $\sin\sqrt{\lambda}\,\pi = 0$. Hence $\sqrt{\lambda}\,\pi$ must be an integer multiple of $\pi$, or $\sqrt{\lambda} = k$ for a positive integer $k$. Hence the positive eigenvalues are $k^2$ for all integers $k \geq 1$. The corresponding eigenfunctions can be taken as $x = \sin kt$. Just like for eigenvectors, we get all the multiples of an eigenfunction, so we only need to pick one.

Now suppose that $\lambda = 0$. In this case the equation is $x'' = 0$ and the general solution is $x = At + B$. The condition $x(0) = 0$ implies that $B = 0$, and then $x(\pi) = 0$ implies that $A = 0$. This means that $\lambda = 0$ is not an eigenvalue.

Finally, let $\lambda < 0$. In this case we have the general solution
\[ x = A\cosh\sqrt{-\lambda}\,t + B\sinh\sqrt{-\lambda}\,t. \]
Letting $x(0) = 0$ implies that $A = 0$ (recall $\cosh 0 = 1$ and $\sinh 0 = 0$). So our solution must be $x = B\sinh\sqrt{-\lambda}\,t$ and satisfy $x(\pi) = 0$. This is only possible if $B$ is zero. Why? Because $\sinh\xi$ is only zero for $\xi = 0$; you should plot $\sinh$ to see this. We can also just look at the definition: $0 = \sinh t = \frac{e^t - e^{-t}}{2}$. Hence $e^t = e^{-t}$, which implies $t = -t$, and that is only true if $t = 0$. So there are no negative eigenvalues.

In summary, the eigenvalues and corresponding eigenfunctions are
\[ \lambda_k = k^2 \ \text{with an eigenfunction}\ x_k = \sin kt \ \text{for all integers}\ k \geq 1. \]

Example 4.1.4: Let us also compute the eigenvalues and eigenfunctions of

\[ x'' + \lambda x = 0, \quad x'(0) = 0, \quad x'(\pi) = 0. \]
Again we have to handle the cases $\lambda > 0$, $\lambda = 0$, $\lambda < 0$ separately. First suppose that $\lambda > 0$. The general solution to $x'' + \lambda x = 0$ is $x = A\cos\sqrt{\lambda}\,t + B\sin\sqrt{\lambda}\,t$, so
\[ x' = -A\sqrt{\lambda}\sin\sqrt{\lambda}\,t + B\sqrt{\lambda}\cos\sqrt{\lambda}\,t. \]
The condition $x'(0) = 0$ implies immediately $B = 0$. Next,
\[ 0 = x'(\pi) = -A\sqrt{\lambda}\sin\sqrt{\lambda}\,\pi. \]
Again $A$ should not be zero, and $\sin\sqrt{\lambda}\,\pi$ is only zero if $\sqrt{\lambda} = k$ for a positive integer $k$. Hence the positive eigenvalues are again $k^2$ for all integers $k \geq 1$, and the corresponding eigenfunctions can be taken as $x = \cos kt$.

Now suppose that $\lambda = 0$. In this case the equation is $x'' = 0$, the general solution is $x = At + B$, and so $x' = A$. The condition $x'(0) = 0$ implies that $A = 0$. Obviously, setting $x'(\pi) = 0$ does not get us anything new. This means that $B$ could be anything (let us take it to be 1). So $\lambda = 0$ is an eigenvalue and $x = 1$ is a corresponding eigenfunction.

Finally, let $\lambda < 0$. In this case we have the general solution $x = A\cosh\sqrt{-\lambda}\,t + B\sinh\sqrt{-\lambda}\,t$, and hence
\[ x' = A\sqrt{-\lambda}\sinh\sqrt{-\lambda}\,t + B\sqrt{-\lambda}\cosh\sqrt{-\lambda}\,t. \]


We have already seen (with the roles of $A$ and $B$ switched) that for this to be zero at $t = 0$ and $t = \pi$, we must have $A = B = 0$. Hence there are no negative eigenvalues.

In summary, the eigenvalues and corresponding eigenfunctions are
\[ \lambda_k = k^2 \ \text{with an eigenfunction}\ x_k = \cos kt \ \text{for all integers}\ k \geq 1, \]
and there is another eigenvalue,
\[ \lambda_0 = 0 \ \text{with an eigenfunction}\ x_0 = 1. \]

We could also do this for a slightly more complicated boundary value problem. This problem is the one that leads to the general Fourier series.

Example 4.1.5: Let us compute the eigenvalues and eigenfunctions of

\[ x'' + \lambda x = 0, \quad x(-\pi) = x(\pi), \quad x'(-\pi) = x'(\pi). \]
Notice that we have not specified the values of the solution or its derivative at the endpoints, but rather that they are the same at the beginning and at the end of the interval.

Let us skip $\lambda < 0$. The computations are the same as before, and again we find that there are no negative eigenvalues.

For $\lambda = 0$, the general solution is $x = At + B$. The condition $x(-\pi) = x(\pi)$ implies that $A = 0$ ($A\pi + B = -A\pi + B$ implies $A = 0$). The second condition $x'(-\pi) = x'(\pi)$ says nothing about $B$, and hence $\lambda = 0$ is an eigenvalue with a corresponding eigenfunction $x = 1$.

For $\lambda > 0$ we get that $x = A\cos\sqrt{\lambda}\,t + B\sin\sqrt{\lambda}\,t$. Now
\[ A\cos\bigl(-\sqrt{\lambda}\,\pi\bigr) + B\sin\bigl(-\sqrt{\lambda}\,\pi\bigr) = A\cos\sqrt{\lambda}\,\pi + B\sin\sqrt{\lambda}\,\pi. \]
We remember that $\cos(-\theta) = \cos\theta$ and $\sin(-\theta) = -\sin\theta$. Therefore,
\[ A\cos\sqrt{\lambda}\,\pi - B\sin\sqrt{\lambda}\,\pi = A\cos\sqrt{\lambda}\,\pi + B\sin\sqrt{\lambda}\,\pi, \]
and hence either $B = 0$ or $\sin\sqrt{\lambda}\,\pi = 0$. Similarly (exercise), if we differentiate $x$ and plug in the second condition, we find that $A = 0$ or $\sin\sqrt{\lambda}\,\pi = 0$. Therefore, unless we want $A$ and $B$ to both be zero (which we do not), we must have $\sin\sqrt{\lambda}\,\pi = 0$. Hence $\sqrt{\lambda}$ is an integer, and the eigenvalues are yet again $\lambda = k^2$ for an integer $k \geq 1$. In this case, however, $x = A\cos kt + B\sin kt$ is an eigenfunction for any $A$ and any $B$. So we have two linearly independent eigenfunctions $\sin kt$ and $\cos kt$. Remember that for a matrix we could also have two eigenvectors corresponding to a single eigenvalue if the eigenvalue was repeated.

In summary, the eigenvalues and corresponding eigenfunctions are
\[ \lambda_k = k^2 \ \text{with the eigenfunctions}\ \cos kt \ \text{and}\ \sin kt \ \text{for all integers}\ k \geq 1, \]
\[ \lambda_0 = 0 \ \text{with an eigenfunction}\ x_0 = 1. \]


4.1.3 Orthogonality of eigenfunctions

Something that will be very useful in the next section is the orthogonality property of the eigenfunctions. This is an analogue of the following fact about eigenvectors of a matrix. A matrix is called symmetric if $A = A^T$. Eigenvectors for two distinct eigenvalues of a symmetric matrix are orthogonal. That symmetry is required; we will not prove this fact here. The differential operators we are dealing with act much like a symmetric matrix. We therefore get the following theorem.

Theorem 4.1.1. Suppose that $x_1(t)$ and $x_2(t)$ are two eigenfunctions of the problem (4.1), (4.2), or (4.3) for two different eigenvalues $\lambda_1$ and $\lambda_2$. Then they are orthogonal in the sense that
\[ \int_a^b x_1(t)\, x_2(t)\, dt = 0. \]

Note that the terminology comes from the fact that the integral is a type of inner product. We will expand on this in the next section. The theorem has a very short, elegant, and illuminating proof, so let us give it here. First note that we have the following two equations:
\[ x_1'' + \lambda_1 x_1 = 0 \quad \text{and} \quad x_2'' + \lambda_2 x_2 = 0. \]

Multiply the first by x2 and the second by x1 and subtract to get

\[ (\lambda_1 - \lambda_2)\, x_1 x_2 = x_2'' x_1 - x_2 x_1''. \]

Now integrate both sides of the equation:
\[
\begin{aligned}
(\lambda_1 - \lambda_2) \int_a^b x_1 x_2\, dt
&= \int_a^b x_2'' x_1 - x_2 x_1''\, dt \\
&= \int_a^b \frac{d}{dt} \bigl( x_2' x_1 - x_2 x_1' \bigr)\, dt \\
&= \Bigl[ x_2' x_1 - x_2 x_1' \Bigr]_{t=a}^{b} = 0.
\end{aligned}
\]

The last equality holds because of the boundary conditions. For example, if we consider (4.1) we have $x_1(a) = x_1(b) = x_2(a) = x_2(b) = 0$, and so $x_2' x_1 - x_2 x_1'$ is zero at both $a$ and $b$. As $\lambda_1 \neq \lambda_2$, the theorem follows.

Exercise 4.1.1 (easy): Finish the proof of the theorem (check the last equality in the proof) for the cases (4.2) and (4.3).

We have seen previously that $\sin nt$ is an eigenfunction for the problem $x'' + \lambda x = 0$, $x(0) = 0$, $x(\pi) = 0$. Hence we have the integral
\[ \int_0^\pi (\sin mt)(\sin nt)\, dt = 0, \quad \text{when } m \neq n. \]
Similarly,
\[ \int_0^\pi (\cos mt)(\cos nt)\, dt = 0, \quad \text{when } m \neq n. \]
And finally we also get
\[ \int_{-\pi}^\pi (\sin mt)(\sin nt)\, dt = 0, \quad \text{when } m \neq n, \]
\[ \int_{-\pi}^\pi (\cos mt)(\cos nt)\, dt = 0, \quad \text{when } m \neq n, \]
and
\[ \int_{-\pi}^\pi (\cos mt)(\sin nt)\, dt = 0. \]
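These orthogonality relations are easy to confirm numerically; a minimal sketch using scipy's quadrature (an illustration only, not part of the theory):

import numpy as np
from scipy.integrate import quad

for m, n in [(1, 2), (2, 5), (3, 4)]:
    ss, _ = quad(lambda t: np.sin(m * t) * np.sin(n * t), -np.pi, np.pi)
    cc, _ = quad(lambda t: np.cos(m * t) * np.cos(n * t), -np.pi, np.pi)
    cs, _ = quad(lambda t: np.cos(m * t) * np.sin(n * t), -np.pi, np.pi)
    # All three integrals should vanish (up to quadrature error).
    print(abs(ss) < 1e-8, abs(cc) < 1e-8, abs(cs) < 1e-8)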

4.1.4 Fredholm alternative

We now touch on a very useful theorem in the theory of differential equations. The theorem holds in a more general setting than we are going to state it, but for our purposes the following statement is sufficient. We will give a slightly more general version in chapter 5.

Theorem 4.1.2 (Fredholm alternative∗). Either
\[ x'' + \lambda x = 0, \quad x(a) = 0, \quad x(b) = 0 \tag{4.4} \]
has a nonzero solution, or
\[ x'' + \lambda x = f(t), \quad x(a) = 0, \quad x(b) = 0 \tag{4.5} \]
has a unique solution for every continuous function $f$.

The theorem is also true for the other types of boundary conditions we considered. It means that if $\lambda$ is not an eigenvalue, the nonhomogeneous equation (4.5) has a unique solution for every right-hand side. On the other hand, if $\lambda$ is an eigenvalue, then (4.5) need not have a solution for every $f$; furthermore, even if it happens to have a solution, the solution is not unique.

We also want to reinforce the idea here that linear differential operators have much in common with matrices. So it is no surprise that there is a finite dimensional version of the Fredholm alternative for matrices as well. Let $A$ be an $n \times n$ matrix. The Fredholm alternative then states that either $(A - \lambda I)\vec{x} = \vec{0}$ has a nontrivial solution, or $(A - \lambda I)\vec{x} = \vec{b}$ has a solution for every $\vec{b}$.

A lot of intuition from linear algebra can be applied to linear differential operators, but one must be careful, of course. For example, one obvious difference we have already seen is that in general a differential operator has infinitely many eigenvalues, while a matrix has only finitely many.

∗Named after the Swedish mathematician Erik Ivar Fredholm (1866 – 1927).


4.1.5 Application

Let us consider a physical application of an endpoint problem. Suppose we have a tightly stretched, quickly spinning elastic string or rope of uniform linear density $\rho$. Let us put this problem into the $xy$-plane. The $x$ axis represents the position on the string. The string rotates at angular velocity $\omega$, so we will assume that the whole $xy$-plane rotates along at angular velocity $\omega$. We will assume that the string stays in this $xy$-plane, and $y$ will measure its deflection from the equilibrium position, $y = 0$, on the $x$ axis. Hence we will find a graph that gives the shape of the string. We will idealize the string to have no volume, to be just a mathematical curve. If we take a small segment and look at the tension at the endpoints, we see that this force is tangential, and we will assume that its magnitude is the same at both endpoints. Hence the magnitude is constant everywhere, and we will call it $T$. If we assume that the deflection is small, then we can use Newton's second law to get the equation
\[ T y'' + \rho \omega^2 y = 0. \]
Let $L$ be the length of the string; the string is fixed at the beginning and end points. Hence $y(0) = 0$ and $y(L) = 0$. See Figure 4.1.

[Figure 4.1: Whirling string.]

We rewrite the equation as $y'' + \frac{\rho\omega^2}{T}\, y = 0$. The setup is similar to Example 4.1.3, except that the interval length is $L$ instead of $\pi$. We are looking for the eigenvalues of $y'' + \lambda y = 0$, $y(0) = 0$, $y(L) = 0$, where $\lambda = \frac{\rho\omega^2}{T}$. As before, there are no nonpositive eigenvalues. With $\lambda > 0$, the general solution to the equation is $y = A\cos\sqrt{\lambda}\,x + B\sin\sqrt{\lambda}\,x$. The condition $y(0) = 0$ implies that $A = 0$ as before. The condition $y(L) = 0$ implies that $\sin\sqrt{\lambda}\,L = 0$, and hence $\sqrt{\lambda}\,L = k\pi$ for some integer $k > 0$, so
\[ \frac{\rho\omega^2}{T} = \lambda = \frac{k^2\pi^2}{L^2}. \]

What does this say about the shape of the string? It says that for all parameters $\rho$, $\omega$, $T$ not satisfying the above equation, the string is in the equilibrium position, $y = 0$. When $\frac{\rho\omega^2}{T} = \frac{k^2\pi^2}{L^2}$, then the string will “pop out” some distance $B$ at the midpoint. We cannot compute $B$ with the information we have.

Let us assume that $\rho$ and $T$ are fixed and we vary $\omega$. For most values of $\omega$ the string is in the equilibrium state. But when the angular velocity $\omega$ hits a value $\omega = \frac{k\pi\sqrt{T}}{L\sqrt{\rho}}$, the string pops out and has the shape of a sine wave crossing the $x$ axis $k$ times. When $\omega$ changes again, the string returns to the equilibrium position. You can see that the higher the angular velocity, the more times the popped-out string crosses the $x$ axis.
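To get a feel for the magnitudes, here is a minimal sketch that tabulates the critical angular velocities $\omega = \frac{k\pi\sqrt{T}}{L\sqrt{\rho}}$; the numerical values of $T$, $\rho$, and $L$ below are made-up sample numbers, not data from the text:

import math

T, rho, L = 10.0, 0.1, 2.0   # sample tension (N), linear density (kg/m), length (m)
for k in range(1, 5):
    omega = k * math.pi * math.sqrt(T) / (L * math.sqrt(rho))
    print(f"k = {k}: critical omega = {omega:.2f} rad/s")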

4.1.6 Exercises

Hint for the following exercises: Note that $\cos\sqrt{\lambda}\,(t-a)$ and $\sin\sqrt{\lambda}\,(t-a)$ are also solutions of the homogeneous equation.

Exercise 4.1.2: Compute all eigenvalues and eigenfunctions of $x'' + \lambda x = 0$, $x(a) = 0$, $x(b) = 0$.

Exercise 4.1.3: Compute all eigenvalues and eigenfunctions of $x'' + \lambda x = 0$, $x'(a) = 0$, $x'(b) = 0$.

Exercise 4.1.4: Compute all eigenvalues and eigenfunctions of $x'' + \lambda x = 0$, $x'(a) = 0$, $x(b) = 0$.

Exercise 4.1.5: Compute all eigenvalues and eigenfunctions of $x'' + \lambda x = 0$, $x(a) = x(b)$, $x'(a) = x'(b)$.

Exercise 4.1.6: We have skipped the case $\lambda < 0$ for the boundary value problem $x'' + \lambda x = 0$, $x(-\pi) = x(\pi)$, $x'(-\pi) = x'(\pi)$. Finish the calculation and show that there are no negative eigenvalues.


4.2 The trigonometric series

Note: 2 lectures, §9.1 in EP

4.2.1 Periodic functions and motivation

As motivation for studying Fourier series, suppose we have the problem
\[ x'' + \omega_0^2\, x = f(t), \tag{4.6} \]
for some periodic function $f(t)$. We have already solved
\[ x'' + \omega_0^2\, x = F_0 \cos \omega t. \tag{4.7} \]
One way to solve (4.6) is to decompose $f(t)$ as a sum of cosines (and sines) and then solve many problems of the form (4.7). We then use the principle of superposition to sum up all the solutions to get a solution to (4.6).

Before we proceed, let us talk in a little more detail about periodic functions. A function is said to be periodic with period $P$ if $f(t) = f(t + P)$ for all $t$. For brevity we say $f(t)$ is $P$-periodic. Note that a $P$-periodic function is also $2P$-periodic, $3P$-periodic, and so on. For example, $\cos t$ and $\sin t$ are $2\pi$-periodic, as are $\cos kt$ and $\sin kt$ for all integers $k$. The constant functions are an extreme example: they are periodic for any period (exercise).

Normally we start with a function $f(t)$ defined on some interval $[-L, L]$ that we want to extend periodically to a $2L$-periodic function. We do this extension by defining a new function $F(t)$ such that $F(t) = f(t)$ for $t$ in $[-L, L]$. For $t$ in $[L, 3L]$ we define $F(t) = f(t - 2L)$, for $t$ in $[-3L, -L]$ we define $F(t) = f(t + 2L)$, and so on.

Example 4.2.1: Define $f(t) = 1 - t^2$ on $[-1, 1]$. Now extend it periodically to a 2-periodic function. See Figure 4.2.

You should be careful to distinguish between $f(t)$ and its extension. A common mistake is to assume that a formula for $f(t)$ holds for its extension. It can be confusing when the formula for $f(t)$ is itself periodic, but with perhaps a different period.

Exercise 4.2.1: Define $f(t) = \cos t$ on $[-\pi/2, \pi/2]$. Now take the $\pi$-periodic extension and sketch its graph. How does it compare to the graph of $\cos t$?

4.2.2 Inner product and eigenvector decomposition

Suppose we have a symmetric matrix, that is, $A^T = A$. We have said before that the eigenvectors of $A$ are then orthogonal. Here the word orthogonal means that if $\vec{v}$ and $\vec{w}$ are two distinct eigenvectors of $A$, then $\langle \vec{v}, \vec{w} \rangle = 0$. In this case the inner product $\langle \vec{v}, \vec{w} \rangle$ is the dot product, which can be computed as $\vec{v}^T \vec{w}$.


[Figure 4.2: Periodic extension of the function $1 - t^2$.]

To decompose a vector $\vec{v}$ in terms of mutually orthogonal vectors $\vec{w}_1$ and $\vec{w}_2$, we write
\[ \vec{v} = a_1 \vec{w}_1 + a_2 \vec{w}_2. \]
Let us find the formula for $a_1$ and $a_2$. First let us compute
\[ \langle \vec{v}, \vec{w}_1 \rangle = \langle a_1\vec{w}_1 + a_2\vec{w}_2, \vec{w}_1 \rangle = a_1 \langle \vec{w}_1, \vec{w}_1 \rangle + a_2 \langle \vec{w}_2, \vec{w}_1 \rangle = a_1 \langle \vec{w}_1, \vec{w}_1 \rangle. \]
Therefore,
\[ a_1 = \frac{\langle \vec{v}, \vec{w}_1 \rangle}{\langle \vec{w}_1, \vec{w}_1 \rangle}. \]
Similarly,
\[ a_2 = \frac{\langle \vec{v}, \vec{w}_2 \rangle}{\langle \vec{w}_2, \vec{w}_2 \rangle}. \]

You probably remember this formula from vector calculus.

Example 4.2.2: Write $\vec{v} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$ as a linear combination of $\vec{w}_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$ and $\vec{w}_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$.

First note that $\vec{w}_1$ and $\vec{w}_2$ are orthogonal, as $\langle \vec{w}_1, \vec{w}_2 \rangle = 1(1) + (-1)1 = 0$. Then
\[
a_1 = \frac{\langle \vec{v}, \vec{w}_1 \rangle}{\langle \vec{w}_1, \vec{w}_1 \rangle} = \frac{2(1) + 3(-1)}{1(1) + (-1)(-1)} = \frac{-1}{2}, \qquad
a_2 = \frac{\langle \vec{v}, \vec{w}_2 \rangle}{\langle \vec{w}_2, \vec{w}_2 \rangle} = \frac{2 + 3}{1 + 1} = \frac{5}{2}.
\]
Hence
\[
\begin{bmatrix} 2 \\ 3 \end{bmatrix} = \frac{-1}{2} \begin{bmatrix} 1 \\ -1 \end{bmatrix} + \frac{5}{2} \begin{bmatrix} 1 \\ 1 \end{bmatrix}.
\]
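The same decomposition takes a few lines with numpy dot products (a sketch for comparison, nothing more):

import numpy as np

v = np.array([2.0, 3.0])
w1 = np.array([1.0, -1.0])
w2 = np.array([1.0, 1.0])

a1 = (v @ w1) / (w1 @ w1)    # -1/2
a2 = (v @ w2) / (w2 @ w2)    #  5/2
print(a1, a2, np.allclose(v, a1 * w1 + a2 * w2))   # -0.5 2.5 True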


4.2.3 The trigonometric series

Instead of decomposing a vector in terms of the eigenvectors of a matrix, we will now decompose a function in terms of eigenfunctions of a certain eigenvalue problem. In particular, the eigenvalue problem we use for the Fourier series is
\[ x'' + \lambda x = 0, \quad x(-\pi) = x(\pi), \quad x'(-\pi) = x'(\pi). \]
We have previously computed that the eigenfunctions are $1$, $\cos kt$, $\sin kt$. That is, we want to find a representation of a $2\pi$-periodic function $f(t)$ as
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos nt + b_n \sin nt. \]
This series is called the Fourier series† or the trigonometric series for $f(t)$. Note that we have used the eigenfunction $\frac{1}{2}$ instead of $1$. This is for convenience. We could also think of $1 = \cos 0t$, so that we only need to look at $\cos kt$ and $\sin kt$.

Just like for matrices, we want to find a projection of $f(t)$ onto the subspace generated by the eigenfunctions. So we will want to define an inner product of functions. For example, to find $a_n$ we want to compute $\langle f(t), \cos nt \rangle$. We define the inner product as
\[ \langle f(t), g(t) \rangle \overset{\text{def}}{=} \frac{1}{\pi} \int_{-\pi}^{\pi} f(t)\, g(t)\, dt. \]
With this definition of the inner product, we saw in the previous section that the eigenfunctions $\cos kt$ (this includes the constant eigenfunction) and $\sin kt$ are orthogonal in the sense that
\[
\begin{aligned}
\langle \cos mt, \cos nt \rangle &= 0 \quad \text{for } m \neq n, \\
\langle \sin mt, \sin nt \rangle &= 0 \quad \text{for } m \neq n, \\
\langle \sin mt, \cos nt \rangle &= 0 \quad \text{for all } m \text{ and } n.
\end{aligned}
\]

By elementary calculus, we have $\langle \cos nt, \cos nt \rangle = 1$ (except for $n = 0$) and $\langle \sin nt, \sin nt \rangle = 1$. For the constant we get $\langle 1, 1 \rangle = 2$. The coefficients are given by
\[
\begin{aligned}
a_n &= \frac{\langle f(t), \cos nt \rangle}{\langle \cos nt, \cos nt \rangle} = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \cos nt\, dt, \\
b_n &= \frac{\langle f(t), \sin nt \rangle}{\langle \sin nt, \sin nt \rangle} = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \sin nt\, dt.
\end{aligned}
\]
Compare these expressions with the finite dimensional example. The formula above also works for $n = 0$, or more simply,
\[ a_0 = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t)\, dt. \]

†Named after the French mathematician Jean Baptiste Joseph Fourier (1768 – 1830).


Let us check the formulas using the orthogonality properties. Suppose for a moment that
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos nt + b_n \sin nt. \]
Then for $m \geq 1$ we have
\[
\begin{aligned}
\langle f(t), \cos mt \rangle
&= \Bigl\langle \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos nt + b_n \sin nt,\ \cos mt \Bigr\rangle \\
&= \frac{a_0}{2} \langle 1, \cos mt \rangle + \sum_{n=1}^{\infty} a_n \langle \cos nt, \cos mt \rangle + b_n \langle \sin nt, \cos mt \rangle \\
&= a_m \langle \cos mt, \cos mt \rangle.
\end{aligned}
\]
And hence $a_m = \dfrac{\langle f(t), \cos mt \rangle}{\langle \cos mt, \cos mt \rangle}$.

Exercise 4.2.2: Carry out the calculation for a0 and bm.

Example 4.2.3: Take the function
\[ f(t) = t \]
for $t$ in $(-\pi, \pi]$. Extend $f(t)$ periodically and write it as a Fourier series. This function is called the sawtooth.

[Figure 4.3: The graph of the sawtooth function.]

The plot of the extended periodic function is given in Figure 4.3. Now we compute the coefficients. Let us start with $a_0$:
\[ a_0 = \frac{1}{\pi} \int_{-\pi}^{\pi} t\, dt = 0. \]


We will often use the result from calculus that the integral of an odd function over a symmetric interval is zero. Recall that an odd function is a function $\varphi(t)$ such that $\varphi(-t) = -\varphi(t)$. For example, the functions $t$, $\sin t$, and (more to the point) $t\cos mt$ are all odd. Hence
\[ a_m = \frac{1}{\pi} \int_{-\pi}^{\pi} t \cos mt\, dt = 0. \]

Let us move to $b_m$. Another useful fact from calculus is that the integral of an even function over a symmetric interval is twice the integral of the same function over half the interval. Recall that an even function is a function $\varphi(t)$ such that $\varphi(-t) = \varphi(t)$. For example, $t\sin mt$ is even.
\[
\begin{aligned}
b_m &= \frac{1}{\pi} \int_{-\pi}^{\pi} t \sin mt\, dt
= \frac{2}{\pi} \int_0^{\pi} t \sin mt\, dt \\
&= \frac{2}{\pi} \left( \Bigl[ \frac{-t\cos mt}{m} \Bigr]_{t=0}^{\pi} + \frac{1}{m} \int_0^{\pi} \cos mt\, dt \right) \\
&= \frac{2}{\pi} \left( \frac{-\pi\cos m\pi}{m} + 0 \right)
= \frac{-2\cos m\pi}{m} = \frac{2\,(-1)^{m+1}}{m}.
\end{aligned}
\]

We have used the fact that
\[ \cos m\pi = (-1)^m = \begin{cases} 1 & \text{if } m \text{ even}, \\ -1 & \text{if } m \text{ odd}. \end{cases} \]

The series, therefore, is
\[ f(t) = \sum_{n=1}^{\infty} \frac{2\,(-1)^{n+1}}{n} \sin nt. \]

Let us write out the first three harmonics of the series for $f(t)$:
\[ f(t) = 2\sin t - \sin 2t + \frac{2}{3}\sin 3t + \cdots \]
The plot of these first three terms of the series, along with a plot of the first 20 terms, is given in Figure 4.4.
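If you want to reproduce such plots yourself, the partial sums are one line of numpy; the following sketch just evaluates them (plotting with your favorite library is an easy next step):

import numpy as np

def sawtooth_partial(t, N):
    # Sum of the first N terms 2*(-1)^(n+1)/n * sin(n t) of the series above.
    n = np.arange(1, N + 1)
    return np.sum(2 * (-1.0) ** (n + 1) / n * np.sin(np.outer(t, n)), axis=1)

t = np.array([0.5, 1.0, 2.0])
print(sawtooth_partial(t, 20))    # already close to t itself
print(sawtooth_partial(t, 200))   # closer still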

Example 4.2.4: Take the function
\[ f(t) = \begin{cases} 0 & \text{if } -\pi < t \leq 0, \\ \pi & \text{if } 0 < t \leq \pi. \end{cases} \]
Extend $f(t)$ periodically and write it as a Fourier series. This function, or its variants, appears often in applications, and it is called the square wave.


[Figure 4.4: First 3 (left graph) and 20 (right graph) harmonics of the sawtooth function.]

[Figure 4.5: The graph of the square wave function.]

The plot of the extended periodic function is given in Figure 4.5. Now we compute the coefficients. Let us start with $a_0$:
\[ a_0 = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t)\, dt = \frac{1}{\pi} \int_0^{\pi} \pi\, dt = \pi. \]
Next,
\[ a_m = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t)\cos mt\, dt = \frac{1}{\pi} \int_0^{\pi} \pi \cos mt\, dt = 0. \]


And finally,
\[
\begin{aligned}
b_m &= \frac{1}{\pi} \int_{-\pi}^{\pi} f(t)\sin mt\, dt
= \frac{1}{\pi} \int_0^{\pi} \pi \sin mt\, dt \\
&= \Bigl[ \frac{-\cos mt}{m} \Bigr]_{t=0}^{\pi}
= \frac{1 - \cos m\pi}{m} = \frac{1 - (-1)^m}{m}
= \begin{cases} \frac{2}{m} & \text{if } m \text{ is odd}, \\ 0 & \text{if } m \text{ is even}. \end{cases}
\end{aligned}
\]

The series, therefore, is
\[ f(t) = \frac{\pi}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{2}{n}\sin nt = \frac{\pi}{2} + \sum_{k=1}^{\infty} \frac{2}{2k-1}\sin\bigl((2k-1)\,t\bigr). \]

Let us write out the first three harmonics of the series for $f(t)$:
\[ f(t) = \frac{\pi}{2} + 2\sin t + \frac{2}{3}\sin 3t + \cdots \]
The plot of these first three terms, along with a plot of the first 20 harmonics, is given in Figure 4.6.
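As a sanity check on the computed coefficients, we can recover them numerically; a short sketch using scipy (since $f$ vanishes on $(-\pi, 0]$, the integrals reduce to $[0, \pi]$):

import numpy as np
from scipy.integrate import quad

for m in range(1, 6):
    am = quad(lambda t: np.pi * np.cos(m * t), 0, np.pi)[0] / np.pi
    bm = quad(lambda t: np.pi * np.sin(m * t), 0, np.pi)[0] / np.pi
    # Compare with a_m = 0 and b_m = (1 - (-1)^m)/m from above.
    print(m, round(am, 6), round(bm, 6), round((1 - (-1) ** m) / m, 6))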

[Figure 4.6: First 3 (left graph) and 20 (right graph) harmonics of the square wave function.]

We have so far skirted the issue of convergence. It turns out that, for example for the sawtooth function $f(t)$, the equation
\[ f(t) = \sum_{n=1}^{\infty} \frac{2\,(-1)^{n+1}}{n}\sin nt \]


is only an equality for $t$ where the sawtooth is continuous. That is, we do not get an equality for $t = -\pi, \pi$ and all the other discontinuities of $f(t)$. It is not hard to see that when $t$ is an integer multiple of $\pi$ (which includes all the discontinuities), then
\[ \sum_{n=1}^{\infty} \frac{2\,(-1)^{n+1}}{n}\sin nt = 0. \]
If we redefine $f(t)$ on $[-\pi, \pi]$ as
\[ f(t) = \begin{cases} 0 & \text{if } t = -\pi \text{ or } t = \pi, \\ t & \text{otherwise}, \end{cases} \]
and extend periodically, then the series equals the extended $f(t)$ everywhere, including at the discontinuities. We will generally not worry about changing the function at finitely many points.

We will say more about convergence in the next section. Let us, however, briefly mention an effect of the discontinuity. Let us zoom in near the discontinuity in the square wave and plot the first 100 harmonics; see Figure 4.7. While the series is a very good approximation away from the discontinuities, the error (the overshoot) near the discontinuity at $t = \pi$ does not seem to get any smaller as we take more terms. This behavior is known as the Gibbs phenomenon. The region where the error is large does, however, get smaller and smaller the more terms in the series you take.
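We can measure the overshoot directly. The following sketch evaluates the partial sums of the square wave near the jump at $t = 0$ and prints the overshoot as a fraction of the jump $\pi$; it settles near $9\%$ no matter how many terms we take:

import numpy as np

def square_partial(t, K):
    # Partial sum pi/2 + sum over the first K odd harmonics.
    n = 2 * np.arange(1, K + 1) - 1
    return np.pi / 2 + np.sum((2.0 / n) * np.sin(np.outer(t, n)), axis=1)

t = np.linspace(1e-3, 1.0, 8000)
for K in (10, 50, 100):
    overshoot = square_partial(t, K).max() - np.pi
    print(K, overshoot / np.pi)    # approaches roughly 0.0895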

[Figure 4.7: Gibbs phenomenon in action.]

We can think of a periodic function as a “signal” that is a superposition of many signals of pure frequency. For example, we could think of the square wave as a tone of a certain frequency. It is in fact a superposition of many different pure tones whose frequencies are multiples of the base frequency. On the other hand, a simple sine wave is only one pure tone. The simplest way to make sound using a computer is the square wave, and the sound is very different from a nice pure tone. If you have played video games from the 1980s or so, you have heard what square waves sound like.

4.2.4 Exercises

Exercise 4.2.3: Suppose $f(t)$ is defined on $[-\pi, \pi]$ as $\sin 5t + \cos 3t$. Extend periodically and compute the Fourier series of $f(t)$.

Exercise 4.2.4: Suppose $f(t)$ is defined on $[-\pi, \pi]$ as $|t|$. Extend periodically and compute the Fourier series of $f(t)$.

Exercise 4.2.5: Suppose $f(t)$ is defined on $[-\pi, \pi]$ as $|t|^3$. Extend periodically and compute the Fourier series of $f(t)$.

Exercise 4.2.6: Suppose $f(t)$ is defined on $[-\pi, \pi]$ as
\[ f(t) = \begin{cases} -1 & \text{if } -\pi < t \leq 0, \\ 1 & \text{if } 0 < t \leq \pi. \end{cases} \]
Extend periodically and compute the Fourier series of $f(t)$.

Exercise 4.2.7: Suppose $f(t)$ is defined on $[-\pi, \pi]$ as $t^3$. Extend periodically and compute the Fourier series of $f(t)$.

Exercise 4.2.8: Suppose $f(t)$ is defined on $[-\pi, \pi]$ as $t^2$. Extend periodically and compute the Fourier series of $f(t)$.

There is another form of the Fourier series, using complex exponentials, that is sometimes easier to work with.

Exercise 4.2.9: Let
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos nt + b_n \sin nt. \]
Use Euler's formula $e^{i\theta} = \cos\theta + i\sin\theta$ to show that there exist complex numbers $c_m$ such that
\[ f(t) = \sum_{m=-\infty}^{\infty} c_m e^{imt}. \]
Note that the sum now ranges over all the integers, including negative ones. Do not worry about convergence in this calculation. Hint: It may be better to start from the complex exponential form and write the series as
\[ c_0 + \sum_{m=1}^{\infty} c_m e^{imt} + c_{-m} e^{-imt}. \]


4.3 More on the Fourier series

Note: 2 lectures, §9.2 – §9.3 in EP

Before reading the lecture, it may be good to first try Project IV (Fourier series) from the IODE website: http://www.math.uiuc.edu/iode/. After reading the lecture it may be good to continue with Project V (Fourier series again).

4.3.1 2L-periodic functions

We have computed the Fourier series for a $2\pi$-periodic function, but what about functions of different periods? Fear not, the computation is a simple case of change of variables; we just rescale the independent axis. Suppose we have a $2L$-periodic function $f(t)$ ($L$ is called the half period). Let $s = \frac{\pi}{L}\, t$; then the function
\[ g(s) = f\Bigl( \frac{L}{\pi}\, s \Bigr) \]

is $2\pi$-periodic. We want to also rescale all our sines and cosines. We will want to write
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\frac{n\pi}{L}t + b_n \sin\frac{n\pi}{L}t. \]
If we change variables to $s$, we see that
\[ g(s) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos ns + b_n \sin ns. \]
So we can compute $a_n$ and $b_n$ as before. After we write down the integrals, we change variables back to $t$:
\[
\begin{aligned}
a_0 &= \frac{1}{\pi} \int_{-\pi}^{\pi} g(s)\, ds = \frac{1}{L} \int_{-L}^{L} f(t)\, dt, \\
a_n &= \frac{1}{\pi} \int_{-\pi}^{\pi} g(s)\cos ns\, ds = \frac{1}{L} \int_{-L}^{L} f(t)\cos\frac{n\pi}{L}t\, dt, \\
b_n &= \frac{1}{\pi} \int_{-\pi}^{\pi} g(s)\sin ns\, ds = \frac{1}{L} \int_{-L}^{L} f(t)\sin\frac{n\pi}{L}t\, dt.
\end{aligned}
\]

The two most common half periods that show up in examples are $\pi$ and $1$ because of their simplicity. We should stress that we have done no new mathematics; we have only changed variables. If you understand the Fourier series for $2\pi$-periodic functions, you understand it for $2L$-periodic functions. All we are doing is moving some constants around, but all the mathematics is the same.


Example 4.3.1: Let
\[ f(t) = |t| \quad \text{for } -1 < t < 1, \]
extended periodically. The plot of the periodic extension is given in Figure 4.8. Compute the Fourier series of $f(t)$.

[Figure 4.8: Periodic extension of the function $f(t)$.]

We will write $f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos n\pi t + b_n \sin n\pi t$. For $n \geq 1$ we note that $|t|\cos n\pi t$ is even and hence
\[
\begin{aligned}
a_n &= \int_{-1}^{1} f(t)\cos n\pi t\, dt
= 2\int_0^1 t\cos n\pi t\, dt \\
&= 2\Bigl[ \frac{t}{n\pi}\sin n\pi t \Bigr]_{t=0}^{1} - 2\int_0^1 \frac{1}{n\pi}\sin n\pi t\, dt \\
&= 0 + \frac{2}{n^2\pi^2} \Bigl[ \cos n\pi t \Bigr]_{t=0}^{1}
= \frac{2\bigl((-1)^n - 1\bigr)}{n^2\pi^2}
= \begin{cases} 0 & \text{if } n \text{ is even}, \\ \frac{-4}{n^2\pi^2} & \text{if } n \text{ is odd}. \end{cases}
\end{aligned}
\]
Next we find $a_0$:
\[ a_0 = \int_{-1}^{1} |t|\, dt = 1. \]
Note: You should be able to find this integral by thinking about it as the area under the graph, without doing any computation at all. Finally we find $b_n$. Here we notice that $|t|\sin n\pi t$ is odd and, therefore,
\[ b_n = \int_{-1}^{1} f(t)\sin n\pi t\, dt = 0. \]


Hence, the series is
\[ f(t) = \frac{1}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{-4}{n^2\pi^2} \cos n\pi t. \]
Let us explicitly write down the first few terms of the series, up to the 3rd harmonic:
\[ f(t) \approx \frac{1}{2} - \frac{4}{\pi^2}\cos \pi t - \frac{4}{9\pi^2}\cos 3\pi t - \cdots \]
The plot of these few terms, and also a plot up to the 20th harmonic, is given in Figure 4.9. You should notice how close the graph is to the real function. You should also notice that there is no “Gibbs phenomenon” present, as there are no discontinuities.
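The coefficient formula can also be confirmed by a one-line symbolic integration; a minimal sketch in sympy (using the even-function trick to integrate over $[0, 1]$):

import sympy as sp

t = sp.symbols('t')
n = sp.symbols('n', integer=True, positive=True)
an = 2 * sp.integrate(t * sp.cos(n * sp.pi * t), (t, 0, 1))  # |t| cos(n pi t) is even
print(sp.simplify(an))   # should reduce to 2*((-1)**n - 1)/(pi**2 n**2), i.e. -4/(n**2 pi**2) for odd n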

[Figure 4.9: Fourier series of $f(t)$ up to the 3rd harmonic (left graph) and up to the 20th harmonic (right graph).]

4.3.2 Convergence

We will need the one-sided limits of functions. We will use the following notation:
\[ f(c-) = \lim_{t \uparrow c} f(t), \qquad f(c+) = \lim_{t \downarrow c} f(t). \]
If you are unfamiliar with this notation, $\lim_{t\uparrow c} f(t)$ means we are taking the limit of $f(t)$ as $t$ approaches $c$ from below (i.e. $t < c$), and $\lim_{t\downarrow c} f(t)$ means we are taking the limit of $f(t)$ as $t$ approaches $c$ from above (i.e. $t > c$). For example, for the square wave function
\[ f(t) = \begin{cases} 0 & \text{if } -\pi < t \leq 0, \\ \pi & \text{if } 0 < t \leq \pi, \end{cases} \tag{4.8} \]


we have $f(0-) = 0$ and $f(0+) = \pi$.

Let $f(t)$ be a function defined on an interval $[a, b]$. Suppose that we find finitely many points $a = t_0, t_1, t_2, \ldots, t_k = b$ in the interval such that $f(t)$ is continuous on the intervals $(t_0, t_1)$, $(t_1, t_2)$, ..., $(t_{k-1}, t_k)$. Also suppose that $f(t_j-)$ and $f(t_j+)$ exist at each of these points. Then we say $f(t)$ is piecewise continuous.

If moreover $f(t)$ is differentiable at all but finitely many points, and $f'(t)$ is piecewise continuous, then $f(t)$ is said to be piecewise smooth.

Example 4.3.2: The square wave function (4.8) is piecewise smooth on $[-\pi, \pi]$ or any other interval. In such a case we simply say that the function is piecewise smooth.

Example 4.3.3: The function $f(t) = |t|$ is piecewise smooth.

Example 4.3.4: The function $f(t) = \frac{1}{t}$ is not piecewise smooth on $[-1, 1]$ (or any other interval containing zero). In fact, it is not even piecewise continuous.

Example 4.3.5: The function $f(t) = \sqrt[3]{t}$ is not piecewise smooth on $[-1, 1]$ (or any other interval containing zero). $f(t)$ is continuous, but the derivative of $f(t)$ is unbounded near zero and hence not piecewise continuous.

Piecewise smooth functions have an easy answer on the convergence of the Fourier series.

Theorem 4.3.1. Suppose $f(t)$ is a $2L$-periodic piecewise smooth function. Let
\[ \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\frac{n\pi}{L}t + b_n \sin\frac{n\pi}{L}t \]
be the Fourier series for $f(t)$. Then the series converges for all $t$. If $f(t)$ is continuous near $t$, then
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\frac{n\pi}{L}t + b_n \sin\frac{n\pi}{L}t. \]
Otherwise,
\[ \frac{f(t-) + f(t+)}{2} = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\frac{n\pi}{L}t + b_n \sin\frac{n\pi}{L}t. \]

If we happen to have $f(t) = \frac{f(t-)+f(t+)}{2}$ at all the discontinuities, the Fourier series converges to $f(t)$ everywhere. We can always just redefine $f(t)$ by changing the value at each discontinuity appropriately. Then we can write an equals sign between $f(t)$ and the series without any worry. We mentioned this fact briefly at the end of the last section.

Note that the theorem does not say how fast the series converges. Think back to the discussion of the Gibbs phenomenon in the last section. The closer you get to the discontinuity, the more terms you need to take to get an accurate approximation to the function.


4.3.3 Differentiation and integration of Fourier series

Not only does the Fourier series converge nicely, but it is easy to differentiate and integrate the series. We can do this just by differentiating or integrating term by term.

Theorem 4.3.2. Suppose
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\frac{n\pi}{L}t + b_n \sin\frac{n\pi}{L}t \]
is a piecewise smooth continuous function and the derivative $f'(t)$ is piecewise smooth. Then the derivative can be obtained by differentiating term by term:
\[ f'(t) = \sum_{n=1}^{\infty} \frac{-a_n n\pi}{L} \sin\frac{n\pi}{L}t + \frac{b_n n\pi}{L} \cos\frac{n\pi}{L}t. \]

It is important that the function is continuous. It can have corners, but no jumps; otherwise the differentiated series will fail to converge. As an exercise, take the series obtained for the square wave and try to differentiate it. Similarly, we can also integrate a Fourier series.

Theorem 4.3.3. Suppose
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\frac{n\pi}{L}t + b_n \sin\frac{n\pi}{L}t \]
is a piecewise smooth function. Then the antiderivative is obtained by antidifferentiating term by term:
\[ F(t) = \frac{a_0 t}{2} + C + \sum_{n=1}^{\infty} \frac{a_n L}{n\pi} \sin\frac{n\pi}{L}t + \frac{-b_n L}{n\pi} \cos\frac{n\pi}{L}t, \]
where $F'(t) = f(t)$ and $C$ is an arbitrary constant.

Note that the series for $F(t)$ is no longer a Fourier series, as it contains the $\frac{a_0 t}{2}$ term. The antiderivative of a periodic function need not be periodic, and so we should not expect a Fourier series.

4.3.4 Rates of convergence and smoothness

Let us do an example of a periodic function with one derivative everywhere.

Example 4.3.6: Take the function
\[ f(t) = \begin{cases} (1-t)\,t & \text{if } 0 < t < 1, \\ (t+1)\,t & \text{if } -1 < t < 0, \end{cases} \]
and extend it to a 2-periodic function. The plot is given in Figure 4.10.

Note that this function has a derivative everywhere, but it does not have two derivatives at the integers.


[Figure 4.10: Smooth 2-periodic function.]

Exercise 4.3.1: Compute $f''(0+)$ and $f''(0-)$.

Let us compute the Fourier series coefficients. The actual computation involves several integrations by parts and is left to the student.

\[
\begin{aligned}
a_0 &= \int_{-1}^{1} f(t)\, dt = \int_{-1}^{0} (t+1)\,t\, dt + \int_0^1 (1-t)\,t\, dt = 0, \\
a_n &= \int_{-1}^{1} f(t)\cos n\pi t\, dt = \int_{-1}^{0} (t+1)\,t\cos n\pi t\, dt + \int_0^1 (1-t)\,t\cos n\pi t\, dt = 0, \\
b_n &= \int_{-1}^{1} f(t)\sin n\pi t\, dt = \int_{-1}^{0} (t+1)\,t\sin n\pi t\, dt + \int_0^1 (1-t)\,t\sin n\pi t\, dt \\
&= \frac{4\bigl(1 - (-1)^n\bigr)}{\pi^3 n^3}
= \begin{cases} \frac{8}{\pi^3 n^3} & \text{if } n \text{ is odd}, \\ 0 & \text{if } n \text{ is even}. \end{cases}
\end{aligned}
\]

This series converges very fast. If you plot up to the third harmonic, that is, the function
\[ \frac{8}{\pi^3}\sin \pi t + \frac{8}{27\pi^3}\sin 3\pi t, \]
it is almost indistinguishable from the plot of $f(t)$ in Figure 4.10. In fact, the coefficient $\frac{8}{27\pi^3}$ is already just $0.0096$ (approximately). The reason for this behavior is the $n^3$ term in the denominator: the coefficients $b_n$ in this case go to zero as fast as $\frac{1}{n^3}$ goes to zero.

It is a general fact that if you have one derivative, the Fourier coefficients go to zero approximately like $\frac{1}{n^3}$. If you have only a continuous function, the Fourier coefficients go to zero as $\frac{1}{n^2}$. If you have discontinuities, the Fourier coefficients go to zero approximately as $\frac{1}{n}$. Therefore, we can tell a lot about the smoothness of a function by looking at its Fourier coefficients.


To justify this behavior, take for example the function defined by the Fourier series
\[ f(t) = \sum_{n=1}^{\infty} \frac{1}{n^3}\sin nt. \]
When we differentiate term by term, we notice
\[ f'(t) = \sum_{n=1}^{\infty} \frac{1}{n^2}\cos nt. \]
Therefore, the coefficients now go down like $\frac{1}{n^2}$, which we said means that we have a continuous function. That is, the derivative $f'(t)$ may be defined at most points, but at some points it is not defined. If we differentiate again, we find that $f''(t)$ really is not defined at some points, as we get a piecewise differentiable function
\[ f''(t) = \sum_{n=1}^{\infty} \frac{-1}{n}\sin nt. \]
This function is similar to the sawtooth. If we tried to differentiate again, we would obtain
\[ \sum_{n=1}^{\infty} -\cos nt, \]
which does not converge!

Exercise 4.3.2: Use a computer to plot $f(t)$, $f'(t)$, and $f''(t)$. That is, plot, say, the first 5 harmonics of each function. At what points does $f''(t)$ have discontinuities?

4.3.5 Exercises

Exercise 4.3.3: Let
\[ f(t) = \begin{cases} 0 & \text{if } -1 < t < 0, \\ t & \text{if } 0 \leq t < 1, \end{cases} \]
extended periodically. a) Compute the Fourier series for $f(t)$. b) Write out the series explicitly up to the 3rd harmonic.

Exercise 4.3.4: Let
\[ f(t) = \begin{cases} -t & \text{if } -1 < t < 0, \\ t^2 & \text{if } 0 \leq t < 1, \end{cases} \]
extended periodically. a) Compute the Fourier series for $f(t)$. b) Write out the series explicitly up to the 3rd harmonic.


Exercise 4.3.5: Let
\[ f(t) = \begin{cases} \frac{-t}{10} & \text{if } -10 < t < 0, \\[2pt] \frac{t}{10} & \text{if } 0 \leq t < 10, \end{cases} \]
extended periodically (period is 20). a) Compute the Fourier series for $f(t)$. b) Write out the series explicitly up to the 3rd harmonic.

Exercise 4.3.6: Let $f(t) = \sum_{n=1}^{\infty} \frac{1}{n^3}\cos nt$. Is $f(t)$ continuous and differentiable everywhere? Find the derivative (if it exists) or justify why it does not exist.

Exercise 4.3.7: Let $f(t) = \sum_{n=1}^{\infty} \frac{(-1)^n}{n}\sin nt$. Is $f(t)$ differentiable everywhere? Find the derivative (if it exists) or justify why it does not exist.


4.4 Sine and cosine series

Note: 2 lectures, §9.3 in EP

4.4.1 Odd and even periodic functions

You may have noticed by now that an odd function has no cosine terms in its Fourier series and an even function has no sine terms. This observation is not a coincidence. Let us look at even and odd periodic functions in more detail.

Recall that a function $f(t)$ is odd if $f(-t) = -f(t)$, and $f(t)$ is even if $f(-t) = f(t)$. For example, $\cos nt$ is even and $\sin nt$ is odd. Similarly, the function $t^k$ is even if $k$ is even and odd when $k$ is odd.

Exercise 4.4.1: Take two functions $f(t)$ and $g(t)$ and define their product $h(t) = f(t)g(t)$. a) Suppose both are odd; is $h(t)$ odd or even? b) Suppose one is even and one is odd; is $h(t)$ odd or even? c) Suppose both are even; is $h(t)$ odd or even?

If $f(t)$ is odd and $g(t)$ is even, we cannot in general say anything about the sum $f(t) + g(t)$. In fact, the Fourier series of a function is a sum of an odd function (the sine terms) and an even function (the cosine terms).

In this section we are, of course, interested in odd and even periodic functions. We have previously defined the $2L$-periodic extension of a function defined on the interval $[-L, L]$. Sometimes we are only interested in the function on the range $[0, L]$, and it would be convenient to have an odd (resp. even) function to work with. If the function is odd (resp. even), all the cosine (resp. sine) terms disappear. What we can do is take the odd (resp. even) extension of the function to $[-L, L]$ and then extend periodically to a $2L$-periodic function.

Take a function $f(t)$ defined on $[0, L]$. On $(-L, L]$ define the functions
\[
F_{\mathrm{odd}}(t) \overset{\text{def}}{=} \begin{cases} f(t) & \text{if } 0 \leq t \leq L, \\ -f(-t) & \text{if } -L < t < 0, \end{cases}
\qquad
F_{\mathrm{even}}(t) \overset{\text{def}}{=} \begin{cases} f(t) & \text{if } 0 \leq t \leq L, \\ f(-t) & \text{if } -L < t < 0. \end{cases}
\]
Extend $F_{\mathrm{odd}}(t)$ and $F_{\mathrm{even}}(t)$ to be $2L$-periodic. Then $F_{\mathrm{odd}}(t)$ is called the odd periodic extension of $f(t)$, and $F_{\mathrm{even}}(t)$ is called the even periodic extension of $f(t)$.

Exercise 4.4.2: Check that $F_{\mathrm{odd}}(t)$ is odd and that $F_{\mathrm{even}}(t)$ is even.

Example 4.4.1: Take the function $f(t) = t(1-t)$ defined on $[0, 1]$. Figure 4.11 shows the plots of the odd and even extensions of $f(t)$.


[Figure 4.11: Odd and even 2-periodic extensions of $f(t) = t(1-t)$, $0 \leq t \leq 1$.]

4.4.2 Sine and cosine series

Let $f(t)$ be an odd $2L$-periodic function. We write the Fourier series for $f(t)$ and compute the coefficients $a_n$ (including $n = 0$):
\[ a_n = \frac{1}{L} \int_{-L}^{L} f(t)\cos\frac{n\pi}{L}t\, dt = 0. \]
That is, there are no cosine terms in the Fourier series of an odd function. The integral is zero because $f(t)\cos\frac{n\pi}{L}t$ is an odd function (a product of an odd and an even function is odd), and the integral of an odd function over a symmetric interval is always zero. Furthermore, the integral of an even function over a symmetric interval $[-L, L]$ is twice the integral of the function over $[0, L]$. The function $f(t)\sin\frac{n\pi}{L}t$ is the product of two odd functions and hence is even, so
\[ b_n = \frac{1}{L} \int_{-L}^{L} f(t)\sin\frac{n\pi}{L}t\, dt = \frac{2}{L} \int_0^L f(t)\sin\frac{n\pi}{L}t\, dt. \]
We can now write the Fourier series of $f(t)$ as
\[ \sum_{n=1}^{\infty} b_n \sin\frac{n\pi}{L}t. \]

Similarly, suppose $f(t)$ is an even $2L$-periodic function. For exactly the same reasons as above, we find that $b_n = 0$ and
\[ a_n = \frac{2}{L} \int_0^L f(t)\cos\frac{n\pi}{L}t\, dt. \]
The formula still works for $n = 0$, in which case it becomes
\[ a_0 = \frac{2}{L} \int_0^L f(t)\, dt. \]


The Fourier series is then
\[ \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\frac{n\pi}{L}t. \]

An interesting consequence is that the coefficients of the Fourier series of an odd (or even) function can be computed by integrating only over the half interval $[0, L]$. Therefore, we can compute the odd (or even) extension of a function as a Fourier series by computing certain integrals over the interval where the original function is defined.

Theorem 4.4.1. Let $f(t)$ be a piecewise smooth function defined on $[0, L]$. Then the odd extension of $f(t)$ has the Fourier series
\[ F_{\mathrm{odd}}(t) = \sum_{n=1}^{\infty} b_n \sin\frac{n\pi}{L}t, \quad \text{where} \quad b_n = \frac{2}{L} \int_0^L f(t)\sin\frac{n\pi}{L}t\, dt. \]
The even extension of $f(t)$ has the Fourier series
\[ F_{\mathrm{even}}(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\frac{n\pi}{L}t, \quad \text{where} \quad a_n = \frac{2}{L} \int_0^L f(t)\cos\frac{n\pi}{L}t\, dt. \]

The series $\sum_{n=1}^{\infty} b_n \sin\frac{n\pi}{L}t$ is called the sine series of $f(t)$, and the series $\frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\frac{n\pi}{L}t$ is called the cosine series of $f(t)$. It is often the case that we do not actually care what happens outside of $[0, L]$. In this case, we can pick whichever series fits our problem better.

It is not necessary to start with the full Fourier series to obtain the sine and cosine series. The sine series is really the eigenfunction expansion of $f(t)$ using the eigenfunctions of the eigenvalue problem $x'' + \lambda x = 0$, $x(0) = 0$, $x(L) = 0$. The cosine series is the eigenfunction expansion of $f(t)$ using the eigenfunctions of the eigenvalue problem $x'' + \lambda x = 0$, $x'(0) = 0$, $x'(L) = 0$. We could, therefore, have gotten the same formulas by defining the inner product
\[ \langle f(t), g(t) \rangle = \int_0^L f(t)\,g(t)\, dt, \]
and following the procedure of § 4.2. This point of view is useful because many times a specific series arises because our underlying question leads to a certain eigenvalue problem. In fact, if the eigenvalue problem is not one of the three we have covered so far, you can still do an eigenfunction expansion, generalizing the results of this chapter. We will deal with such a generalization in chapter 5.


Example 4.4.2: Find the Fourier series of the even periodic extension of the function $f(t) = t^2$ for $0 \leq t \leq \pi$.

We will write
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos nt, \]
where
\[ a_0 = \frac{2}{\pi} \int_0^{\pi} t^2\, dt = \frac{2\pi^2}{3}, \]

and
\[
a_n = \frac{2}{\pi} \int_0^{\pi} t^2\cos nt\, dt
= \frac{2}{\pi} \Bigl[ t^2\, \frac{1}{n}\sin nt \Bigr]_0^{\pi} - \frac{4}{n\pi} \int_0^{\pi} t\sin nt\, dt
= \frac{4}{n^2\pi} \Bigl[ t\cos nt \Bigr]_0^{\pi} - \frac{4}{n^2\pi} \int_0^{\pi} \cos nt\, dt
= \frac{4\,(-1)^n}{n^2}.
\]

Note that we have detected the “continuity” of the extension, since the coefficients decay as $\frac{1}{n^2}$. That is, the even extension of $t^2$ has no jump discontinuities. It does have corners, since the derivative (which is an odd function, given by a sine series) has a series whose coefficients decay only as $\frac{1}{n}$, so the derivative has jumps.

Explicitly, the first few terms of the series are
\[ \frac{\pi^2}{3} - 4\cos t + \cos 2t - \frac{4}{9}\cos 3t + \cdots \]

Exercise 4.4.3: a) Compute the derivative of the even extension of $f(t)$ above and verify it has jump discontinuities. Use the actual definition of $f(t)$, not its cosine series! b) Why is it that the derivative of the even extension of $f(t)$ is the odd extension of $f'(t)$?

4.4.3 Application

We have said that Fourier series tie in to the boundary value problems we studied earlier. Let us see this connection in more detail.

Suppose we have the boundary value problem for $0 < t < L$,
\[ x''(t) + \lambda\, x(t) = f(t), \]
with the Dirichlet boundary conditions $x(0) = 0$, $x(L) = 0$. By using the Fredholm alternative (Theorem 4.1.2), we note that as long as $\lambda$ is not an eigenvalue of the underlying homogeneous problem, there exists a unique solution. Note that the eigenfunctions of this eigenvalue problem are the functions $\sin\frac{n\pi}{L}t$. Therefore, to find the solution, we first find the Fourier sine series for $f(t)$. We write $x$ as a sine series as well, with unknown coefficients. We substitute into the equation and solve for the Fourier coefficients of $x$.

If, on the other hand, we have the Neumann boundary conditions $x'(0) = 0$, $x'(L) = 0$, we do the same procedure using the cosine series. These methods are best seen by example.


Example 4.4.3: Take the boundary value problem for $0 < t < 1$,
\[ x''(t) + 2x(t) = f(t), \]
where $f(t) = t$ on $0 < t < 1$. We look for a solution $x$ satisfying the Dirichlet conditions $x(0) = 0$, $x(1) = 0$. We write $f(t)$ as a sine series
\[ f(t) = \sum_{n=1}^{\infty} c_n \sin n\pi t, \quad \text{where} \quad c_n = 2\int_0^1 t\sin n\pi t\, dt = \frac{2\,(-1)^{n+1}}{n\pi}. \]

We write $x(t)$ as
\[ x(t) = \sum_{n=1}^{\infty} b_n \sin n\pi t. \]
We plug in to obtain
\[
\begin{aligned}
x''(t) + 2x(t) &= \sum_{n=1}^{\infty} -b_n n^2\pi^2 \sin n\pi t + 2\sum_{n=1}^{\infty} b_n \sin n\pi t \\
&= \sum_{n=1}^{\infty} b_n (2 - n^2\pi^2) \sin n\pi t \\
&= f(t) = \sum_{n=1}^{\infty} \frac{2\,(-1)^{n+1}}{n\pi} \sin n\pi t.
\end{aligned}
\]
Therefore,
\[ b_n (2 - n^2\pi^2) = \frac{2\,(-1)^{n+1}}{n\pi}, \quad \text{or} \quad b_n = \frac{2\,(-1)^{n+1}}{n\pi\,(2 - n^2\pi^2)}. \]
We have thus obtained a Fourier series for the solution
\[ x(t) = \sum_{n=1}^{\infty} \frac{2\,(-1)^{n+1}}{n\pi\,(2 - n^2\pi^2)} \sin n\pi t. \]

Example 4.4.4: Similarly, we handle the Neumann conditions. Take the same boundary value problem for $0 < t < 1$,
\[ x''(t) + 2x(t) = f(t), \]


where $f(t) = t$ on $0 < t < 1$. However, let us now consider the Neumann conditions $x'(0) = 0$, $x'(1) = 0$. We write $f(t)$ as a cosine series
\[ f(t) = \frac{c_0}{2} + \sum_{n=1}^{\infty} c_n \cos n\pi t, \]
where
\[ c_0 = 2\int_0^1 t\, dt = 1, \]
and
\[ c_n = 2\int_0^1 t\cos n\pi t\, dt = \frac{2\bigl((-1)^n - 1\bigr)}{\pi^2 n^2} = \begin{cases} \frac{-4}{\pi^2 n^2} & \text{if } n \text{ odd}, \\ 0 & \text{if } n \text{ even}. \end{cases} \]

We write $x(t)$ as a cosine series
\[ x(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos n\pi t. \]
We plug in to obtain
\[
\begin{aligned}
x''(t) + 2x(t) &= \sum_{n=1}^{\infty} \bigl[ -a_n n^2\pi^2 \cos n\pi t \bigr] + a_0 + 2\sum_{n=1}^{\infty} \bigl[ a_n \cos n\pi t \bigr] \\
&= a_0 + \sum_{n=1}^{\infty} a_n (2 - n^2\pi^2) \cos n\pi t \\
&= f(t) = \frac{1}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{-4}{\pi^2 n^2} \cos n\pi t.
\end{aligned}
\]

Therefore, $a_0 = \frac{1}{2}$, $a_n = 0$ for $n$ even, and for $n$ odd ($n \geq 1$),
\[ a_n (2 - n^2\pi^2) = \frac{-4}{\pi^2 n^2}, \quad \text{or} \quad a_n = \frac{-4}{n^2\pi^2\, (2 - n^2\pi^2)}. \]
We have thus obtained a Fourier series for the solution
\[ x(t) = \frac{1}{4} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{-4}{n^2\pi^2\, (2 - n^2\pi^2)} \cos n\pi t. \]


4.4.4 Exercises

Exercise 4.4.4: Take $f(t) = (t-1)^2$ defined on $0 \leq t \leq 1$. a) Sketch the plot of the even periodic extension of $f$. b) Sketch the plot of the odd periodic extension of $f$.

Exercise 4.4.5: Find the Fourier series of both the odd and the even periodic extension of the function $f(t) = (t-1)^2$ for $0 \leq t \leq 1$. Can you tell which extension is continuous from the Fourier series coefficients?

Exercise 4.4.6: Find the Fourier series of both the odd and the even periodic extension of the function $f(t) = t$ for $0 \leq t \leq \pi$.

Exercise 4.4.7: Find the Fourier series of the even periodic extension of the function $f(t) = \sin t$ for $0 \leq t \leq \pi$.

Exercise 4.4.8: Let
\[ x''(t) + 4x(t) = f(t), \]
where $f(t) = 1$ on $0 < t < 1$. a) Solve for the Dirichlet conditions $x(0) = 0$, $x(1) = 0$. b) Solve for the Neumann conditions $x'(0) = 0$, $x'(1) = 0$.

Exercise 4.4.9: Let
\[ x''(t) + 9x(t) = f(t), \]
for $f(t) = \sin 2\pi t$ on $0 < t < 1$. a) Solve for the Dirichlet conditions $x(0) = 0$, $x(1) = 0$. b) Solve for the Neumann conditions $x'(0) = 0$, $x'(1) = 0$.

Exercise 4.4.10: Let
\[ x''(t) + 3x(t) = f(t), \quad x(0) = 0, \quad x(1) = 0, \]
where $f(t) = \sum_{n=1}^{\infty} b_n \sin n\pi t$. Write the solution $x(t)$ as a Fourier series, where the coefficients are given in terms of $b_n$.


4.5 Applications of Fourier series

Note: 2 lectures, §9.4 in EP

4.5.1 Periodically forced oscillation

Let us return to forced oscillations. We have a mass-spring system as before: a mass $m$ on a spring with spring constant $k$, with damping $c$, and a force $F(t)$ applied to the mass. Now suppose that the forcing function $F(t)$ is $2L$-periodic for some $L > 0$. We have already seen this problem in chapter 2 with a simple $F(t)$. The equation that governs this particular setup is

mx00(t) + cx0(t) + kx(t) = F(t). (4.9)

We know that the general solution will consist of xc which solves the associated homogeneousequation mx00 + cx0 + kx = 0, and a particular solution of (4.9) we will call xp. Since the comple-mentary solution xc will decay as time goes on, we are mostly interested in the part of xp whichdoes not decay. We call this xp the steady periodic solution as before. The di↵erence in what wewill do now is that we consider an arbitrary forcing function F(t).

For simplicity, let us suppose that $c = 0$. The problem with $c > 0$ is very similar. The equation
\[
m x'' + k x = 0
\]
has the general solution
\[
x(t) = A \cos \omega_0 t + B \sin \omega_0 t,
\]
where $\omega_0 = \sqrt{k/m}$. So any solution to $m x''(t) + k x(t) = F(t)$ will be of the form $A \cos \omega_0 t + B \sin \omega_0 t + x_{sp}$, where $x_{sp}$ is the particular steady periodic solution. The steady periodic solution always has the same period as $F(t)$.

In the spirit of the last section and the idea of undetermined coefficients, we first write
\[
F(t) = \frac{c_0}{2} + \sum_{n=1}^\infty c_n \cos \frac{n\pi}{L} t + d_n \sin \frac{n\pi}{L} t.
\]
Then we write
\[
x(t) = \frac{a_0}{2} + \sum_{n=1}^\infty a_n \cos \frac{n\pi}{L} t + b_n \sin \frac{n\pi}{L} t,
\]
and we plug $x$ into the differential equation and solve for $a_n$ and $b_n$ in terms of $c_n$ and $d_n$. This is perhaps best seen by example.

Example 4.5.1: Suppose that $k = 2$ and $m = 1$. The units are again the mks units (meters-kilograms-seconds). There is a jetpack strapped to the mass, which fires with a force of 1 newton for 1 second and then is off for 1 second. We want to find the steady periodic solution.

The equation is, therefore,
\[
x'' + 2x = F(t),
\]
where $F(t)$ is the step function
\[
F(t) =
\begin{cases}
1 & \text{if } 0 < t < 1,\\
0 & \text{if } -1 < t < 0,
\end{cases}
\]
extended periodically. We write
\[
F(t) = \frac{c_0}{2} + \sum_{n=1}^\infty c_n \cos n\pi t + d_n \sin n\pi t.
\]
It is not hard to see that $c_n = 0$ for $n \ge 1$:
\[
c_n = \int_{-1}^1 F(t) \cos n\pi t\,dt = \int_0^1 \cos n\pi t\,dt = 0.
\]
On the other hand,
\[
c_0 = \int_{-1}^1 F(t)\,dt = \int_0^1 dt = 1.
\]

And
\[
\begin{aligned}
d_n &= \int_{-1}^1 F(t) \sin n\pi t\,dt
= \int_0^1 \sin n\pi t\,dt
= \left[\frac{-\cos n\pi t}{n\pi}\right]_{t=0}^{1}
= \frac{1 - (-1)^n}{\pi n} =
\begin{cases}
\frac{2}{\pi n} & \text{if } n \text{ odd},\\
0 & \text{if } n \text{ even}.
\end{cases}
\end{aligned}
\]
So
\[
F(t) = \frac{1}{2} + \sum_{\substack{n=1\\ n\ \text{odd}}}^\infty \frac{2}{\pi n} \sin n\pi t.
\]
We want to try
\[
x(t) = \frac{a_0}{2} + \sum_{n=1}^\infty a_n \cos n\pi t + b_n \sin n\pi t.
\]

We notice that once we plug into the differential equation $x'' + 2x = F(t)$, it is clear that $a_n = 0$ for $n \ge 1$, as there are no corresponding terms in the series for $F(t)$. Similarly $b_n = 0$ for $n$ even. Hence we try
\[
x(t) = \frac{a_0}{2} + \sum_{\substack{n=1\\ n\ \text{odd}}}^\infty b_n \sin n\pi t.
\]
We plug into the differential equation and obtain
\[
\begin{aligned}
x'' + 2x &= \sum_{\substack{n=1\\ n\ \text{odd}}}^\infty \bigl[-b_n n^2\pi^2 \sin n\pi t\bigr] + a_0 + 2\sum_{\substack{n=1\\ n\ \text{odd}}}^\infty \bigl[b_n \sin n\pi t\bigr]\\
&= a_0 + \sum_{\substack{n=1\\ n\ \text{odd}}}^\infty b_n(2 - n^2\pi^2) \sin n\pi t\\
&= F(t) = \frac{1}{2} + \sum_{\substack{n=1\\ n\ \text{odd}}}^\infty \frac{2}{\pi n} \sin n\pi t.
\end{aligned}
\]
So $a_0 = \frac{1}{2}$ and
\[
b_n = \frac{2}{\pi n (2 - n^2\pi^2)}.
\]
The steady periodic solution has the Fourier series
\[
x_{sp}(t) = \frac{1}{4} + \sum_{\substack{n=1\\ n\ \text{odd}}}^\infty \frac{2}{\pi n (2 - n^2\pi^2)} \sin n\pi t.
\]
We know this is the steady periodic solution as it contains no terms of the complementary solution and is periodic with the same period as $F(t)$ itself. See Figure 4.12 for the plot of this solution.
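Since the coefficients decay like $1/n^3$, a short partial sum already reproduces the plotted curve. The following is a small illustrative sketch (ours, not from the text) that evaluates such a partial sum:

```python
# Partial sum of the steady periodic solution of Example 4.5.1:
#   x_sp(t) = 1/4 + sum over odd n of 2/(pi n (2 - n^2 pi^2)) sin(n pi t).
# Illustrative sketch only.
import numpy as np

def x_sp(t, terms=25):
    total = np.full_like(t, 0.25)
    for n in range(1, 2 * terms, 2):  # n = 1, 3, 5, ...
        total += 2.0 / (np.pi * n * (2 - n**2 * np.pi**2)) * np.sin(n * np.pi * t)
    return total

t = np.linspace(0, 10, 1001)
print(x_sp(t).min(), x_sp(t).max())  # stays roughly between 0 and 0.5, as in Figure 4.12
```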

4.5.2 Resonance

Just as when the forcing function was a simple cosine, resonance can still happen. Let us assume $c = 0$ and discuss only pure resonance. Again, take the equation
\[
m x''(t) + k x(t) = F(t).
\]
When we expand $F(t)$ and find that some of its terms coincide with the complementary solution to $m x'' + k x = 0$, we cannot use those terms in the guess. Just as before, they will disappear when we plug into the left-hand side and we will get a contradictory equation (such as $0 = 1$). That is, suppose
\[
x_c = A \cos \omega_0 t + B \sin \omega_0 t,
\]

Figure 4.12: Plot of the steady periodic solution $x_{sp}$ of Example 4.5.1.

where $\omega_0 = \frac{N\pi}{L}$ for some positive integer $N$. In this case we have to modify our guess and try
\[
x(t) = \frac{a_0}{2} + t\left(a_N \cos \frac{N\pi}{L} t + b_N \sin \frac{N\pi}{L} t\right) + \sum_{\substack{n=1\\ n\ne N}}^\infty a_n \cos \frac{n\pi}{L} t + b_n \sin \frac{n\pi}{L} t.
\]
In other words, we multiply the offending term by $t$. From then on, we proceed as before. Of course, the solution will not be a Fourier series (it will not even be periodic), since it contains these terms multiplied by $t$. Further, the terms $t\left(a_N \cos \frac{N\pi}{L} t + b_N \sin \frac{N\pi}{L} t\right)$ will eventually dominate and lead to wild oscillations. As before, this behavior is called pure resonance or just resonance.

Note that there may now be infinitely many resonance frequencies to hit. That is, as we change the frequency of $F$ (we change $L$), different terms from the Fourier series of $F$ may interfere with the complementary solution and will cause resonance. However, we should note that since everything is an approximation, and in particular $c$ is never actually zero but something very close to zero, only the first few resonance frequencies will matter.

Example 4.5.2: Find the steady periodic solution to the equation
\[
2x'' + 18\pi^2 x = F(t),
\]
where
\[
F(t) =
\begin{cases}
1 & \text{if } 0 < t < 1,\\
-1 & \text{if } -1 < t < 0,
\end{cases}
\]
extended periodically. We note that
\[
F(t) = \sum_{\substack{n=1\\ n\ \text{odd}}}^\infty \frac{4}{\pi n} \sin n\pi t.
\]

Exercise 4.5.1: Compute the Fourier series of $F$ to verify.

The solution must look like
\[
x(t) = c_1 \cos 3\pi t + c_2 \sin 3\pi t + x_p(t)
\]
for some particular solution $x_p$. We note that if we just tried a Fourier series with $\sin n\pi t$ as usual, we would get duplication when $n = 3$. Therefore, we pull out that term and multiply it by $t$. We also have to add a cosine term to get everything right. That is, we must try
\[
x_p(t) = a_3 t \cos 3\pi t + b_3 t \sin 3\pi t + \sum_{\substack{n=1\\ n\ \text{odd}\\ n\ne 3}}^\infty b_n \sin n\pi t.
\]
Let us compute the second derivative:
\[
x_p''(t) = -6a_3\pi \sin 3\pi t - 9\pi^2 a_3 t \cos 3\pi t + 6b_3\pi \cos 3\pi t - 9\pi^2 b_3 t \sin 3\pi t + \sum_{\substack{n=1\\ n\ \text{odd}\\ n\ne 3}}^\infty (-n^2\pi^2 b_n) \sin n\pi t.
\]

We now plug into the differential equation:
\[
\begin{aligned}
2x_p'' + 18\pi^2 x_p = {}& -12a_3\pi \sin 3\pi t - 18\pi^2 a_3 t \cos 3\pi t + 12b_3\pi \cos 3\pi t - 18\pi^2 b_3 t \sin 3\pi t\\
&+ 18\pi^2 a_3 t \cos 3\pi t + 18\pi^2 b_3 t \sin 3\pi t\\
&+ \sum_{\substack{n=1\\ n\ \text{odd}\\ n\ne 3}}^\infty (-2n^2\pi^2 b_n + 18\pi^2 b_n) \sin n\pi t.
\end{aligned}
\]
If we simplify, we obtain
\[
2x_p'' + 18\pi^2 x_p = -12a_3\pi \sin 3\pi t + 12b_3\pi \cos 3\pi t + \sum_{\substack{n=1\\ n\ \text{odd}\\ n\ne 3}}^\infty (-2n^2\pi^2 b_n + 18\pi^2 b_n) \sin n\pi t.
\]
This series has to equal the series for $F(t)$. We equate the coefficients and solve for $a_3$ and $b_n$:
\[
a_3 = \frac{4/(3\pi)}{-12\pi} = \frac{-1}{9\pi^2},
\qquad
b_3 = 0,
\qquad
b_n = \frac{4}{n\pi(18\pi^2 - 2n^2\pi^2)} = \frac{2}{\pi^3 n (9 - n^2)} \quad \text{for } n \text{ odd and } n \ne 3.
\]
That is,
\[
x_p(t) = \frac{-1}{9\pi^2}\, t \cos 3\pi t + \sum_{\substack{n=1\\ n\ \text{odd}\\ n\ne 3}}^\infty \frac{2}{\pi^3 n (9 - n^2)} \sin n\pi t.
\]
When $c > 0$, you will not have to worry about pure resonance. That is, there will never be any conflicts and you do not need to multiply any terms by $t$. There is a corresponding concept of practical resonance, and it is very similar to the ideas we already explored in chapter 2. We will not go into details here.
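The linear growth of the resonant term is easy to see numerically. Here is a short illustrative sketch (our own, not from the text) that evaluates a truncated version of $x_p$ and watches its amplitude grow:

```python
# Growth of the resonant term in Example 4.5.2:
#   x_p(t) = -t cos(3 pi t)/(9 pi^2)
#            + sum over odd n != 3 of 2/(pi^3 n (9 - n^2)) sin(n pi t).
# Illustrative sketch only.
import numpy as np

def x_p(t, terms=50):
    total = -t * np.cos(3 * np.pi * t) / (9 * np.pi**2)
    for n in range(1, 2 * terms, 2):
        if n == 3:
            continue  # the resonant term is handled separately above
        total += 2.0 / (np.pi**3 * n * (9 - n**2)) * np.sin(n * np.pi * t)
    return total

for T in (1, 10, 100):
    t = np.linspace(0, T, 50 * T + 1)
    print(T, np.abs(x_p(t)).max())  # amplitude grows roughly linearly with T
```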

4.5.3 Exercises

Exercise 4.5.2: Let $F(t) = \frac{1}{2} + \sum_{n=1}^\infty \frac{1}{n^2} \cos n\pi t$. Find the steady periodic solution to $x'' + 2x = F(t)$. Express your solution as a Fourier series.

Exercise 4.5.3: Let $F(t) = \sum_{n=1}^\infty \frac{1}{n^3} \sin n\pi t$. Find the steady periodic solution to $x'' + x' + x = F(t)$. Express your solution as a Fourier series.

Exercise 4.5.4: Let $F(t) = \sum_{n=1}^\infty \frac{1}{n^2} \cos n\pi t$. Find the steady periodic solution to $x'' + 4x = F(t)$. Express your solution as a Fourier series.

Exercise 4.5.5: Let $F(t) = t$ for $-1 < t < 1$, extended periodically. Find the steady periodic solution to $x'' + x = F(t)$. Express your solution as a Fourier series.

Exercise 4.5.6: Let $F(t) = t$ for $-1 < t < 1$, extended periodically. Find the steady periodic solution to $x'' + \pi^2 x = F(t)$. Express your solution as a Fourier series.

4.6 PDEs, separation of variables, and the heat equation

Note: 2 lectures, §9.5 in EP

Let us recall that a partial differential equation, or PDE, is an equation containing the partial derivatives with respect to several independent variables. Solving PDEs will be our main application of Fourier series.

A PDE is said to be linear if the dependent variable and its derivatives appear only to the first power and not inside any functions. We will only talk about linear PDEs here. Together with a PDE, we usually have specified some boundary conditions, where the value of the solution or its derivatives is specified along the boundary of a region, and/or some initial conditions, where the value of the solution or its derivatives is specified for some initial time. Sometimes such conditions are mixed together and we will refer to them simply as side conditions.

We will study three partial differential equations, each one representing a more general class of equations. First, we will study the heat equation, which is an example of a parabolic PDE. Next, we will study the wave equation, which is an example of a hyperbolic PDE. Finally, we will study the Laplace equation, which is an example of an elliptic PDE. Each of our examples will illustrate behaviour that is typical for the whole class.

4.6.1 Heat on an insulated wire

Let us first study the heat equation. Suppose that we have a wire (or a thin metal rod) that is insulated except at the endpoints. Let $x$ denote the position along the wire and let $t$ denote time. See Figure 4.13.

Figure 4.13: Insulated wire.

Now let $u(x,t)$ denote the temperature at point $x$ at time $t$. It turns out that the equation governing this system is the so-called one-dimensional heat equation:
\[
\frac{\partial u}{\partial t} = k \frac{\partial^2 u}{\partial x^2},
\]
for some $k > 0$. That is, the change in heat at a specific point is proportional to the second derivative of the heat along the wire. This makes sense. You would expect that if the heat distribution had a maximum (was concave down), then heat would flow away from the maximum. And vice versa.

We will generally use a more convenient notation for partial derivatives. We will write $u_t$ instead of $\frac{\partial u}{\partial t}$, and we will write $u_{xx}$ instead of $\frac{\partial^2 u}{\partial x^2}$. With this notation the equation becomes
\[
u_t = k u_{xx}.
\]
For the heat equation, we must also have some boundary conditions. We assume that the wire is of length $L$ and the ends are either exposed and touching some body of constant heat, or the ends are insulated. If the ends of the wire are, for example, kept at temperature 0, then we must have the conditions
\[
u(0,t) = 0 \qquad\text{and}\qquad u(L,t) = 0.
\]
If, on the other hand, the ends are insulated, we get the conditions
\[
u_x(0,t) = 0 \qquad\text{and}\qquad u_x(L,t) = 0.
\]
In other words, heat is not flowing in nor out of the wire at the ends. Note that we always have two conditions along the $x$ axis as there are two derivatives in the $x$ direction. These side conditions are called homogeneous.

Furthermore, we will suppose we know the initial temperature distribution
\[
u(x,0) = f(x),
\]
for some known function $f(x)$. This initial condition is not a homogeneous side condition.

4.6.2 Separation of variables

First we must note that the principle of superposition still applies. The heat equation is still called linear, since $u$ and its derivatives do not appear to any powers or in any functions. If $u_1$ and $u_2$ are solutions and $c_1$, $c_2$ are constants, then $u = c_1 u_1 + c_2 u_2$ is still a solution.

Exercise 4.6.1: Verify the principle of superposition for the heat equation.

Superposition also preserves some of the side conditions. In particular, if $u_1$ and $u_2$ are solutions that satisfy $u(0,t) = 0$ and $u(L,t) = 0$, and $c_1$, $c_2$ are constants, then $u = c_1 u_1 + c_2 u_2$ is still a solution that satisfies $u(0,t) = 0$ and $u(L,t) = 0$. Similarly for the side conditions $u_x(0,t) = 0$ and $u_x(L,t) = 0$. In general, superposition preserves all homogeneous side conditions.

The method of separation of variables is to try to find solutions that are sums or products of functions of one variable. For example, for the heat equation, we try to find solutions of the form
\[
u(x,t) = X(x)T(t).
\]
That the desired solution we are looking for is of this form is too much to hope for. What is perfectly reasonable to ask, however, is to find enough "building-block" solutions $u(x,t) = X(x)T(t)$ using this procedure, so that the desired solution to the PDE is somehow constructed from these building blocks by the use of superposition.

Let us try to solve the heat equation
\[
u_t = k u_{xx} \quad\text{with}\quad u(0,t) = 0, \quad u(L,t) = 0, \quad\text{and}\quad u(x,0) = f(x).
\]
Let us guess $u(x,t) = X(x)T(t)$. We plug into the heat equation to obtain
\[
X(x)T'(t) = k X''(x) T(t).
\]
We rewrite this as
\[
\frac{T'(t)}{kT(t)} = \frac{X''(x)}{X(x)}.
\]
This equation is supposed to hold for all $x$ and all $t$. But the left-hand side does not depend on $x$ and the right-hand side does not depend on $t$. Therefore, each side must be a constant. Let us call this constant $-\lambda$ (the minus sign is for convenience later). Thus, we have two equations:
\[
\frac{T'(t)}{kT(t)} = -\lambda = \frac{X''(x)}{X(x)}.
\]
Or, in other words,
\[
X''(x) + \lambda X(x) = 0, \qquad T'(t) + \lambda k T(t) = 0.
\]
The boundary condition $u(0,t) = 0$ implies $X(0)T(t) = 0$. We are looking for a nontrivial solution, and so we can assume that $T(t)$ is not identically zero. Hence $X(0) = 0$. Similarly, $u(L,t) = 0$ implies $X(L) = 0$. We are looking for nontrivial solutions $X$ of the eigenvalue problem $X'' + \lambda X = 0$, $X(0) = 0$, $X(L) = 0$. We have previously found that the only eigenvalues are $\lambda_n = \frac{n^2\pi^2}{L^2}$ for integers $n \ge 1$, where the eigenfunctions are $\sin \frac{n\pi}{L} x$. Hence, let us pick the solutions
\[
X_n(x) = \sin \frac{n\pi}{L} x.
\]
The corresponding $T_n$ must satisfy the equation
\[
T_n'(t) + \frac{n^2\pi^2}{L^2} k T_n(t) = 0.
\]
By the method of integrating factor, the solution of this problem is easily seen to be
\[
T_n(t) = e^{-\frac{n^2\pi^2}{L^2} k t}.
\]
It will be useful to note that $T_n(0) = 1$. Our building-block solutions are
\[
u_n(x,t) = X_n(x)T_n(t) = \left(\sin \frac{n\pi}{L} x\right) e^{-\frac{n^2\pi^2}{L^2} k t}.
\]

We now note that $u_n(x,0) = \sin \frac{n\pi}{L} x$. Let us write $f(x)$ using the sine series
\[
f(x) = \sum_{n=1}^\infty b_n \sin \frac{n\pi}{L} x.
\]
That is, we find the Fourier series of the odd periodic extension of $f(x)$. We used the sine series as it corresponds to the eigenvalue problem for $X(x)$ above. Finally, we use superposition to write the solution as
\[
u(x,t) = \sum_{n=1}^\infty b_n u_n(x,t) = \sum_{n=1}^\infty b_n \left(\sin \frac{n\pi}{L} x\right) e^{-\frac{n^2\pi^2}{L^2} k t}.
\]
Why does this solution work? First note that it is a solution to the heat equation by superposition. It satisfies $u(0,t) = 0$ and $u(L,t) = 0$, because $x = 0$ or $x = L$ makes all the sines vanish. Finally, plugging in $t = 0$, we notice that $T_n(0) = 1$, and so
\[
u(x,0) = \sum_{n=1}^\infty b_n u_n(x,0) = \sum_{n=1}^\infty b_n \sin \frac{n\pi}{L} x = f(x).
\]
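The whole recipe fits in a few lines of code. The following is an illustrative sketch (ours, not from the text); `f`, `L`, `k`, and the truncation level are placeholders for your own problem:

```python
# Sine-series solution of u_t = k u_xx with u(0,t) = u(L,t) = 0, u(x,0) = f(x).
# Illustrative sketch only; coefficients are computed by numerical quadrature.
import numpy as np

def sine_series_heat(f, L, k, terms=50, samples=2000):
    xs = np.linspace(0, L, samples + 1)
    dx = xs[1] - xs[0]

    def integrate(ys):  # trapezoid rule, spelled out
        return dx * (ys[0] / 2 + ys[1:-1].sum() + ys[-1] / 2)

    b = [2.0 / L * integrate(f(xs) * np.sin(n * np.pi * xs / L))
         for n in range(1, terms + 1)]

    def u(x, t):
        return sum(b[n - 1] * np.sin(n * np.pi * x / L)
                   * np.exp(-((n * np.pi / L) ** 2) * k * t)
                   for n in range(1, terms + 1))
    return u

u = sine_series_heat(lambda x: 50 * x * (1 - x), L=1.0, k=0.003)
print(u(0.5, 0.0))   # about 12.5, the initial maximum (cf. Example 4.6.1 below)
print(u(0.5, 24.5))  # about 6.25 -- see the discussion that follows the example
```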

Example 4.6.1: Suppose that we have an insulated wire of length 1, such that the ends of the wire are embedded in ice (temperature 0). Let $k = 0.003$. Suppose that the initial heat distribution is $u(x,0) = 50\,x\,(1-x)$. See Figure 4.14.

Figure 4.14: Initial distribution of temperature in the wire.

We want to find the temperature function $u(x,t)$. Let us also suppose we want to find when (at what $t$) the maximum temperature in the wire drops to one half of the initial maximum of 12.5.

We are solving the following PDE problem:
\[
\begin{aligned}
&u_t = 0.003\, u_{xx},\\
&u(0,t) = u(1,t) = 0,\\
&u(x,0) = 50\,x\,(1-x) \quad \text{for } 0 < x < 1.
\end{aligned}
\]

We find the sine series for $f(x) = 50\,x\,(1-x)$ for $0 < x < 1$. That is, $f(x) = \sum_{n=1}^\infty b_n \sin n\pi x$, where
\[
b_n = 2\int_0^1 50\,x\,(1-x) \sin n\pi x\,dx = \frac{200}{\pi^3 n^3} - \frac{200\,(-1)^n}{\pi^3 n^3} =
\begin{cases}
0 & \text{if } n \text{ even},\\
\frac{400}{\pi^3 n^3} & \text{if } n \text{ odd}.
\end{cases}
\]

Figure 4.15: Plot of the temperature of the wire at position $x$ at time $t$.

The solution $u(x,t)$, plotted in Figure 4.15 for $0 \le t \le 100$, is given by the series
\[
u(x,t) = \sum_{\substack{n=1\\ n\ \text{odd}}}^\infty \frac{400}{\pi^3 n^3} (\sin n\pi x)\, e^{-n^2\pi^2\, 0.003\, t}.
\]
Finally, let us answer the question about the maximum temperature. It is relatively obvious that the maximum temperature will always be at $x = 0.5$, in the middle of the wire. The plot of $u(x,t)$ confirms this intuition.

If we plug in $x = 0.5$, we get
\[
u(0.5, t) = \sum_{\substack{n=1\\ n\ \text{odd}}}^\infty \frac{400}{\pi^3 n^3} (\sin 0.5\,n\pi)\, e^{-n^2\pi^2\, 0.003\, t}.
\]
For $n = 3$ and higher (remember we are taking only odd $n$), the terms of the series are insignificant compared to the first term. The first term in the series is already a very good approximation of the function, and hence
\[
u(0.5, t) \approx \frac{400}{\pi^3}\, e^{-\pi^2\, 0.003\, t}.
\]
The approximation gets better and better as $t$ gets larger, as the other terms decay much faster. Let us plot the function $u(0.5,t)$, the temperature at the midpoint of the wire at time $t$, in Figure 4.16. The figure also plots the approximation by the first term.

Figure 4.16: Temperature at the midpoint of the wire (the bottom curve), and the approximation of this temperature using only the first term in the series (top curve).

It would be hard to tell the difference after $t = 5$ or so between the first term of the series representation of $u(x,t)$ and the real solution. This behavior is a general feature of solving the heat equation. If you are interested in behavior for large enough $t$, only the first one or two terms may be necessary.

Let us get back to the question of when the maximum temperature is one half of the initial maximum temperature; that is, when the temperature at the midpoint is $12.5/2 = 6.25$. We notice from the graph that if we use the approximation by the first term, we will be close enough. Therefore, we solve
\[
6.25 = \frac{400}{\pi^3}\, e^{-\pi^2\, 0.003\, t}.
\]
That is,
\[
t = \frac{\ln \frac{6.25\,\pi^3}{400}}{-\pi^2\, 0.003} \approx 24.5.
\]
So the maximum temperature drops to half at about $t = 24.5$.
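The last step is a one-liner to check (our own sketch, not part of the text):

```python
# Verify the halving time from 6.25 = (400/pi^3) e^{-pi^2 * 0.003 * t}.
import math

t_half = math.log(6.25 * math.pi**3 / 400) / (-math.pi**2 * 0.003)
print(t_half)  # approximately 24.5
```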

4.6.3 Insulated ends

Now suppose the ends of the wire are insulated. In this case, we are solving the equation
\[
u_t = k u_{xx} \quad\text{with}\quad u_x(0,t) = 0, \quad u_x(L,t) = 0, \quad\text{and}\quad u(x,0) = f(x).
\]
Yet again we try a solution of the form $u(x,t) = X(x)T(t)$. By the same procedure as before, we plug into the heat equation and arrive at the following two equations:
\[
X''(x) + \lambda X(x) = 0, \qquad T'(t) + \lambda k T(t) = 0.
\]
At this point the story changes slightly. The boundary condition $u_x(0,t) = 0$ implies $X'(0)T(t) = 0$. Hence $X'(0) = 0$. Similarly, $u_x(L,t) = 0$ implies $X'(L) = 0$. We are looking for nontrivial solutions $X$ of the eigenvalue problem $X'' + \lambda X = 0$, $X'(0) = 0$, $X'(L) = 0$. We have previously found that the only eigenvalues are $\lambda_n = \frac{n^2\pi^2}{L^2}$ for integers $n \ge 0$, where the eigenfunctions are $\cos \frac{n\pi}{L} x$ (we include the constant eigenfunction). Hence, let us pick the solutions
\[
X_n(x) = \cos \frac{n\pi}{L} x.
\]
The corresponding $T_n$ must satisfy the equation
\[
T_n'(t) + \frac{n^2\pi^2}{L^2} k T_n(t) = 0.
\]
For $n \ge 1$, as before,
\[
T_n(t) = e^{-\frac{n^2\pi^2}{L^2} k t}.
\]
For $n = 0$, we have $T_0'(t) = 0$, and hence $T_0(t) = 1$. Our building-block solutions are
\[
u_n(x,t) = X_n(x)T_n(t) = \left(\cos \frac{n\pi}{L} x\right) e^{-\frac{n^2\pi^2}{L^2} k t}
\]
and
\[
u_0(x,t) = 1.
\]
We now note that $u_n(x,0) = \cos \frac{n\pi}{L} x$. So let us write $f$ using the cosine series
\[
f(x) = \frac{a_0}{2} + \sum_{n=1}^\infty a_n \cos \frac{n\pi}{L} x.
\]
That is, we find the Fourier series of the even periodic extension of $f(x)$. We use superposition to write the solution as
\[
u(x,t) = \frac{a_0}{2} + \sum_{n=1}^\infty a_n u_n(x,t) = \frac{a_0}{2} + \sum_{n=1}^\infty a_n \left(\cos \frac{n\pi}{L} x\right) e^{-\frac{n^2\pi^2}{L^2} k t}.
\]

Example 4.6.2: Let us try the same example as before, but for insulated ends. We are solving the following PDE problem:
\[
\begin{aligned}
&u_t = 0.003\, u_{xx},\\
&u_x(0,t) = u_x(1,t) = 0,\\
&u(x,0) = 50\,x\,(1-x) \quad \text{for } 0 < x < 1.
\end{aligned}
\]
For this problem, we must find the cosine series of $u(x,0)$. For $0 < x < 1$ we have
\[
50\,x\,(1-x) = \frac{25}{3} + \sum_{\substack{n=2\\ n\ \text{even}}}^\infty \left(\frac{-200}{\pi^2 n^2}\right) \cos n\pi x.
\]
The calculation is left to the reader. Hence, the solution to the PDE problem, plotted in Figure 4.17, is given by the series
\[
u(x,t) = \frac{25}{3} + \sum_{\substack{n=2\\ n\ \text{even}}}^\infty \left(\frac{-200}{\pi^2 n^2}\right) (\cos n\pi x)\, e^{-n^2\pi^2\, 0.003\, t}.
\]
Note in the graph that the temperature evens out across the wire. Eventually all the terms except the constant one die out, and you will be left with a uniform temperature of $\frac{25}{3} \approx 8.33$ along the entire length of the wire.
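One can watch this evening-out happen numerically. The following is an illustrative sketch (ours, not from the text) that sums a truncated version of the cosine series:

```python
# Insulated-ends series of Example 4.6.2:
#   u(x,t) = 25/3 + sum over even n >= 2 of (-200/(pi^2 n^2)) cos(n pi x) e^{-n^2 pi^2 0.003 t}.
# Illustrative sketch only.
import numpy as np

def u(x, t, terms=50):
    x = np.asarray(x, dtype=float)
    total = np.full_like(x, 25.0 / 3)
    for n in range(2, 2 * terms + 2, 2):  # even n >= 2
        total += (-200.0 / (np.pi**2 * n**2)) * np.cos(n * np.pi * x) \
                 * np.exp(-(n**2) * np.pi**2 * 0.003 * t)
    return total

x = np.linspace(0, 1, 101)
for t in (0, 10, 100):
    print(t, round(u(x, t).min(), 3), round(u(x, t).max(), 3))
# The spread between min and max shrinks toward the uniform value 25/3 ~ 8.33.
```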

4.6.4 Exercises

Exercise 4.6.2: Suppose you have a wire of length 2, with $k = 0.001$ and an initial temperature distribution of $u(x,0) = 50x$. Suppose that both ends are embedded in ice (temperature 0). Find the solution as a series.

Exercise 4.6.3: Find a series solution of
\[
u_t = u_{xx}, \qquad u(0,t) = u(1,t) = 0, \qquad u(x,0) = 100 \quad \text{for } 0 < x < 1.
\]

Exercise 4.6.4: Find a series solution of
\[
u_t = u_{xx}, \qquad u_x(0,t) = u_x(\pi,t) = 0, \qquad u(x,0) = 3\sin x + \sin 3\pi x \quad \text{for } 0 < x < \pi.
\]

Figure 4.17: Plot of the temperature of the insulated wire at position $x$ at time $t$.

Exercise 4.6.5: Find a series solution of
\[
u_t = u_{xx}, \qquad u_x(0,t) = u_x(\pi,t) = 0, \qquad u(x,0) = \cos x \quad \text{for } 0 < x < \pi.
\]

Exercise 4.6.6: Find a series solution of
\[
u_t = u_{xx}, \qquad u(0,t) = 0, \quad u(1,t) = 100, \qquad u(x,0) = \sin \pi x \quad \text{for } 0 < x < 1.
\]
Hint: Use the fact that $u(x,t) = 100x$ is a solution satisfying $u_t = u_{xx}$, $u(0,t) = 0$, $u(1,t) = 100$. Then use superposition.

Exercise 4.6.7: Find the steady state temperature solution as a function of $x$ alone, by letting $t \to \infty$ in the solutions from exercises 4.6.5 and 4.6.6. Verify that it satisfies the equation $u_{xx} = 0$.

Exercise 4.6.8: Use separation of variables to find a nontrivial solution to $u_{xx} + u_{yy} = 0$, where $u(x,0) = 0$ and $u(0,y) = 0$. Hint: Try $u(x,y) = X(x)Y(y)$.

Exercise 4.6.9 (challenging): Suppose that one end of the wire is insulated (say at $x = 0$) and the other end is kept at zero temperature. That is, find a series solution of
\[
u_t = k u_{xx}, \qquad u_x(0,t) = u(L,t) = 0, \qquad u(x,0) = f(x) \quad \text{for } 0 < x < L.
\]
Express any coefficients in the series by integrals of $f(x)$.

4.7 One dimensional wave equation

Note: 1 lecture, §9.6 in EP

Suppose we have a string, such as on a guitar, of length $L$. Suppose we only consider vibrations in one direction. That is, let $x$ denote the position along the string, let $t$ denote time, and let $y$ denote the displacement of the string from the rest position. See Figure 4.18.

Figure 4.18: Vibrating string.

The equation that governs this setup is the so-called one-dimensional wave equation:
\[
y_{tt} = a^2 y_{xx},
\]
for some $a > 0$. We will assume that the ends of the string are fixed, and hence we get
\[
y(0,t) = 0 \qquad\text{and}\qquad y(L,t) = 0.
\]
Note that we always have two conditions along the $x$ axis as there are two derivatives in the $x$ direction.

There are also two derivatives along the $t$ direction, and hence we will need two further conditions here. We will need to know the initial position and the initial velocity of the string:
\[
y(x,0) = f(x) \qquad\text{and}\qquad y_t(x,0) = g(x),
\]

for some known functions $f(x)$ and $g(x)$.

As the equation is again linear, superposition works just as it did for the heat equation. And again we will use separation of variables to find enough building-block solutions to get the overall solution. There is one change, however. It will be easier to solve two separate problems and add their solutions.

The two problems we will solve are
\[
\begin{aligned}
&w_{tt} = a^2 w_{xx},\\
&w(0,t) = w(L,t) = 0,\\
&w(x,0) = 0 \quad \text{for } 0 < x < L,\\
&w_t(x,0) = g(x) \quad \text{for } 0 < x < L,
\end{aligned}
\tag{4.10}
\]
and
\[
\begin{aligned}
&z_{tt} = a^2 z_{xx},\\
&z(0,t) = z(L,t) = 0,\\
&z(x,0) = f(x) \quad \text{for } 0 < x < L,\\
&z_t(x,0) = 0 \quad \text{for } 0 < x < L.
\end{aligned}
\tag{4.11}
\]
The principle of superposition then implies that $y = w + z$ solves the wave equation, and furthermore $y(x,0) = w(x,0) + z(x,0) = f(x)$ and $y_t(x,0) = w_t(x,0) + z_t(x,0) = g(x)$. Hence, $y$ is a solution to
\[
\begin{aligned}
&y_{tt} = a^2 y_{xx},\\
&y(0,t) = y(L,t) = 0,\\
&y(x,0) = f(x) \quad \text{for } 0 < x < L,\\
&y_t(x,0) = g(x) \quad \text{for } 0 < x < L.
\end{aligned}
\tag{4.12}
\]
The reason for all this complexity is that superposition only works for homogeneous conditions such as $y(0,t) = y(L,t) = 0$, $y(x,0) = 0$, or $y_t(x,0) = 0$. Therefore, we will be able to use the idea of separation of variables to find many building-block solutions solving all the homogeneous conditions. We can then use them to construct a solution solving the remaining nonhomogeneous condition.

Let us start with (4.10). We try a solution of the form $w(x,t) = X(x)T(t)$ again. We plug into the wave equation to obtain
\[
X(x)T''(t) = a^2 X''(x) T(t).
\]
Rewriting, we get
\[
\frac{T''(t)}{a^2 T(t)} = \frac{X''(x)}{X(x)}.
\]
Again, the left-hand side depends only on $t$ and the right-hand side depends only on $x$. Therefore, both equal a constant, which we denote by $-\lambda$:
\[
\frac{T''(t)}{a^2 T(t)} = -\lambda = \frac{X''(x)}{X(x)}.
\]
We solve to get two ordinary differential equations
\[
X''(x) + \lambda X(x) = 0, \qquad T''(t) + \lambda a^2 T(t) = 0.
\]
The condition $0 = w(0,t) = X(0)T(t)$ implies $X(0) = 0$, and $w(L,t) = 0$ implies that $X(L) = 0$. Therefore, the only nontrivial solutions for the first equation are when $\lambda = \lambda_n = \frac{n^2\pi^2}{L^2}$, and they are
\[
X_n(x) = \sin \frac{n\pi}{L} x.
\]

The general solution for $T$ for this particular $\lambda_n$ is
\[
T_n(t) = A \cos \frac{n\pi a}{L} t + B \sin \frac{n\pi a}{L} t.
\]
We also have the condition that $w(x,0) = 0$, or $X(x)T(0) = 0$. This implies that $T(0) = 0$, which in turn forces $A = 0$. It will be convenient to pick $B = \frac{L}{n\pi a}$, and hence
\[
T_n(t) = \frac{L}{n\pi a} \sin \frac{n\pi a}{L} t.
\]
Our building-block solutions are
\[
w_n(x,t) = \frac{L}{n\pi a} \left(\sin \frac{n\pi}{L} x\right) \left(\sin \frac{n\pi a}{L} t\right).
\]

We differentiate in $t$:
\[
(w_n)_t(x,t) = \left(\sin \frac{n\pi}{L} x\right) \left(\cos \frac{n\pi a}{L} t\right).
\]
Hence,
\[
(w_n)_t(x,0) = \sin \frac{n\pi}{L} x.
\]
We expand $g(x)$ in terms of these sines as
\[
g(x) = \sum_{n=1}^\infty b_n \sin \frac{n\pi}{L} x.
\]
Now we can just write down the solution to (4.10) as a series:
\[
w(x,t) = \sum_{n=1}^\infty b_n w_n(x,t) = \sum_{n=1}^\infty b_n \frac{L}{n\pi a} \left(\sin \frac{n\pi}{L} x\right) \left(\sin \frac{n\pi a}{L} t\right).
\]

Exercise 4.7.1: Check that $w(x,0) = 0$ and $w_t(x,0) = g(x)$.

Similarly we proceed to solve (4.11). We again try $z(x,t) = X(x)T(t)$. The procedure works exactly the same at first. We obtain
\[
X''(x) + \lambda X(x) = 0, \qquad T''(t) + \lambda a^2 T(t) = 0,
\]
and the conditions $X(0) = 0$, $X(L) = 0$. So again $\lambda = \lambda_n = \frac{n^2\pi^2}{L^2}$ and
\[
X_n(x) = \sin \frac{n\pi}{L} x.
\]
The condition for $T$, however, becomes $T'(0) = 0$. Thus, instead of $A = 0$, we get that $B = 0$, and we can take
\[
T_n(t) = \cos \frac{n\pi a}{L} t.
\]

Our building-block solutions are
\[
z_n(x,t) = \left(\sin \frac{n\pi}{L} x\right) \left(\cos \frac{n\pi a}{L} t\right).
\]
We expand $f(x)$ in terms of these sines as
\[
f(x) = \sum_{n=1}^\infty c_n \sin \frac{n\pi}{L} x.
\]
And we write down the solution to (4.11) as a series:
\[
z(x,t) = \sum_{n=1}^\infty c_n z_n(x,t) = \sum_{n=1}^\infty c_n \left(\sin \frac{n\pi}{L} x\right) \left(\cos \frac{n\pi a}{L} t\right).
\]

Exercise 4.7.2: Fill in the details in the derivation of the solution of (4.11). Check that the solution satisfies all the side conditions.

Putting these two solutions together, we state the result as a theorem.

Theorem 4.7.1. Take the equation
\[
\begin{aligned}
&y_{tt} = a^2 y_{xx},\\
&y(0,t) = y(L,t) = 0,\\
&y(x,0) = f(x) \quad \text{for } 0 < x < L,\\
&y_t(x,0) = g(x) \quad \text{for } 0 < x < L,
\end{aligned}
\tag{4.13}
\]
where
\[
f(x) = \sum_{n=1}^\infty c_n \sin \frac{n\pi}{L} x
\qquad\text{and}\qquad
g(x) = \sum_{n=1}^\infty b_n \sin \frac{n\pi}{L} x.
\]
Then the solution $y(x,t)$ can be written as a sum of the solutions of (4.10) and (4.11). In other words,
\[
\begin{aligned}
y(x,t) &= \sum_{n=1}^\infty b_n \frac{L}{n\pi a} \left(\sin \frac{n\pi}{L} x\right) \left(\sin \frac{n\pi a}{L} t\right) + c_n \left(\sin \frac{n\pi}{L} x\right) \left(\cos \frac{n\pi a}{L} t\right)\\
&= \sum_{n=1}^\infty \left(\sin \frac{n\pi}{L} x\right) \left[\frac{b_n L}{n\pi a} \left(\sin \frac{n\pi a}{L} t\right) + c_n \left(\cos \frac{n\pi a}{L} t\right)\right].
\end{aligned}
\]
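For readers who like to see formulas run, here is a small sketch (ours, not from the text) that evaluates the series of Theorem 4.7.1; the coefficient lists `b` and `c` stand for whatever sine-series coefficients you computed:

```python
# Evaluate the series of Theorem 4.7.1 given sine-series coefficients b_n, c_n.
# Illustrative sketch only.
import numpy as np

def wave_solution(b, c, L, a):
    def y(x, t):
        total = 0.0
        for n in range(1, len(b) + 1):
            s = np.sin(n * np.pi * x / L)
            total += s * (b[n - 1] * L / (n * np.pi * a) * np.sin(n * np.pi * a * t / L)
                          + c[n - 1] * np.cos(n * np.pi * a * t / L))
        return total
    return y

# Example: y(x,0) = sin(pi x), y_t(x,0) = 0 on [0, 1] with a = 1.
y = wave_solution(b=[0.0], c=[1.0], L=1.0, a=1.0)
print(y(0.5, 0.0))  # 1.0
print(y(0.5, 1.0))  # -1.0: the standing wave is inverted after half its period
```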

Figure 4.19: Plucked string.

Example 4.7.1: Let us try a simple example of a plucked string. Suppose that a string of length 2 is plucked in the middle such that it has the initial shape
\[
f(x) =
\begin{cases}
0.1\,x & \text{if } 0 \le x \le 1,\\
0.1\,(2 - x) & \text{if } 1 \le x \le 2.
\end{cases}
\]

See Figure 4.19. Further, suppose that $a = 1$ in the wave equation, for simplicity.

We leave it to the reader to compute the sine series of $f(x)$. The series will be
\[
f(x) = \sum_{n=1}^\infty \frac{0.8}{n^2\pi^2} \left(\sin \frac{n\pi}{2}\right) \sin \frac{n\pi}{2} x.
\]
Note that $\sin \frac{n\pi}{2}$ is the sequence $1, 0, -1, 0, 1, 0, -1, \ldots$ for $n = 1, 2, 3, 4, \ldots$. Therefore,
\[
f(x) = \frac{0.8}{\pi^2} \sin \frac{\pi}{2} x - \frac{0.8}{9\pi^2} \sin \frac{3\pi}{2} x + \frac{0.8}{25\pi^2} \sin \frac{5\pi}{2} x - \cdots
\]

y(x, t) =1

X

n=1

0.8n2⇡2

sinn⇡2

◆ ✓

sinn⇡2

x◆ ✓

cosn⇡2

t◆

=0.8⇡2

sin⇡

2x◆ ✓

cos⇡

2t◆

0.89⇡2

sin3⇡2

x!

cos3⇡2

t!

+0.8

25⇡2

sin5⇡2

x!

cos5⇡2

t!

� · · ·

A plot for 0 < t < 3 is given in Figure 4.20 on the following page. Notice that unlike theheat equation, the solution does not become “smoother.” In fact the edges remain. We will see thereason for this behavior in the next section where we derive the solution to the wave equation in adi↵erent way.

Make sure you understand what the plot such as the one in the figure is telling you. For eachfixed t, you can think of the function x 7! u(x, t) as just a function of x. This function gives youthe shape of the string at time t.
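A short sketch (ours, not from the text) that evaluates this series confirms the picture: the initial pluck returns inverted after half a period.

```python
# Shape of the plucked string of Example 4.7.1 at a fixed time t:
#   y(x,t) = sum_n 0.8/(n^2 pi^2) sin(n pi/2) sin(n pi x/2) cos(n pi t/2).
# Illustrative sketch only.
import numpy as np

def shape(x, t, terms=200):
    y = np.zeros_like(x)
    for n in range(1, terms + 1):
        coeff = 0.8 / (n**2 * np.pi**2) * np.sin(n * np.pi / 2)
        y += coeff * np.sin(n * np.pi * x / 2) * np.cos(n * np.pi * t / 2)
    return y

x = np.linspace(0, 2, 401)
print(shape(x, 0.0).max())       # about 0.1, the initial pluck height
print(abs(shape(x, 2.0)).max())  # about 0.1 again: at t = 2 the string is inverted
```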

Figure 4.20: Shape of the plucked string for $0 < t < 3$.

4.7.1 Exercises

Exercise 4.7.3: Solve
\[
\begin{aligned}
&y_{tt} = 9 y_{xx},\\
&y(0,t) = y(1,t) = 0,\\
&y(x,0) = \sin 3\pi x + \tfrac{1}{4} \sin 6\pi x \quad \text{for } 0 < x < 1,\\
&y_t(x,0) = 0 \quad \text{for } 0 < x < 1.
\end{aligned}
\]

Exercise 4.7.4: Solve
\[
\begin{aligned}
&y_{tt} = 4 y_{xx},\\
&y(0,t) = y(1,t) = 0,\\
&y(x,0) = \sin 3\pi x + \tfrac{1}{4} \sin 6\pi x \quad \text{for } 0 < x < 1,\\
&y_t(x,0) = \sin 9\pi x \quad \text{for } 0 < x < 1.
\end{aligned}
\]

Exercise 4.7.5: Derive the solution for a general plucked string of length $L$, where we raise the string some distance $b$ at the midpoint and let go, and for any constant $a$.

Exercise 4.7.6: Suppose that a stringed musical instrument falls on the floor. Suppose that the length of the string is 1 and $a = 1$. When the musical instrument hits the ground, the string was in rest position and hence $y(x,0) = 0$. However, the string was moving at some velocity at impact ($t = 0$), say $y_t(x,0) = -1$. Find the solution $y(x,t)$ for the shape of the string at time $t$.

Exercise 4.7.7 (challenging): Suppose that you have a vibrating string and that there is air resistance proportional to the velocity. That is, you have
\[
\begin{aligned}
&y_{tt} = a^2 y_{xx} - k y_t,\\
&y(0,t) = y(1,t) = 0,\\
&y(x,0) = f(x) \quad \text{for } 0 < x < 1,\\
&y_t(x,0) = 0 \quad \text{for } 0 < x < 1.
\end{aligned}
\]
Suppose that $0 < k < 2\pi a$. Derive a series solution to the problem. Any coefficients in the series should be expressed as integrals of $f(x)$.

4.8 D'Alembert solution of the wave equation

Note: 1 lecture, different from §9.6 in EP

We have solved the wave equation by using Fourier series. But it is often more convenient to use the so-called d'Alembert solution to the wave equation‡. This solution can be derived using Fourier series as well, but it is really an awkward use of those concepts. It is much easier to derive this solution by making a correct change of variables to get an equation that can be solved by simple integration.

Suppose we have the wave equation
\[
y_{tt} = a^2 y_{xx}. \tag{4.14}
\]
We wish to solve equation (4.14) given the conditions
\[
\begin{aligned}
&y(0,t) = y(L,t) = 0 \quad \text{for all } t,\\
&y(x,0) = f(x) \quad 0 < x < L,\\
&y_t(x,0) = g(x) \quad 0 < x < L.
\end{aligned}
\tag{4.15}
\]

4.8.1 Change of variables

We will transform the equation into a simpler form, where it can be solved by simple integration. We change variables to $\xi = x - at$, $\eta = x + at$, and use the chain rule:
\[
\frac{\partial}{\partial x} = \frac{\partial \xi}{\partial x}\frac{\partial}{\partial \xi} + \frac{\partial \eta}{\partial x}\frac{\partial}{\partial \eta} = \frac{\partial}{\partial \xi} + \frac{\partial}{\partial \eta},
\qquad
\frac{\partial}{\partial t} = \frac{\partial \xi}{\partial t}\frac{\partial}{\partial \xi} + \frac{\partial \eta}{\partial t}\frac{\partial}{\partial \eta} = -a\frac{\partial}{\partial \xi} + a\frac{\partial}{\partial \eta}.
\]
We compute
\[
y_{xx} = \frac{\partial^2 y}{\partial x^2} = \left(\frac{\partial}{\partial \xi} + \frac{\partial}{\partial \eta}\right)\left(\frac{\partial y}{\partial \xi} + \frac{\partial y}{\partial \eta}\right) = \frac{\partial^2 y}{\partial \xi^2} + 2\frac{\partial^2 y}{\partial \xi\, \partial \eta} + \frac{\partial^2 y}{\partial \eta^2},
\]
\[
y_{tt} = \frac{\partial^2 y}{\partial t^2} = \left(-a\frac{\partial}{\partial \xi} + a\frac{\partial}{\partial \eta}\right)\left(-a\frac{\partial y}{\partial \xi} + a\frac{\partial y}{\partial \eta}\right) = a^2\frac{\partial^2 y}{\partial \xi^2} - 2a^2\frac{\partial^2 y}{\partial \xi\, \partial \eta} + a^2\frac{\partial^2 y}{\partial \eta^2}.
\]
In the above computations, we used the fact from calculus that $\frac{\partial^2 y}{\partial \xi\, \partial \eta} = \frac{\partial^2 y}{\partial \eta\, \partial \xi}$. We then plug into the wave equation:
\[
0 = a^2 y_{xx} - y_{tt} = 4a^2 \frac{\partial^2 y}{\partial \xi\, \partial \eta} = 4a^2 y_{\xi\eta}.
\]

‡Named after the French mathematician Jean le Rond d'Alembert (1717 – 1783).

Therefore, the wave equation (4.14) transforms into $y_{\xi\eta} = 0$. It is easy to find the general solution to this equation by integrating twice. Let us integrate with respect to $\eta$ first§ and notice that the constant of integration depends on $\xi$, to get $y_\xi = C(\xi)$. Next, we integrate with respect to $\xi$ and notice that the constant of integration must depend on $\eta$. Thus, $y = \int C(\xi)\,d\xi + B(\eta)$. The solution must then be of the following form for some functions $A(\xi)$ and $B(\eta)$:
\[
y = A(\xi) + B(\eta) = A(x - at) + B(x + at).
\]

4.8.2 The formula

We know what any solution must look like, but we need to solve for the given side conditions. We will just give the formula and see that it works. First let $F(x)$ denote the odd extension of $f(x)$, and let $G(x)$ denote the odd extension of $g(x)$. Now define
\[
A(x) = \frac{1}{2} F(x) - \frac{1}{2a} \int_0^x G(s)\,ds,
\qquad
B(x) = \frac{1}{2} F(x) + \frac{1}{2a} \int_0^x G(s)\,ds.
\]
We claim this $A(x)$ and $B(x)$ give the solution. Explicitly, the solution is $y(x,t) = A(x - at) + B(x + at)$, or in other words:
\[
\begin{aligned}
y(x,t) &= \frac{1}{2} F(x - at) - \frac{1}{2a} \int_0^{x - at} G(s)\,ds + \frac{1}{2} F(x + at) + \frac{1}{2a} \int_0^{x + at} G(s)\,ds\\
&= \frac{F(x - at) + F(x + at)}{2} + \frac{1}{2a} \int_{x - at}^{x + at} G(s)\,ds.
\end{aligned}
\tag{4.16}
\]

(4.16)

Let us check that the d’Alembert formula really works.

y(x, 0) =12

F(x) �12a

Z x

0G(s) ds +

12

F(x) +12a

Z x

0G(s) ds = F(x).

So far so good. Assume for simplicity F is di↵erentiable. By the fundamental theorem of calculuswe have

yt(x, t) =�a2

F0(x � at) +12

G(x � at) +a2

F0(x + at) +12

G(x + at).

Soyt(x, 0) =

�a2

F0(x) +12

G(x) +a2

F0(x) +12

G(x) = G(x).

Yay! We’re smoking now. OK, now the boundary conditions. Note that F(x) and G(x) are odd.Also

R x0 G(s) ds is an even function of x because G(x) is odd (to see this fact, do the substitution

§We can just as well integrate with ⇠ first, if we wish.

So
\[
\begin{aligned}
y(0,t) &= \frac{1}{2} F(-at) - \frac{1}{2a} \int_0^{-at} G(s)\,ds + \frac{1}{2} F(at) + \frac{1}{2a} \int_0^{at} G(s)\,ds\\
&= \frac{-1}{2} F(at) - \frac{1}{2a} \int_0^{at} G(s)\,ds + \frac{1}{2} F(at) + \frac{1}{2a} \int_0^{at} G(s)\,ds = 0.
\end{aligned}
\]
Now $F(x)$ and $G(x)$ are $2L$-periodic as well, and therefore so is $\int_0^x G(s)\,ds$ (as $\int_0^{2L} G(s)\,ds = 0$). In particular, $F(L - at) = F(-L - at) = -F(L + at)$, and evenness together with periodicity gives $\int_0^{L - at} G(s)\,ds = \int_0^{at - L} G(s)\,ds = \int_0^{L + at} G(s)\,ds$. Furthermore,
\[
\begin{aligned}
y(L,t) &= \frac{1}{2} F(L - at) - \frac{1}{2a} \int_0^{L - at} G(s)\,ds + \frac{1}{2} F(L + at) + \frac{1}{2a} \int_0^{L + at} G(s)\,ds\\
&= \frac{-1}{2} F(L + at) - \frac{1}{2a} \int_0^{L + at} G(s)\,ds + \frac{1}{2} F(L + at) + \frac{1}{2a} \int_0^{L + at} G(s)\,ds = 0.
\end{aligned}
\]
And voilà, it works.

Example 4.8.1: What the d'Alembert solution says is that the solution is a superposition of two functions (waves) moving in opposite directions at "speed" $a$. To get an idea of how it works, let us do an example. Suppose that we have the simpler setup
\[
\begin{aligned}
&y_{tt} = y_{xx},\\
&y(0,t) = y(1,t) = 0,\\
&y(x,0) = f(x),\\
&y_t(x,0) = 0.
\end{aligned}
\]
Here $f(x)$ is an impulse of height 1 centered at $x = 0.5$:
\[
f(x) =
\begin{cases}
0 & \text{if } 0 \le x < 0.45,\\
20\,(x - 0.45) & \text{if } 0.45 \le x < 0.5,\\
20\,(0.55 - x) & \text{if } 0.5 \le x < 0.55,\\
0 & \text{if } 0.55 \le x \le 1.
\end{cases}
\]
The graph of this pulse is the top left plot in Figure 4.21.

Let $F(x)$ be the odd periodic extension of $f(x)$. Then from (4.16) we know that the solution is given as
\[
y(x,t) = \frac{F(x - t) + F(x + t)}{2}.
\]
It is not hard to compute specific values of $y(x,t)$. For example, to compute $y(0.1, 0.6)$ we notice $x - t = -0.5$ and $x + t = 0.7$. Now $F(-0.5) = -f(0.5) = -20\,(0.55 - 0.5) = -1$ and $F(0.7) = f(0.7) = 0$. Hence $y(0.1, 0.6) = \frac{-1 + 0}{2} = -0.5$. As you can see, the d'Alembert solution is much easier to actually compute and to plot than the Fourier series solution. See Figure 4.21 for plots of the solution $y$ for several different $t$.

Figure 4.21: Plot of the d'Alembert solution for $t = 0$, $t = 0.2$, $t = 0.4$, and $t = 0.6$.

4.8.3 Notes

It is perhaps easier and more useful to memorize the procedure rather than the formula itself. The important thing to remember is that a solution to the wave equation is a superposition of two waves traveling in opposite directions. That is,
\[
y(x,t) = A(x - at) + B(x + at).
\]
If you think about it, the exact formulas for $A$ and $B$ are not hard to guess once you realize what kind of side conditions $y(x,t)$ is supposed to satisfy. Let us give the formula again, but slightly differently. The best approach is to do this in stages. When $g(x) = 0$ (and hence $G(x) = 0$), we have the solution
\[
\frac{F(x - at) + F(x + at)}{2}.
\]
On the other hand, when $f(x) = 0$ (and hence $F(x) = 0$), we let
\[
H(x) = \int_0^x G(s)\,ds.
\]
The solution in this case is
\[
\frac{1}{2a} \int_{x - at}^{x + at} G(s)\,ds = \frac{-H(x - at) + H(x + at)}{2a}.
\]
By superposition we get a solution for the general side conditions (4.15) (when neither $f(x)$ nor $g(x)$ is identically zero):
\[
y(x,t) = \frac{F(x - at) + F(x + at)}{2} + \frac{-H(x - at) + H(x + at)}{2a}. \tag{4.17}
\]
Do note the minus sign before the $H$.

Exercise 4.8.1: Check that the new formula (4.17) satisfies the side conditions (4.15).

Warning: Make sure you use the odd extensions $F(x)$ and $G(x)$ when you have formulas for $f(x)$ and $g(x)$. The thing is, those formulas in general hold only for $0 < x < L$ and are not usually equal to $F(x)$ and $G(x)$ for other $x$.

4.8.4 Exercises

Exercise 4.8.2: Using the d'Alembert solution, solve $y_{tt} = 4y_{xx}$, $0 < x < \pi$, $t > 0$, $y(0,t) = y(\pi,t) = 0$, $y(x,0) = \sin x$, and $y_t(x,0) = \sin x$. Hint: Note that $\sin x$ is the odd extension of $y(x,0)$ and $y_t(x,0)$.

Exercise 4.8.3: Using the d'Alembert solution, solve $y_{tt} = 2y_{xx}$, $0 < x < 1$, $t > 0$, $y(0,t) = y(1,t) = 0$, $y(x,0) = \sin^5 \pi x$, and $y_t(x,0) = \sin^3 \pi x$.

Exercise 4.8.4: Take $y_{tt} = 4y_{xx}$, $0 < x < \pi$, $t > 0$, $y(0,t) = y(\pi,t) = 0$, $y(x,0) = x(\pi - x)$, and $y_t(x,0) = 0$. a) Solve using the d'Alembert formula. (Hint: You can use the sine series for $y(x,0)$.) b) Find the solution as a function of $x$ for fixed $t = 0.5$, $t = 1$, and $t = 2$. Do not use the sine series here.

Exercise 4.8.5: Derive the d'Alembert solution for $y_{tt} = a^2 y_{xx}$, $0 < x < \pi$, $t > 0$, $y(0,t) = y(\pi,t) = 0$, $y(x,0) = f(x)$, and $y_t(x,0) = 0$, using the Fourier series solution of the wave equation, by applying an appropriate trigonometric identity.

Exercise 4.8.6: The d'Alembert solution still works if there are no boundary conditions and the initial condition is defined on the whole real line. Suppose that $y_{tt} = y_{xx}$ (for all $x$ on the real line and $t \ge 0$), $y(x,0) = f(x)$, and $y_t(x,0) = 0$, where
\[
f(x) =
\begin{cases}
0 & \text{if } x < -1,\\
x + 1 & \text{if } -1 \le x < 0,\\
-x + 1 & \text{if } 0 \le x < 1,\\
0 & \text{if } x > 1.
\end{cases}
\]
Solve using the d'Alembert solution. That is, write down a piecewise definition for the solution. Then sketch the solution for $t = 0$, $t = \frac{1}{2}$, $t = 1$, and $t = 2$.

4.9 Steady state temperature, Laplacian, and Dirichlet problems

Note: 1 lecture, §9.7 in EP

Suppose we have an insulated wire, a plate, or a 3-dimensional object. We apply certain fixed temperatures on the ends of the wire, the edges of the plate, or on all sides of the 3-dimensional object. We wish to find out what the steady state temperature distribution is. That is, we wish to know what the temperature will be after a long enough period of time.

We are really looking for a solution to the heat equation that does not depend on time. Let us first do this in one space variable. We are looking for a function $u$ that satisfies
\[
u_t = k u_{xx},
\]
but such that $u_t = 0$ for all $x$ and $t$. Hence, we are looking for a function of $x$ alone that satisfies $u_{xx} = 0$. It is easy to solve this equation by integration, and we see that $u = Ax + B$ for some constants $A$ and $B$.

Suppose we have an insulated wire, and we apply constant temperature $T_1$ at one end (say where $x = 0$) and $T_2$ at the other end (at $x = L$, where $L$ is the length of the wire). Then our steady state solution is
\[
u(x) = \frac{T_2 - T_1}{L}\, x + T_1.
\]
This solution agrees with our common sense intuition of how the heat should be distributed in the wire. So in one dimension, the steady state solutions are basically just straight lines.

Things are more complicated in two or more space dimensions. Let us restrict ourselves to two space dimensions for simplicity. The heat equation in two variables is
\[
u_t = k(u_{xx} + u_{yy}), \tag{4.18}
\]
or more commonly written as $u_t = k\Delta u$ or $u_t = k\nabla^2 u$. Here the $\Delta$ and $\nabla^2$ symbols mean $\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$. We will use $\Delta$ from now on. The reason for that notation is that you can define $\Delta$ to be the right thing for any number of space dimensions, and then the heat equation is always $u_t = k\Delta u$. The operator $\Delta$ is called the Laplacian.

OK, now that we have the notation out of the way, let us see what an equation for the steady state solution looks like. We are looking for a solution to (4.18) that does not depend on $t$. Hence we are looking for a function $u(x,y)$ such that
\[
\Delta u = u_{xx} + u_{yy} = 0.
\]
This equation is called the Laplace equation¶. Solutions to the Laplace equation are called harmonic functions and have many nice properties and applications far beyond the steady state heat problem.

¶Named after the French mathematician Pierre-Simon, marquis de Laplace (1749 – 1827).

Harmonic functions in two variables are no longer just linear (plane graphs). For example, you can check that the functions $x^2 - y^2$ and $xy$ are harmonic. However, if you remember your multivariable calculus, note that if $u_{xx}$ is positive, so that $u$ is concave up in the $x$ direction, then $u_{yy}$ must be negative, and $u$ must be concave down in the $y$ direction. Therefore, a harmonic function can never have any "hilltop" or "valley" on its graph. This observation is consistent with our intuitive idea of steady state heat distribution.

Commonly the Laplace equation is part of a so-called Dirichlet problem‖. That is, we have some region in the $xy$-plane, and we specify certain values along the boundary of the region. We then try to find a solution $u$ defined on this region such that $u$ agrees with the values we specified on the boundary.

For simplicity, we will consider a rectangular region. Also for simplicity, we will specify the boundary values to be zero at three of the four edges and only specify an arbitrary function at one edge. As we still have the principle of superposition, you can use this simpler solution to derive the general solution for arbitrary boundary values by solving four different problems, one for each edge, and adding those solutions together. This setup is left as an exercise.

We wish to solve the following problem. Let $h$ and $w$ be the height and width of our rectangle, with one corner at the origin and lying in the first quadrant.
\[
\begin{aligned}
&\Delta u = 0, && \tag{4.19}\\
\end{aligned}
\]
\[
u(0,y) = 0 \quad \text{for } 0 < y < h, \tag{4.20}
\]
\[
u(x,h) = 0 \quad \text{for } 0 < x < w, \tag{4.21}
\]
\[
u(w,y) = 0 \quad \text{for } 0 < y < h, \tag{4.22}
\]
\[
u(x,0) = f(x) \quad \text{for } 0 < x < w. \tag{4.23}
\]
The method we will apply is separation of variables. Again, we will come up with enough building-block solutions satisfying all the homogeneous boundary conditions (all conditions except (4.23)). We notice that superposition still works for the equation and all the homogeneous conditions. Therefore, we can use the Fourier series for $f(x)$ to solve the problem as before.

We try $u(x,y) = X(x)Y(y)$. We plug into the equation to get
\[
X''Y + XY'' = 0.
\]
We put the $X$s on one side and the $Y$s on the other to get
\[
-\frac{X''}{X} = \frac{Y''}{Y}.
\]

‖Named after the German mathematician Johann Peter Gustav Lejeune Dirichlet (1805 – 1859).

The left-hand side only depends on $x$ and the right-hand side only depends on $y$. Therefore, there is some constant $\lambda$ such that $\lambda = -\frac{X''}{X} = \frac{Y''}{Y}$. And we get two equations
\[
X'' + \lambda X = 0, \qquad Y'' - \lambda Y = 0.
\]
Furthermore, the homogeneous boundary conditions imply that $X(0) = X(w) = 0$ and $Y(h) = 0$. Taking the equation for $X$, we have already seen that we have a nontrivial solution if and only if $\lambda = \lambda_n = \frac{n^2\pi^2}{w^2}$, and the solution is a multiple of
\[
X_n(x) = \sin \frac{n\pi}{w} x.
\]
For these given $\lambda_n$, the general solution for $Y$ (one for each $n$) is
\[
Y_n(y) = A_n \cosh \frac{n\pi}{w} y + B_n \sinh \frac{n\pi}{w} y. \tag{4.24}
\]
We only have one condition on $Y_n$, and hence we can pick one of the constants $A_n$ or $B_n$ to be whatever is convenient. It will be useful to have $Y_n(0) = 1$, so we let $A_n = 1$. Setting $Y_n(h) = 0$ and solving for $B_n$, we get
\[
B_n = \frac{-\cosh \frac{n\pi h}{w}}{\sinh \frac{n\pi h}{w}}.
\]
After we plug the $A_n$ and $B_n$ into (4.24) and simplify, we find
\[
Y_n(y) = \frac{\sinh \frac{n\pi (h-y)}{w}}{\sinh \frac{n\pi h}{w}}.
\]

We define $u_n(x,y) = X_n(x)Y_n(y)$, and note that $u_n$ satisfies (4.19)–(4.22). Observe that
\[
u_n(x,0) = X_n(x)Y_n(0) = \sin \frac{n\pi}{w} x.
\]
Suppose that
\[
f(x) = \sum_{n=1}^\infty b_n \sin \frac{n\pi x}{w}.
\]
Then we get a solution of (4.19)–(4.23) of the following form:
\[
u(x,y) = \sum_{n=1}^\infty b_n u_n(x,y) = \sum_{n=1}^\infty b_n \left(\sin \frac{n\pi}{w} x\right) \left(\frac{\sinh \frac{n\pi (h-y)}{w}}{\sinh \frac{n\pi h}{w}}\right).
\]
As $u_n$ satisfies (4.19)–(4.22) and any linear combination (finite or infinite) of the $u_n$ must also satisfy (4.19)–(4.22), we see that $u$ satisfies (4.19)–(4.22). By plugging in $y = 0$, it is easy to see that $u$ satisfies (4.23) as well.

Example 4.9.1: Suppose that we take $w = h = \pi$ and we let $f(x) = \pi$. We compute the sine series for the function $\pi$ (we will get the square wave). We find that for $0 < x < \pi$ we have
\[
f(x) = \sum_{\substack{n=1\\ n\ \text{odd}}}^\infty \frac{4}{n} \sin nx.
\]
Therefore, the solution $u(x,y)$, see Figure 4.22, to the corresponding Dirichlet problem is given as
\[
u(x,y) = \sum_{\substack{n=1\\ n\ \text{odd}}}^\infty \frac{4}{n} (\sin nx) \left(\frac{\sinh n(\pi - y)}{\sinh n\pi}\right).
\]
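A quick way to explore this solution numerically is to sum the series with the $\sinh$ ratio rewritten in terms of decaying exponentials, so that large $n$ does not overflow. This is our own illustrative sketch, not part of the text:

```python
# Dirichlet-problem solution of Example 4.9.1 on the square [0, pi]^2:
#   u(x,y) = sum over odd n of (4/n) sin(n x) sinh(n (pi - y)) / sinh(n pi).
# Illustrative sketch only.
import numpy as np

def u(x, y, terms=100):
    total = 0.0
    for n in range(1, 2 * terms, 2):  # odd n
        # sinh(n(pi-y))/sinh(n pi) rewritten to avoid overflow for large n:
        ratio = np.exp(-n * y) * (1 - np.exp(-2 * n * (np.pi - y))) \
                / (1 - np.exp(-2 * n * np.pi))
        total += 4.0 / n * np.sin(n * x) * ratio
    return total

print(u(np.pi / 2, 0.01))       # close to pi near the hot edge y = 0
print(u(np.pi / 2, np.pi / 2))  # about pi/4 ~ 0.785 at the center, as symmetry suggests
```

The center value $\pi/4$ follows from superposition: the four rotated copies of this problem sum to the constant solution $\pi$.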

Figure 4.22: Steady state temperature of a square plate with three sides held at zero and one side held at $\pi$.

This scenario corresponds to the steady state temperature on a square plate of width $\pi$ with three sides held at 0 degrees and one side held at $\pi$ degrees. If we have arbitrary data on all sides, then we solve four problems, each using one piece of nonhomogeneous data. Then we use the principle of superposition to add up all four solutions to obtain a solution to the original problem.

There is another way to visualize solutions. Take a wire and bend it in just the right way so that it corresponds to the graph of the temperature above the boundary of your region. Then dip the wire in soapy water and let it form a soapy film stretched between the edges of the wire. It turns out that this soap film is precisely the graph of the solution to the Laplace equation. Harmonic functions come up frequently in problems where we are trying to minimize the area of some surface or to minimize energy in some system.

4.9.1 Exercises

Exercise 4.9.1: Let $R$ be the region described by $0 < x < \pi$ and $0 < y < \pi$. Solve the problem
\[
\Delta u = 0, \qquad u(x,0) = \sin x, \quad u(x,\pi) = 0, \quad u(0,y) = 0, \quad u(\pi,y) = 0.
\]

Exercise 4.9.2: Let $R$ be the region described by $0 < x < 1$ and $0 < y < 1$. Solve the problem
\[
u_{xx} + u_{yy} = 0, \qquad u(x,0) = \sin \pi x - \sin 2\pi x, \quad u(x,1) = 0, \quad u(0,y) = 0, \quad u(1,y) = 0.
\]

Exercise 4.9.3: Let $R$ be the region described by $0 < x < 1$ and $0 < y < 1$. Solve the problem
\[
u_{xx} + u_{yy} = 0, \qquad u(x,0) = u(x,1) = u(0,y) = u(1,y) = C,
\]
for some constant $C$. Hint: Guess, then check your intuition.

Exercise 4.9.4: Let $R$ be the region described by $0 < x < \pi$ and $0 < y < \pi$. Solve
\[
\Delta u = 0, \qquad u(x,0) = 0, \quad u(x,\pi) = \pi, \quad u(0,y) = y, \quad u(\pi,y) = y.
\]
Hint: Try a solution of the form $u(x,y) = X(x) + Y(y)$ (different separation of variables).

Exercise 4.9.5: Use the solution of Exercise 4.9.4 to solve
\[
\Delta u = 0, \qquad u(x,0) = \sin x, \quad u(x,\pi) = \pi, \quad u(0,y) = y, \quad u(\pi,y) = y.
\]
Hint: Use superposition.

Exercise 4.9.6: Let $R$ be the region described by $0 < x < w$ and $0 < y < h$. Solve the problem
\[
u_{xx} + u_{yy} = 0, \qquad u(x,0) = 0, \quad u(x,h) = f(x), \quad u(0,y) = 0, \quad u(w,y) = 0.
\]
The solution should be in series form, using the Fourier series coefficients of $f(x)$.

Exercise 4.9.7: Let $R$ be the region described by $0 < x < w$ and $0 < y < h$. Solve the problem
\[
u_{xx} + u_{yy} = 0, \qquad u(x,0) = 0, \quad u(x,h) = 0, \quad u(0,y) = f(y), \quad u(w,y) = 0.
\]
The solution should be in series form, using the Fourier series coefficients of $f(y)$.

Exercise 4.9.8: Let $R$ be the region described by $0 < x < w$ and $0 < y < h$. Solve the problem
\[
u_{xx} + u_{yy} = 0, \qquad u(x,0) = 0, \quad u(x,h) = 0, \quad u(0,y) = 0, \quad u(w,y) = f(y).
\]
The solution should be in series form, using the Fourier series coefficients of $f(y)$.

Exercise 4.9.9: Let $R$ be the region described by $0 < x < 1$ and $0 < y < 1$. Solve the problem
\[
u_{xx} + u_{yy} = 0, \qquad u(x,0) = \sin 9\pi x, \quad u(x,1) = \sin 2\pi x, \quad u(0,y) = 0, \quad u(1,y) = 0.
\]
Hint: Use superposition.

Exercise 4.9.10: Let $R$ be the region described by $0 < x < 1$ and $0 < y < 1$. Solve the problem
\[
u_{xx} + u_{yy} = 0, \qquad u(x,0) = \sin \pi x, \quad u(x,1) = \sin \pi x, \quad u(0,y) = \sin \pi y, \quad u(1,y) = \sin \pi y.
\]
Hint: Use superposition.

Chapter 5

Eigenvalue problems

5.1 Sturm-Liouville problems

Note: 2 lectures, §10.1 in EP

5.1.1 Boundary value problems

We have encountered several different eigenvalue problems, such as
\[
X''(x) + \lambda X(x) = 0
\]
with different boundary conditions:
\[
\begin{aligned}
&X(0) = 0, \quad X(L) = 0 \quad \text{(Dirichlet), or}\\
&X'(0) = 0, \quad X'(L) = 0 \quad \text{(Neumann), or}\\
&X'(0) = 0, \quad X(L) = 0 \quad \text{(Mixed), or}\\
&X(0) = 0, \quad X'(L) = 0 \quad \text{(Mixed)}, \ldots
\end{aligned}
\]
For example, for the insulated wire, Dirichlet conditions correspond to applying a zero temperature at the ends, Neumann means insulating the ends, etc. Other types of endpoint conditions also arise naturally, such as
\[
h X(0) - X'(0) = 0, \qquad h X(L) + X'(L) = 0,
\]
for some constant $h$.

for some constant h.These problems came up, for example, in the study of the heat equation ut = kuxx when we

were trying to solve the equation by the method of separation of variables. During the process weencountered a certain eigenvalue problem and found the eigenfunctions Xn(x). We then found theeigenfunction decomposition of the initial temperature f (x) = u(x, 0) in terms of the eigenfunctions

f (x) =1

X

n=1

cnXn(x).

211

Page 212: Urban Illiois

212 CHAPTER 5. EIGENVALUE PROBLEMS

Once we had this decomposition and once we found suitable Tn(t) such that Tn(0) = 1, we notedthat a solution to the original problem could be written as

u(x, t) =1

X

n=1

cnTn(t)Xn(x).

We will try to solve more general problems using this method. We will study second order linear equations of the form
\[
\frac{d}{dx}\left(p(x)\frac{dy}{dx}\right) - q(x)y + \lambda r(x) y = 0. \tag{5.1}
\]
Essentially any second order linear equation of the form $a(x)y'' + b(x)y' + c(x)y + \lambda d(x)y = 0$ can be written as (5.1) after multiplying by a proper factor.

Example 5.1.1 (Bessel):
\[
x^2 y'' + x y' + \left(\lambda x^2 - n^2\right) y = 0.
\]
Multiply both sides by $\frac{1}{x}$ to obtain
\[
0 = \frac{1}{x}\left(x^2 y'' + x y' + \left(\lambda x^2 - n^2\right) y\right)
= x y'' + y' + \left(\lambda x - \frac{n^2}{x}\right) y
= \frac{d}{dx}\left(x \frac{dy}{dx}\right) - \frac{n^2}{x}\, y + \lambda x y.
\]

We can state the general Sturm-Liouville problem*. We seek nontrivial solutions to
\[
\begin{aligned}
&\frac{d}{dx}\left(p(x)\frac{dy}{dx}\right) - q(x)y + \lambda r(x) y = 0, \qquad a < x < b,\\
&\alpha_1 y(a) - \alpha_2 y'(a) = 0,\\
&\beta_1 y(b) + \beta_2 y'(b) = 0.
\end{aligned}
\tag{5.2}
\]
In particular, we seek $\lambda$s that allow for nontrivial solutions. The $\lambda$s for which there is a nontrivial solution are called the eigenvalues, and the corresponding nontrivial solutions are called eigenfunctions. Obviously $\alpha_1$ and $\alpha_2$ should not both be zero, and the same goes for $\beta_1$ and $\beta_2$.

Theorem 5.1.1. Suppose $p(x)$, $p'(x)$, $q(x)$ and $r(x)$ are continuous on $[a,b]$, and suppose $p(x) > 0$ and $r(x) > 0$ for all $x$ in $[a,b]$. Then the Sturm-Liouville problem (5.2) has an increasing sequence of eigenvalues
\[
\lambda_1 < \lambda_2 < \lambda_3 < \cdots
\]
such that
\[
\lim_{n\to\infty} \lambda_n = +\infty,
\]
and such that to each $\lambda_n$ there corresponds (up to a constant multiple) a single eigenfunction $y_n(x)$.

Moreover, if $q(x) \ge 0$ and $\alpha_1, \alpha_2, \beta_1, \beta_2 \ge 0$, then $\lambda_n \ge 0$ for all $n$.

*Named after the French mathematicians Jacques Charles François Sturm (1803 – 1855) and Joseph Liouville (1809 – 1882).

Note: Be careful about the signs. Also be careful about the inequalities for $r$ and $p$: they must be strict for all $x$! Problems satisfying the hypotheses of the theorem are called regular Sturm-Liouville problems, and we will only consider such problems here. That is, a regular problem is one where $p(x)$, $p'(x)$, $q(x)$ and $r(x)$ are continuous, $p(x) > 0$, $r(x) > 0$, $q(x) \ge 0$, and $\alpha_1, \alpha_2, \beta_1, \beta_2 \ge 0$.

When zero is an eigenvalue, we will usually start labeling the eigenvalues at 0 rather than 1, for convenience.

Example 5.1.2: The problem $y'' + \lambda y = 0$, $0 < x < L$, $y(0) = 0$, and $y(L) = 0$ is a regular Sturm-Liouville problem. Here $p(x) = 1$, $q(x) = 0$, $r(x) = 1$, and we have $p(x) = 1 > 0$ and $r(x) = 1 > 0$. The eigenvalues are $\lambda_n = \frac{n^2\pi^2}{L^2}$ and the eigenfunctions are $y_n(x) = \sin\left(\frac{n\pi}{L} x\right)$. All eigenvalues are nonnegative, as predicted by the theorem.

Exercise 5.1.1: Find eigenvalues and eigenfunctions for
\[
y'' + \lambda y = 0, \quad y'(0) = 0, \quad y'(1) = 0.
\]
Identify the $p, q, r, \alpha_j, \beta_j$. Can you use the theorem to make the search for eigenvalues easier?

Example 5.1.3: Find eigenvalues and eigenfunctions of the problem
\[
\begin{aligned}
&y'' + \lambda y = 0, \quad 0 < x < 1,\\
&h y(0) - y'(0) = 0, \quad y'(1) = 0, \quad h > 0.
\end{aligned}
\]
These equations give a regular Sturm-Liouville problem.

Exercise 5.1.2: Identify $p, q, r, \alpha_j, \beta_j$ in the example above.

First note that $\lambda \ge 0$ by Theorem 5.1.1. Therefore, the general solution (without boundary conditions) is
\[
\begin{aligned}
&y(x) = A \cos \sqrt{\lambda}\, x + B \sin \sqrt{\lambda}\, x \quad \text{if } \lambda > 0,\\
&y(x) = Ax + B \quad \text{if } \lambda = 0.
\end{aligned}
\]
Let us see if $\lambda = 0$ is an eigenvalue: We must satisfy $0 = hB - A$ and $A = 0$, hence $B = 0$ (as $h > 0$); therefore, 0 is not an eigenvalue (there is no eigenfunction).

Now let us try $\lambda > 0$. We plug in the boundary conditions:
\[
\begin{aligned}
&0 = hA - \sqrt{\lambda}\, B,\\
&0 = -A\sqrt{\lambda} \sin \sqrt{\lambda} + B\sqrt{\lambda} \cos \sqrt{\lambda}.
\end{aligned}
\]
Note that if $A = 0$, then $B = 0$ and vice versa; hence both are nonzero. So $B = \frac{hA}{\sqrt{\lambda}}$, and $0 = -A\sqrt{\lambda} \sin \sqrt{\lambda} + \frac{hA}{\sqrt{\lambda}} \sqrt{\lambda} \cos \sqrt{\lambda}$. As $A \ne 0$ we get
\[
0 = -\sqrt{\lambda} \sin \sqrt{\lambda} + h \cos \sqrt{\lambda},
\]

or
\[
\frac{h}{\sqrt{\lambda}} = \tan \sqrt{\lambda}.
\]
Now use a computer to find $\lambda_n$. There are tables available, though using a computer or a graphing calculator will probably be far more convenient nowadays. The easiest method is to plot the functions $h/x$ and $\tan x$ and see for which $x$ they intersect. There will be an infinite number of intersections. Denote by $\sqrt{\lambda_1}$ the first intersection, by $\sqrt{\lambda_2}$ the second intersection, etc. For example, when $h = 1$, we get that $\sqrt{\lambda_1} \approx 0.86$ and $\sqrt{\lambda_2} \approx 3.43$. A plot for $h = 1$ is given in Figure 5.1. The appropriate eigenfunction (letting $A = 1$ for convenience, so that $B = \frac{h}{\sqrt{\lambda}}$) is
\[
y_n(x) = \cos \sqrt{\lambda_n}\, x + \frac{h}{\sqrt{\lambda_n}} \sin \sqrt{\lambda_n}\, x.
\]
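In code, "use a computer" can look like the following sketch (ours, not from the text), which brackets one root of $\tan s = h/s$ on each branch of the tangent; it assumes SciPy is available, and it reports $\lambda_n = s_n^2$, the squares of the intersection points plotted in Figure 5.1:

```python
# Numerically find the first few eigenvalues of Example 5.1.3 by solving
#   tan(s) = h / s,  with s = sqrt(lambda), one root per branch of tan.
# Illustrative sketch only.
import numpy as np
from scipy.optimize import brentq

def eigenvalues(h, count=5):
    eps = 1e-9
    roots = []
    for k in range(count):
        lo = k * np.pi + eps               # branch of tan on (k pi, k pi + pi/2)
        hi = k * np.pi + np.pi / 2 - eps
        s = brentq(lambda s: np.tan(s) - h / s, lo, hi)
        roots.append(s**2)
    return roots

print(eigenvalues(1.0, 3))  # about [0.74, 11.73, ...], i.e. (0.86)^2, (3.43)^2, ...
```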

Figure 5.1: Plot of $\frac{1}{x}$ and $\tan x$.

5.1.2 Orthogonality

We have seen the notion of orthogonality before. For example, we have shown that $\sin nx$ are orthogonal for distinct $n$ on $[0,\pi]$. For general Sturm-Liouville problems we will need a more general setup. Let $r(x)$ be a weight function (any function, though generally we will assume it is positive) on $[a,b]$. Then two functions $f(x)$, $g(x)$ are said to be orthogonal with respect to the weight function $r(x)$ when
\[
\int_a^b f(x)\, g(x)\, r(x)\,dx = 0.
\]
In this setting, we define the inner product as
\[
\langle f, g \rangle \overset{\text{def}}{=} \int_a^b f(x)\, g(x)\, r(x)\,dx,
\]
and then say $f$ and $g$ are orthogonal whenever $\langle f, g \rangle = 0$. The results and concepts are again analogous to finite dimensional linear algebra.

The idea of the given inner product is that those $x$ where $r(x)$ is greater have more weight. Nontrivial (nonconstant) $r(x)$ arise naturally, for example from a change of variables. Hence, you could think of a change of variables such that $d\xi = r(x)\,dx$.

We have the following orthogonality property of eigenfunctions of a regular Sturm-Liouville problem.

Theorem 5.1.2. Suppose we have a regular Sturm-Liouville problem
\[
\begin{aligned}
&\frac{d}{dx}\left(p(x)\frac{dy}{dx}\right) - q(x)y + \lambda r(x) y = 0,\\
&\alpha_1 y(a) - \alpha_2 y'(a) = 0,\\
&\beta_1 y(b) + \beta_2 y'(b) = 0.
\end{aligned}
\]
Let $y_j$ and $y_k$ be two distinct eigenfunctions for two distinct eigenvalues $\lambda_j$ and $\lambda_k$. Then
\[
\int_a^b y_j(x)\, y_k(x)\, r(x)\,dx = 0;
\]
that is, $y_j$ and $y_k$ are orthogonal with respect to the weight function $r$.

The proof is very similar to that of the analogous theorem from § 4.1. It can also be found in many books, including, for example, Edwards and Penney [EP].

5.1.3 Fredholm alternative

We also have the Fredholm alternative theorem we talked about before for all regular Sturm-Liouville problems. We state it here for completeness.

Theorem 5.1.3 (Fredholm alternative). Suppose that we have a regular Sturm-Liouville problem. Then either
\[
\begin{aligned}
&\frac{d}{dx}\left(p(x)\frac{dy}{dx}\right) - q(x)y + \lambda r(x) y = 0,\\
&\alpha_1 y(a) - \alpha_2 y'(a) = 0,\\
&\beta_1 y(b) + \beta_2 y'(b) = 0,
\end{aligned}
\]
has a nonzero solution, or
\[
\begin{aligned}
&\frac{d}{dx}\left(p(x)\frac{dy}{dx}\right) - q(x)y + \lambda r(x) y = f(x),\\
&\alpha_1 y(a) - \alpha_2 y'(a) = 0,\\
&\beta_1 y(b) + \beta_2 y'(b) = 0,
\end{aligned}
\]
has a unique solution for any $f(x)$ continuous on $[a,b]$.

This theorem is used in much the same way as we did before in § 4.4. It is used when solving more general nonhomogeneous boundary value problems. The theorem does not actually help us solve the problem, but it tells us when a unique solution exists, so that we know when to spend time looking for a solution. To solve the problem, we decompose $f(x)$ and $y(x)$ in terms of the eigenfunctions of the homogeneous problem, and then solve for the coefficients of the series for $y(x)$.

5.1.4 Eigenfunction series

What we want to do with the eigenfunctions once we have them is to compute the eigenfunction decomposition of an arbitrary function f(x). That is, we wish to write

f(x) = \sum_{n=1}^{\infty} c_n\, y_n(x), \qquad (5.3)

where the y_n(x) are the eigenfunctions. We wish to find out if we can represent any function f(x) in this way, and if so, we wish to calculate c_n (and of course we would want to know if the sum converges). OK, so imagine we could write f(x) as above. We will assume convergence and the ability to integrate the series term by term. Because of orthogonality we have

\langle f, y_m \rangle = \int_a^b f(x)\, y_m(x)\, r(x)\, dx = \sum_{n=1}^{\infty} c_n \int_a^b y_n(x)\, y_m(x)\, r(x)\, dx = c_m \int_a^b y_m(x)\, y_m(x)\, r(x)\, dx = c_m \langle y_m, y_m \rangle.

Hence,

c_m = \frac{\langle f, y_m \rangle}{\langle y_m, y_m \rangle} = \frac{\int_a^b f(x)\, y_m(x)\, r(x)\, dx}{\int_a^b \big( y_m(x) \big)^2\, r(x)\, dx}. \qquad (5.4)


Note that the y_m are known up to a constant multiple, so we could have picked a scalar multiple of each eigenfunction such that \langle y_m, y_m \rangle = 1 (if we had an arbitrary eigenfunction y_m, divide it by \sqrt{\langle y_m, y_m \rangle}). In the case that \langle y_m, y_m \rangle = 1 we would have the simpler form c_m = \langle f, y_m \rangle, as we essentially did for the Fourier series. The following theorem holds more generally, but the statement given is enough for our purposes.

Theorem 5.1.4. Suppose f is a piecewise smooth continuous function on [a, b]. If y_1, y_2, \ldots are the eigenfunctions of a regular Sturm-Liouville problem, then there exist real constants c_1, c_2, \ldots given by (5.4) such that (5.3) converges and holds for a < x < b.

Example 5.1.4: Take the simple Sturm-Liouville problem

y'' + \lambda y = 0, \quad 0 < x < \frac{\pi}{2},
y(0) = 0, \quad y'\!\left(\frac{\pi}{2}\right) = 0.

The above is a regular problem, and furthermore we actually know by Theorem 5.1.1 on page 212 that \lambda \geq 0.

Suppose \lambda = 0. Then the general solution is y(x) = Ax + B. We plug in the boundary conditions to get 0 = y(0) = B and 0 = y'(\frac{\pi}{2}) = A; hence \lambda = 0 is not an eigenvalue.

The general solution, therefore, is

y(x) = A \cos \sqrt{\lambda}\, x + B \sin \sqrt{\lambda}\, x.

Plugging in the boundary conditions, we get 0 = y(0) = A and 0 = y'(\frac{\pi}{2}) = \sqrt{\lambda}\, B \cos\!\left( \sqrt{\lambda}\, \frac{\pi}{2} \right). B cannot be zero, and hence \cos\!\left( \sqrt{\lambda}\, \frac{\pi}{2} \right) = 0. This means that \sqrt{\lambda}\, \frac{\pi}{2} must be an odd integer multiple of \frac{\pi}{2}, i.e. (2n-1)\frac{\pi}{2} = \sqrt{\lambda_n}\, \frac{\pi}{2}. Hence

\lambda_n = (2n-1)^2.

We can take B = 1, and hence our eigenfunctions are

y_n(x) = \sin\big( (2n-1) x \big).

We finally compute

\int_0^{\pi/2} \Big( \sin\big( (2n-1) x \big) \Big)^2\, dx = \frac{\pi}{4}.

So any piecewise smooth function on [0, \frac{\pi}{2}] can be written as

f(x) = \sum_{n=1}^{\infty} c_n \sin\big( (2n-1) x \big),

where

c_n = \frac{\langle f, y_n \rangle}{\langle y_n, y_n \rangle} = \frac{\int_0^{\pi/2} f(x) \sin\big( (2n-1) x \big)\, dx}{\int_0^{\pi/2} \Big( \sin\big( (2n-1) x \big) \Big)^2\, dx} = \frac{4}{\pi} \int_0^{\pi/2} f(x) \sin\big( (2n-1) x \big)\, dx.

Note that the series converges to an odd 2\pi-periodic (not \pi-periodic!) extension of f(x).
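To make the coefficient formula concrete, here is a small Python sketch (assuming NumPy and SciPy; the test function f(x) = x is an arbitrary choice of ours, not from the text):

import numpy as np
from scipy.integrate import quad

def f(x):
    return x          # arbitrary test function on [0, pi/2]

def c(n):
    # c_n = (4/pi) * integral_0^{pi/2} f(x) sin((2n-1)x) dx
    val, _ = quad(lambda x: f(x) * np.sin((2 * n - 1) * x), 0.0, np.pi / 2)
    return 4.0 / np.pi * val

def partial_sum(x, terms=50):
    return sum(c(n) * np.sin((2 * n - 1) * x) for n in range(1, terms + 1))

x = 0.7
print(f(x), partial_sum(x))   # the two values should be close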


Exercise 5.1.3 (challenging): In the above example, the function is defined on 0 < x < \frac{\pi}{2}, yet the series converges to an odd 2\pi-periodic extension of f(x). Find out how the extension is defined for \frac{\pi}{2} < x < \pi.

5.1.5 Exercises

Exercise 5.1.4: Find eigenvalues and eigenfunctions of

y'' + \lambda y = 0, \quad y(0) - y'(0) = 0, \quad y(1) = 0.

Exercise 5.1.5: Expand the function f(x) = x on 0 \leq x \leq 1 using the eigenfunctions of the system

y'' + \lambda y = 0, \quad y'(0) = 0, \quad y(1) = 0.

Exercise 5.1.6: Suppose that you had a Sturm-Liouville problem on the interval [0, 1] and came up with y_n(x) = \sin(\gamma n x), where \gamma > 0 is some constant. Decompose f(x) = x, 0 < x < 1, in terms of these eigenfunctions.

Exercise 5.1.7: Find eigenvalues and eigenfunctions of

y^{(4)} + \lambda y = 0, \quad y(0) = 0, \quad y'(0) = 0, \quad y(1) = 0, \quad y'(1) = 0.

This problem is not a Sturm-Liouville problem, but the idea is the same.

Exercise 5.1.8 (more challenging): Find eigenvalues and eigenfunctions for

\frac{d}{dx}\left( e^x y' \right) + \lambda e^x y = 0, \quad y(0) = 0, \quad y(1) = 0.

Hint: First write the system as a constant coefficient system to find the general solutions. Do note that Theorem 5.1.1 on page 212 guarantees \lambda \geq 0.


5.2 Application of eigenfunction series

Note: 1 lecture, §10.2 in EP

The eigenfunction series can arise even from higher order equations. Suppose we have an elastic beam (say made of steel). We will study the transversal vibrations of the beam. That is, assume the beam lies along the x axis and let y(x, t) measure the displacement of the point x on the beam at time t. See Figure 5.2.

Figure 5.2: Transversal vibrations of a beam.

The equation that governs this setup is

a^4 \frac{\partial^4 y}{\partial x^4} + \frac{\partial^2 y}{\partial t^2} = 0,

for some constant a (a^4 = EI/\rho in EP). Suppose the beam is of length 1, simply supported (hinged) at the ends. Suppose the beam is displaced by some function f(x) at time t = 0 and then let go (initial velocity is 0). Then y satisfies:

a^4 y_{xxxx} + y_{tt} = 0 \quad (0 < x < 1,\ t > 0),
y(0,t) = y_{xx}(0,t) = 0,
y(1,t) = y_{xx}(1,t) = 0,
y(x,0) = f(x), \quad y_t(x,0) = 0.
\qquad (5.5)

Again we try y(x,t) = X(x) T(t) and plug in to get a^4 X^{(4)} T + X T'' = 0, or, as usual,

\frac{X^{(4)}}{X} = \frac{-T''}{a^4 T} = \lambda.

We note that we want T'' + \lambda a^4 T = 0. Let us assume that \lambda > 0. We can argue that we expect vibration and not exponential growth nor decay in the t direction (there is no friction in our model, for instance). Similarly, \lambda = 0 will not occur.


Exercise 5.2.1: Try to justify \lambda > 0 just from the equations.

Write \omega^4 = \lambda, so that we do not need to write the fourth root all the time. For X we get the equation X^{(4)} - \omega^4 X = 0. The general solution is

X(x) = A e^{\omega x} + B e^{-\omega x} + C \sin \omega x + D \cos \omega x.

Now 0 = X(0) = A + B + D and 0 = X''(0) = \omega^2 (A + B - D). Hence, D = 0 and A + B = 0, or B = -A. So we have

X(x) = A e^{\omega x} - A e^{-\omega x} + C \sin \omega x.

Now 0 = X(1) = A(e^{\omega} - e^{-\omega}) + C \sin \omega, and 0 = X''(1) = A \omega^2 (e^{\omega} - e^{-\omega}) - C \omega^2 \sin \omega. Dividing the second equation by \omega^2 and adding and subtracting the two equations, we see that C \sin \omega = 0 and A(e^{\omega} - e^{-\omega}) = 2A \sinh \omega = 0. If \omega > 0, then \sinh \omega \neq 0 and so A = 0. This means that C \neq 0, else we do not have an eigenvalue. Then \sin \omega must be zero, so \omega must be an integer multiple of \pi; hence \omega = n\pi with n \geq 1 (as \omega > 0). We can take C = 1. So the eigenvalues are \lambda_n = n^4 \pi^4 and the eigenfunctions are \sin n\pi x.

Now T'' + n^4 \pi^4 a^4 T = 0. The general solution is T(t) = A \sin n^2 \pi^2 a^2 t + B \cos n^2 \pi^2 a^2 t. But T'(0) = 0, hence we must have A = 0, and we can take B = 1 to make T(0) = 1 for convenience. So our solutions are T_n(t) = \cos n^2 \pi^2 a^2 t.

Since the eigenfunctions are just sines again, we can decompose the function f(x) on 0 < x < 1 using the sine series. That is, we write (you know how to do this by now)

f(x) = \sum_{n=1}^{\infty} b_n \sin n\pi x.

Then the solution to (5.5) is

y(x,t) = \sum_{n=1}^{\infty} b_n X_n(x) T_n(t) = \sum_{n=1}^{\infty} b_n (\sin n\pi x) \left( \cos n^2 \pi^2 a^2 t \right).

The point is that X_n T_n is a solution that satisfies all the homogeneous conditions (that is, all conditions except the initial position). And since T_n(0) = 1, we have

y(x,0) = \sum_{n=1}^{\infty} b_n X_n(x) T_n(0) = \sum_{n=1}^{\infty} b_n X_n(x) = \sum_{n=1}^{\infty} b_n \sin n\pi x = f(x).

So y(x,t) solves (5.5).

Note that the natural (circular) frequencies of the system are n^2 \pi^2 a^2. These frequencies are all integer multiples of the fundamental frequency \pi^2 a^2, so we will get a nice musical note. The exact frequencies and their amplitudes are what we call the timbre of the note.

The timbre of a beam is different from that of a vibrating string, where we will get "more" of the smaller frequencies since we will get all integer multiples, 1, 2, 3, 4, 5, . . . For a steel beam we will get only the square multiples 1, 4, 9, 16, 25, . . . That is why when you hit a steel beam you hear a very pure sound. The sound of a xylophone or vibraphone is, therefore, very different from that of a guitar or piano.


Example 5.2.1: Let us assume that f(x) = \frac{x(x-1)}{10}. On 0 < x < 1 we have (you know how to do this by now)

f(x) = \sum_{\substack{n=1 \\ n\ \text{odd}}}^{\infty} \frac{-4}{5 \pi^3 n^3} \sin n\pi x.

Hence, the solution to (5.5) with the given initial position f(x) is

y(x,t) = \sum_{\substack{n=1 \\ n\ \text{odd}}}^{\infty} \frac{-4}{5 \pi^3 n^3} (\sin n\pi x) \left( \cos n^2 \pi^2 a^2 t \right).
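A minimal Python sketch evaluating a partial sum of this series (assuming NumPy; the value a = 1 is an arbitrary illustrative choice):

import numpy as np

a = 1.0   # material constant; an arbitrary illustrative value

def y(x, t, terms=99):
    # Partial sum of the series solution for f(x) = x(x-1)/10.
    total = 0.0
    for n in range(1, terms + 1, 2):          # odd n only
        total += (-4.0 / (5 * np.pi**3 * n**3)) * np.sin(n * np.pi * x) \
                 * np.cos(n**2 * np.pi**2 * a**2 * t)
    return total

print(y(0.5, 0.0))   # close to f(0.5) = 0.5*(0.5 - 1)/10 = -0.025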

5.2.1 Exercises

Exercise 5.2.2: Suppose you have a beam of length 5 with free ends. Let y be the transverse deviation of the beam at position x on the beam (0 < x < 5). You know that the constants are such that this satisfies the equation y_{tt} + 4 y_{xxxx} = 0. Suppose you know that the initial shape of the beam is the graph of x(5-x), and the initial velocity is uniformly equal to 2 (same for each x) in the positive y direction. Set up the equation together with the boundary and initial conditions. Just set up, do not solve.

Exercise 5.2.3: Suppose you have a beam of length 5 with one end free and one end fixed (the fixed end is at x = 5). Let u be the longitudinal deviation of the beam at position x on the beam (0 < x < 5). You know that the constants are such that this satisfies the equation u_{tt} = 4 u_{xx}. Suppose you know that the initial displacement of the beam is \frac{x-5}{50}, and the initial velocity is \frac{-(x-5)}{100} in the positive u direction. Set up the equation together with the boundary and initial conditions. Just set up, do not solve.

Exercise 5.2.4: Suppose the beam is L units long, everything else kept the same as in (5.5). What are the equation and the series solution?

Exercise 5.2.5: Suppose you have

a^4 y_{xxxx} + y_{tt} = 0 \quad (0 < x < 1,\ t > 0),
y(0,t) = y_{xx}(0,t) = 0,
y(1,t) = y_{xx}(1,t) = 0,
y(x,0) = f(x), \quad y_t(x,0) = g(x).

That is, you also have an initial velocity. Find a series solution. Hint: use the same idea as we did for the wave equation.


5.3 Steady periodic solutions

Note: 1–2 lectures, §10.3 in EP

5.3.1 Forced vibrating string

Suppose that we have a guitar string of length L. We have studied the wave equation problem in this case, where x was the position on the string, t was time, and y was the displacement of the string. See Figure 5.3.

Figure 5.3: Vibrating string.

The problem is governed by the equations

y_{tt} = a^2 y_{xx},
y(0,t) = 0, \quad y(L,t) = 0,
y(x,0) = f(x), \quad y_t(x,0) = g(x).
\qquad (5.6)

We saw previously that the solution is of the form

y = \sum_{n=1}^{\infty} \left( A_n \cos \frac{n\pi a}{L} t + B_n \sin \frac{n\pi a}{L} t \right) \sin \frac{n\pi}{L} x,

where A_n and B_n were determined by the initial conditions. The natural frequencies of the system are the (circular) frequencies \frac{n\pi a}{L} for integers n \geq 1.

But these are free vibrations. What if there is an external force acting on the string? Let us assume, say, air vibrations (noise), for example from a second string, or perhaps a jet engine. For simplicity, assume a nice pure sound, and assume the force is uniform at every position on the string. Let us say F(t) = F_0 \cos \omega t as force per unit mass. Then our wave equation becomes (remember force per unit mass is acceleration)

y_{tt} = a^2 y_{xx} + F_0 \cos \omega t, \qquad (5.7)

with the same boundary conditions of course.


We will want to find the solution here that satisfies the above equation and

y(0,t) = 0, \quad y(L,t) = 0, \quad y(x,0) = 0, \quad y_t(x,0) = 0. \qquad (5.8)

That is, the string is initially at rest. First we find a particular solution y_p of (5.7) that satisfies y(0,t) = y(L,t) = 0. We define the functions f and g as

f(x) = -y_p(x,0), \qquad g(x) = -\frac{\partial y_p}{\partial t}(x,0).

We then find the solution y_c of (5.6). If we add the two solutions, we find that y = y_c + y_p solves (5.7) with the initial conditions.

Exercise 5.3.1: Check that y = y_c + y_p solves (5.7) and the side conditions (5.8).

So the big issue here is to find the particular solution y_p. We look at the equation and we make an educated guess

y_p(x,t) = X(x) \cos \omega t.

We plug in to get

-\omega^2 X \cos \omega t = a^2 X'' \cos \omega t + F_0 \cos \omega t,

or -\omega^2 X = a^2 X'' + F_0 after cancelling the cosine. We know how to find a general solution to this equation (it is a nonhomogeneous constant coefficient equation), and we get that the general solution is

X(x) = A \cos \frac{\omega}{a} x + B \sin \frac{\omega}{a} x - \frac{F_0}{\omega^2}.

The endpoint conditions imply that X(0) = X(L) = 0, so

0 = X(0) = A - \frac{F_0}{\omega^2},

or A = \frac{F_0}{\omega^2}, and

0 = X(L) = \frac{F_0}{\omega^2} \cos \frac{\omega L}{a} + B \sin \frac{\omega L}{a} - \frac{F_0}{\omega^2}.

Assuming that \sin \frac{\omega L}{a} is not zero, we can solve for B to get

B = \frac{-F_0 \left( \cos \frac{\omega L}{a} - 1 \right)}{\omega^2 \sin \frac{\omega L}{a}}. \qquad (5.9)

Therefore,

X(x) = \frac{F_0}{\omega^2} \left( \cos \frac{\omega}{a} x - \frac{\cos \frac{\omega L}{a} - 1}{\sin \frac{\omega L}{a}} \sin \frac{\omega}{a} x - 1 \right).

The particular solution y_p we are looking for is

y_p(x,t) = \frac{F_0}{\omega^2} \left( \cos \frac{\omega}{a} x - \frac{\cos \frac{\omega L}{a} - 1}{\sin \frac{\omega L}{a}} \sin \frac{\omega}{a} x - 1 \right) \cos \omega t.


Exercise 5.3.2: Check that y_p works.

Now we get to the point that we skipped. Suppose that \sin \frac{\omega L}{a} = 0. What this means is that \omega is equal to one of the natural frequencies of the system, i.e. a multiple of \frac{\pi a}{L}. We notice that if \omega is not equal to a multiple of the base frequency, but is very close, then the coefficient B in (5.9) seems to become very large. But let us not jump to conclusions just yet. When \omega = \frac{n\pi a}{L} for n even, then \cos \frac{\omega L}{a} = 1, and hence we really get that B = 0. So resonance occurs only when both \cos \frac{\omega L}{a} = -1 and \sin \frac{\omega L}{a} = 0. That is, when \omega = \frac{n\pi a}{L} for odd n.

We could again solve for the resonance solution if we wanted to, but it is, in the right sense, the limit of the solutions as \omega gets close to a resonance frequency. In real life, pure resonance never occurs anyway.

The above calculation explains why a string will begin to vibrate if an identical string is plucked close by. In the absence of friction this vibration would get louder and louder as time goes on. On the other hand, you are unlikely to get a large vibration if the forcing frequency is not close to a resonance frequency, even if you have a jet engine running close to the string. That is, the amplitude will not keep increasing unless you tune to just the right frequency.

Similar resonance phenomena occur when you break a wine glass using the human voice (yes, this is possible, but not easy†) if you happen to hit just the right frequency. Remember, a glass has a much purer sound, i.e. it is more like a vibraphone, so there are far fewer resonance frequencies to hit.

When the forcing function is more complicated, you decompose it in terms of the Fourier series and apply the above result. You may also need to solve the above problem if the forcing function is a sine rather than a cosine, but if you think about it, the solution is almost the same.

Example 5.3.1: Let us do the computation for specific values. Suppose F_0 = 1, \omega = 1, L = 1, and a = 1. Then

y_p(x,t) = \left( \cos x - \frac{\cos 1 - 1}{\sin 1} \sin x - 1 \right) \cos t.

Call B = \frac{\cos 1 - 1}{\sin 1} for simplicity. Then plug in t = 0 to get

f(x) = -y_p(x,0) = -\cos x + B \sin x + 1,

and after differentiating in t we see that g(x) = -\frac{\partial y_p}{\partial t}(x,0) = 0.

Hence, to find y_c we need to solve the problem

y_{tt} = y_{xx},
y(0,t) = 0, \quad y(1,t) = 0,
y(x,0) = -\cos x + B \sin x + 1,
y_t(x,0) = 0.

†Mythbusters, episode 31, Discovery Channel, originally aired May 18, 2005.


Note that the formula that we use to define y(x,0) is not odd, hence it is not a simple matter of plugging in to apply the D'Alembert formula directly! You must define F to be the odd, 2-periodic extension of y(x,0). Then our solution would look like

y(x,t) = \frac{F(x+t) + F(x-t)}{2} + \left( \cos x - \frac{\cos 1 - 1}{\sin 1} \sin x - 1 \right) \cos t. \qquad (5.10)

Figure 5.4: Plot of y(x,t) = \frac{F(x+t)+F(x-t)}{2} + \left( \cos x - \frac{\cos 1 - 1}{\sin 1} \sin x - 1 \right) \cos t.

It is not hard to compute specific values for an odd extension of a function, and hence (5.10) is a wonderful solution to the problem. For example, it is very easy to have a computer do it, unlike series solutions. A plot is given in Figure 5.4.
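For instance, a Python sketch of that computation (assuming NumPy; the helper names are ours): build the odd 2-periodic extension F of y(x,0) and evaluate (5.10) directly:

import numpy as np

B = (np.cos(1.0) - 1.0) / np.sin(1.0)

def y0(x):
    # Initial position y(x, 0) on [0, 1].
    return -np.cos(x) + B * np.sin(x) + 1.0

def F(x):
    # Odd, 2-periodic extension of y(x, 0).
    x = ((x + 1.0) % 2.0) - 1.0      # reduce to [-1, 1)
    return y0(x) if x >= 0 else -y0(-x)

def y(x, t):
    # D'Alembert part plus the particular solution, equation (5.10).
    return (F(x + t) + F(x - t)) / 2.0 \
           + (np.cos(x) - B * np.sin(x) - 1.0) * np.cos(t)

print(y(0.5, 2.0))   # displacement of the midpoint at t = 2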

5.3.2 Underground temperature oscillations

Let u(x,t) be the temperature at a certain location at depth x underground at time t. See Figure 5.5.

Figure 5.5: Underground temperature.

The temperature u satisfies the heat equation u_t = k u_{xx}, where k is the diffusivity of the soil. We know the temperature at the surface u(0,t) from weather records. Let us assume for simplicity that

u(0,t) = T_0 + A_0 \cos \omega t,

for some base temperature T_0; then t = 0 is midsummer (we could put a negative sign above to make it midwinter). A_0 is picked properly to make this the typical variation for the year. That is, the hottest temperature is T_0 + A_0 and the coldest is T_0 - A_0. For simplicity, we will assume that T_0 = 0. \omega is picked depending on the units of t, such that when t = 1 year, then \omega t = 2\pi.

It seems reasonable that the temperature at depth x will also oscillate with the same frequency. This, in fact, will be the steady periodic solution, independent of the initial conditions. So we are looking for a solution of the form

u(x,t) = V(x) \cos \omega t + W(x) \sin \omega t

for the problem

u_t = k u_{xx}, \qquad u(0,t) = A_0 \cos \omega t. \qquad (5.11)

We will employ the complex exponential here to make calculations simpler. Suppose we have a complex valued function

h(x,t) = X(x)\, e^{i\omega t}.

We will look for an h such that \operatorname{Re} h = u. To find an h whose real part satisfies (5.11), we look for an h such that

h_t = k h_{xx}, \qquad h(0,t) = A_0 e^{i\omega t}. \qquad (5.12)

Exercise 5.3.3: Suppose h satisfies (5.12). Use Euler's formula for the complex exponential to check that u = \operatorname{Re} h satisfies (5.11).

Substitute h into (5.12):

i\omega X e^{i\omega t} = k X'' e^{i\omega t}.

Hence,

k X'' - i\omega X = 0,


or

X'' - \alpha^2 X = 0,

where \alpha = \pm \sqrt{\frac{i\omega}{k}}. Note that \pm \sqrt{i} = \pm \frac{1+i}{\sqrt{2}}, so you could simplify to \alpha = \pm (1+i) \sqrt{\frac{\omega}{2k}}. Hence the general solution is

X(x) = A e^{-(1+i)\sqrt{\frac{\omega}{2k}}\,x} + B e^{(1+i)\sqrt{\frac{\omega}{2k}}\,x}.

We assume that an X(x) that solves the problem must be bounded as x \to \infty, since u(x,t) should be bounded (we are not worrying about the earth core!). If you use Euler's formula to expand the complex exponentials, you will note that the second term is unbounded (if B \neq 0), while the first term is always bounded. Hence B = 0.

Exercise 5.3.4: Use Euler's formula to show that e^{(1+i)\sqrt{\frac{\omega}{2k}}\,x} is unbounded as x \to \infty, while e^{-(1+i)\sqrt{\frac{\omega}{2k}}\,x} is bounded as x \to \infty.

Furthermore, X(0) = A_0, since h(0,t) = A_0 e^{i\omega t}. Thus A = A_0. This means that

h(x,t) = A_0 e^{-(1+i)\sqrt{\frac{\omega}{2k}}\,x} e^{i\omega t} = A_0 e^{-(1+i)\sqrt{\frac{\omega}{2k}}\,x + i\omega t} = A_0 e^{-\sqrt{\frac{\omega}{2k}}\,x}\, e^{i\left(\omega t - \sqrt{\frac{\omega}{2k}}\,x\right)}.

We will need to get the real part of h, so we apply Euler's formula to get

h(x,t) = A_0 e^{-\sqrt{\frac{\omega}{2k}}\,x} \left( \cos\!\left( \omega t - \sqrt{\frac{\omega}{2k}}\,x \right) + i \sin\!\left( \omega t - \sqrt{\frac{\omega}{2k}}\,x \right) \right).

Then finally

u(x,t) = \operatorname{Re} h(x,t) = A_0 e^{-\sqrt{\frac{\omega}{2k}}\,x} \cos\!\left( \omega t - \sqrt{\frac{\omega}{2k}}\,x \right).

Yay!

Notice that the phase is different at different depths. At depth x the phase is delayed by x\sqrt{\frac{\omega}{2k}}. For example, in cgs units (centimeters, grams, seconds) we have k = 0.005 (a typical value for soil) and \omega = \frac{2\pi}{\text{seconds in a year}} = \frac{2\pi}{31{,}557{,}341} \approx 1.99 \times 10^{-7}. Then if we compute where the phase shift x\sqrt{\frac{\omega}{2k}} = \pi, we find the depth in centimeters where the seasons are reversed. That is, we get the depth at which summer is the coldest and winter is the warmest. We get approximately 700 centimeters, which is approximately 23 feet below ground.

But be careful. The temperature swings decay rapidly as you dig deeper. The amplitude of the temperature swings is A_0 e^{-\sqrt{\frac{\omega}{2k}}\,x}. This decays very quickly as x grows. Let us again take typical parameters as above. We will also assume that our surface temperature swing is \pm 15° Celsius, that is, A_0 = 15. Then the maximum temperature variation at 700 centimeters is only about \pm 0.66° Celsius.

You need not dig very deep to get an effective "refrigerator." This is, for instance, why wine is kept in a cellar: you need a consistent temperature. The temperature differential could also be used for energy. A home could be heated or cooled by taking advantage of the above fact. Even without the earth core you could heat a home in the winter and cool it in the summer. There is also the earth core, so the temperature presumably gets higher the deeper you dig. We did not take that into account above.


5.3.3 Exercises

Exercise 5.3.5: Suppose that the forcing function for the vibrating string is F_0 \sin \omega t. Derive the particular solution y_p.

Exercise 5.3.6: Take the forced vibrating string. Suppose that L = 1, a = 1. Suppose that the forcing function is the square wave that is 1 on the interval 0 < x < 1 and -1 on the interval -1 < x < 0. Find the particular solution. Hint: you may want to use the result of Exercise 5.3.5.

Exercise 5.3.7: The units are cgs (centimeters, grams, seconds). For k = 0.005, \omega = 1.991 \times 10^{-7}, A_0 = 20, find the depth at which the temperature variation is half (\pm 10 degrees) of what it is on the surface.

Exercise 5.3.8: Derive the solution for the underground temperature oscillation without assuming that T_0 = 0.


Chapter 6

The Laplace transform

6.1 The Laplace transform

Note: 2 lectures, §10.1 in EP

6.1.1 The transform

In this chapter we will discuss the Laplace transform*. The Laplace transform turns out to be a very efficient method to solve certain ODE problems. In particular, the transform can take a differential equation and turn it into an algebraic equation. If the algebraic equation can be solved, applying the inverse transform gives us our desired solution. The Laplace transform is also useful in the analysis of certain systems such as electrical circuits, NMR spectroscopy, signal processing, and others. Finally, understanding the Laplace transform will also help with understanding the related Fourier transform, which, however, requires more understanding of complex numbers. We will not cover the Fourier transform.

The Laplace transform also gives a lot of insight into the nature of the equations we are dealing with. It can be seen as converting between the time and the frequency domain. For example, take the standard equation

m x''(t) + c x'(t) + k x(t) = f(t).

We can think of t as time and f(t) as an incoming signal. The Laplace transform will convert the equation from a differential equation in time to an algebraic (no derivatives) equation, where the new independent variable s is the frequency.

We can think of the Laplace transform as a black box. It eats functions and spits out functions in a new variable.

*Just like the Laplace equation and the Laplacian, also named after Pierre-Simon, marquis de Laplace (1749–1827).


We write \mathcal{L}\{ f(t) \} = F(s). It is common to write lower case letters for functions in the time domain and upper case letters for functions in the frequency domain, and to use the same letter to denote that one function is the Laplace transform of the other; for example, F(s) is the Laplace transform of f(t). Let us define the transform:

\mathcal{L}\{ f(t) \} = F(s) \overset{\text{def}}{=} \int_0^\infty e^{-st} f(t)\, dt.

We note that we are only considering t \geq 0 in the transform. Of course, if we think of t as time, there is no problem; we are generally interested in finding out what will happen in the future (the Laplace transform is one place where it is safe to ignore the past). Let us compute the simplest transforms.

Example 6.1.1: Suppose f(t) = 1. Then

\mathcal{L}\{1\} = \int_0^\infty e^{-st}\, dt = \left[ \frac{e^{-st}}{-s} \right]_{t=0}^\infty = \frac{1}{s}.

Of course, the limit only exists if s > 0. So \mathcal{L}\{1\} is only defined for s > 0.

Example 6.1.2: Suppose f(t) = e^{-at}. Then

\mathcal{L}\{ e^{-at} \} = \int_0^\infty e^{-st} e^{-at}\, dt = \int_0^\infty e^{-(s+a)t}\, dt = \left[ \frac{e^{-(s+a)t}}{-(s+a)} \right]_{t=0}^\infty = \frac{1}{s+a}.

Of course, the limit only exists if s + a > 0. So \mathcal{L}\{ e^{-at} \} is only defined for s + a > 0.

Example 6.1.3: Suppose f(t) = t. Then, using integration by parts,

\mathcal{L}\{t\} = \int_0^\infty e^{-st}\, t\, dt
= \left[ \frac{-t e^{-st}}{s} \right]_{t=0}^\infty + \frac{1}{s} \int_0^\infty e^{-st}\, dt
= 0 + \frac{1}{s} \left[ \frac{e^{-st}}{-s} \right]_{t=0}^\infty
= \frac{1}{s^2}.

Of course, again, the limit only exists if s > 0.

Example 6.1.4: A common function is the unit step function, which is sometimes called the Heaviside function†. This function is generally given as

u(t) = \begin{cases} 0 & \text{if } t < 0, \\ 1 & \text{if } t \geq 0. \end{cases}

†The function is named after Oliver Heaviside (1850–1925). Only by coincidence is the function "heavy" on "one side."


Let us find the Laplace transform of u(t-a), where a \geq 0 is some constant. That is, the function that is 0 for t < a and 1 for t \geq a.

\mathcal{L}\{ u(t-a) \} = \int_0^\infty e^{-st} u(t-a)\, dt = \int_a^\infty e^{-st}\, dt = \left[ \frac{e^{-st}}{-s} \right]_{t=a}^\infty = \frac{e^{-as}}{s},

where of course s > 0 (and a \geq 0 as we said before).

By applying similar procedures we can compute the transforms of many elementary functions. Many basic transforms are listed in Table 6.1.

f(t)             \mathcal{L}\{ f(t) \} = F(s)
C                C/s
t                1/s^2
t^2              2/s^3
t^3              6/s^4
t^n              n!/s^{n+1}
e^{-at}          1/(s+a)
\sin \omega t    \omega/(s^2+\omega^2)
\cos \omega t    s/(s^2+\omega^2)
\sinh \omega t   \omega/(s^2-\omega^2)
\cosh \omega t   s/(s^2-\omega^2)
u(t-a)           e^{-as}/s

Table 6.1: Some Laplace transforms (C, \omega, and a are constants).

Exercise 6.1.1: Verify Table 6.1.
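A few rows of the table can be spot-checked with a computer algebra system; a sketch assuming SymPy is installed:

import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, omega = sp.symbols('a omega', positive=True)

# laplace_transform returns (F(s), convergence abscissa, conditions);
# noconds=True keeps just F(s).
print(sp.laplace_transform(t**2, t, s, noconds=True))             # 2/s**3
print(sp.laplace_transform(sp.exp(-a*t), t, s, noconds=True))     # 1/(a + s)
print(sp.laplace_transform(sp.sin(omega*t), t, s, noconds=True))  # omega/(omega**2 + s**2)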

Since the transform is defined by an integral, we can use the linearity properties of the integral. For example, suppose C is a constant; then

\mathcal{L}\{ C f(t) \} = \int_0^\infty e^{-st} C f(t)\, dt = C \int_0^\infty e^{-st} f(t)\, dt = C \mathcal{L}\{ f(t) \}.

So we can "pull" a constant out of the transform. Similarly we have linearity. Since linearity is very important, we state it as a theorem.

Theorem 6.1.1 (Linearity of the Laplace transform). Suppose that A, B, and C are constants; then

\mathcal{L}\{ A f(t) + B g(t) \} = A \mathcal{L}\{ f(t) \} + B \mathcal{L}\{ g(t) \},

and in particular

\mathcal{L}\{ C f(t) \} = C \mathcal{L}\{ f(t) \}.


Exercise 6.1.2: Verify the theorem. That is, show that \mathcal{L}\{ A f(t) + B g(t) \} = A \mathcal{L}\{ f(t) \} + B \mathcal{L}\{ g(t) \}.

These rules, together with Table 6.1, already make it easy to find the Laplace transform of a whole lot of functions. But it is a common mistake to think that the Laplace transform of a product is the product of the transforms. In general

\mathcal{L}\{ f(t)\, g(t) \} \neq \mathcal{L}\{ f(t) \}\, \mathcal{L}\{ g(t) \}.

It must also be noted that not all functions have a Laplace transform. For example, the function \frac{1}{t} does not have a Laplace transform, as the integral diverges. Similarly, \tan t and e^{t^2} do not have Laplace transforms.

6.1.2 Existence and uniqueness

Let us consider in more detail when the Laplace transform exists. First let us consider functions of exponential order. f(t) is of exponential order as t goes to infinity if

| f(t) | \leq M e^{ct},

for some constants M and c, for sufficiently large t (say, for all t > t_0 for some t_0). The simplest way to check this condition is to try and compute

\lim_{t \to \infty} \frac{f(t)}{e^{ct}}.

If the limit exists and is finite (usually zero), then f(t) is of exponential order.

Exercise 6.1.3: Use L'Hôpital's rule from calculus to show that a polynomial is of exponential order. Hint: note that a sum of two exponential order functions is also of exponential order. Then show that t^n is of exponential order for any n.

For an exponential order function we have existence and uniqueness of the Laplace transform.

Theorem 6.1.2 (Existence). Let f(t) be continuous and of exponential order for a certain constant c. Then F(s) = \mathcal{L}\{ f(t) \} is defined for all s > c.

The transform may also exist for some other functions that are not of exponential order, but that will not be relevant to us. Before dealing with uniqueness, let us also note that for exponential order functions the Laplace transform decays at infinity:

\lim_{s \to \infty} F(s) = 0.

Theorem 6.1.3 (Uniqueness). Let f(t) and g(t) be continuous and of exponential order. Suppose that there exists a constant C such that F(s) = G(s) for all s > C. Then f(t) = g(t) for all t \geq 0.


Both theorems hold for piecewise continuous functions as well. Recall that piecewise continuous means that the function is continuous except perhaps at a discrete set of points, where it has jump discontinuities like the Heaviside function. Uniqueness, however, does not "see" values at the discontinuities. So you can only conclude that f(t) = g(t) outside of discontinuities. For example, the unit step function is sometimes defined using u(0) = \frac{1}{2}. This new step function, however, has the exact same Laplace transform as the one we defined earlier, where u(0) = 1.

6.1.3 The inverse transform

As we said, the Laplace transform will allow us to convert a differential equation into an algebraic equation that we can solve. Once we do solve the algebraic equation in the frequency domain, we will want to get back to the time domain, as that is what we are really interested in. We, therefore, need to also be able to get back. If we have a function F(s), to be able to find an f(t) such that \mathcal{L}\{ f(t) \} = F(s), we need to first know if such a function is unique. It turns out we are in luck by Theorem 6.1.3. So we can without fear make the following definition.

If F(s) = \mathcal{L}\{ f(t) \} for some function f(t), we define the inverse Laplace transform as

\mathcal{L}^{-1}\{ F(s) \} \overset{\text{def}}{=} f(t).

There is an integral formula for the inverse, but it is not as simple as the transform itself (it requires complex numbers). The best way to compute the inverse is to use Table 6.1 on page 231.

Example 6.1.5: Take F(s) = \frac{1}{s+1}. Find the inverse Laplace transform.

We look at the table and we find

\mathcal{L}^{-1}\left\{ \frac{1}{s+1} \right\} = e^{-t}.

We note that because the Laplace transform is linear, the inverse Laplace transform is also linear. That is,

\mathcal{L}^{-1}\{ A F(s) + B G(s) \} = A \mathcal{L}^{-1}\{ F(s) \} + B \mathcal{L}^{-1}\{ G(s) \}.

We can of course also just pull out constants. Let us demonstrate how linearity is used by the following example.

Example 6.1.6: Take F(s) = \frac{s^2+s+1}{s^3+s}. Find the inverse Laplace transform.

First we use the method of partial fractions to write F in a form where we can use Table 6.1 on page 231. We factor the denominator as s(s^2+1) and write

\frac{s^2+s+1}{s^3+s} = \frac{A}{s} + \frac{Bs + C}{s^2+1}.

Hence, A(s^2+1) + s(Bs + C) = s^2 + s + 1. Therefore, A + B = 1, C = 1, and A = 1, so B = 0. In other words,

F(s) = \frac{s^2+s+1}{s^3+s} = \frac{1}{s} + \frac{1}{s^2+1}.


By linearity of the Laplace transform (and thus of its inverse) we get that

\mathcal{L}^{-1}\left\{ \frac{s^2+s+1}{s^3+s} \right\} = \mathcal{L}^{-1}\left\{ \frac{1}{s} \right\} + \mathcal{L}^{-1}\left\{ \frac{1}{s^2+1} \right\} = 1 + \sin t.
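Both the partial fraction step and the whole inversion can be checked with SymPy (a sketch):

import sympy as sp

s, t = sp.symbols('s t', positive=True)
F = (s**2 + s + 1) / (s**3 + s)

print(sp.apart(F, s))                         # 1/s + 1/(s**2 + 1)
x = sp.inverse_laplace_transform(F, s, t)
print(x)   # equals 1 + sin(t) for t >= 0 (SymPy includes Heaviside factors)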

A useful property is the so-called shifting property or the first shifting property:

\mathcal{L}\{ e^{-at} f(t) \} = F(s+a),

where F(s) is the Laplace transform of f(t).

Exercise 6.1.4: Derive this property from the definition.

The shifting property can be used when the denominator is a more complicated quadratic that may come up in the method of partial fractions. You always want to write such quadratics as (s+a)^2 + b by completing the square and then using the shifting property.

Example 6.1.7: Find \mathcal{L}^{-1}\left\{ \frac{1}{s^2+4s+8} \right\}.

First we complete the square to make the denominator (s+2)^2 + 4. Next we find

\mathcal{L}^{-1}\left\{ \frac{1}{s^2+4} \right\} = \frac{1}{2} \sin 2t.

Putting it all together with the shifting property, we find

\mathcal{L}^{-1}\left\{ \frac{1}{s^2+4s+8} \right\} = \mathcal{L}^{-1}\left\{ \frac{1}{(s+2)^2+4} \right\} = \frac{1}{2}\, e^{-2t} \sin 2t.

In general, we want to be able to apply the inverse Laplace transform to rational functions, that is, functions of the form

\frac{F(s)}{G(s)},

where F(s) and G(s) are polynomials. Since normally (for the functions that we are considering) the Laplace transform goes to zero as s \to \infty, it is not hard to see that the degree of F(s) will always be smaller than that of G(s). Such rational functions are called proper rational functions, and we will always be able to apply the method of partial fractions. Of course, this means we will need to be able to factor the denominator into linear and quadratic terms, which involves finding the roots of the denominator.

6.1.4 Exercises

Exercise 6.1.5: Find the Laplace transform of 3 + t^5 + \sin \pi t.

Exercise 6.1.6: Find the Laplace transform of a + bt + ct^2 for some constants a, b, and c.


Exercise 6.1.7: Find the Laplace transform of A \cos \omega t + B \sin \omega t.

Exercise 6.1.8: Find the Laplace transform of \cos^2 \omega t.

Exercise 6.1.9: Find the inverse Laplace transform of \frac{4}{s^2-9}.

Exercise 6.1.10: Find the inverse Laplace transform of \frac{2s}{s^2-1}.

Exercise 6.1.11: Find the inverse Laplace transform of \frac{1}{(s-1)^2 (s+1)}.


6.2 Transforms of derivatives and ODEs

Note: 2 lectures, §7.2–7.3 in EP

6.2.1 Transforms of derivatives

Let us see how the Laplace transform is used for differential equations. First let us try to find the Laplace transform of a function that is a derivative. That is, suppose g(t) is a continuously differentiable function of exponential order.

\mathcal{L}\{ g'(t) \} = \int_0^\infty e^{-st} g'(t)\, dt = \left[ e^{-st} g(t) \right]_{t=0}^\infty - \int_0^\infty (-s)\, e^{-st} g(t)\, dt = -g(0) + s \mathcal{L}\{ g(t) \}.

We can keep doing this procedure for higher derivatives. The results are listed in Table 6.2. The procedure also works for piecewise smooth functions, that is, functions that are piecewise continuous with a piecewise continuous derivative. The fact that the function is of exponential order is used to show that the limits appearing above exist. We will not worry much about this fact.

f(t)        \mathcal{L}\{ f(t) \} = F(s)
g'(t)       s G(s) - g(0)
g''(t)      s^2 G(s) - s g(0) - g'(0)
g'''(t)     s^3 G(s) - s^2 g(0) - s g'(0) - g''(0)

Table 6.2: Laplace transforms of derivatives (G(s) = \mathcal{L}\{ g(t) \} as usual).

Exercise 6.2.1: Verify Table 6.2.

6.2.2 Solving ODEs with the Laplace transform

If you notice, the Laplace transform turns differentiation essentially into multiplication by s. Let us see how to apply this to differential equations.

Example 6.2.1: Take the equation

x''(t) + x(t) = \cos 2t, \quad x(0) = 0, \quad x'(0) = 1.

We will take the Laplace transform of both sides. By X(s) we will, as usual, denote the Laplace transform of x(t).

\mathcal{L}\{ x''(t) + x(t) \} = \mathcal{L}\{ \cos 2t \},
s^2 X(s) - s\, x(0) - x'(0) + X(s) = \frac{s}{s^2+4}.


We can plug in the initial conditions now (this will make computations more streamlined) to obtain

s^2 X(s) - 1 + X(s) = \frac{s}{s^2+4}.

We now solve for X(s):

X(s) = \frac{s}{(s^2+1)(s^2+4)} + \frac{1}{s^2+1}.

We use partial fractions (exercise) to write

X(s) = \frac{1}{3}\, \frac{s}{s^2+1} - \frac{1}{3}\, \frac{s}{s^2+4} + \frac{1}{s^2+1}.

Now take the inverse Laplace transform to obtain

x(t) = \frac{1}{3} \cos t - \frac{1}{3} \cos 2t + \sin t.
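The same bookkeeping can be automated. A SymPy sketch of this example (the symbol X stands for the unknown transform):

import sympy as sp

t, s = sp.symbols('t s', positive=True)
X = sp.symbols('X')

# Transformed equation with x(0) = 0 and x'(0) = 1 plugged in:
# (s^2 X - 1) + X = s/(s^2 + 4); solve for X, then invert.
Xs = sp.solve(sp.Eq(s**2 * X - 1 + X, s / (s**2 + 4)), X)[0]

x = sp.inverse_laplace_transform(Xs, s, t)
print(sp.simplify(x))
# equivalent to cos(t)/3 - cos(2*t)/3 + sin(t) for t >= 0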

The procedure is as follows. You take an ordinary differential equation in the time variable t. You apply the Laplace transform to transform the equation into an algebraic (non-differential) equation in the frequency domain. All the x(t), x'(t), x''(t), and so on, will be converted to X(s), s X(s) - x(0), s^2 X(s) - s x(0) - x'(0), and so on. If the differential equation we started with was a constant coefficient linear equation, it is generally pretty easy to solve for X(s), and we obtain some expression for X(s). Then taking the inverse transform, if possible, we find x(t).

It should be noted that since not every function has a Laplace transform, not every equation can be solved in this manner.

6.2.3 Using the Heaviside function

Before we move on to more general functions than those we could solve before, we want to consider the Heaviside function. See Figure 6.1 for the graph.

u(t) = \begin{cases} 0 & \text{if } t < 0, \\ 1 & \text{if } t \geq 0. \end{cases}

This function is useful for putting functions together, or for cutting functions off. Most commonly it is used as u(t-a) for some constant a. This just shifts the graph to the right by a. That is, it is a function that is zero when t < a and 1 when t \geq a. Suppose, for example, that f(t) is a "signal" and you started receiving the signal \sin t at time t = \pi. The function f(t) should then be defined as

f(t) = \begin{cases} 0 & \text{if } t < \pi, \\ \sin t & \text{if } t \geq \pi. \end{cases}


Figure 6.1: Plot of the Heaviside (unit step) function u(t).

Using the Heaviside function, f(t) can be written as

f(t) = u(t - \pi) \sin t.

Similarly, the step function that is 1 on the interval [1, 2) and zero everywhere else can be written as

u(t-1) - u(t-2).

The Heaviside function is useful for defining functions piecewise. If you want the function to be t when t is in [0, 1], the function -t + 2 when t is in [1, 2], and zero otherwise, you can use the expression

t \big( u(t) - u(t-1) \big) + (-t+2) \big( u(t-1) - u(t-2) \big).

Hence, it is useful to know how the Heaviside function interacts with the Laplace transform. We have already seen that

\mathcal{L}\{ u(t-a) \} = \frac{e^{-as}}{s}.

This can be generalized into a shifting property or second shifting property:

\mathcal{L}\{ f(t-a)\, u(t-a) \} = e^{-as}\, \mathcal{L}\{ f(t) \}. \qquad (6.1)

Example 6.2.2: Suppose that the forcing function is not periodic. For example, suppose that we had a mass-and-spring system

x''(t) + x(t) = f(t), \quad x(0) = 0, \quad x'(0) = 0,

where f(t) = 1 if 1 \leq t < 3 and zero otherwise. We could imagine a mass-and-spring system where a rocket was fired for 2 seconds starting at t = 1. Or perhaps an RLC circuit, where the voltage was


being raised at a constant rate for 2 seconds starting at t = 1 and then held steady again starting at t = 3.

We can write f(t) = u(t-1) - u(t-3). We transform the equation and plug in the initial conditions as before to obtain

s^2 X(s) + X(s) = \frac{e^{-s}}{s} - \frac{e^{-3s}}{s}.

We solve for X(s) to obtain

X(s) = \frac{e^{-s}}{s(s^2+1)} - \frac{e^{-3s}}{s(s^2+1)}.

We leave it as an exercise to the reader to show that

\mathcal{L}^{-1}\left\{ \frac{1}{s(s^2+1)} \right\} = 1 - \cos t.

In other words, \mathcal{L}\{ 1 - \cos t \} = \frac{1}{s(s^2+1)}. So using (6.1) we find

\mathcal{L}^{-1}\left\{ \frac{e^{-s}}{s(s^2+1)} \right\} = \mathcal{L}^{-1}\left\{ e^{-s}\, \mathcal{L}\{ 1 - \cos t \} \right\} = \big( 1 - \cos(t-1) \big)\, u(t-1).

Similarly,

\mathcal{L}^{-1}\left\{ \frac{e^{-3s}}{s(s^2+1)} \right\} = \mathcal{L}^{-1}\left\{ e^{-3s}\, \mathcal{L}\{ 1 - \cos t \} \right\} = \big( 1 - \cos(t-3) \big)\, u(t-3).

Hence, the solution is

x(t) = \big( 1 - \cos(t-1) \big)\, u(t-1) - \big( 1 - \cos(t-3) \big)\, u(t-3).

The plot of this solution is given in Figure 6.2.
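As a numerical sanity check (a sketch assuming NumPy and SciPy), one can integrate the equation directly and compare with the closed form:

import numpy as np
from scipy.integrate import solve_ivp

def f(t):
    return 1.0 if 1.0 <= t < 3.0 else 0.0

# Integrate x'' + x = f(t) numerically as a first order system.
sol = solve_ivp(lambda t, y: [y[1], f(t) - y[0]],
                (0.0, 10.0), [0.0, 0.0], max_step=0.01)

def x_exact(t):
    x = 0.0
    if t >= 1.0:
        x += 1.0 - np.cos(t - 1.0)
    if t >= 3.0:
        x -= 1.0 - np.cos(t - 3.0)
    return x

print(sol.y[0][-1], x_exact(sol.t[-1]))   # the two values should agree closely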

6.2.4 Transforms of integrals

A feature of the Laplace transform is that it can also easily deal with integral equations, that is, equations in which integrals rather than derivatives of functions appear. The basic property, which can be proved by applying the definition and again doing integration by parts, is the following:

\mathcal{L}\left\{ \int_0^t f(\tau)\, d\tau \right\} = \frac{1}{s}\, F(s).

It is sometimes useful for computing the inverse transform to write

\int_0^t f(\tau)\, d\tau = \mathcal{L}^{-1}\left\{ \frac{1}{s}\, F(s) \right\}.


Figure 6.2: Plot of x(t).

Example 6.2.3: To compute the inverse transform of \frac{1}{s(s^2+1)}, we could proceed by applying this integration rule:

\mathcal{L}^{-1}\left\{ \frac{1}{s}\, \frac{1}{s^2+1} \right\} = \int_0^t \mathcal{L}^{-1}\left\{ \frac{1}{s^2+1} \right\} d\tau = \int_0^t \sin \tau\, d\tau = 1 - \cos t.

If an equation contains an integral of the unknown function, the equation is called an integral equation. For example, take the equation

t^2 = \int_0^t e^{\tau} x(\tau)\, d\tau.

If we apply the Laplace transform we obtain (where X(s) = \mathcal{L}\{ x(t) \})

\frac{2}{s^3} = \frac{1}{s}\, \mathcal{L}\{ e^{t} x(t) \} = \frac{1}{s}\, X(s-1).

Or

X(s-1) = \frac{2}{s^2} \quad \text{or} \quad X(s) = \frac{2}{(s+1)^2}.

We use the shifting property to conclude

x(t) = 2 e^{-t} t.

More complicated integral equations can also be solved using the convolution, which we will learn next.


6.2.5 Exercises

Exercise 6.2.2: Using the Heaviside function, write down the piecewise function that is 0 for t < 0, t^2 for t in [0, 1], and t for t > 1.

Exercise 6.2.3: Using the Laplace transform solve

m x'' + c x' + k x = 0, \quad x(0) = a, \quad x'(0) = b,

where m > 0, c > 0, k > 0, and c^2 - 4km > 0 (system is overdamped).

Exercise 6.2.4: Using the Laplace transform solve

m x'' + c x' + k x = 0, \quad x(0) = a, \quad x'(0) = b,

where m > 0, c > 0, k > 0, and c^2 - 4km < 0 (system is underdamped).

Exercise 6.2.5: Using the Laplace transform solve

m x'' + c x' + k x = 0, \quad x(0) = a, \quad x'(0) = b,

where m > 0, c > 0, k > 0, and c^2 = 4km (system is critically damped).

Exercise 6.2.6: Solve x'' + x = u(t-1) for initial conditions x(0) = 0 and x'(0) = 0.

Exercise 6.2.7: Show the differentiation-of-the-transform property. Suppose \mathcal{L}\{ f(t) \} = F(s); then show that

\mathcal{L}\{ -t f(t) \} = F'(s).

Hint: differentiate under the integral sign.


6.3 Convolution

Note: 1 or 1.5 lectures, §7.2 in EP

6.3.1 The convolution

We have said that the Laplace transform of a product is not the product of the transforms. All hope is not lost, however. There exists a very important type of product that works. Take two functions f(t) and g(t) defined for t \geq 0. Define the convolution‡ of f(t) and g(t) as

(f * g)(t) \overset{\text{def}}{=} \int_0^t f(\tau)\, g(t-\tau)\, d\tau. \qquad (6.2)

So the convolution of two functions of t is another function of t.

Example 6.3.1: Take f(t) = e^t and g(t) = t for t \geq 0. Then

(f * g)(t) = \int_0^t e^{\tau} (t-\tau)\, d\tau = e^t - t - 1,

where we of course did one integration by parts.

Example 6.3.2: Take f(t) = \sin \omega_0 t and g(t) = \cos \omega_0 t for t \geq 0. Then

(f * g)(t) = \int_0^t \big( \sin \omega_0 \tau \big) \big( \cos \omega_0 (t-\tau) \big)\, d\tau.

Now we use the identity

\cos \theta \sin \psi = \frac{1}{2} \big( \sin(\theta + \psi) - \sin(\theta - \psi) \big).

Hence,

(f * g)(t) = \int_0^t \frac{1}{2} \big( \sin(\omega_0 t) - \sin(\omega_0 t - 2 \omega_0 \tau) \big)\, d\tau
= \left[ \frac{1}{2} \tau \sin \omega_0 t - \frac{1}{4 \omega_0} \cos( 2 \omega_0 \tau - \omega_0 t) \right]_{\tau=0}^t
= \frac{1}{2}\, t \sin \omega_0 t.

Of course, the formula only holds for t \geq 0. We did assume that f and g are zero (or just not defined) for negative t.

‡For those who have seen convolution before, you may have seen it defined as (f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t-\tau)\, d\tau. This definition agrees with (6.2) if you define f(t) and g(t) to be zero for t < 0. When discussing the Laplace transform, the definition we gave is sufficient. Convolution does occur in many other applications, however, where you may have to use the more general definition with infinities.


The convolution has many properties that make it behave like a product. Let c be a constant and f, g, and h be functions; then

f * g = g * f,
(c f) * g = f * (c g) = c (f * g),
(f * g) * h = f * (g * h).

The most interesting property for us, and the main result of this section, is the following theorem.

Theorem 6.3.1. Let f(t) and g(t) be of exponential order. Then

\mathcal{L}\{ (f * g)(t) \} = \mathcal{L}\left\{ \int_0^t f(\tau)\, g(t-\tau)\, d\tau \right\} = \mathcal{L}\{ f(t) \}\, \mathcal{L}\{ g(t) \}.

In other words, the Laplace transform of a convolution is the product of the Laplace transforms. The simplest way to use this result is in reverse.

Example 6.3.3: Suppose we have the function of s defined by

\frac{1}{(s+1) s^2} = \frac{1}{s+1} \cdot \frac{1}{s^2}.

We recognize the two factors as entries of Table 6.1. That is,

\mathcal{L}^{-1}\left\{ \frac{1}{s+1} \right\} = e^{-t} \quad \text{and} \quad \mathcal{L}^{-1}\left\{ \frac{1}{s^2} \right\} = t.

Therefore,

\mathcal{L}^{-1}\left\{ \frac{1}{s+1} \cdot \frac{1}{s^2} \right\} = \int_0^t \tau\, e^{-(t-\tau)}\, d\tau = e^{-t} + t - 1,

where the calculation of the integral of course involved an integration by parts.
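One can also check the value directly from the definition (6.2) of the convolution; a quick Python sketch assuming NumPy and SciPy:

import numpy as np
from scipy.integrate import quad

def conv(t):
    # (f * g)(t) with f(tau) = tau and g(t) = e^{-t}, straight from (6.2)
    val, _ = quad(lambda tau: tau * np.exp(-(t - tau)), 0.0, t)
    return val

t = 1.7
print(conv(t), np.exp(-t) + t - 1.0)   # the two values should agree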

6.3.2 Solving ODEs

The next example will demonstrate the full power of the convolution and the Laplace transform. We will be able to give a solution to the forced oscillation problem for any forcing function as a definite integral.

Example 6.3.4: Find the solution to

x'' + \omega_0^2 x = f(t), \quad x(0) = 0, \quad x'(0) = 0,

for an arbitrary function f(t).


We first apply the Laplace transform to the equation. Denote the transform of x(t) by X(s) and the transform of f(t) by F(s), as usual.

s^2 X(s) + \omega_0^2 X(s) = F(s),

or in other words

X(s) = F(s)\, \frac{1}{s^2 + \omega_0^2}.

We know

\mathcal{L}^{-1}\left\{ \frac{1}{s^2 + \omega_0^2} \right\} = \frac{\sin \omega_0 t}{\omega_0}.

Therefore,

x(t) = \int_0^t f(\tau)\, \frac{\sin \omega_0 (t-\tau)}{\omega_0}\, d\tau,

or, if we reverse the order,

x(t) = \int_0^t \frac{\sin \omega_0 \tau}{\omega_0}\, f(t-\tau)\, d\tau.

Let us notice one more thing with this example: we can now also see how the Laplace transform handles resonance. Suppose that f(t) = \cos \omega_0 t. Then

x(t) = \int_0^t \frac{\sin \omega_0 \tau}{\omega_0} \big( \cos \omega_0 (t-\tau) \big)\, d\tau = \frac{1}{\omega_0} \int_0^t \big( \sin \omega_0 \tau \big) \big( \cos \omega_0 (t-\tau) \big)\, d\tau.

We have already computed the convolution of sine and cosine in Example 6.3.2. Hence

x(t) = \left( \frac{1}{\omega_0} \right) \left( \frac{1}{2}\, t \sin \omega_0 t \right) = \frac{1}{2 \omega_0}\, t \sin \omega_0 t.

Note the t in front of the sine. The solution will, therefore, grow without bound as t gets large, meaning we get resonance.

Using convolution you can also find a solution, as a definite integral, for an arbitrary forcing function f(t) for any constant coefficient equation. A definite integral is usually enough for most practical purposes, and it is usually not hard to numerically evaluate a definite integral.
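For instance, a Python sketch evaluating the definite-integral solution of Example 6.3.4 numerically (assuming NumPy and SciPy; the Gaussian forcing pulse and \omega_0 = 2 are arbitrary illustrative choices of ours):

import numpy as np
from scipy.integrate import quad

omega0 = 2.0

def f(tau):
    # Arbitrary forcing function; a Gaussian pulse as an illustration.
    return np.exp(-10.0 * (tau - 1.0)**2)

def x(t):
    # x(t) = integral_0^t f(tau) sin(omega0 (t - tau)) / omega0 dtau
    val, _ = quad(lambda tau: f(tau) * np.sin(omega0 * (t - tau)) / omega0,
                  0.0, t)
    return val

print(x(3.0))   # displacement at t = 3 for this forcing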

6.3.3 Volterra integral equation

One of the most common integral equations is the Volterra integral equation§:

x(t) = f(t) + \int_0^t g(t-\tau)\, x(\tau)\, d\tau,

§Named for the Italian mathematician Vito Volterra (1860–1940).


where f(t) and g(t) are known functions and x(t) is an unknown. To solve this equation we apply the Laplace transform to get

X(s) = F(s) + G(s) X(s),

where X(s), F(s), and G(s) are the Laplace transforms of x(t), f(t), and g(t), respectively. We find

X(s) = \frac{F(s)}{1 - G(s)}.

If we can now find the inverse Laplace transform, we obtain the result.

Example 6.3.5: Solve

x(t) = e^{-t} + \int_0^t \sinh(t-\tau)\, x(\tau)\, d\tau.

We apply the Laplace transform to obtain

X(s) = \frac{1}{s+1} + \frac{1}{s^2-1}\, X(s),

or

X(s) = \frac{\frac{1}{s+1}}{1 - \frac{1}{s^2-1}} = \frac{s-1}{s^2-2} = \frac{s}{s^2-2} - \frac{1}{s^2-2}.

It is not hard to apply Table 6.1 on page 231 to find

x(t) = \cosh \sqrt{2}\, t - \frac{1}{\sqrt{2}} \sinh \sqrt{2}\, t.
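The algebra and the inversion can again be checked with SymPy (a sketch; SymPy may print the answer using exponentials rather than hyperbolic functions):

import sympy as sp

s, t = sp.symbols('s t', positive=True)

# X(s) = F(s) / (1 - G(s)) with F = 1/(s+1) and G = 1/(s^2 - 1).
X = (1 / (s + 1)) / (1 - 1 / (s**2 - 1))
print(sp.simplify(X))                        # (s - 1)/(s**2 - 2)

x = sp.inverse_laplace_transform(sp.simplify(X), s, t)
print(sp.simplify(x))
# equals cosh(sqrt(2)*t) - sinh(sqrt(2)*t)/sqrt(2) for t >= 0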

6.3.4 Exercises

Exercise 6.3.1: Let f(t) = t^2 for t \geq 0, and g(t) = u(t-1). Compute f * g.

Exercise 6.3.2: Let f(t) = t for t \geq 0, and g(t) = \sin t for t \geq 0. Compute f * g.

Exercise 6.3.3: Find the solution to

m x'' + c x' + k x = f(t), \quad x(0) = 0, \quad x'(0) = 0,

for an arbitrary function f(t), where m > 0, c > 0, k > 0, and c^2 - 4km > 0 (system is overdamped). Write the solution as a definite integral.

Exercise 6.3.4: Find the solution to

m x'' + c x' + k x = f(t), \quad x(0) = 0, \quad x'(0) = 0,

for an arbitrary function f(t), where m > 0, c > 0, k > 0, and c^2 - 4km < 0 (system is underdamped). Write the solution as a definite integral.


Exercise 6.3.5: Find the solution to

m x'' + c x' + k x = f(t), \quad x(0) = 0, \quad x'(0) = 0,

for an arbitrary function f(t), where m > 0, c > 0, k > 0, and c^2 = 4km (system is critically damped). Write the solution as a definite integral.

Exercise 6.3.6: Solve

x(t) = e^{-t} + \int_0^t \cos(t-\tau)\, x(\tau)\, d\tau.

Exercise 6.3.7: Solve

x(t) = \cos t + \int_0^t \cos(t-\tau)\, x(\tau)\, d\tau.


Further Reading

[BM] Paul W. Berg and James L. McGregor, Elementary Partial Differential Equations, Holden-Day, San Francisco, CA, 1966.

[EP] C.H. Edwards and D.E. Penney, Differential Equations and Boundary Value Problems: Computing and Modeling, 4th edition, Prentice Hall, 2008.

[F] Stanley J. Farlow, An Introduction to Differential Equations and Their Applications, McGraw-Hill, Inc., Princeton, NJ, 1994.

[I] E.L. Ince, Ordinary Differential Equations, Dover Publications, Inc., New York, NY, 1956.


Index

acceleration, 16
addition of matrices, 87
algebraic multiplicity, 119
amplitude, 65
angular frequency, 65
antiderivative, 14
antidifferentiate, 14
associated homogeneous equation, 70
atan2, 66
augmented matrix, 91
autonomous equation, 36
autonomous system, 85

Bernoulli equation, 33
boundary conditions for a PDE, 181
boundary value problem, 143

catenary, 11
Cauchy-Euler equation, 50
center, 107
cgs units, 227, 228
characteristic equation, 52
Chebychev's equation of order 1, 50
cofactor, 90
cofactor expansion, 90
column vector, 87
commute, 89
complementary solution, 70
complete eigenvalue, 119
complex conjugate, 102
complex number, 53
complex roots, 54
constant coefficient, 51, 96
convolution, 242
cosine series, 170
critical point, 36
critically damped, 67

d'Alembert solution to the wave equation, 198
damped, 66
damped motion, 62
defect, 120
defective eigenvalue, 120
deficient matrix, 120
dependent variable, 7
determinant, 89
diagonal matrix, 111
  matrix exponential of, 125
diagonalization, 126
differential equation, 7
direction field, 85
Dirichlet boundary conditions, 171, 211
Dirichlet problem, 205
displacement vector, 111
distance, 16
dot product, 88, 151
dynamic damping, 118

eigenfunction, 144, 212
eigenfunction decomposition, 211, 216
eigenvalue, 99, 212
eigenvalue of a boundary value problem, 144
eigenvector, 99
eigenvector decomposition, 133, 140
ellipses (vector field), 107
elliptic PDE, 181
endpoint problem, 143
equilibrium solution, 36
Euler's equation, 50
Euler's equations, 56
Euler's formula, 53
Euler's method, 41
even function, 155, 168
even periodic extension, 168
existence and uniqueness, 20, 48, 57
exponential growth model, 9
exponential of a matrix, 124
exponential order, 232
extend periodically, 151

first order differential equation, 7
first order linear equation, 27
first order linear system of ODEs, 95
first order method, 42
first shifting property, 234
forced motion, 62
  systems, 116
Fourier series, 153
fourth order method, 43
Fredholm alternative
  simple case, 148
  Sturm-Liouville problems, 215
free motion, 62
free variable, 93
fundamental matrix, 96
fundamental matrix solution, 96, 125

general solution, 10
generalized eigenvectors, 120, 122
Genius software, 5
geometric multiplicity, 119
Gibbs phenomenon, 158

half period, 160
harmonic function, 204
harvesting, 38
heat equation, 181
Heaviside function, 230
Hermite's equation of order 2, 50
homogeneous equation, 34
homogeneous linear equation, 47
homogeneous side conditions, 182
homogeneous system, 96
Hooke's law, 62, 110
hyperbolic PDE, 181

identity matrix, 88
imaginary part, 54
implicit solution, 24
inconsistent system, 93
indefinite integral, 14
independent variable, 7
initial condition, 10
initial conditions for a PDE, 181
inner product, 88
inner product of functions, 153, 215
integral equation, 240, 244
integrate, 14
integrating factor, 27
integrating factor method, 27
  systems, 131
inverse Laplace transform, 233
invertible matrix, 89
IODE
  Lab I, 18
  Lab II, 41
  Project I, 18
  Project II, 41
  Project III, 76
  Project IV, 160
  Project V, 160
IODE software, 5

la vie, 72
Laplace equation, 181, 204
Laplace transform, 229
Laplacian, 204
leading entry, 93
Leibniz notation, 15, 22
linear equation, 27, 47
linear first order system, 85
linear operator, 70
linear PDE, 181
linearity of Laplace transform, 231
linearly dependent, 57
linearly independent, 49, 57
logistic equation, 37
  with harvesting, 38

mass matrix, 111
mathematical model, 9
mathematical solution, 9
matrix, 87
matrix exponential, 124
matrix inverse, 89
matrix valued function, 95
method of partial fractions, 233
Mixed boundary conditions, 211
mks units, 65, 175
multiplication of complex numbers, 53
multiplicity, 60
multiplicity of an eigenvalue, 119

natural (angular) frequency, 65
natural frequency, 76, 113
natural mode of oscillation, 113
Neumann boundary conditions, 171, 211
Newton's law of cooling, 31, 36
Newton's second law, 62, 63, 84, 110
nilpotent, 126
normal mode of oscillation, 113

odd function, 155, 168
odd periodic extension, 168
ODE, 8
one-dimensional heat equation, 181
one-dimensional wave equation, 191
ordinary differential equation, 8
orthogonal
  functions, 147, 153
  vectors, 151
  with respect to a weight, 214
orthogonality, 147
overdamped, 67

parabolic PDE, 181
parallelogram, 90
partial differential equation, 8, 181
particular solution, 10, 70
PDE, 8, 181
period, 65
periodic, 151
phase diagram, 37
phase portrait, 37, 86
phase shift, 65
Picard's theorem, 20
piecewise continuous, 163
piecewise smooth, 163
practical resonance, 80, 180
product of matrices, 88
projection, 153
proper rational function, 234
pure resonance, 78, 178

quadratic formula, 52

real part, 54
real world problem, 9
reduced row echelon form, 93
reduction of order method, 50
regular Sturm-Liouville problem, 213
repeated roots, 59
resonance, 78, 117, 178, 244
RLC circuit, 62
row vector, 87

saddle point, 106
sawtooth, 154
scalar, 87
scalar multiplication, 87
second order differential equation, 11
second order linear differential equation, 47
second order method, 42
second shifting property, 238
separable, 22
separation of variables, 182
shifting property, 234, 238
side conditions for a PDE, 181
simple harmonic motion, 65
sine series, 170
singular matrix, 89
singular solution, 24
sink, 105
slope field, 18
solution, 7
solution curve, 86
source, 105
spiral sink, 108
spiral source, 107
square wave, 81, 155
stable critical point, 36
stable node, 105
steady periodic solution, 80, 175
steady state temperature, 189, 204
stiffness matrix, 111
Sturm-Liouville problem, 212
superposition, 47, 57, 96, 182
symmetric matrix, 147, 151
system of differential equations, 83

tedious, 72, 73, 79, 136
timbre, 220
trajectory, 86
transient solution, 80
transpose, 88
trigonometric series, 153

undamped, 64
undamped motion, 62
  systems, 110
underdamped, 68
undetermined coefficients, 71
  for systems, 116
  second order systems, 139
  systems, 136
unforced motion, 62
unit step function, 230
unstable critical point, 36
unstable node, 105

variation of parameters, 73
  systems, 138
vector, 87
vector field, 85
vector valued function, 95
velocity, 16
Volterra integral equation, 244

wave equation, 181, 198
weight function, 214