Chapter 11
Numerical Differential Equations:
IVP
11.1 Initial Value Problem for Ordinary Differential Equations
We consider the problem of numerically solving a differential equation of the form
dy/dt = f(t, y), a ≤ t ≤ b, y(a) = α (given).
Such a problem is called the Initial Value Problem or in short IVP, because the initial value
of the solution y(a) = α is given.
Since there are infinitely many values of t between a and b, we will be concerned here only with finding
approximations of the solution y(t) at several specified values of t in [a, b], rather than finding
y(t) at every value between a and b.
The following strategy will be used:
• Divide [a, b] into N equal subintervals each of length h:
t0 = a < t1 < t2 < · · · < tN = b.
[Figure: the interval [a, b] divided into N equal subintervals, with nodes a = t0, t1, t2, . . . , tN = b.]
• Set h = (b − a)/N (step size).
• Compute approximations yi to y(ti) at t = ti; that is, given y(t0) = y0, compute, recursively, y1, y2, . . . , yN.
· y(ti) = Exact value of y(t) at t = ti.
· yi = An approximation of y(ti) at t = ti.
The Initial Value Problem
Given
(i) y′ = f(t, y), a ≤ t ≤ b
(ii) The initial value y(a) = y(t0) = y0 = α
(iii) The step-size h.
Find yi (approximate value of y(ti)), i = 1, . . . , N, where N = (b − a)/h.
We will briefly describe here the following well-known numerical methods for solving the IVP:
• The Euler and Modified Euler Method (Taylor Method of order 1)
• The Higher-order Taylor Methods
• The Runge-Kutta Methods
• The Multistep Methods: The Adams-Bashforth and Adams-Moulton Method
• The Predictor-Corrector Methods
• Methods for Systems of IVP Differential Equations
We will also discuss the error behavior and convergence of these methods. However, before doing
so, we state a result without proof, in the following section on the existence and uniqueness
of the solution for the IVP. The proof can be found in most books on ordinary differential
equations.
11.1. INITIAL VALUE PROBLEM FOR ORDINARY DIFFERENTIAL EQUATIONS 493
11.1.1 Existence and Uniqueness of the Solution for the IVP
Theorem 11.1 (Existence and Uniqueness Theorem for the IVP). Suppose
(i) f(t, y) is continuous on the domain defined by R = {a ≤ t ≤ b,−∞ < y < ∞},
and
(ii) the following inequality is satisfied:
|f(t, y) − f(t, y∗)| ≤ L|y − y∗|,
whenever (t, y) and (t, y∗) ∈ R, where L is a constant.
Then the IVP
y′ = f(t, y), y(a) = α
has a unique solution.
Definition 11.2. The condition |f(t, y) − f(t, y∗)| ≤ L|y − y∗| is called the Lipschitz Con-
dition. The number L is called a Lipschitz Constant.
Definition 11.3. A set S is said to be convex if whenever (t1, y1) and (t2, y2) belong to S,
the point ((1− λ)t1 + λt2, (1− λ)y1 + λy2) also belongs to S for each λ, 0 ≤ λ ≤ 1.
11.1.2 Simplification of the Lipschitz Condition for the Convex Domain
Suppose that the domain R in Theorem 11.1 is a convex set. Then the IVP has a unique solution if
|∂f/∂y (t, y)| ≤ L for all (t, y) ∈ R.
11.1.3 Lipschitz Condition and Well-Posedness
Definition 11.4. An IVP is said to be well-posed if a small perturbation in the data of the
problem leads to only a small change in the solution.
Since numerical computation may introduce some perturbations to the problem, it is important
that the problem that is to be solved is well-posed. Fortunately, the Lipschitz condition is a
sufficient condition for the IVP problem to be well-posed.
Theorem 11.5 (Well-Posedness of the IVP problem). If f(t, y) satisfies the Lipschitz
condition, then the IVP is well-posed.
11.2 The Euler Method
One of the simplest methods for solving the IVP is the classical Euler method.
The method is derived from the Taylor Series expansion of the function y(t).
Recall that the function y(t) has the following Taylor series expansion of order n at t = ti+1:
y(ti+1) = y(ti) + (ti+1 − ti) y′(ti) + (ti+1 − ti)^2/2! y′′(ti) + · · · + (ti+1 − ti)^n/n! y^(n)(ti)
+ (ti+1 − ti)^(n+1)/(n + 1)! y^(n+1)(ξi), where ξi is in (ti, ti+1).
Substitute h = ti+1 − ti. Then
Taylor Series Expansion of y(t) of order n at t = ti+1
y(ti+1) = y(ti) + h y′(ti) + h^2/2! y′′(ti) + · · · + h^n/n! y^(n)(ti) + h^(n+1)/(n + 1)! y^(n+1)(ξi).
For n = 1, this formula reduces to
y(ti+1) = y(ti) + h y′(ti) + h^2/2! y′′(ξi).
The term h^2/2! y′′(ξi) is called the remainder term.
Neglecting the remainder term, we can write the above equation as:
y(ti+1) ≈ y(ti) + hy′(ti)
Using our notation, established earlier, we then have
yi+1 = yi + h f(ti, yi),
since y′i = f(ti, yi).
We can then recursively generate the successive approximations y1, y2, . . . , yN to y(t1), y(t2), . . . , y(tN) as follows:
y0 = α (given)
y1 = y0 + h f(t0, y0)
y2 = y1 + h f(t1, y1)
...
yN = yN−1 + h f(tN−1, yN−1).
These iterations are known as Euler's iterations, and the method is the classical Euler's method.
Figure 11.1: Geometrical Interpretation. [The figure shows the exact values y(t0) = y(a) = α, y(t1), y(t2), . . . , y(tN) = y(b) at the nodes a = t0, t1, t2, . . . , tN−1, tN = b.]
Algorithm 11.6 (Euler’s Method for IVP).
Input: (i) The function f(t, y)
(ii) The end points of the interval [a, b]: a and b
(iii) The initial value: α = y(t0) = y(a)
(iv) The step size: h
Output: Approximations yi+1 of y(ti+1), i = 0, 1, . . . , N − 1.
Step 1. Initialization: Set t0 = a, y0 = y(t0) = y(a) = α, and N = (b − a)/h.
Step 2. For i = 0, 1, . . . , N − 1 do
Compute yi+1 = yi + hf(ti, yi)
End
Example 11.7
Using Euler's method, solve numerically y′ = t^2 + 5, y(0) = 0, 0 ≤ t ≤ 1, with h = 0.25.
Input Data:
f(t, y) = t^2 + 5
t0 = a = 0, t1 = t0 + h = 0.25, t2 = t1 + h = 0.50
t3 = t2 + h = 0.75, t4 = b = t3 + h = 1.
y0 = y(t0) = y(0) = 0.
Find:
y1 = approximation of y(t1) = y(0.25)
y2 = approximation of y(t2) = y(0.50)
y3 = approximation of y(t3) = y(0.75)
y4 = approximation of y(t4) = y(1)
• The exact solution, obtained by direct integration: y(t) = t^3/3 + 5t.
Solution.
i = 0: Compute y1 from y0 (Set i = 0 in Step 2) of Algorithm 11.6:
y1 = y0 + hf(t0, y0) = 0 + 0.25(5)
= 1.25 (Approximate value of y(0.25))
Actual value of y(0.25) = 1.2552.
i = 1: Compute y2 from y1 (set i = 1 in Step 2).
y2 = y1 + hf(t1, y1)
= 1.25 + 0.25(t1^2 + 5) = 1.25 + 0.25((0.25)^2 + 5)
= 2.5156 (Approximate value of y(0.50))
Note: y(0.5) = 2.5417 (four significant digits).
i = 2 : Compute y3 from y2 (Set i = 2 in Step 2).
y3 = y2 + hf(t2, y2)
= 2.5156 + 0.25((0.5)^2 + 5)
= 3.8281 (Approximate value of y(0.75))
Actual value of y(0.75) = 3.8696.
Etc.
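The computations above can be sketched in code. The following is a minimal Python version of Algorithm 11.6, applied to Example 11.7; the function name `euler` is illustrative, not from the text.

```python
# A minimal sketch of Euler's method (Algorithm 11.6), applied to
# Example 11.7: y' = t^2 + 5, y(0) = 0, on [0, 1] with h = 0.25.

def euler(f, a, b, alpha, h):
    """Return lists (t, y) of Euler approximations y_i of y(t_i)."""
    n = round((b - a) / h)        # number of steps N = (b - a)/h
    t, y = [a], [alpha]
    for i in range(n):
        y.append(y[-1] + h * f(t[-1], y[-1]))   # y_{i+1} = y_i + h f(t_i, y_i)
        t.append(a + (i + 1) * h)
    return t, y

t, y = euler(lambda t, y: t**2 + 5, 0.0, 1.0, 0.0, 0.25)
# y[1] = 1.25, y[2] = 2.515625, y[3] = 3.828125, matching the hand computation.
```

The approximations agree with the values computed by hand above; rounding to four digits gives 1.2500, 2.5156, 3.8281.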
Example 11.8
y′ = t^2 + 5, 0 ≤ t ≤ 2
y(0) = 0, h = 0.5
Solution.
i = 0 : Compute y1 from y0 (Set i = 0 in Step 2 of Algorithm 11.6).
y1 = y0 + hf(t0, y0) = y(0) + hf(0, 0) = 0 + 0.5× 5
= 2.5 (Approximate value of y(0.50))
Note: y(0.50) = 2.5417 (four significant digits).
i = 1 : Compute y2 from y1 (Set i = 1 in Step 2 in Algorithm 11.6).
y2 = y1 + hf(t1, y1) = 2.5 + 0.5((0.5)^2 + 5)
= 5.1250 (Approximate value of y(1))
Note: y(1) = 5.3333 (four significant digits).
i = 2 : Compute y3 from y2 (Set i = 2 in Step 2).
y3 = y2 + hf(t2, y2) = 5.1250 + 0.5(t2^2 + 5) = 5.1250 + 0.5(1 + 5)
= 8.1250 (Approximate value of y(1.5))
Note: y(1.5) = 8.6250 (four significant digits).
Etc.
11.2.1 The Errors in Euler’s Method
The approximations obtained by a numerical method for solving the IVP are usually subject to
three types of errors:
• Local Truncation Error
• Global Truncation Error
• Round-off Error
We will not consider the round-off error for our discussions below.
Definition 11.9. The local error is the error made at a single step due to the truncation
of the series used to solve the problem.
Recall that the Euler Method was obtained by truncating the Taylor series
y(ti+1) = y(ti) + h y′(ti) + h^2/2 y′′(ti) + · · ·
after two terms. Thus, in obtaining Euler's method, the first term neglected is (h^2/2) y′′(ti).
The local error in Euler’s method is:
ELE = (h^2/2) y′′(ξi), where ξi lies between ti and ti+1.
In this case, we say that the local error is of order h^2, written as O(h^2). Note that the local
error ELE converges to zero as h → 0. That means the smaller h is, the better the accuracy will be.
Definition 11.10. The global error is the difference between the true solution y(ti) and the
approximate solution yi at t = ti. Thus, Global error = y(ti)− yi. Denote this by EGE .
The following theorem shows that the global error, EGE , is of order h.
Theorem 11.11 (Global Error Bound for the Euler Method). (i) Let y(t) be the unique
solution of the IVP: y′ = f(t, y); y(a) = α, where
a ≤ t ≤ b, −∞ < y < ∞.
(ii) Let L and M be two numbers such that
|∂f(t, y)/∂y| ≤ L, and |y′′(t)| ≤ M in [a, b].
Then the global error EGE at t = ti satisfies the following inequality:
|EGE| = |y(ti) − yi| ≤ (hM/2L)(e^(L(ti−a)) − 1).
Note: The global error bound for Euler’s method depends upon h, whereas the local error
depends upon h^2.
The proof of the above theorem can be found in the book by C.W. Gear, Numerical Initial Value
Problems in Ordinary Differential Equations, Prentice-Hall, Inc. (1971).
Remark: Since the exact solution y(t) of the IVP is not known, the above bound may not be
of practical value for knowing a priori how large the error can be. However, from this
error bound, we can say that the Euler method can be made to converge faster by decreasing
the step-size. Furthermore, if the quantities L and M of the above theorem can be found, then
we can determine what step-size will be needed to achieve a certain accuracy, as the following
example shows.
Example 11.12
Consider the IVP:
dy/dt = (t^2 + y^2)/2, y(0) = 0,
0 ≤ t ≤ 1, −1 ≤ y(t) ≤ 1.
Determine how small the step-size should be so that the error does not exceed ε = 10^−4.
Input Data:
f(t, y) = (t^2 + y^2)/2
y(0) = y0 = 0
a = 0, b = 1, y(t) ∈ [−1, 1], t ∈ [0, 1].
Compute L: Since
f(t, y) = (t^2 + y^2)/2,
we have ∂f/∂y = y.
Thus |∂f/∂y| = |y| ≤ 1 on the domain, giving L = 1.
Find M :
To find M, we compute the second derivative of y(t) as follows:
y′ = dy/dt = f(t, y) (Given).
By implicit differentiation,
y′′ = ∂f/∂t + f ∂f/∂y = t + ((t^2 + y^2)/2) y = t + (y/2)(t^2 + y^2).
So,
|y′′(t)| = |t + (y/2)(t^2 + y^2)| ≤ 2, for 0 ≤ t ≤ 1 and −1 ≤ y ≤ 1.
Thus, M = 2.
The Global Error Bound:
|EGE| = |y(ti) − yi| ≤ (2h/2L)(e^(L ti) − 1) = h(e^(ti) − 1).
Again, e^(ti) ≤ e for 0 ≤ ti ≤ 1.
Thus, |EGE| ≤ h(e − 1).
Compute h: For the error not to exceed 10^−4, we must have
h(e − 1) < 10^−4, or h < 10^−4/(e − 1) ≈ 5.8198 × 10^−5.
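The final arithmetic of this example is easy to check numerically; this short sketch just evaluates the bound derived above.

```python
import math

# Step-size bound from the global error estimate |E_GE| <= h(e - 1):
# for the error not to exceed eps = 1e-4 we need h < eps / (e - 1).
eps = 1e-4
h_max = eps / (math.e - 1)
# h_max is about 5.8198e-5, as computed in Example 11.12.
```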
11.3 High-order Taylor Methods
• Euler's method was developed by truncating the Taylor series expansion of y(t) after the term in h (that is, after two terms).
• Higher-order Taylor series can be developed by retaining more terms in the series.
These higher-order methods will be more accurate than Euler's method; however, they will be computationally
more demanding to implement, because of the need to compute the higher-order derivatives, as shown below.
Computing these derivatives of y′(t) = f(t, y) requires implicit differentiation.
Recall that the Taylor series expansion of y(t) of degree n is given by
y(ti+1) = y(ti) + h y′(ti) + h^2/2 y′′(ti) + · · · + h^n/n! y^(n)(ti) + h^(n+1)/(n + 1)! y^(n+1)(ξi).
In order to develop a computational method for IVP, based on the above series, we must write
the various derivatives of y(t) in terms of the derivatives of f(t, y). Thus, we write
(i) y′(t) = f(t, y(t)) (Given).
(ii) y′′(t) = f ′(t, y(t)).
In general,
(iii) y^(i)(t) = f^(i−1)(t, y(t)), i = 1, 2, . . . , n.
With these notations, we can now write
y(ti+1) = y(ti) + h f(ti, y(ti)) + h^2/2 f′(ti, y(ti)) + · · · + h^n/n! f^(n−1)(ti, y(ti))
+ h^(n+1)/(n + 1)! f^(n)(ξi, y(ξi)) (Remainder term)
= y(ti) + h [f(ti, y(ti)) + h/2 f′(ti, y(ti)) + · · · + h^(n−1)/n! f^(n−1)(ti, y(ti))] + Remainder Term.
Neglecting the remainder term, the above formula can be written in compact form as follows:
yi+1 = yi + hTk(ti, yi), i = 0, 1, . . . , N − 1,
where Tk(ti, yi) is defined by:
Tk(ti, yi) = f(ti, yi) + h/2 f′(ti, yi) + · · · + h^(k−1)/k! f^(k−1)(ti, yi)
(with f^(0)(ti, yi) = f(ti, yi)).
So, if we truncate the Taylor series after k terms and use the truncated series to obtain the
approximation yi+1 of y(ti+1), we have the following k-th order Taylor's algorithm for
the IVP.
Algorithm 11.13 (Taylor’s Algorithm of order k for IVP).
Input: (i) The function f(t, y)
(ii) The end points: a and b
(iii) The initial value: α = y(t0) = y(a)
(iv) The order of the algorithm: k
(v) The step size: h
Step 1. Initialization: Set t0 = a, y0 = α, N = (b − a)/h
Step 2. For i = 0, . . . , N − 1 do
2.1 Compute Tk(ti, yi) = f(ti, yi) + h/2 f′(ti, yi) + · · · + h^(k−1)/k! f^(k−1)(ti, yi)
2.2 Compute yi+1 = yi + hTk(ti, yi)
End
Remark:
(i) Taylor’s algorithm of order k reduces to Euler’s method when k = 1.
(ii) Higher order methods will involve implicit differentiation and thus, there will be an
increasing complexity of computation as the order of the method increases.
Example 11.14
Using Taylor’s algorithm of order 2, approximate y(0.2) and y(0.4) of the following IVP:
y′ = y − t^2 + 1, 0 ≤ t ≤ 2, y(0) = 0.5, h = 0.2.
Input Data:
f(t, y) = y − t^2 + 1
t0 = 0, t1 = 0.2, t2 = 0.4
Preparation: We compute f′(t, y(t)), f′′(t, y(t)), etc., which will be needed to compute y1 and
y2.
f(t, y(t)) = y − t^2 + 1 (Given).
f′(t, y(t)) = d/dt (y − t^2 + 1) = y′ − 2t = y − t^2 + 1 − 2t.
f′′(t, y(t)) = d/dt (y − t^2 + 1 − 2t) = y′ − 2t − 2 = f(t, y) − (2t + 2) = y − t^2 + 1 − 2t − 2 = y − t^2 − 2t − 1.
Solution.
i = 0: Compute y1 from y0 (Set i = 0 in Step 2):
y1 = y0 + h f(t0, y(t0)) + h^2/2 f′(t0, y(t0))
= 0.5 + 0.2 × 1.5 + ((0.2)^2/2)(0.5 + 1) = 0.8300
(approximate value of y(0.2)).
i = 1 : Compute y2 from y1 (Set i = 1 in Step 2).
y2 = 1.215800 (approximate value of y(0.4)).
etc.
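Example 11.14 can be reproduced with a short program. This is a hedged sketch of Taylor's algorithm of order 2, with the derivative f′(t, y) = y − t^2 + 1 − 2t supplied analytically, exactly as computed in the preparation step above; the function name `taylor2` is illustrative.

```python
# A sketch of Taylor's algorithm of order 2 (Algorithm 11.13) for
# Example 11.14: y' = y - t^2 + 1, y(0) = 0.5, h = 0.2.

def taylor2(f, fp, t0, y0, h, steps):
    """Advance y' = f(t, y) by `steps` Taylor-2 steps of size h."""
    t, y = t0, y0
    for _ in range(steps):
        # y_{i+1} = y_i + h T_2(t_i, y_i) = y_i + h f + (h^2/2) f'
        y = y + h * f(t, y) + (h**2 / 2) * fp(t, y)
        t += h
    return y

f  = lambda t, y: y - t**2 + 1           # the right-hand side
fp = lambda t, y: y - t**2 + 1 - 2*t     # f'(t, y), from implicit differentiation

y1 = taylor2(f, fp, 0.0, 0.5, 0.2, 1)    # approximates y(0.2); gives 0.83
y2 = taylor2(f, fp, 0.0, 0.5, 0.2, 2)    # approximates y(0.4); gives 1.2158
```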
Truncation Error for Taylor’s Method of Order k
Since the Taylor’s method of order k was obtained by truncating the series after k terms, the
remainder after k terms is
Rk = h^(k+1)/(k + 1)! y^(k+1)(ξ), where ti < ξ < ti + h,
and since y^(k+1)(t) = f^(k)(t, y(t)),
the error term for Taylor's method of order k is:
E_Tk = h^(k+1)/(k + 1)! f^(k)(ξ, y(ξ)).
Thus, the truncation error (local error) of Taylor's algorithm of order k is O(h^(k+1)).
11.4 Runge-Kutta Methods
• Euler's method is the simplest to implement; however, even for reasonable accuracy,
the step-size h needs to be very small.
• The difficulty with higher-order Taylor series methods is that the higher-order derivatives
of f(t, y) need to be computed, which is often difficult; indeed, in many applications
f(t, y) is not even known explicitly.
The Runge-Kutta methods aim at achieving the accuracy of higher order Taylor series methods
without computing the higher order derivatives. (At the cost of more function evaluations per
step.)
We first develop the simplest one: The Runge-Kutta Methods of order 2.
11.4.1 The Runge-Kutta Methods of order 2
Suppose that we want an expression of the approximation yi+1 in the form:
yi+1 = yi + α1k1 + α2k2, (11.1)
where
k1 = hf(ti, yi), (11.2)
and
k2 = hf(ti + αh, yi + βk1). (11.3)
The constants α1, α2, α, and β are to be chosen so that the formula is as accurate as
a Taylor series method of as high an order as possible.
To develop such a method we need an important result from Calculus: Taylor's series for a
function of two variables.
Taylor’s Theorem for a Function of Two Variables
Let f(t, y) and its partial derivatives of orders up to (n + 1) be continuous in the domain
D = {(t, y)|a ≤ t ≤ b, c ≤ y ≤ d}.
Then
f(t, y) = f(t0, y0) + [(t − t0) ∂f/∂t (t0, y0) + (y − y0) ∂f/∂y (t0, y0)] + · · ·
+ [(1/n!) Σ_{i=0}^{n} C(n, i) (t − t0)^(n−i) (y − y0)^i ∂^n f/(∂t^(n−i) ∂y^i)(t0, y0)] + Rn(t, y),
where C(n, i) denotes the binomial coefficient and Rn(t, y) is the remainder after n terms, involving the partial derivatives of order n + 1.
Substituting the value of k1 and k2, respectively from (11.2) and (11.3) into (11.1), we obtain:
yi+1 = yi + α1hf(ti, yi) + α2hf(ti + αh, yi + βk1) (11.4)
Again, Taylor's Theorem of order n = 1 gives
f(ti + αh, yi + βk1) = f(ti, yi) + αh ∂f/∂t (ti, yi) + βk1 ∂f/∂y (ti, yi). (11.5)
Thus,
yi+1 = yi + α1 h f(ti, yi) + α2 h [f(ti, yi) + αh ∂f/∂t (ti, yi) + βh f(ti, yi) ∂f/∂y (ti, yi)]
= yi + (α1 + α2) h f(ti, yi) + α2 h^2 [α ∂f/∂t (ti, yi) + β f(ti, yi) ∂f/∂y (ti, yi)]. (11.6)
Also, note that
y(ti+1) = y(ti) + h f(ti, yi) + h^2/2 (∂f/∂t (ti, yi) + f(ti, yi) ∂f/∂y (ti, yi)) + higher order terms.
So, neglecting the higher order terms, we can write
yi+1 = yi + h f(ti, yi) + h^2/2 (∂f/∂t (ti, yi) + f ∂f/∂y (ti, yi)). (11.7)
If we want (11.6) and (11.7) to agree, then the corresponding coefficients must be equal. This gives us
• α1 + α2 = 1 (comparing the coefficients of h f(ti, yi)),
• α2 α = 1/2 (comparing the coefficients of h^2 ∂f/∂t (ti, yi)),
• α2 β = 1/2 (comparing the coefficients of h^2 f(ti, yi) ∂f/∂y (ti, yi)).
Since the number of unknowns here exceeds the number of equations, there are infinitely many
possible solutions. The simplest solution is:
α1 = α2 = 1/2, α = β = 1.
With these choices we can generate yi+1 from yi as follows. The process is known as the
Modified Euler’s Method.
Generating yi+1 from yi in Modified Euler’s Method
Given
y′ = f(t, y), a ≤ t ≤ b,
y0 = y(t0) = y(a) = α,
h = ti+1 − ti, i = 0, 1, . . . , N − 1.
• Compute k1 = h f(ti, yi).
• Compute k2 = h f(ti + h, yi + k1).
• Compute yi+1 = yi + (1/2)[k1 + k2], i = 0, 1, . . . , N − 1.
Algorithm 11.15 (The Modified Euler Method).
Inputs: The given function: f(t, y)
The end points of the interval: a and b
The step-size: h
The initial value: y(t0) = y(a) = α
Outputs: Approximations yi+1 of y(ti+1) = y(t0 + (i + 1)h), i = 0, 1, 2, · · · , N − 1
Step 1. Initialization
Set t0 = a, y0 = y(t0) = y(a) = α, N = (b − a)/h
Step 2. For i = 0, 1, 2, · · · , N − 1 do
Compute k1 = h f(ti, yi)
Compute k2 = h f(ti + h, yi + k1)
Compute yi+1 = yi + (1/2)(k1 + k2).
End
Example 11.16
Solve the Initial Value Problem: y′ = e^t, y(0) = 1, with h = 0.5, 0 ≤ t ≤ 1.
Input Data:
f(t, y) = e^t,
y0 = y(t0) = y(0) = 1.
h = 0.5
Step 1.
t0 = 0, y0 = y(0) = 1
Step 2.
i = 0 : Compute y1 from y0:
k1 = hf(t0, y0) = 0.5 e^(t0) = 0.5
k2 = hf(t0 + h, y0 + k1) = 0.5 e^(t0+h) = 0.5 e^0.5 = 0.8244
y1 = y0 + (1/2)(k1 + k2) = 1 + 0.5(0.5 + 0.8244) = 1.6622
Note: y(0.5) = e^0.5 = 1.6487 (four significant digits).
i = 1: Compute y2 from y1:
k1 = hf(t1, y1) = 0.5 e^(t1) = 0.5 e^0.5 = 0.8244
k2 = hf(t1 + h, y1 + k1) = 0.5 e^(0.5+0.5) = 0.5 e = 1.3591
y2 = y1 + (1/2)(k1 + k2) = 1.6622 + (1/2)(0.8244 + 1.3591) = 2.7539
Note: y(1) = 2.7183.
Example 11.17
Given: y′ = t + y, y(0) = 1, compute y1 (approximation to y(0.01)) and y2 (approximation to
y(0.02)) using the Modified Euler Method.
Input Data:
f(t, y) = t+ y
t0 = 0, y(0) = 1
h = 0.01
i = 0 : y1 = y0 + (1/2)(k1 + k2)
k1 = hf(t0, y0) = 0.01(0 + 1) = 0.01
k2 = hf(t0 + h, y0 + k1) = 0.01 × f(0.01, 1 + 0.01) = 0.01 × (0.01 + 1.01) = 0.01 × 1.02 = 0.0102
Thus y1 = 1 + (1/2)(0.01 + 0.0102) = 1.0101 (Approximate value of y(0.01))
i = 1 : y2 = y1 + (1/2)(k1 + k2)
k1 = hf(t1, y1) = 0.01 × f(0.01, 1.0101) = 0.01 × (0.01 + 1.0101) = 0.0102
k2 = hf(t1 + h, y1 + k1) = 0.01 × f(0.02, 1.0101 + 0.0102) = 0.01 × (0.02 + 1.0203) = 0.0104
y2 = 1.0101 + (1/2)(0.0102 + 0.0104) = 1.0204 (Approximate value of y(0.02)).
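The Modified Euler steps worked out in Examples 11.16 and 11.17 can be sketched as follows; `modified_euler` is an illustrative name for a minimal version of Algorithm 11.15.

```python
import math

# A sketch of the Modified Euler method (Algorithm 11.15), applied to
# Example 11.16: y' = e^t, y(0) = 1, h = 0.5 (exact solution y = e^t).

def modified_euler(f, t0, y0, h, steps):
    t, y = t0, y0
    for _ in range(steps):
        k1 = h * f(t, y)
        k2 = h * f(t + h, y + k1)
        y = y + 0.5 * (k1 + k2)      # y_{i+1} = y_i + (k1 + k2)/2
        t += h
    return y

f = lambda t, y: math.exp(t)
y1 = modified_euler(f, 0.0, 1.0, 0.5, 1)   # ~1.6622, vs y(0.5) = 1.6487
y2 = modified_euler(f, 0.0, 1.0, 0.5, 2)   # ~2.7539, vs y(1) = 2.7183
```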
Local Error in the Modified Euler Method
Since, in deriving the modified Euler method, we neglected the terms involving h^3 and higher
powers of h, the local error for this method is O(h^3). Thus, with the modified Euler
method, we will be able to use a larger step-size h than with Euler's method to obtain
the same accuracy.
The Midpoint and Heun’s Methods
In deriving the modified Euler's Method, we considered only one set of possible values of
α1, α2, α, and β. We will now consider two more sets of values.
Choice 1: α1 = 0, α2 = 1, α = β = 1/2.
This choice yields the Midpoint Method.
Algorithm 11.18 (The Midpoint Method).
Inputs:
(i) The given function: f(t, y)
(ii) Step-size: h
(iii) The initial value: y(t0) = y(a) = α
Output:
Approximations yi+1 of y(ti+1) = y(t0 + (i + 1)h), i = 0, 1, · · · , N − 1.
For i = 0, 1, 2, · · · , N − 1 do
Compute k1 = hf(ti, yi)
Compute k2 = hf(ti + h/2, yi + k1/2)
Compute yi+1 from yi : yi+1 = yi + k2
End
Example 11.19
For the IVP:
y′ = e^t, y(0) = 1, h = 0.5, 0 ≤ t ≤ 1, approximate y(0.5) and y(1).
Solution
Input Data:
f(t, y) = e^t
y0 = y(0) = 1
h = 0.5
t0 = 0, t1 = 0.5
Compute y1, an approximation to y(0.5):
i = 0 : k1 = hf(t0, y0) = 0.5 e^(t0) = 0.5 e^0 = 0.5
k2 = hf(t0 + h/2, y0 + k1/2) = 0.5 e^0.25 = 0.6420
y1 = y0 + k2 = 1 + 0.6420 = 1.6420
Note: y(0.5) = 1.6487
Compute y2, an approximation of y(1):
i = 1 : k1 = hf(t1, y1) = 0.5 e^0.5 = 0.8244
k2 = hf(t1 + h/2, y1 + k1/2) = 0.5 e^0.75 = 1.0585
y2 = y1 + k2 = 1.6420 + 1.0585 = 2.7005
(y(1) = e = 2.7183)
Choice 2: α1 = 1/4, α2 = 3/4, α = β = 2/3.
This choice gives us Heun’s method.
Heun’s Method
Generating yi+1 from yi by Heun's Method:
• Compute k1 = hf(ti, yi)
• Compute k2 = hf(ti + (2/3)h, yi + (2/3)k1)
• Compute yi+1 = yi + (1/4)k1 + (3/4)k2, i = 0, 1, . . . , N − 1.
Heun’s Method and the Modified Euler’s Method are classified as the Runge-Kutta
methods of order 2.
These methods have local errors of O(h^3).
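A single step of each of these two order-2 variants can be sketched in code; both are applied to the IVP of Example 11.19 (y′ = e^t, y(0) = 1, h = 0.5). The function names are illustrative.

```python
import math

# Sketches of one step of the Midpoint and Heun methods (the two other
# order-2 Runge-Kutta variants described above).

def midpoint_step(f, t, y, h):            # choice alpha1 = 0, alpha2 = 1
    k1 = h * f(t, y)
    k2 = h * f(t + h/2, y + k1/2)
    return y + k2

def heun_step(f, t, y, h):                # choice alpha1 = 1/4, alpha2 = 3/4
    k1 = h * f(t, y)
    k2 = h * f(t + 2*h/3, y + 2*k1/3)
    return y + k1/4 + 3*k2/4

f = lambda t, y: math.exp(t)
y1_mid  = midpoint_step(f, 0.0, 1.0, 0.5)   # ~1.6420, vs y(0.5) = 1.6487
y1_heun = heun_step(f, 0.0, 1.0, 0.5)       # also an O(h^3)-accurate step
```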
11.5 The Runge-Kutta Method of order 4
A method widely used in practice is the Runge-Kutta method of order 4. Its derivation is
complicated, so we state the method without proof.
Algorithm 11.20 (The Runge-Kutta Method of Order 4).
Inputs: f(t, y) : the given function
a, b : the end points of the interval
α : the initial value y(t0)
h : the step size = (b − a)/N
Outputs: The approximations yi+1 of y(ti+1), i = 0, 1, . . . , N − 1
Step 1. (Initialization)
Set t0 = a, y0 = y(t0) = y(a) = α, N = (b − a)/h.
Step 2. For i = 0, 1, 2, . . . , N − 1 do
Step 2.1. Compute the Runge-Kutta coefficients:
k1 = hf(ti, yi)
k2 = hf(ti + h/2, yi + (1/2)k1)
k3 = hf(ti + h/2, yi + (1/2)k2)
k4 = hf(ti + h, yi + k3)
Step 2.2. Compute yi+1 from yi:
yi+1 = yi + (1/6)(k1 + 2k2 + 2k3 + k4)
End
The Local Truncation Error: The local truncation error of the Runge-Kutta Method of
order 4 is O(h^5).
Example 11.21
Apply the Runge-Kutta Method of order 4 to the following IVP:
y′ = t+ y, 0 ≤ t ≤ 0.05
y(0) = 1
h = 0.01
Input Data:
f(t, y) = t+ y
t0 = a = 0, t1 = t0 + h = 0.01, t2 = t1 + h = 0.02, t3 = 0.03,
t4 = 0.04, t5 = 0.05.
Step 2 (i = 0): Compute y1 from y0.
Step 2.1. Compute the Runge-Kutta coefficients:
k1 = hf(t0, y0) = 0.01 f(0, 1) = 0.01 × 1 = 0.01
k2 = hf(t0 + h/2, y0 + k1/2) = 0.01 f(0.005, 1.005) = 0.01[0.005 + 1.005] = 0.0101
k3 = hf(t0 + h/2, y0 + k2/2) = h(t0 + h/2 + y0 + k2/2) = 0.0101005
k4 = hf(t0 + h, y0 + k3) = h(t0 + h + y0 + k3) = 0.01020100
Step 2.2. Compute y1 from y0:
y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4) = 1.010100334
and so on.
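The classical fourth-order method of Algorithm 11.20 is short to program. The following sketch reproduces the first step of Example 11.21; the function name `rk4` is illustrative.

```python
# A sketch of the classical Runge-Kutta method of order 4 (Algorithm 11.20),
# applied to Example 11.21: y' = t + y, y(0) = 1, h = 0.01.

def rk4(f, t0, y0, h, steps):
    t, y = t0, y0
    for _ in range(steps):
        k1 = h * f(t, y)
        k2 = h * f(t + h/2, y + k1/2)
        k3 = h * f(t + h/2, y + k2/2)
        k4 = h * f(t + h, y + k3)
        y = y + (k1 + 2*k2 + 2*k3 + k4) / 6
        t += h
    return y

y1 = rk4(lambda t, y: t + y, 0.0, 1.0, 0.01, 1)   # 1.010100334, as above
```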
11.6 Multi-Step Methods (Explicit-type)
The methods we have discussed so far, Euler's method and the Runge-Kutta methods of order 2
and order 4, are single-step methods: given y(t0) = y0 = α, yi+1 is computed
from yi alone. However, if f(t, y) is known at m + 1 points, say, at ti, ti−1, ti−2, . . ., ti−m; that is,
if f(tk, yk), k = i, i − 1, . . . , i − m are known, then we can develop higher-order methods to
compute yi+1.
One such class of methods can be developed based on numerical integration as follows:
From
y′ = f(t, y)
we have, by integrating from ti to ti+1:
∫_{ti}^{ti+1} y′ dt = ∫_{ti}^{ti+1} f(t, y) dt
or
yi+1 − yi = ∫_{ti}^{ti+1} f(t, y) dt
or
yi+1 = yi + ∫_{ti}^{ti+1} f(t, y) dt.
Estimating the Integral ∫_{ti}^{ti+1} f(t, y) dt:
Since the functional values f(t, y) are known at the m + 1 points ti, ti−1, . . . , ti−m, the obvious thing to do is:
• Find the interpolating polynomial Pm(t) that interpolates f at ti, ti−1, . . . , ti−m using Newton's backward difference formula:
Pm(t) = Σ_{k=0}^{m} (−1)^k C(−s, k) Δ^k fi−k,
where s = (t − ti)/h, fk = f(tk, y(tk)), and C(·, k) denotes the binomial coefficient.
• Integrate ∫_{ti}^{ti+1} Pm(t) dt.
Substituting the expression for Pm(t) and performing the integration, we obtain an explicit formula
for computing yi+1 from yi that makes use of the (m + 1) values of f(t, y) at the points ti,
ti−1, ti−2, · · · , ti−m, yielding the (m + 1)-step Adams-Bashforth Formula.
The (m + 1)-Step Adams-Bashforth Formula:
Given: fi, fi−1, . . . , fi−m.
Compute: yi+1 = yi + h[c0 fi + c1 Δfi−1 + · · · + cm Δ^m fi−m],
where
ck = (−1)^k ∫_0^1 C(−s, k) ds, k = 0, 1, · · · , m,
fk = f(tk, y(tk)), k = i, i − 1, · · · , i − m.
Special Cases:
• m = 0: Adams-Bashforth One-Step Formula ≡ Euler’s Method.
• m = 2 : Adams-Bashforth Three-Step Formula
Given: fi, fi−1, fi−2.
Compute: yi+1 = yi + (h/12)[23fi − 16fi−1 + 5fi−2].
• m = 3 : Adams-Bashforth Four-Step Formula
Given: fi, fi−1, fi−2, fi−3.
Compute: yi+1 = yi + (h/24)[55fi − 59fi−1 + 37fi−2 − 9fi−3].
Other explicit higher multi-step methods can similarly be developed.
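The four-step formula can be sketched as follows. The example problem y′ = t + y, y(0) = 1 (exact solution y = 2e^t − t − 1) is my illustration, not from the text, and the four starting values are taken from the exact solution here purely for convenience; in practice they would be generated by a single-step method such as Runge-Kutta.

```python
import math

# A sketch of the Four-Step Adams-Bashforth formula (m = 3), solving
# y' = t + y, y(0) = 1 on [0, 1] with h = 0.1.

f = lambda t, y: t + y
exact = lambda t: 2*math.exp(t) - t - 1   # exact solution, for starting values

h = 0.1
t = [i*h for i in range(4)]
y = [exact(ti) for ti in t]               # starting values y_0 .. y_3

for i in range(3, 10):                    # advance to t = 1.0
    fs = [f(t[j], y[j]) for j in (i, i-1, i-2, i-3)]
    y.append(y[i] + (h/24) * (55*fs[0] - 59*fs[1] + 37*fs[2] - 9*fs[3]))
    t.append(t[i] + h)

err = abs(y[10] - exact(1.0))             # global error at t = 1, O(h^4)
```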
Implicit Multistep Methods
Consider now integrating ∫_{ti}^{ti+1} f(t, y) dt using an interpolating polynomial Pm+1(t) of degree
m + 1 that interpolates f at (m + 2) points (rather than (m + 1) points as in the Adams-Bashforth
formula): ti+1, ti, ti−1, . . . , ti−m.
Then, using Newton's backward formula, we have
Pm+1(t) = Σ_{k=0}^{m+1} (−1)^k C(1 − s, k) Δ^k fi+1−k.
For m = 2, we have the Adams-Moulton Three-Step Formula:
yi+1 = yi + (h/24)(9fi+1 + 19fi − 5fi−1 + fi−2).
Since fk = f(tk, y(tk)), k = i + 1, i, i − 1, i − 2, the above expression becomes:
yi+1 = yi + (h/24)(9f(ti+1, y(ti+1)) + 19f(ti, y(ti)) − 5f(ti−1, y(ti−1)) + f(ti−2, y(ti−2))).
Formulas of the above type are implicit multi-step formulas, because the computation of
yi+1 requires y(ti+1).
• Error Term: The error term of the three-step implicit Adams-Moulton formula is also
O(h^5), but with a smaller constant. Specifically, it is
E_3AM = −(19/720) h^5 y^(5)(ξ), where ξ lies between ti−2 and ti+1.
11.7 Predictor-Corrector Methods
A class of methods, called predictor-corrector methods, is based on the following principle:
• Predict an initial value y^(0)_{i+1} using an explicit formula.
• Correct the predicted value y^(0)_{i+1} iteratively to y^(1)_{i+1}, y^(2)_{i+1}, . . . , y^(k)_{i+1} using an
implicit formula, until two successive iterates agree to within a prescribed error tolerance.
In the sequel, we will state two such methods.
11.7.1 Euler-Trapezoidal Predictor-Corrector Method
This method uses Euler's method as the predictor; a corrector formula is then developed by
applying the trapezoidal rule to ∫_{ti}^{ti+1} f(t, y) dt.
• Euler's method gives
y^(0)_{i+1} = yi + hf(ti, yi) (predicted value).
• The trapezoidal rule of integration applied to ∫_{ti}^{ti+1} y′ dt gives
yi+1 = yi + (h/2)[f(ti, yi) + f(ti+1, y(ti+1))].
This is an implicit formula for computing yi+1, since the value y(ti+1) is needed on the right-hand side.
However, having found the initial guess y^(0)_{i+1} from the predictor, we can now correct this value
iteratively using the above formula.
Thus, compute
y^(1)_{i+1} = yi + (h/2)[f(ti, yi) + f(ti+1, y^(0)_{i+1})]
y^(2)_{i+1} = yi + (h/2)[f(ti, yi) + f(ti+1, y^(1)_{i+1})]
In general,
y^(k)_{i+1} = yi + (h/2)[f(ti, yi) + f(ti+1, y^(k−1)_{i+1})], k = 1, 2, 3, . . . .
We then have a second-order predictor-corrector method:
Algorithm 11.22 (Euler-Trapezoidal Predictor-Corrector Method).
Input: (i) y′ = f(t, y), y(t0) = α
(ii) Step-size h
(iii) Error tolerance ε
(iv) Points ti+1 = ti + h, i = 0, 1, · · · , N − 1 at which approximations to y(ti+1) are sought.
Output: Approximations y^(k)_{i+1}, k = 1, 2, · · · of y(ti+1), for each i = 0, 1, 2, · · · , N − 1.
For i = 0, 1, 2, · · · , N − 1 do
Step 1. (Predict): Compute y^(0)_{i+1} = yi + hf(ti, yi).
Step 2. (Correct): For k = 1, 2, · · · do
Step 2.1. Compute y^(k)_{i+1} = yi + (h/2)[f(ti, yi) + f(ti+1, y^(k−1)_{i+1})]
Step 2.2. Stop when the relative change is less than ε: |y^(k)_{i+1} − y^(k−1)_{i+1}| / |y^(k)_{i+1}| < ε.
Step 2.3. Accept the current value of y^(k)_{i+1} as yi+1.
End
Example 11.23
Given the Initial Value Problem:
y′ = t + y, y(0) = 1,
h = 0.01.
Compute an approximation of y(0.01), using the Euler-Trapezoidal Predictor-Corrector Method.
Input Data:
f(t, y) = t + y
y0 = y(0) = 1
h = 0.01
N = 1
Analytical Solution: y(t) = 2e^t − t − 1.
Solution
i = 0.
Step 1. Predict y^(0)_1 using the formula in Step 1:
y^(0)_1 = y0 + hf(t0, y0) = 1 + 0.01 f(0, 1) = 1 + 0.01(1) = 1.01
Step 2. Correct y^(0)_1 using the formula in Step 2.1:
k = 1: y^(1)_1 = y0 + (h/2)[f(t0, y0) + f(t1, y^(0)_1)]
= 1 + (0.01/2)[(t0 + y0) + (t1 + y^(0)_1)]
= 1 + (0.01/2)[1 + (0.01 + 1.01)]
= 1.0101
k = 2: y^(2)_1 = y0 + (h/2)[f(t0, y0) + f(t1, y^(1)_1)]
= 1 + (0.01/2)[1 + 0.01 + 1.0101]
= 1.0101
y^(2)_1 is accepted as y1, the approximate value of y(0.01).
Error: |y(0.01) − y1| = |1.0101003 − 1.0101005| ≈ 2 × 10^−7.
When does the iteration in Step 2.1 converge?
It can be shown [Exercise] that if f(t, y) and ∂f/∂y are continuous on [a, b], then the iteration
will converge if h is chosen so that |∂f/∂y| h < 2.
Convergence of the Iteration of Example 11.23:
Note that for the above example, ∂f/∂y = 1, so the iteration will converge if h < 2. Since we had
h = 0.01 < 2, the iteration converged after one iteration.
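One step of Algorithm 11.22 can be sketched as follows, reproducing Example 11.23; the function name `euler_trap_pc` and the iteration cap `max_iter` are illustrative details, not from the text.

```python
# A sketch of the Euler-Trapezoidal Predictor-Corrector method
# (Algorithm 11.22), applied to Example 11.23: y' = t + y, y(0) = 1.

def euler_trap_pc(f, t, y, h, eps=1e-10, max_iter=50):
    """One predictor-corrector step from (t, y) to t + h."""
    yp = y + h * f(t, y)                              # predict (Euler)
    for _ in range(max_iter):
        yc = y + (h/2) * (f(t, y) + f(t + h, yp))     # correct (trapezoidal)
        if abs(yc - yp) < eps * abs(yc):              # relative-change test
            return yc
        yp = yc
    return yp

y1 = euler_trap_pc(lambda t, y: t + y, 0.0, 1.0, 0.01)   # ~1.0101
```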
11.7.2 Higher-order Predictor-corrector Methods
Higher-order predictor-corrector methods can be developed by combining an explicit multistep
method (predictor) with an implicit multistep method (corrector).
For example, when the explicit Four-Step Adams-Bashforth formula is combined with the
implicit Three-Step Adams-Moulton formula, the result is the Adams-Bashforth-Moulton
predictor-corrector method.
Here:
• Predictor (Four-Step Adams-Bashforth Formula): yi+1 = yi + (h/24)(55fi − 59fi−1 + 37fi−2 − 9fi−3)
• Corrector (Three-Step Adams-Moulton Formula): yi+1 = yi + (h/24)(9fi+1 + 19fi − 5fi−1 + fi−2)
Algorithm 11.24 (Adams-Bashforth-Moulton Predictor-Corrector Method).
Inputs: (i) f(t, y) - the given function
(ii) h - the step size
(iii) f(t0, y(t0)), f(t−1, y(t−1)), f(t−2, y(t−2)), f(t−3, y(t−3)) - the values of f(t, y) at the four starting points t0, t−1, t−2, and t−3
(iv) ε - Error tolerance
(v) points ti+1 = ti + h, i = 0, 1, 2, . . . , N − 1 at which approximations to y(ti+1) are sought.
Outputs: Approximations y^(k)_{i+1}, k = 1, 2, 3, . . ., for each i = 0, 1, 2, . . . , N − 1.
For i = 0, 1, 2, . . . , N − 1 do
Step 1. (Predict): Compute y^(0)_{i+1} using the explicit Four-Step Adams-Bashforth Formula:
y^(0)_{i+1} = yi + (h/24)[55f(ti, y(ti)) − 59f(ti−1, y(ti−1)) + 37f(ti−2, y(ti−2)) − 9f(ti−3, y(ti−3))]
Step 2. (Correct): Compute y^(1)_{i+1}, y^(2)_{i+1}, . . . using the implicit Adams-Moulton formula.
For k = 1, 2, . . . do
y^(k)_{i+1} = yi + (h/24)[9f(ti+1, y^(k−1)_{i+1}) + 19f(ti, y(ti)) − 5f(ti−1, y(ti−1)) + f(ti−2, y(ti−2))]
Stop if |y^(k)_{i+1} − y^(k−1)_{i+1}| / |y^(k)_{i+1}| < ε
Accept the current value of y^(k)_{i+1} as yi+1.
End
End
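The Adams-Bashforth-Moulton scheme can be sketched as follows. The test problem y′ = t + y, y(0) = 1 (exact solution y = 2e^t − t − 1) and the use of exact starting values are my illustrative assumptions; in practice the starting values would come from a single-step method.

```python
import math

# A sketch of the Adams-Bashforth-Moulton predictor-corrector (Algorithm
# 11.24) for y' = t + y, y(0) = 1 on [0, 1] with h = 0.1.

f = lambda t, y: t + y
exact = lambda t: 2*math.exp(t) - t - 1

h, eps = 0.1, 1e-10
t = [i*h for i in range(4)]
y = [exact(ti) for ti in t]           # four starting values (for illustration)

for i in range(3, 10):
    fi, fm1 = f(t[i], y[i]), f(t[i-1], y[i-1])
    fm2, fm3 = f(t[i-2], y[i-2]), f(t[i-3], y[i-3])
    yp = y[i] + (h/24)*(55*fi - 59*fm1 + 37*fm2 - 9*fm3)        # predict (AB4)
    while True:
        yc = y[i] + (h/24)*(9*f(t[i] + h, yp) + 19*fi - 5*fm1 + fm2)  # correct (AM3)
        if abs(yc - yp) < eps * abs(yc):
            break
        yp = yc
    y.append(yc)
    t.append(t[i] + h)

err = abs(y[10] - exact(1.0))         # global error at t = 1
```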
Milne’s Predictor-Corrector Method
The well-known Milne’s predictor-corrector method is obtained by using the corrector formula
based on Simpson’s rule of integration and the following formula as the predictor.
Predictor: y^(0)_{i+1} = yi−3 + (4h/3)(2fi − fi−1 + 2fi−2)
Corrector: y^(1)_{i+1} = yi−1 + (h/3)(f^(0)_{i+1} + 4fi + fi−1),
where f^(0)_{i+1} = f(ti+1, y^(0)_{i+1}), and fk = f(tk, yk), k = i and i − 1.
• Error Term: The error term for the predictor is (28/90) h^5 y^(5)(ξ), and that for the corrector is
−(1/90) h^5 y^(5)(η), where ti−3 < ξ < ti+1 and ti−1 < η < ti+1.
Both errors are of the same order, but the corrector has the smaller constant.
11.8 Systems of Differential Equations
So far we have considered the solution of a single first-order differential equation of the form:
y′ = f(t, y), given y(t0) = α.
However, many applications give rise to systems of differential equations. A system of n first
order differential equations has the form:
y′1 = f1(t, y1, y2, · · · , yn)
y′2 = f2(t, y1, y2, · · · , yn)
...
y′n = fn(t, y1, y2, · · · , yn)
The numerical methods that we have discussed so far for a single equation can be applied to
the system of equations as well.
For the purpose of illustration, let's consider a system of two equations only, written as
dy/dt = f1(t, y, z)
dz/dt = f2(t, y, z),
and suppose that Euler’s method is applied to solve them.
Then given the initial values
y(t0) = y(a) = α
and z(t0) = z(a) = β
we can obtain successive approximations y1, y2, · · · , yN to y(t1), y(t2), · · · , y(tN ) and those z1,
z2, · · · , zN to z(t1), z(t2), · · · , z(tN ), as follows:
Euler’s Method for a System of Two Equations
yi+1 = yi + hf1(ti, yi, zi)
zi+1 = zi + hf2(ti, yi, zi)
for i = 0, 1, · · · , N − 1.
Example 11.25
Given
dy/dt = 2y + 3z, y(0) = 1
dz/dt = 2y + z, z(0) = −1,
h = 0.1.
Find an approximation of (y(0.1), z(0.1)) using Euler's method.
Exact Solution: y(t) = e^−t, z(t) = −e^−t.
Input Data:
(i) f1(t, y, z) = 2y + 3z
(ii) f2(t, y, z) = 2y + z
(iii) Initial Values: y0 = y(0) = 1; z0 = z(0) = −1
(iv) Step size: h = 0.1
i = 0:
y1 = y0 + h f1(t0, y0, z0) = 1 + 0.1(2y0 + 3z0) = 1 + 0.1(2 − 3) = 1 − 0.1 = 0.9000 (Approximate value of y(0.1))
z1 = z0 + h f2(t0, y0, z0) = −1 + 0.1(2y0 + z0) = −1 + 0.1(2 − 1) = −1 + 0.1 = −0.9000 (Approximate value of z(0.1))
Exact solution (correct to four significant digits):
y(0.1) = e^−0.1 = 0.9048, z(0.1) = −e^−0.1 = −0.9048.
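The computation of Example 11.25 takes only a few lines of code; this is a minimal sketch of the component-wise Euler update.

```python
# Euler's method for the system of Example 11.25:
# dy/dt = 2y + 3z, dz/dt = 2y + z, y(0) = 1, z(0) = -1, h = 0.1.

f1 = lambda t, y, z: 2*y + 3*z
f2 = lambda t, y, z: 2*y + z

h, t, y, z = 0.1, 0.0, 1.0, -1.0
y1 = y + h * f1(t, y, z)     # 0.9000, vs exact y(0.1) = e^{-0.1} = 0.9048
z1 = z + h * f2(t, y, z)     # -0.9000, vs exact z(0.1) = -0.9048
```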
Similarly, if the Runge-Kutta method of order 4 is applied to the above two equations, we have:
Runge-Kutta Method of Order 4 for a System of Two Equations
Input: (i) Two functions f1(t, y, z) and f2(t, y, z).
(ii) Step size h
(iii) Initial values y(t0) = α, and z(t0) = β.
(iv) Points t1, t2, · · · , tN where the approximations are sought.
Output: Approximations to y(t1), · · · , y(tN) and z(t1), · · · , z(tN).
For i = 0, 1, · · · , N − 1 do
Step 1. Compute the following Runge-Kutta Coefficients:
k1 = hf1(ti, yi, zi)
l1 = hf2(ti, yi, zi)
k2 = hf1(ti + h/2, yi + k1/2, zi + l1/2)
l2 = hf2(ti + h/2, yi + k1/2, zi + l1/2)
k3 = hf1(ti + h/2, yi + k2/2, zi + l2/2)
l3 = hf2(ti + h/2, yi + k2/2, zi + l2/2)
k4 = hf1(ti + h, yi + k3, zi + l3)
l4 = hf2(ti + h, yi + k3, zi + l3).
Step 2. Compute
yi+1 = yi + (1/6)(k1 + 2k2 + 2k3 + k4)
zi+1 = zi + (1/6)(l1 + 2l2 + 2l3 + l4).
End
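The order-4 Runge-Kutta step for a two-equation system can be sketched as follows, applied again to Example 11.25 (exact solution y = e^−t, z = −e^−t); the function name `rk4_system` is illustrative.

```python
import math

# A sketch of one step of the order-4 Runge-Kutta method for a system of
# two equations, applied to Example 11.25 with h = 0.1.

def rk4_system(f1, f2, t, y, z, h):
    k1, l1 = h*f1(t, y, z), h*f2(t, y, z)
    k2, l2 = (h*f1(t + h/2, y + k1/2, z + l1/2),
              h*f2(t + h/2, y + k1/2, z + l1/2))
    k3, l3 = (h*f1(t + h/2, y + k2/2, z + l2/2),
              h*f2(t + h/2, y + k2/2, z + l2/2))
    k4, l4 = h*f1(t + h, y + k3, z + l3), h*f2(t + h, y + k3, z + l3)
    return (y + (k1 + 2*k2 + 2*k3 + k4)/6,
            z + (l1 + 2*l2 + 2*l3 + l4)/6)

f1 = lambda t, y, z: 2*y + 3*z
f2 = lambda t, y, z: 2*y + z

y1, z1 = rk4_system(f1, f2, 0.0, 1.0, -1.0, 0.1)
# y1 and z1 agree with e^{-0.1} and -e^{-0.1} to about six decimal places,
# far closer than the Euler values 0.9000 and -0.9000.
```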