4
INFINITE SERIES
Thus far in this text, only finite dimensional equations and vector spaces have been encountered. This chapter begins the transition to classes of applications that involve differential equations and their solution spaces, which are infinite dimensional. Before delving into infinite series solution methods for differential equations, a review is made of the theory of infinite series, upon which solution methods presented here and later in the text are based. Certain types of second order ordinary differential equations describe physical systems and arise as a result of separation of variables in partial differential equations, so they deserve to be studied in their own right. Such equations, with constant and polynomial coefficients, can often be solved by power series techniques, a subject of this chapter. Finally, some special classes of functions that arise as solutions of second order ordinary differential equations are studied.
4.1 INFINITE SERIES WHOSE TERMS ARE CONSTANTS
Infinite series play a key role in both theoretical and approximate treatment of differential equations that arise in engineering applications. Consider first the problem of attaching meaning to the sum of an infinite number of terms.
Convergence of Series of Nonnegative Constants
Definition 4.1.1. For a sequence of finite constants {u_i} = {u_1, u_2, ...}, define the nth partial sum as

s_n = Σ_{i=1}^n u_i   (4.1.1)
which establishes a sequence of partial sums {s_n}. A sequence {s_n}, whether partial sums or otherwise, is said to converge to s if, for any number ε > 0, there is an integer M such that if n ≥ M, then |s − s_n| < ε. If the partial sum s_n converges to a finite limit s as n → ∞, denoted

lim_{n→∞} s_n = s = Σ_{i=1}^∞ u_i   (4.1.2)
the infinite series Σ_{i=1}^∞ u_i is said to be a convergent series and to have the value s, called the sum of the series. •

Defining the sum of a convergent infinite series in Eq. 4.1.1 as s, a necessary condition for convergence is that lim_{i→∞} u_i = 0. This condition, however, is not sufficient to guarantee convergence. The partial sums s_n may not converge to a single limit, but may oscillate or diverge to ±∞. In the case u_i = (−1)^{i+1},

s_n = 1, n odd
s_n = 0, n even

This is an oscillating series that does not converge to a limit. For the case u_i = i,
s_n = Σ_{i=1}^n i = n(n + 1)/2   (4.1.3)

so, as n → ∞, lim_{n→∞} s_n = ∞. Whenever the sequence of partial sums approaches ±∞, the infinite series is said to be a divergent series.
Example 4.1.1
For the geometric sequence with r ≥ 0; i.e.,

u_i = ar^i

the partial sums are s_n = Σ_{i=0}^{n−1} ar^i. Note that for n = 1, r^0 = 1, so s_1 = a. For n = 2, s_2 = a + ar = a(1 + r) = a(1 − r²)/(1 − r). Assume, for proof by induction, that s_k = a(1 − r^k)/(1 − r). Then, by definition of the partial sum,

s_{k+1} = s_k + ar^k = a(1 − r^k)/(1 − r) + ar^k = a(1 − r^{k+1})/(1 − r)
Thus, the partial sum of the geometric series Σ_{i=0}^∞ ar^i is

s_n = a(1 − r^n)/(1 − r)   (4.1.4)

which can also be verified by division of the fraction. Taking the limit as n → ∞, for r < 1,

lim_{n→∞} s_n = a/(1 − r)   (4.1.5)

By definition, the geometric series converges for r < 1 and its sum is given by

Σ_{i=0}^∞ ar^i = a/(1 − r)   (4.1.6)

On the other hand, if r ≥ 1, the necessary condition u_i → 0 is not satisfied and the geometric series diverges. •
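Eq. 4.1.4 is easy to check numerically; the following plain-Python sketch uses illustrative values a = 3 and r = 1/2 (not from the text):

```python
# Geometric series: compare the directly accumulated partial sum with the
# closed form s_n = a(1 - r^n)/(1 - r) of Eq. 4.1.4.
a, r = 3.0, 0.5

def partial_sum(n):
    # sum of a * r^i for i = 0 .. n-1
    return sum(a * r**i for i in range(n))

for n in (1, 2, 10, 50):
    closed_form = a * (1 - r**n) / (1 - r)
    assert abs(partial_sum(n) - closed_form) < 1e-12

print(partial_sum(50), a / (1 - r))  # the partial sums approach a/(1 - r) = 6
```

For r ≥ 1 the same loop shows the partial sums growing without bound, in line with the divergence statement above.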
In practice, it is a matter of extreme importance to be able to tell whether a given series is convergent. Several convergence tests are given, starting with the simple comparison test and working up to more complicated but quite sensitive tests. Proofs of validity of these tests may be found in Refs. 7-9. For the present, consider a series of positive terms, a_i > 0.
Theorem 4.1.1 (Comparison Test). If |u_i| ≤ a_i for all but a finite number of i and if the a_i form a convergent series, then the series Σ_{i=1}^∞ u_i is also convergent. If v_i ≥ b_i ≥ 0 for all but a finite number of i and if the b_i form a divergent series, then the series Σ_{i=1}^∞ v_i is also divergent. •

As a convergent series Σ a_i for application of the comparison test, the geometric series of Example 4.1.1 is already available. Once other series are identified as being either convergent or divergent, they may be used as the known series in this comparison test.
Example 4.1.2
Consider u_i = 1/i^i; hence the series

Σ_{i=1}^∞ 1/i^i = 1 + 1/4 + 1/27 + 1/256 + ...

Since |u_i| = (1/i)^i ≤ (1/2)^i for i > 1 and the geometric series of Example 4.1.1 with r = 1/2 converges, this series converges. •
Theorem 4.1.2 (Cauchy Root Test). If a_i ≥ 0 and (a_i)^{1/i} ≤ r < 1 for all sufficiently large i; i.e., if lim_{i→∞} (a_i)^{1/i} < 1, then Σ_{i=1}^∞ a_i is convergent. If (a_i)^{1/i} > 1 for all sufficiently large i; i.e., if lim_{i→∞} (a_i)^{1/i} > 1, then Σ_{i=1}^∞ a_i is divergent. If lim_{i→∞} (a_i)^{1/i} = 1, the test fails to yield any conclusion. •
The first part of this Cauchy root test is verified easily by raising (a_i)^{1/i} ≤ r to the ith power, noting that

a_i ≤ r^i < 1   (4.1.7)

Since r^i is just the ith term in a convergent geometric series, Σ_{i=1}^∞ a_i is convergent by the comparison test. Conversely, if (a_i)^{1/i} > 1, the series must diverge. •

The root test is particularly useful in establishing properties of power series.
Theorem 4.1.3 (Ratio Test). If a_i ≥ 0 and a_{i+1}/a_i ≤ r < 1 for all sufficiently large i; i.e., if lim_{i→∞} a_{i+1}/a_i < 1, then Σ_{i=1}^∞ a_i is convergent. If a_{i+1}/a_i > 1 for all sufficiently large i; i.e., if lim_{i→∞} a_{i+1}/a_i > 1, then Σ_{i=1}^∞ a_i is divergent. If lim_{i→∞} a_{i+1}/a_i = 1, the test fails to yield any conclusion. •
For proof of the Ratio Test, see Refs. 7-9.
Although not quite so sensitive as the root test, the ratio test is one of the easiest to apply and is widely used.
Example 4.1.3

Consider the series Σ_{i=1}^∞ 1/2^i. Using the ratio test, a_{i+1}/a_i = 1/2 < 1 for all i, so the series is convergent. •

Example 4.1.4

Consider the harmonic series Σ_{i=1}^∞ 1/i. Here a_{i+1}/a_i = i/(i + 1) < 1 for all sufficiently large i. Equivalently, lim_{i→∞} a_{i+1}/a_i = lim_{i→∞} 1/(1 + 1/i) = 1. The ratio test thus fails and it cannot be determined whether the series converges or not. •
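The contrast between the two ratio-test outcomes can be seen numerically; a small sketch with illustrative indices:

```python
# Ratio test: for a_i = 1/2^i the ratio a_{i+1}/a_i is exactly 1/2 < 1
# (convergent); for the harmonic terms a_i = 1/i the ratio creeps up
# toward 1, so the test yields no conclusion.
def ratio(a, i):
    return a(i + 1) / a(i)

geometric = lambda i: 1.0 / 2**i
harmonic = lambda i: 1.0 / i

for i in (10, 100, 1000):
    print(i, ratio(geometric, i), ratio(harmonic, i))
```

The geometric ratio stays pinned at 0.5, while the harmonic ratio is below 1 for every i yet has limit 1, which is exactly the inconclusive case of Theorem 4.1.3.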
Theorem 4.1.4 (Integral Test). Let f(x) ≥ 0 be a continuous, monotonically decreasing function; i.e., f(x) ≥ f(y) if y ≥ x. Let a_i = f(i). Then Σ_{i=1}^∞ a_i converges if the integral ∫_1^∞ f(x) dx is finite and it diverges if the integral is infinite. •

Example 4.1.5

For the series of Example 4.1.4, f(i) = 1/i. By the integral test,

∫_1^∞ f(x) dx = ∫_1^∞ (1/x) dx = ln x |_1^∞ = ∞
Thus, the series diverges. Recall that the ratio test failed to yield a conclusion in Example 4.1.4. This suggests that the integral test is more powerful than the ratio test.
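The integral-test comparison can be watched directly: the harmonic partial sums grow like ln n. A brief sketch (the sample values of n are illustrative):

```python
import math

# Harmonic partial sums grow like the integral of 1/x, i.e. like ln n,
# so they exceed any bound and the series diverges.
def harmonic_sum(n):
    return sum(1.0 / i for i in range(1, n + 1))

for n in (10, 1000, 100000):
    print(n, harmonic_sum(n), math.log(n))
# the difference harmonic_sum(n) - ln n tends to Euler's constant 0.5772...
```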
•

Example 4.1.6

The Riemann zeta function is defined as

ζ(p) = Σ_{i=1}^∞ 1/i^p

where p ≥ 0. Define f(x) = x^{−p} and calculate

∫_1^∞ x^{−p} dx = x^{1−p}/(1 − p) |_1^∞, if p ≠ 1
∫_1^∞ x^{−p} dx = ln x |_1^∞, if p = 1

The integral and the series are divergent for p ≤ 1 and convergent for p > 1. •
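The two regimes of the p-series can be sampled numerically; a sketch with illustrative truncation points (the value ζ(2) = π²/6 is a standard fact used only as a check):

```python
import math

# p-series behavior from Example 4.1.6: divergent at p = 1, convergent for
# p = 2, where the sum is zeta(2) = pi^2/6.
def p_sum(p, n):
    return sum(1.0 / i**p for i in range(1, n + 1))

print(p_sum(1.0, 100000))                  # keeps growing with n
print(p_sum(2.0, 100000), math.pi**2 / 6)  # partial sum vs. the limit
```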
Alternating Series
In the foregoing tests, attention has been limited to series with positive terms. Consider next infinite series in which the signs of terms alternate, called alternating series. The partial cancellation that occurs due to alternating signs makes convergence more rapid and much easier to identify.
Theorem 4.1.5 (Leibniz Test). With a_i ≥ 0, form Σ_{i=1}^∞ (−1)^{i+1} a_i. If, beyond some integer N, the a_i are monotonically decreasing; i.e., a_k ≥ a_i for all i > k > N, and if lim_{i→∞} a_i = 0, then the alternating series converges. •
Example 4.1.7
The divergent series in Examples 4.1.4 and 4.1.5 is altered so that the signs of terms alternate; i.e.,

Σ_{i=1}^∞ (−1)^{i+1} 1/i

The term a_i = 1/i is monotonically decreasing for i ≥ 1 and

lim_{i→∞} a_i = 0

Thus the alternating series converges. •

Example 4.1.8
To show that the series Σ_{i=1}^∞ (−1)^i (ln i)/i converges, it may be shown that all of the conditions of the Leibniz test are satisfied, except possibly for the inequality

ln(i + 1)/(i + 1) < (ln i)/i

The simplest way to establish this is to show that f(x) = (ln x)/x is strictly decreasing. This follows, since its derivative is

f'(x) = (1 − ln x)/x² < 0

for all x > e. •

Absolute Convergence
Definition 4.1.2. Given a sequence of terms u_i, in which u_i may vary in sign, if Σ_{i=1}^∞ |u_i| converges, then Σ_{i=1}^∞ u_i is said to be absolutely convergent. A series is said to be conditionally convergent if it converges, but does not converge absolutely. •
The series in Examples 4.1.7 and 4.1.8 converge conditionally, but they do not converge absolutely. The following property of an absolutely convergent series follows from the comparison test.
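This distinction can be sketched numerically: the alternating harmonic series of Example 4.1.7 converges, while the corresponding series of absolute values diverges. (Plain Python; the sample values of n are illustrative, and the limit ln 2 is a standard fact used as a reference value.)

```python
import math

# Conditional convergence: the alternating harmonic partial sums settle
# near ln 2, while the sums of absolute values (harmonic series) keep growing.
def alt_sum(n):
    return sum((-1)**(i + 1) / i for i in range(1, n + 1))

def abs_sum(n):
    return sum(1.0 / i for i in range(1, n + 1))

for n in (100, 10000):
    print(n, alt_sum(n), abs_sum(n))
print(math.log(2))  # limit of the alternating sums, about 0.6931
```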
Theorem 4.1.6. An absolutely convergent series is convergent and

|Σ_{i=1}^∞ u_i| ≤ Σ_{i=1}^∞ |u_i|   •

The tests established for convergence of series with positive terms are immediately applicable as tests for absolute convergence of arbitrary series. The following is a restatement of the ratio test for absolute convergence.
Theorem 4.1.7 (Ratio Test for Absolute Convergence). Let Σ_{i=1}^∞ a_i be a series with nonzero terms. If lim_{i→∞} |a_{i+1}/a_i| < 1, then Σ_{i=1}^∞ a_i converges absolutely. If lim_{i→∞} |a_{i+1}/a_i| > 1, then Σ_{i=1}^∞ a_i is divergent. If lim_{i→∞} |a_{i+1}/a_i| = 1, the test fails to yield any conclusion. •

Note that for the case lim_{i→∞} |a_{i+1}/a_i| > 1, the conclusion is not that the series fails to converge absolutely, but that it actually diverges. Another point to note is that the Ratio Test for Absolute Convergence never establishes convergence of conditionally convergent series.
EXERCISES 4.1
1. Determine whether the following series converge or not:
(a) Σ_{i=2}^∞ 1/ln i

(b) Σ_{i=1}^∞ 1/(2i(2i − 1))

(c) Σ_{i=2}^∞ 1/(i ln i)
2. Determine whether the given series is convergent or divergent by the Cauchy root test.
3. Test for absolute convergence, conditional convergence, or divergence of the following series:

(a) Σ_{i=1}^∞ (−1)^{i−1}/(2i + 3)

(b) Σ_{i=1}^∞ (−1)^{i−1} i⁴/(i + 1)!

(c) Σ_{i=1}^∞ (−1)^i (i ln i)/e^i

(d) Σ_{i=1}^∞ (−1)^i cos(ia)/i²
4. Show by three counterexamples that each of the three conditions of the Leibniz test is needed in the statement of that test; i.e., the alternating of signs, the decreasing nature of a_i, and the limit of a_i being zero.
4.2 INFINITE SERIES WHOSE TERMS ARE FUNCTIONS
The concept of infinite series is next extended to include the possibility that each term ui may be a function of a real variable x; i.e., ui = ui(x). The partial sums thus become functions of the variable x; i.e.,
s_n(x) = Σ_{i=1}^n u_i(x)

The sum of the series, defined as the limit of the partial sums, is also a function of x; i.e.,

Σ_{i=1}^∞ u_i(x) = s(x) = lim_{n→∞} s_n(x)   (4.2.1)
Thus far, attention has been focused on the behavior of partial sums as functions of n. Consider now how convergence depends on the value of x.
Definition 4.2.1. A sequence s_n(x) is said to converge pointwise to a function s(x) that is defined on an interval [a, b] if

lim_{n→∞} s_n(x) = s(x)

for each x ∈ [a, b]. Equivalently, for each ε > 0 and x ∈ [a, b], there exists an integer N, which may depend on ε and x, such that

|s_n(x) − s(x)| < ε

for all n > N. •

Example 4.2.1
The sequence s_n(x) = x^n, x ∈ [0, 1], converges pointwise to s(x) on [0, 1], where s(x) = 0 for x ∈ [0, 1) and s(1) = 1. Note that even though each s_n(x) is continuous, the limit s(x) is not continuous, since it has a discontinuity at x = 1. This elementary example shows that a sequence of continuous functions of a variable x can converge for each value of x to a discontinuous function. •
Uniform Convergence
Definition 4.2.2. If for each ε > 0 there is an integer N that is independent of x in the interval [a, b], such that

|s(x) − s_n(x)| < ε   (4.2.2)

for all n ≥ N and for all x ∈ [a, b], then the sequence {s_n(x)} is said to be uniformly convergent to s(x) in the interval [a, b]. •

This condition is illustrated in Fig. 4.2.1. No matter how small ε is taken to be, N can always be chosen large enough so that the magnitude of the difference between s(x) and s_n(x) is less than ε, for all n > N and all x ∈ [a, b].
[Figure 4.2.1: Uniform Convergence of a Sequence]
Example 4.2.2
To see that the sequence s_n(x) = x^n in Example 4.2.1 is not uniformly convergent in [0, 1], choose ε = 1/2. Let δ > 0 be a small constant, so

|s(1 − δ) − s_n(1 − δ)| = |s_n(1 − δ)| = s_n(1 − δ) = (1 − δ)^n

For any integer N, set

(1 − δ)^N = 3/4

so

δ = 1 − (3/4)^{1/N} > 0

Thus, for any N, it is possible to find an x = 1 − δ ∈ [0, 1] such that

|s(x) − s_N(x)| = 3/4 > 1/2

The sequence thus fails to be uniformly convergent in the interval [0, 1]. •
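This argument can be watched numerically: for every n there is a point where the error of s_n(x) = x^n is exactly 3/4. A sketch (the sample values of n are illustrative):

```python
# For s_n(x) = x^n on [0, 1), the pointwise limit is 0, but the error does
# not shrink uniformly: at x = (3/4)^(1/n) it equals 3/4 for every n.
for n in (1, 10, 100, 10000):
    x = 0.75 ** (1.0 / n)
    print(n, x, x**n)  # x moves toward 1, but x**n stays at 0.75
```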
Example 4.2.3
Consider the series

Σ_{i=1}^∞ u_i(x) = Σ_{i=1}^∞ x/([(i − 1)x + 1][ix + 1])   (4.2.3)

To show that s_n(x) = nx(nx + 1)^{−1}, first note that this expression holds for s_n(x) with n = 1 and 2. For proof by induction, assume that the relation holds for n. It is then to be shown that it holds for n + 1. Since

s_{n+1}(x) = s_n(x) + x/([nx + 1][(n + 1)x + 1])
           = nx/[nx + 1] + x/([nx + 1][(n + 1)x + 1])
           = (n + 1)x/[(n + 1)x + 1]

the proof by induction is complete. Letting n approach infinity, the pointwise convergent sum of the series is

s(x) = lim_{n→∞} s_n(x) = 0, if x = 0
s(x) = lim_{n→∞} s_n(x) = 1, if x ≠ 0   (4.2.4)

Thus, the sum has a discontinuity at x = 0. However, s_n(x) is a continuous function of x, 0 ≤ x ≤ 1, for all finite n. As shown in Example 4.2.2, with any fixed ε, Eq. 4.2.2 must be violated for some finite n, so this series does not converge uniformly. •
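The closed form for the partial sums can be verified numerically; a short sketch (the sample values of x and n are illustrative):

```python
# Telescoping check for Example 4.2.3: the accumulated partial sums of
# x / (((i-1)x + 1)(ix + 1)) match the closed form nx/(nx + 1).
def s_n(x, n):
    return sum(x / (((i - 1) * x + 1) * (i * x + 1)) for i in range(1, n + 1))

for x in (0.0, 0.3, 1.0):
    for n in (1, 5, 50):
        assert abs(s_n(x, n) - (n * x) / (n * x + 1)) < 1e-12

print(s_n(0.0, 5000), s_n(0.3, 5000))  # 0 at x = 0, near 1 for x != 0
```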
The most commonly encountered test for uniform convergence is the Weierstrass M-test.
Theorem 4.2.1 (Weierstrass M-test). If a sequence of nonnegative numbers M_i can be found for which M_i ≥ |u_i(x)|, for all x in the closed interval [a, b], and for which Σ_{i=1}^∞ M_i is convergent, then the series Σ_{i=1}^∞ u_i(x) is absolutely and uniformly convergent for x ∈ [a, b]. •
The proof of the Weierstrass M-test is direct, simple, and instructive. Since Σ_{i=1}^∞ M_i converges, for each ε > 0, there is an N such that for n + 1 ≥ N,

Σ_{i=n+1}^∞ M_i < ε

Then, with |u_i(x)| ≤ M_i, for all x in the interval a ≤ x ≤ b,

Σ_{i=n+1}^∞ |u_i(x)| < ε

By the comparison test, the series Σ_{i=1}^∞ u_i(x) converges to a limit s(x), for each x ∈ [a, b]. Furthermore,

|s(x) − s_n(x)| = |Σ_{i=n+1}^∞ u_i(x)| ≤ Σ_{i=n+1}^∞ |u_i(x)| < ε

for n + 1 > N and all x ∈ [a, b]. Thus Σ_{i=1}^∞ u_i(x) is uniformly convergent in [a, b]. Since absolute values of u_i(x) are used in the statement of the Weierstrass M-test, the series Σ_{i=1}^∞ u_i(x) is also absolutely convergent. •
Example 4.2.4
Consider the series
Σ_{i=0}^∞ e^{−ix} cos ix = 1 + e^{−x} cos x + e^{−2x} cos 2x + ...

for a ≤ x < ∞, where a > 0. Note that

|e^{−ix} cos ix| ≤ e^{−ix} ≤ e^{−ia} ≡ M_i

and the series of constants

Σ_{i=0}^∞ M_i = Σ_{i=0}^∞ (e^{−a})^i

with r = e^{−a} < 1 is a convergent geometric series (see Example 4.1.1). Thus, the series of functions converges absolutely and uniformly in [a, ∞), for any a > 0. •
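The bound used above can be spot-checked numerically; the sketch below samples a = 1/2 and a few points x ≥ a (illustrative choices, not from the text):

```python
import math

# The M-test bound of Example 4.2.4: |e^{-ix} cos ix| <= e^{-ia} for every
# x >= a, a bound independent of x, with sum(e^{-ia}) a geometric series.
a = 0.5  # illustrative left endpoint of the interval

def u(i, x):
    return math.exp(-i * x) * math.cos(i * x)

for x in (0.5, 1.0, 3.0):        # sample points with x >= a
    for i in range(50):
        assert abs(u(i, x)) <= math.exp(-i * a) + 1e-15

print("bound M_i = e^(-i a) holds at all sampled points")
```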
Since the Weierstrass M-test establishes both uniform and absolute convergence, it will necessarily fail for series that are uniformly but not absolutely convergent. A somewhat more delicate test for uniform convergence has been given by Abel [8, 9].
Theorem 4.2.2 (Abel's Test). Let Σ_{i=1}^∞ a_i be convergent and

u_i(x) = a_i f_i(x)   (4.2.5)

where, for fixed x, f_i(x) is monotonic in i, either increasing or decreasing, and f_i(x) ≥ 0 is bounded; i.e., 0 ≤ f_i(x) ≤ M, for all x ∈ [a, b]. Then, Σ_{i=1}^∞ u_i(x) converges uniformly in [a, b]. •

To see why monotonicity in i is important, let a_i = (−1)^i/i, so that Σ_{i=1}^∞ a_i converges. Let f_i(x) = 1 if i is even and f_i(x) = 0 if i is odd on [a, b], which is not monotonic in i. Then, a_i f_i(x) = 1/i if i is even and a_i f_i(x) = 0 if i is odd. Thus, Σ_{i=1}^∞ a_i f_i(x) = Σ_{i even} 1/i diverges.
The Abel test is especially useful in analyzing power series with coefficients a_i.
Example 4.2.5
If Σ_{i=0}^∞ a_i is a convergent series of constants, since |x^i| ≤ M = 1 and x^i is monotonic in i; i.e., x^{i+1} ≤ x^i for x ∈ [0, 1], the power series Σ_{i=0}^∞ a_i x^i converges uniformly for 0 ≤ x ≤ 1. •

Differentiation and Integration of Series
As shown in Examples 4.2.2 and 4.2.3, series Σ_{i=1}^∞ u_i(x) exist for which each u_i(x) is continuous (in fact all their derivatives are continuous) and the series converges at each x to a finite sum s(x), but s(x) is not even continuous. This raises important questions about properties of functions that are defined as infinite series of functions. Many methods of solving differential equations are based on assuming a solution y(x) in the form of an infinite series

y(x) = Σ_{i=0}^∞ a_i φ_i(x)   (4.2.6)

where the functions φ_i(x) are known. The series of Eq. 4.2.6 is formally differentiated; i.e., the order of differentiation and infinite summation are reversed, without theoretical justification, to form

dy/dx = Σ_{i=0}^∞ a_i dφ_i/dx   (4.2.7)

up to the highest order of derivatives that appear in the differential equation.
Equations 4.2.6, 4.2.7, and any higher order derivatives that are needed are then substituted into the differential equation. The coefficients a_i are then determined to satisfy the resulting equation. The a_i obtained are then substituted into Eq. 4.2.6 to yield the hoped-for solution. But, is the function y(x) of Eq. 4.2.6 that is constructed by this formal (nonrigorous) procedure really a solution of the differential equation?

First, the infinite series on the right of Eq. 4.2.6 might diverge. Or, it might converge pointwise, as in Example 4.2.2, to a discontinuous function. Since a discontinuous function does not have a derivative, the function y(x) defined by the infinite series that is constructed cannot possibly be the solution of a differential equation.

Since many methods of solving differential equations use infinite series in a formal approach, similar to that outlined above, great care must be taken to assure that operations with infinite series of functions preserve continuity and differentiability properties that are required of solutions. The following theorem provides valuable tools that will be used in establishing the validity and limits of applicability of infinite series methods for solving differential equations.
Theorem 4.2.3. Let the series Σ_{i=1}^∞ u_i(x) be uniformly convergent on an interval [a, b]. Then,

(a) if the individual terms u_i(x) are continuous, the limit

f(x) = Σ_{i=1}^∞ u_i(x)   (4.2.8)

is also continuous.

(b) if the individual terms u_i(x) are continuous, the series may be integrated term by term and the sum of the integrals is equal to the integral of the sum; i.e.,

∫_a^b f(x) dx = ∫_a^b Σ_{i=1}^∞ u_i(x) dx = Σ_{i=1}^∞ ∫_a^b u_i(x) dx   (4.2.9)

(c) the derivative of the sum f(x) equals the sum of the individual derivatives; i.e.,

(d/dx) f(x) = (d/dx) Σ_{i=1}^∞ u_i(x) = Σ_{i=1}^∞ (d/dx) u_i(x)   (4.2.10)

provided the following conditions are satisfied:

(i) u_i(x) and du_i(x)/dx are continuous in [a, b]

(ii) Σ_{i=1}^∞ du_i(x)/dx is uniformly convergent in [a, b] •
For proof, the reader is referred to a calculus text such as Ref. 8 or 9.
The significance of this theorem should not be underestimated. Presume that the functions u_i(x) in the infinite series of Eq. 4.2.8 have been found by some method of approximation or formal solution. Equation 4.2.8 thus defines a continuous function, under the stated conditions. Since no closed-form formula can be found for the sums of most infinite series, this theorem may be the only tool available to assure that the infinite series constructed really defines the solution being sought. Equations 4.2.9 and 4.2.10, under the conditions stated, permit the interchange of order of infinite summation and integration and differentiation, respectively. Note that the conditions (b) that permit interchange of infinite summation and integral are mild. In contrast, the conditions (c) that permit interchange of infinite summation and derivative are much more demanding. This is a disappointing fact of mathematical life, which must be understood and accepted, since it is this interchange that must be justified in most engineering applications of infinite series.
Example 4.2.6
Consider again the series of Example 4.2.4; i.e.,

f(x) = Σ_{i=0}^∞ e^{−ix} cos ix   (4.2.11)

for 0 < a ≤ x ≤ b < ∞. It was shown in Example 4.2.4 that the series of Eq. 4.2.11 converges uniformly in [a, b]. Since each term in the series is continuous, Eq. 4.2.9 of Theorem 4.2.3 holds. In this case,

∫_a^b f(x) dx = Σ_{i=0}^∞ ∫_a^b e^{−ix} cos ix dx   (4.2.12)

To verify that f(x) has a derivative, which is given by

f'(x) = −Σ_{i=0}^∞ i e^{−ix} (cos ix + sin ix)   (4.2.13)

note first that u_i(x) and du_i(x)/dx are continuous in a ≤ x ≤ b. To show that the series on the right of Eq. 4.2.13 is uniformly convergent, note that

|i e^{−ix} (cos ix + sin ix)| ≤ 2i e^{−ia} ≡ M_i

By the ratio test,

lim_{i→∞} M_{i+1}/M_i = lim_{i→∞} [2(i + 1) e^{−(i+1)a}]/[2i e^{−ia}] = lim_{i→∞} (1 + 1/i) e^{−a} = e^{−a} < 1

Thus the series Σ_{i=1}^∞ M_i converges and, by the Weierstrass M-test, the series on the right of Eq. 4.2.13 is uniformly convergent in [a, b]. Thus, Theorem 4.2.3 assures that f(x) has a derivative and that it is given by Eq. 4.2.13. •
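Eq. 4.2.13 can be spot-checked numerically by comparing the term-by-term derivative with a centered finite difference of the partial sums; the evaluation point x = 1 and the truncation at 200 terms are illustrative choices:

```python
import math

# Spot check of Eq. 4.2.13 at one point: the term-by-term derivative of
# f(x) = sum e^{-ix} cos ix agrees with a centered difference of f itself.
def f(x, terms=200):
    return sum(math.exp(-i * x) * math.cos(i * x) for i in range(terms))

def fprime(x, terms=200):
    return -sum(i * math.exp(-i * x) * (math.cos(i * x) + math.sin(i * x))
                for i in range(terms))

x, h = 1.0, 1e-6
centered = (f(x + h) - f(x - h)) / (2 * h)
print(centered, fprime(x))  # the two values agree closely
```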
Taylor Series
Let a function f(x) have a continuous nth derivative in the interval a ≤ x ≤ b. Integrating this derivative from a to x ≤ b,

∫_a^x f^{(n)}(x₁) dx₁ = f^{(n−1)}(x) − f^{(n−1)}(a)

Integrating again,

∫_a^x ∫_a^{x₂} f^{(n)}(x₁) dx₁ dx₂ = f^{(n−2)}(x) − f^{(n−2)}(a) − (x − a) f^{(n−1)}(a)

Finally, integrating for the nth time,

∫_a^x ··· ∫_a^{x₂} f^{(n)}(x₁) dx₁ ··· dx_n = f(x) − f(a) − (x − a) f'(a) − ((x − a)²/2!) f''(a) − ··· − ((x − a)^{n−1}/(n − 1)!) f^{(n−1)}(a)   (4.2.14)

Note that this expression is exact. No terms have been dropped and no approximations are made. Solving Eq. 4.2.14 for f(x) yields the following fundamental result.
Theorem 4.2.4 (Taylor's Theorem). If f(x) has n continuous derivatives in [a, b], then it is equal to its Taylor expansion

f(x) = f(a) + (x − a) f'(a) + ((x − a)²/2!) f''(a) + ··· + ((x − a)^{n−1}/(n − 1)!) f^{(n−1)}(a) + R_n   (4.2.15)

where the integral form of the remainder R_n is given by

R_n = ∫_a^x ··· ∫_a^{x₂} f^{(n)}(x₁) dx₁ ··· dx_n   (4.2.16)

•
The remainder term of Eq. 4.2.16 may be put into a more useful form by using the Mean Value Theorem of calculus (see Theorem 6.1.1 or Refs. 8 and 9); i.e., if a function g(x) is continuous in [a, x], there is a ξ with a ≤ ξ ≤ x such that

∫_a^x g(x) dx = (x − a) g(ξ)   (4.2.17)

Applying this result to Eq. 4.2.16 and integrating n times yields the Lagrangian form of the remainder

R_n = ((x − a)^n/n!) f^{(n)}(ξ_n)   (4.2.18)

where a ≤ ξ_n ≤ x.

In Taylor's expansion, there are no questions of infinite series convergence, since the series of Eq. 4.2.15 is finite. Thus, the only questions that arise concern the magnitude of the remainder. When the function f(x) is such that

lim_{n→∞} R_n = 0   (4.2.19)

then Eq. 4.2.15 yields the Taylor series

f(x) = Σ_{i=0}^∞ ((x − a)^i/i!) f^{(i)}(a)   (4.2.20)
If the expansion of Eq. 4.2.20 is about the origin (a = 0), Eq. 4.2.20 is known as a Maclaurin series,

f(x) = Σ_{i=0}^∞ (x^i/i!) f^{(i)}(0)   (4.2.21)

or Maclaurin expansion of f(x).

Example 4.2.7
Let f(x) = e^x. Differentiating and evaluating at x = 0, f^{(n)}(0) = e^0 = 1, for all n. The remainder of Eq. 4.2.18 is thus

R_n = (x^n/n!) e^{ξ_n}

For x = 0, R_n = 0. For any given x ≠ 0, let N be an integer greater than |x|. Then, for n > N,

|R_n| ≤ (|x|^n/n!) e^{|x|}

and, since the factor |x|/n is less than 1 for every n > N, lim_{n→∞} R_n = 0. Thus, the Maclaurin series for e^x converges and is the power series

e^x = Σ_{i=0}^∞ x^i/i!   (4.2.22)

This series converges for all x, but it is not uniformly convergent on −∞ < x < ∞. Uniform convergence may be verified for −a ≤ x ≤ a, with any a < +∞, by the Weierstrass M-test. •
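The remainder bound can be watched numerically; the sketch below evaluates the partial sums at the illustrative point x = 2:

```python
import math

# Maclaurin partial sums of e^x (Eq. 4.2.22) with the Lagrange remainder
# bound |R_n| <= (|x|^n / n!) e^{|x|}.
def exp_series(x, n):
    # partial sum through the x^(n-1) term, so the remainder is R_n
    return sum(x**i / math.factorial(i) for i in range(n))

x = 2.0
for n in (5, 10, 20):
    error = abs(exp_series(x, n) - math.exp(x))
    bound = abs(x)**n / math.factorial(n) * math.exp(abs(x))
    assert error <= bound
    print(n, error, bound)
```

The printed errors fall factorially fast, in agreement with the argument that |x|^n/n! → 0 for any fixed x.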
Example 4.2.8
Let f(x) = ln(1 + x). By differentiating,

f'(x) = 1/(1 + x)
f^{(i)}(x) = (−1)^{i−1} (i − 1)! (1 + x)^{−i}

The Maclaurin expansion of Eq. 4.2.21 yields

ln(1 + x) = Σ_{i=1}^n (−1)^{i−1} x^i/i + R_n

In this case, the remainder of Eq. 4.2.18 is

R_n = (x^n/n!) f^{(n)}(ξ_n) = (−1)^{n−1} x^n/(n(1 + ξ_n)^n)

where 0 ≤ ξ_n ≤ x, so that |R_n| ≤ x^n/n for x ≥ 0. The remainder approaches zero as n approaches ∞, provided 0 ≤ x ≤ 1, so the infinite series

ln(1 + x) = Σ_{i=1}^∞ (−1)^{i−1} x^i/i   (4.2.23)

converges for 0 ≤ x ≤ 1. Absolute convergence in −1 < x < 1 is easily established by the ratio test; i.e.,

lim_{i→∞} |u_{i+1}|/|u_i| = lim_{i→∞} (|x|^{i+1}/(i + 1))/(|x|^i/i) = lim_{i→∞} (i/(i + 1)) |x| = |x| < 1

The series is uniformly convergent on −r ≤ x ≤ r, for any 0 ≤ r < 1, since M_i = r^i forms a convergent geometric series and may be used in the Weierstrass M-test. •
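The contrasting convergence rates can be sampled numerically (x = 1/2 and x = 1 with 30 terms are illustrative choices):

```python
import math

# Partial sums of Eq. 4.2.23: fast convergence inside |x| < 1, slow
# convergence at the endpoint x = 1 (the alternating harmonic series).
def log_series(x, n):
    return sum((-1)**(i - 1) * x**i / i for i in range(1, n + 1))

print(log_series(0.5, 30), math.log(1.5))  # agree to many decimals
print(log_series(1.0, 30), math.log(2.0))  # still off near the 2nd decimal
```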
For a function f(x, y) of two variables, the Taylor expansion about the point (a, b) is
f(x, y) = f(a, b) + (x − a) ∂f/∂x + (y − b) ∂f/∂y
+ (1/2!) [ (x − a)² ∂²f/∂x² + 2(x − a)(y − b) ∂²f/∂x∂y + (y − b)² ∂²f/∂y² ] + ···

where the partial derivatives are evaluated at (a, b).
Power Series
A power series is a special and extremely useful type of infinite series, of the form

f(x) = Σ_{i=0}^∞ a_i x^i   (4.2.26)

where the coefficients a_i are constants that are independent of x. This series may readily be tested for convergence by either the root test or the ratio test. If

lim_{i→∞} |a_{i+1}/a_i| = R^{−1}   (4.2.27)

then

lim_{i→∞} |a_{i+1} x^{i+1}/(a_i x^i)| = |x|/R   (4.2.28)

and the series converges for −R < x < R, which is the interval of convergence, and R is called the radius of convergence. Since the root and ratio tests fail when the limit is unity, the endpoints of the interval require special attention.
Example 4.2.9

For the power series Σ_{i=0}^∞ i x^i, a_i = i and the radius of convergence is given by

R^{−1} = lim_{i→∞} |a_{i+1}/a_i| = lim_{i→∞} (i + 1)/i = 1

Thus, the series converges for −1 < x < 1. Note that at both x = −1 and x = 1, the series diverges. •
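The three regimes can be seen by sampling the terms i x^i themselves (i = 100 is an illustrative index):

```python
# Radius of convergence R = 1 for sum i*x^i: the terms i*x^i die out for
# |x| < 1 and blow up for |x| > 1; at x = 1 they do not tend to zero.
def term(i, x):
    return i * x**i

print(term(100, 0.9))  # small: terms decay inside the interval
print(term(100, 1.1))  # huge: the series diverges outside
print(term(100, 1.0))  # equals 100: divergence at the endpoint too
```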
Theorem 4.2.5. For a power series of the form of Eq. 4.2.26 that satisfies Eq. 4.2.28, the following results hold:

(a) Uniform and absolute convergence: The power series is convergent for −R < x < R and it is uniformly and absolutely convergent in any interior interval −r ≤ x ≤ r with 0 < r < R.

(b) Continuity: The function f(x) defined by Eq. 4.2.26 is continuous on any interval −r ≤ x ≤ r with 0 ≤ r < R.
(c) Differentiation and Integration: Derivatives and integrals of f(x) in −r ≤ x ≤ r may be obtained by interchanging the order of derivative and integral and infinite summation in Eq. 4.2.26, for any r with 0 ≤ r < R.
(d) Uniqueness: If

f(x) = Σ_{i=0}^∞ a_i x^i, −R_a < x < R_a   (4.2.29)
     = Σ_{i=0}^∞ b_i x^i, −R_b < x < R_b

with overlapping intervals of convergence that include x = 0, then

a_i = b_i, i = 0, 1, 2, ...   (4.2.30)
(e) Addition of power series: Two power series may be added term by term for every value of x for which both series are convergent.
(f) Multiplication of power series: The product of two power series

A = Σ_{k=0}^∞ a_k x^k = a_0 + a_1 x + ···   (4.2.31)
B = Σ_{k=0}^∞ b_k x^k = b_0 + b_1 x + ···

is the power series

AB = Σ_{n=0}^∞ ( Σ_{k=0}^n a_k b_{n−k} ) x^n = a_0 b_0 + (a_0 b_1 + a_1 b_0) x + ···   (4.2.32)

This series is called the Cauchy product of the series A and B.
•

Each part of Theorem 4.2.5 is proved separately in the following. Part (a) may be proved directly by the Weierstrass M-test, using |a_i x^i| ≤ M_i = |a_i| r^i for |x| ≤ r. By the ratio test,

lim_{i→∞} M_{i+1}/M_i = lim_{i→∞} (|a_{i+1}| r^{i+1})/(|a_i| r^i) = r/R < 1

so Σ_{i=0}^∞ M_i converges, which proves part (a).
Since u_i(x) = a_i x^i is a continuous function of x and f(x) = Σ_{i=0}^∞ a_i x^i converges uniformly for −r ≤ x ≤ r, f(x) is a continuous function in the interval of uniform convergence, which proves part (b).
Since u_i(x) = a_i x^i is continuous and Σ_{i=0}^∞ a_i x^i is uniformly convergent, the differentiated series is a power series of continuous functions and

lim_{i→∞} |u'_{i+1}(x)/u'_i(x)| = lim_{i→∞} |(i + 1) a_{i+1} x^i/(i a_i x^{i−1})| = |x|/R < 1

which has the same radius of convergence as the original series. The new factors that are introduced by differentiation (or integration) do not affect either the root or the ratio test. Therefore, the power series may be differentiated or integrated as often as desired, within the interval of uniform convergence. This proves part (c).
To prove part (d), from Eq. 4.2.29,

Σ_{i=0}^∞ a_i x^i = Σ_{i=0}^∞ b_i x^i   (4.2.33)

on −R < x < R, where R is the smaller of R_a and R_b. Setting x = 0, to eliminate all but the constant terms, Eq. 4.2.33 reduces to a_0 = b_0. Using differentiability of the power series, both sides of Eq. 4.2.33 may be differentiated to obtain

Σ_{i=1}^∞ i a_i x^{i−1} = Σ_{i=1}^∞ i b_i x^{i−1}

Evaluating this result at x = 0 yields a_1 = b_1. By repeating this process i times, a_i = b_i, which shows that the two series coincide. Therefore the power series representation is unique.
Part (e) follows from a rearrangement property of convergent series [8]. Part (f) is verified by multiplying each term of series A by each term of series B in Eq. 4.2.31 and collecting terms of products of the same powers of x, to obtain Eq. 4.2.32. •
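The Cauchy product rule of Eq. 4.2.32 can be exercised on a concrete pair of series; squaring the geometric series is a standard check (the 10-term truncation is an illustrative choice):

```python
# Cauchy product (Eq. 4.2.32): squaring the geometric series 1/(1 - x)
# (all coefficients 1) gives coefficients n + 1, the series for 1/(1-x)^2.
def cauchy_product(a, b):
    n_terms = min(len(a), len(b))
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(n_terms)]

ones = [1.0] * 10
c = cauchy_product(ones, ones)
print(c)  # [1.0, 2.0, ..., 10.0]

x = 0.5
print(sum(cn * x**n for n, cn in enumerate(c)), 1.0 / (1 - x)**2)
```

The truncated product evaluated at x = 1/2 lands near 1/(1 − x)² = 4, with the gap due only to the dropped tail.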
EXERCISES 4.2
1. Find the range of x for uniform convergence of

(a) Σ_{i=1}^∞ (−1)^{i−1}/i^x

(b) Σ_{i=1}^∞ 1/i^x
2. For what range of x is the geometric series Σ_{i=0}^∞ x^i uniformly convergent?
3. For what range of positive values of x is Σ_{i=0}^∞ 1/(1 + x)^i

(a) convergent?

(b) uniformly convergent?
4. Show, by a Taylor expansion about π/2, that sin(π/2 + θ) = cos θ.

5. Show that

(b) cos x = 1 − x²/2! + x⁴/4! − ··· + (−1)^n x^{2n}/(2n)! + (−1)^{n+1} (cos(θx)/(2n + 2)!) x^{2n+2}
6. Are the following series (i) absolutely convergent, (ii) convergent, or (iii) divergent?

(a) Σ_{i=1}^∞ e^{−i}

(b) Σ_{i=1}^∞ (−1)^i i²/(1 + i²)
7. Find the range of uniform convergence of the series Σ_{n=1}^∞ (n²/2^{n−1}) x^{n−1}, indicating carefully the bounds on the range and whether the series converges at the bounds.
8. Consider the series Σ_{i=1}^∞ (−1)^{i−1} (x − 1)^i/i.

(a) For what values of x does the series converge?

(b) For what values of x is the convergence uniform?
9. Find the Maclaurin series of f(x) = 1 I ( 1 + x2 ) (Hint: Substitute -x2 for x).
10. Develop 1 I (a- bx) in powers of ( x- c), where (a- cb) :;i: 0 and b :;i: 0.
11. Find the Maclaurin series of f(x) = tan x (Hint: f′(x) = sec² x).
4.3 POWER SERIES SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS
Consider the general second-order linear homogeneous ordinary differential equation, written in the form

$$y'' + f(x)\, y' + g(x)\, y = 0 \qquad (4.3.1)$$
Definition 4.3.1. A value x0 of x is called an ordinary point of the differential equation of Eq. 4.3.1 if there exists an interval |x − x0| < R in which f(x) and g(x) have convergent power series representations

$$f(x) = \sum_{i=0}^{\infty} a_i (x - x_0)^i, \qquad g(x) = \sum_{i=0}^{\infty} b_i (x - x_0)^i \qquad (4.3.2)$$
Expansions About Ordinary Points
Theorem 4.3.1. If x0 is an ordinary point of Eq. 4.3.1, then the differential equation has a power series solution

$$y(x) = \sum_{k=0}^{\infty} c_k (x - x_0)^k \qquad (4.3.3)$$

that converges in the interval |x − x0| < R.
To prove Theorem 4.3.1, assume, without loss of generality, that x0 = 0. If not, a new independent variable ξ = x − x0 can be introduced and the equation will then have an ordinary point at ξ = 0. The first part of the proof is to carry out manipulations that are actually used in constructing a power series solution. The second part is to prove convergence.
Assume there is a solution that is valid for |x| ≤ r < R, of the form

$$y(x) = \sum_{i=0}^{\infty} c_i x^i \qquad (4.3.4)$$
Formally; i.e., without considering the validity of operations performed or convergence of the series,

$$y'(x) = \sum_{i=1}^{\infty} i\, c_i x^{i-1} = \sum_{j=0}^{\infty} (j+1)\, c_{j+1} x^j \qquad (4.3.5)$$

where j = i − 1,

$$y''(x) = \sum_{i=0}^{\infty} i(i-1)\, c_i x^{i-2} = \sum_{k=0}^{\infty} (k+2)(k+1)\, c_{k+2} x^k \qquad (4.3.6)$$

and k = i − 2. Substituting these series and the series for f(x) and g(x) from Eq. 4.3.2 into Eq. 4.3.1,

$$\sum_{k=0}^{\infty} (k+2)(k+1)\, c_{k+2} x^k + \left( \sum_{i=0}^{\infty} a_i x^i \right) \sum_{j=0}^{\infty} (j+1)\, c_{j+1} x^j + \left( \sum_{i=0}^{\infty} b_i x^i \right) \sum_{i=0}^{\infty} c_i x^i = 0$$
Using Eq. 4.2.32 to expand the products of series and combining like powers of x, this becomes

$$\sum_{n=0}^{\infty} \left\{ (n+2)(n+1)\, c_{n+2} + \sum_{j=0}^{n} \left[ (n-j+1)\, a_j c_{n-j+1} + b_j c_{n-j} \right] \right\} x^n = 0 \qquad (4.3.7)$$
Since the coefficients of the power series on the right of Eq. 4.3.7 are all zero, part (d) of Theorem 4.2.5 implies that the coefficients of all powers of x on the left of Eq. 4.3.7 must be zero. This yields the recurrence formula for cn,
$$(n+2)(n+1)\, c_{n+2} = -\sum_{j=0}^{n} \left[ (n-j+1)\, a_j c_{n-j+1} + b_j c_{n-j} \right] \qquad (4.3.8)$$

for n = 0, 1, 2, .... Expanding these equations for n = 0, 1, and 2 yields

$$c_2 = -\frac{1}{2} \left[ a_0 c_1 + b_0 c_0 \right] \qquad (4.3.9)$$

$$c_3 = -\frac{1}{6} \left[ 2 a_0 c_2 + (a_1 + b_0)\, c_1 + b_1 c_0 \right] \qquad (4.3.10)$$

$$c_4 = -\frac{1}{12} \left[ 3 a_0 c_3 + (2 a_1 + b_0)\, c_2 + (a_2 + b_1)\, c_1 + b_2 c_0 \right] \qquad (4.3.11)$$
The coefficients c0 and c1 can be assigned arbitrarily, yielding two arbitrary constants in the general solution of the second-order differential equation. Equation 4.3.9 determines c2 in terms of c0 and c1. Equation 4.3.10 then determines c3 in terms of c0, c1, and c2, with c2 known from Eq. 4.3.9. Equation 4.3.11 next determines c4 in terms of c0 to c3, with c2 and c3 known from Eqs. 4.3.9 and 4.3.10. This process may be continued, using Eq. 4.3.8 for n = 3, 4, 5, ..., to determine c5, c6, c7, ..., respectively, all in terms of the arbitrary c0 and c1. This process constructs a general solution of the differential equation, provided the formal operations performed on the series; i.e., differentiation, multiplication, addition, etc., can be justified. This part of the proof provides the procedure that is actually carried out to construct power series solutions of differential equations.
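The recurrence of Eq. 4.3.8 translates directly into a short routine. The sketch below is illustrative, not from the text; it generates the cn from given Maclaurin coefficients of f(x) and g(x), and checks the result on y″ + y = 0, whose solution with c0 = 1, c1 = 0 is cos x:

```python
from math import cos

def series_coefficients(a, b, c0, c1, nterms):
    # Eq. 4.3.8: (n+2)(n+1) c_{n+2}
    #   = - sum_{j=0}^{n} [ (n-j+1) a_j c_{n-j+1} + b_j c_{n-j} ]
    # where a, b are the Maclaurin coefficients of f(x) and g(x)
    c = [c0, c1]
    for n in range(nterms - 2):
        s = sum((n - j + 1) * a[j] * c[n - j + 1] + b[j] * c[n - j]
                for j in range(n + 1))
        c.append(-s / ((n + 2) * (n + 1)))
    return c

# y'' + y = 0: f(x) = 0 and g(x) = 1, so a = (0,0,...), b = (1,0,0,...)
N = 25
c = series_coefficients([0.0] * N, [1.0] + [0.0] * (N - 1), 1.0, 0.0, N)
x = 0.5
partial_sum = sum(ci * x**i for i, ci in enumerate(c))
assert abs(partial_sum - cos(x)) < 1e-12
```

For this test case the recurrence collapses to c_{n+2} = −c_n/((n+2)(n+1)), which is exactly the Maclaurin series of cos x when c0 = 1 and c1 = 0.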
The remainder of the proof of Theorem 4.3.1 is the more technical argument that the procedure developed above yields a convergent series in Eq. 4.3.4 and that its sum is the solution of Eq. 4.3.1. Since the power series for f(x) and g(x) in Eq. 4.3.2 are absolutely convergent, $\sum_{i=0}^{\infty} |a_i|\,|x|^i$ and $\sum_{i=0}^{\infty} |b_i|\,|x|^i$ converge for |x| ≤ r < R. Therefore,

$$\lim_{i \to \infty} |a_i|\, r^i = \lim_{i \to \infty} |b_i|\, r^i = 0$$

Hence, there exist constants M and N such that $|a_i| \le M r^{-i}$ and $|b_i| \le N r^{-i}$, for all i. Let K be the larger of M and Nr. Then,

$$|a_i| \le K r^{-i}, \qquad |b_i| \le K r^{-i-1} \qquad (4.3.12)$$
and from Eq. 4.3.9,

$$2\,|c_2| \le |c_1|\,|a_0| + |c_0|\,|b_0|$$

so that |c2| ≤ γ2, where 2γ2 = K(2|c1| + |c0| r⁻¹). Also, from Eqs. 4.3.10, 4.3.11, and 4.3.12,

$$6\,|c_3| \le 2\gamma_2 |a_0| + |c_1|\,|a_1| + |c_1|\,|b_0| + |c_0|\,|b_1| \le 3\gamma_2 K + 2\,|c_1|\, K r^{-1} + |c_0|\, K r^{-2}$$

$$12\,|c_4| \le 3\gamma_3 |a_0| + 2\gamma_2 |a_1| + |c_1|\,|a_2| + \gamma_2 |b_0| + |c_1|\,|b_1| + |c_0|\,|b_2|$$
Therefore, |c3| ≤ γ3 and |c4| ≤ γ4, where

$$6\gamma_3 = K \left( 3\gamma_2 + 2\,|c_1|\, r^{-1} + |c_0|\, r^{-2} \right), \qquad 12\gamma_4 = K \left( 4\gamma_3 + 3\gamma_2 r^{-1} + 2\,|c_1|\, r^{-2} + |c_0|\, r^{-3} \right)$$

Continuing in this way with Eq. 4.3.8, for i ≥ 5, |ci| ≤ γi, where

$$(i-1)\, i\, \gamma_i = K \left[ i\, \gamma_{i-1} + (i-1)\, \gamma_{i-2}\, r^{-1} + \cdots + 2\,|c_1|\, r^{-i+2} + |c_0|\, r^{-i+1} \right] \qquad (4.3.13)$$
Next, write Eq. 4.3.13 with i replaced by i − 1 and multiply by r⁻¹, to obtain

$$(i-2)(i-1)\, \gamma_{i-1}\, r^{-1} = K \left[ (i-1)\, \gamma_{i-2}\, r^{-1} + (i-2)\, \gamma_{i-3}\, r^{-2} + \cdots + 2\,|c_1|\, r^{-i+2} + |c_0|\, r^{-i+1} \right]$$

Subtracting this from Eq. 4.3.13,

$$(i-1)\, i\, \gamma_i - (i-2)(i-1)\, \gamma_{i-1}\, r^{-1} = K\, i\, \gamma_{i-1}$$

so that

$$\frac{\gamma_i}{\gamma_{i-1}} = \frac{i-2}{i\, r} + \frac{K}{i-1}$$

Now consider the series $\sum_{i=0}^{\infty} \gamma_i x^i$. The ratio test implies that this series converges for |x| < r, since

$$\lim_{i \to \infty} \frac{\gamma_i\, |x|}{\gamma_{i-1}} = \frac{|x|}{r} < 1$$
By the comparison test, $\sum_{i=0}^{\infty} c_i x^i$ converges in the same interval, because $|c_i x^i| \le \gamma_i |x|^i$. Since r is any positive number less than R, the series

$$y(x) = \sum_{i=0}^{\infty} c_i x^i$$

converges for |x| < R. Thus, all the operations used to define the ci are valid and Eq. 4.3.4 yields the desired solution.
Example 4.3.1
For the differential equation

$$y'' - y = 0 \qquad (4.3.15)$$

f(x) = 0 and g(x) = −1, so x = 0 is an ordinary point. Writing the assumed solution in the form
$$y(x) = \sum_{i=0}^{\infty} c_i x^i$$

and obtaining, by differentiation,

$$\frac{d^2 y}{dx^2} = \sum_{i=0}^{\infty} i(i-1)\, c_i x^{i-2}$$
The differential equation of Eq. 4.3.15 is then

$$\sum_{i=0}^{\infty} i(i-1)\, c_i x^{i-2} - \sum_{i=0}^{\infty} c_i x^i = 0 \qquad (4.3.16)$$
In order to collect the coefficients of like powers of x, the indices of summation are changed in such a way that exponents of x in the two summations are the same. For this purpose, replace i by i − 2 in the second summation, so it becomes

$$\sum_{i=2}^{\infty} c_{i-2} x^{i-2}$$

and Eq. 4.3.16 becomes

$$\sum_{i=0}^{\infty} i(i-1)\, c_i x^{i-2} - \sum_{i=2}^{\infty} c_{i-2} x^{i-2} = 0$$
Since the first two terms (i = 0, 1) of the first summation are zero, the lower limit of summation may be taken as i = 2 and the summations combined to obtain

$$i(i-1)\, c_i = c_{i-2}, \quad i = 2, 3, \ldots$$

This is the recurrence formula for the ci. It expresses each coefficient ci for which i ≥ 2 as a multiple of the second preceding coefficient c_{i−2}. The general solution may thus be written in terms of the undetermined constants c0 and c1.
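The recurrence of this example can be run out numerically. The even-power series generated from c0 alone is Σ x^{2k}/(2k)! = cosh x, and the odd-power series from c1 alone is sinh x; this identification is not made in the text, but it is easy to verify (the code below is an illustrative sketch):

```python
from math import cosh, sinh

def coeffs(c0, c1, nterms):
    # recurrence of Example 4.3.1: i (i - 1) c_i = c_{i-2}, i >= 2
    c = [c0, c1]
    for i in range(2, nterms):
        c.append(c[i - 2] / (i * (i - 1)))
    return c

x = 0.8
even = coeffs(1.0, 0.0, 30)   # c0 = 1, c1 = 0: even powers only
odd = coeffs(0.0, 1.0, 30)    # c0 = 0, c1 = 1: odd powers only
assert abs(sum(ci * x**i for i, ci in enumerate(even)) - cosh(x)) < 1e-12
assert abs(sum(ci * x**i for i, ci in enumerate(odd)) - sinh(x)) < 1e-12
```

The general solution is then c0 cosh x + c1 sinh x, equivalent to A e^x + B e^{−x}.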
Returning to the differential equation of Eq. 4.3.1, if f(x) or g(x) is not continuous at x0, then x0 cannot be an ordinary point of the differential equation.

Definition 4.3.2. If x0 is not an ordinary point of Eq. 4.3.1, it is called a singular point. If (x − x0) f(x) and (x − x0)² g(x) have power series representations in some interval |x − x0| < R, then x0 is called a regular singular point of the differential equation.
Expansions About Singular Points (Frobenius Method)
In a neighborhood of a regular singular point of Eq. 4.3.1, a series solution of the differential equation can still be obtained by the Frobenius method that is presented next. Assume, without loss of generality, that x0 = 0. First multiply the differential equation of Eq. 4.3.1 by x², to obtain

$$x^2 y'' + x\, F(x)\, y' + G(x)\, y = 0 \qquad (4.3.17)$$

where

$$F(x) = x\, f(x) = \sum_{i=0}^{\infty} a_i x^i, \qquad G(x) = x^2 g(x) = \sum_{i=0}^{\infty} b_i x^i \qquad (4.3.18)$$

and these series converge for |x| < R. Assume a solution of the form

$$y(x) = \sum_{i=0}^{\infty} c_i x^{i+\alpha} \qquad (4.3.19)$$
where α is a constant that is to be determined. Then, by formally differentiating and manipulating,

$$x\, y' = \sum_{i=0}^{\infty} (i+\alpha)\, c_i x^{i+\alpha}, \qquad x^2 y'' = \sum_{i=0}^{\infty} (i+\alpha)(i+\alpha-1)\, c_i x^{i+\alpha}$$
Substituting these results and Eq. 4.3.18 into Eq. 4.3.17,

$$\sum_{i=0}^{\infty} (i+\alpha)(i+\alpha-1)\, c_i x^{i+\alpha} + \left( \sum_{i=0}^{\infty} a_i x^i \right) \sum_{i=0}^{\infty} (i+\alpha)\, c_i x^{i+\alpha} + \left( \sum_{i=0}^{\infty} b_i x^i \right) \sum_{i=0}^{\infty} c_i x^{i+\alpha} = 0$$
Expanding the products of series and combining terms, as in deriving Eq. 4.3.7, and equating coefficients of the various powers of x to zero yields the recurrence formula

$$c_0 \left[ \alpha(\alpha-1) + a_0 \alpha + b_0 \right] = 0 \qquad (4.3.20)$$

$$c_i \left[ (\alpha+i)(\alpha+i-1) + a_0 (\alpha+i) + b_0 \right] + \sum_{j=0}^{i-1} c_j \left[ (\alpha+j)\, a_{i-j} + b_{i-j} \right] = 0, \quad i = 1, 2, \ldots \qquad (4.3.21)$$

Equation 4.3.20 indicates that c0 is arbitrary only if α is a solution of the indicial equation

$$\alpha(\alpha-1) + a_0 \alpha + b_0 = 0 \qquad (4.3.22)$$

This equation is quadratic in α, so it has two solutions that are not necessarily distinct. Let α1 and α2 be the roots of the indicial equation. Three special cases arise, with properties as follows.
Theorem 4.3.2. Let x0 = 0 be a regular singular point for Eq. 4.3.1, with the series in Eq. 4.3.18 converging if |x| < R. Then,

(1) if α1 ≠ α2 and α2 − α1 is not an integer, Eq. 4.3.21 yields coefficients for two series solutions, one associated with α1 and the other with α2; i.e.,

$$y_1 = \sum_{n=0}^{\infty} c_n x^{n+\alpha_1}, \qquad y_2 = \sum_{n=0}^{\infty} \hat{c}_n x^{n+\alpha_2} \qquad (4.3.23)$$

both with radius of convergence R.

(2) if α1 = α2, the method yields only one convergent series solution, with radius of convergence R. However, if one solution is known, another independent solution can be obtained by a substitution that reduces the problem to a linear first-order equation, to be carried out later.

(3) if α2 = α1 + n, where n is an integer, Eq. 4.3.20 yields

$$(\alpha_1 + n)(\alpha_1 + n - 1) + a_0 (\alpha_1 + n) + b_0 = 0 \qquad (4.3.24)$$

This means that the coefficient of cn in Eq. 4.3.21 is zero for α1 and the solution corresponding to the root α1 fails, unless

$$\sum_{j=0}^{n-1} c_j \left[ (\alpha_1 + j)\, a_{n-j} + b_{n-j} \right] = 0 \qquad (4.3.25)$$

In the latter case, cn is arbitrary. Otherwise, the method fails to yield two solutions. One series with a radius of convergence R is obtained, however, using α2.
To prove Theorem 4.3.2, write Eq. 4.3.21 as

$$c_i\, I(\alpha + i) = -\sum_{j=0}^{i-1} c_j \left[ (\alpha+j)\, a_{i-j} + b_{i-j} \right]$$

where

$$I(\alpha) = \alpha(\alpha-1) + a_0 \alpha + b_0 = (\alpha - \alpha_1)(\alpha - \alpha_2)$$

Then, with α = α2,

$$i \left( i - |\alpha_1 - \alpha_2| \right) |c_i| \le |I(\alpha_2 + i)|\, |c_i| \le \sum_{j=0}^{i-1} |c_j| \left[ \left( |\alpha_2| + j \right) |a_{i-j}| + |b_{i-j}| \right]$$

for all i ≥ |α1 − α2|. Let γj = |cj| for j < m, where m is some integer greater than |α1 − α2|. Then, for α2 ≥ α1,

$$m \left( m - |\alpha_1 - \alpha_2| \right) |c_m| \le \sum_{j=0}^{m-1} \gamma_j \left[ \left( |\alpha_2| + j \right) |a_{m-j}| + |b_{m-j}| \right]$$
As in the case of an ordinary point, since the series in Eq. 4.3.18 converge, there exists a constant K such that

$$|a_i| \le K r^{-i}, \qquad |b_i| \le K r^{-i}$$

Then,

$$m \left( m - |\alpha_1 - \alpha_2| \right) |c_m| \le K \sum_{j=0}^{m-1} \gamma_j \left( |\alpha_2| + j + 1 \right) r^{-m+j}$$

and |cm| ≤ γm, where γm is the solution of

$$m \left( m - |\alpha_1 - \alpha_2| \right) \gamma_m = K \sum_{j=0}^{m-1} \gamma_j \left( |\alpha_2| + j + 1 \right) r^{-m+j}$$
Further, for n ≥ m,

$$n \left( n - |\alpha_1 - \alpha_2| \right) |c_n| \le K \sum_{j=0}^{n-1} \gamma_j \left( |\alpha_2| + j + 1 \right) r^{-n+j}$$

and |cn| ≤ γn, where γn is defined by

$$n \left( n - |\alpha_1 - \alpha_2| \right) \gamma_n = K \sum_{j=0}^{n-1} \gamma_j \left( |\alpha_2| + j + 1 \right) r^{-n+j}$$
Replacing n by n − 1 and dividing by r,

$$(n-1) \left( n - 1 - |\alpha_1 - \alpha_2| \right) \gamma_{n-1}\, r^{-1} = K \sum_{j=0}^{n-2} \gamma_j \left( |\alpha_2| + j + 1 \right) r^{-n+j}$$
Subtracting the preceding pair of equations yields

$$n \left( n - |\alpha_1 - \alpha_2| \right) \gamma_n - (n-1) \left( n - 1 - |\alpha_1 - \alpha_2| \right) \gamma_{n-1}\, r^{-1} = K\, \gamma_{n-1} \left( n + |\alpha_2| \right) r^{-1}$$

or,

$$\frac{\gamma_n}{\gamma_{n-1}} = \frac{(n-1) \left( n - 1 - |\alpha_1 - \alpha_2| \right)}{n \left( n - |\alpha_1 - \alpha_2| \right) r} + \frac{K \left( n + |\alpha_2| \right)}{n \left( n - |\alpha_1 - \alpha_2| \right) r}$$
Consider the series $\sum_{i=0}^{\infty} \gamma_i x^i$. This series converges absolutely for |x| < r, by the Ratio Test for Absolute Convergence (Theorem 4.1.7), since

$$\lim_{n \to \infty} \frac{\gamma_n\, |x|}{\gamma_{n-1}} = \frac{|x|}{r} < 1$$

when |x| < r. By the comparison test, $\sum_{i=0}^{\infty} c_i x^i$ converges absolutely in the same interval.
This implies that it converges absolutely for I x I < R, since r is any positive number less than R. This justifies all of the formal operations that were used in deriving the series solution of the differential equation and shows that
$$y(x) = x^{\alpha_2} \sum_{i=0}^{\infty} c_i x^i \qquad (4.3.29)$$

where α2 ≥ α1, is a solution of the differential equation that is valid for |x| < R, and the coefficients ci are determined by the recurrence formula of Eq. 4.3.21.
If α2 − α1 is not zero or a positive integer, then a second independent solution is obtained by the Frobenius method, using the root α1, and the proof of convergence of the series is essentially the same as that for α2.
If α2 − α1 is zero or a positive integer and the method fails to give two independent solutions, let u(x) be a known solution of the differential equation and seek a second solution of the form y(x) = u(x) v(x). Upon substituting this into Eq. 4.3.1, the differential equation that v(x) must satisfy is

$$v'' + \left( \frac{2u'}{u} + f \right) v' = 0$$

This is a first-order linear equation in v′, with the solution

$$v' = A\, u^{-2} e^{-\int f(x)\, dx}$$

Then,

$$y = uv = A\, u \int u^{-2} e^{-\int f(x)\, dx}\, dx + B\, u \qquad (4.3.30)$$

is the general solution of Eq. 4.3.1, which completes the proof of Theorem 4.3.2.
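Equation 4.3.30 can be exercised numerically. The sketch below is my own illustration, not from the text: the helper name and the trapezoidal quadrature are assumptions. It takes the known solution u = e^x of y″ − y = 0 (for which f = 0) and evaluates u∫u⁻²e^{−∫f dx}dx; the exact value of that quadrature is e^x ∫₀^x e^{−2t} dt = (e^x − e^{−x})/2 = sinh x, a second independent solution:

```python
from math import exp, sinh

def second_solution(u, f, x, x0=0.0, n=2000):
    # Evaluate y2(x) = u(x) * ∫_{x0}^{x} u(t)^{-2} exp(-∫_{x0}^{t} f(s) ds) dt
    # (Eq. 4.3.30 with A = 1, B = 0), using the trapezoidal rule
    # for both the inner and outer integrals.
    h = (x - x0) / n
    F = 0.0                          # running value of ∫ f
    I = 0.0                          # running value of ∫ u^{-2} e^{-F}
    prev = u(x0) ** -2 * exp(-F)
    for k in range(1, n + 1):
        t = x0 + k * h
        F += 0.5 * h * (f(t - h) + f(t))
        cur = u(t) ** -2 * exp(-F)
        I += 0.5 * h * (prev + cur)
        prev = cur
    return u(x) * I

# y'' - y = 0 with known solution u = e^x and f = 0:
# y2 = e^x * ∫_0^x e^{-2t} dt = sinh x
y2 = second_solution(lambda t: exp(t), lambda t: 0.0, 1.0)
assert abs(y2 - sinh(1.0)) < 1e-6
```

The trapezoidal error is O(h²), so n = 2000 steps is ample for this tolerance.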
Example 4.3.2
To illustrate the Frobenius method, consider the equation

$$y'' + \left( 1 + \frac{1}{x} \right) y' - \frac{1}{x^2}\, y = 0$$

or,

$$x^2 y'' + x (1 + x)\, y' - y = 0$$
Thus, x0 = 0 is a regular singular point. By direct substitution of

$$y(x) = \sum_{i=0}^{\infty} c_i x^{i+\alpha}$$

into the above differential equation,

$$(\alpha^2 - 1)\, c_0 x^{\alpha} + \sum_{i=1}^{\infty} \left\{ \left[ (\alpha+i)^2 - 1 \right] c_i + (\alpha+i-1)\, c_{i-1} \right\} x^{i+\alpha} = 0$$
Hence, the indicial equation is

$$\alpha^2 - 1 = 0$$

with roots α1 = −1 and α2 = 1, and the recurrence formula for ci is

$$\left[ (\alpha+i)^2 - 1 \right] c_i + (\alpha+i-1)\, c_{i-1} = 0$$

or,

$$(\alpha+i-1) \left[ (\alpha+i+1)\, c_i + c_{i-1} \right] = 0, \quad i \ge 1$$
Since α1 and α2 differ by an integer, a solution of the required type is assured only when α has the larger value (Case 3 of Theorem 4.3.2).
With α2 = 1, the recurrence formula is

$$i \left[ (i+2)\, c_i + c_{i-1} \right] = 0$$

or, since i ≠ 0, for i ≥ 1,

$$c_i = -\frac{c_{i-1}}{i+2}$$
Thus,

$$c_1 = -\frac{c_0}{3}, \quad c_2 = \frac{c_0}{3 \cdot 4}, \quad c_3 = -\frac{c_0}{3 \cdot 4 \cdot 5}, \quad \ldots$$

and the solution corresponding to α2 = 1 is

$$y_1(x) = c_0 \left( x - \frac{x^2}{3} + \frac{x^3}{3 \cdot 4} - \frac{x^4}{3 \cdot 4 \cdot 5} + \cdots \right) = 2 c_0 \sum_{i=1}^{\infty} \frac{(-1)^{i+1}\, x^i}{(i+1)!} = 2 c_0\, \frac{e^{-x} - 1 + x}{x}$$
Using u(x) = (1/x)(e^{−x} − 1 + x) in Eq. 4.3.30, with f(x) = 1 + 1/x, yields the general solution

$$y(x) = A\, \frac{e^{-x} - 1 + x}{x} \int \frac{x\, e^{-x}}{\left( e^{-x} - 1 + x \right)^2}\, dx + B\, \frac{e^{-x} - 1 + x}{x}$$
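The closed form obtained in this example can be checked against the Frobenius series directly. The sketch below is illustrative, not from the text; it sums the series built from ci = −c_{i−1}/(i+2) and compares the partial sum with 2c0(e^{−x} − 1 + x)/x:

```python
from math import exp

def y1_series(x, nterms=30, c0=1.0):
    # Frobenius series with alpha = 1: y1 = sum_i c_i x^{i+1},
    # where c_i = -c_{i-1} / (i + 2)
    c, total = c0, 0.0
    for i in range(nterms):
        total += c * x ** (i + 1)
        c = -c / (i + 3)          # next coefficient c_{i+1}
    return total

def y1_closed(x, c0=1.0):
    return 2.0 * c0 * (exp(-x) - 1.0 + x) / x

for x in (0.3, 1.0, 2.5):
    assert abs(y1_series(x) - y1_closed(x)) < 1e-12
```

Thirty terms suffice here because the coefficients decay factorially, like 2/(i+2)!.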
• EXERCISES 4.3
1. For each of the following differential equations, obtain a general solution that is representable by a Maclaurin series:

(a) $\dfrac{d^2 y}{dx^2} + 2y = 0$

(b) $x^2 \dfrac{d^2 y}{dx^2} - x \dfrac{dy}{dx} + y = 0$
2. Use the Frobenius method to obtain the general solution of each of the following differential equations, near x = 0:

(a) $x^2 \dfrac{d^2 y}{dx^2} + x \dfrac{dy}{dx} + \left( x^2 - \dfrac{1}{4} \right) y = 0$

(b) $x (1 - x) \dfrac{d^2 y}{dx^2} - 2 \dfrac{dy}{dx} + 2y = 0$

(c) $\dfrac{dy}{dx} + 4 x^3 y = 0$
3. For each of the equations in Exercise 2, give the largest interval inside which convergence of the Frobenius solution is guaranteed.
4. Determine two values of the constant α for which all solutions of the equation

$$x \frac{d^2 y}{dx^2} + (x - 1) \frac{dy}{dx} - \alpha y = 0$$

are regular at x = 0. Obtain the general solution in each of these cases.
5. Solve the following differential equation, using the power series method:
$$y'' + x^2 y = 0$$
Find only the first eight terms. What is the radius of convergence?
6. Derive and solve the indicial equation of the Frobenius method for solving the differential equation

$$x^2 y'' + (e^x \sin 3x)\, y' - 3 y \cos 3x = 0$$
4.4 INTRODUCTION TO SPECIAL FUNCTIONS
As will be seen later in treating problems that are governed by partial differential equations, a few second order ordinary differential equations arise again and again. It is efficient, therefore, to construct their solutions once and for all. These solutions are called special functions of mathematical physics and are studied in a rather extensive literature in their own right [10]. As a simple illustration, consider the second order differential equation with constant coefficients,

$$\frac{d^2 y}{dx^2} + \omega^2 y = 0$$

The general solution is

$$y(x) = A \sin \omega x + B \cos \omega x$$

which could be obtained by power series methods that define power series expansions of the sine and cosine functions. Thus, trigonometric functions could be viewed as the most common of the special functions.
Legendre Polynomials
Another important second order equation is the Legendre equation

$$\left[ (1 - x^2)\, y' \right]' + \lambda y = 0 \qquad (4.4.1)$$

which can be written as

$$y'' - \frac{2x}{1 - x^2}\, y' + \frac{\lambda}{1 - x^2}\, y = 0 \qquad (4.4.2)$$
Note that x = 0 is an ordinary point, while x = ±1 are regular singular points. The indicial equation at either of the singular points is α² = 0, so that α1 = α2 = 0. Hence, a power series solution can be obtained that is valid in a neighborhood of either of the singularities. However, these solutions are not finite at the other singularity, unless λ has certain values. These values of λ can be determined by seeking solutions that are valid near the origin as series in powers of x. Hence, assume

$$y(x) = \sum_{i=0}^{\infty} c_i x^i$$

Differentiating and substituting in the differential equation,

$$(1 - x^2) \sum_{i=2}^{\infty} i(i-1)\, c_i x^{i-2} - 2x \sum_{i=1}^{\infty} i\, c_i x^{i-1} + \lambda \sum_{i=0}^{\infty} c_i x^i = 0$$

or

$$\sum_{m=0}^{\infty} \left[ \lambda c_m + (m+2)(m+1)\, c_{m+2} \right] x^m - \sum_{m=0}^{\infty} 2m\, c_m x^m - \sum_{m=0}^{\infty} m(m-1)\, c_m x^m = 0$$
Setting the coefficients of the various powers of x equal to zero, the recurrence formula is

$$c_{m+2} = \frac{m(m+1) - \lambda}{(m+2)(m+1)}\, c_m, \quad m = 0, 1, 2, \ldots$$
Setting c1 = 0 with c0 ≠ 0 yields a series in even powers of x. Setting c0 = 0 with c1 ≠ 0 yields a series in odd powers of x. These solutions are linearly independent. Therefore, a linear combination of them will be the general solution of the differential equation. If λ ≠ n(n+1) for any nonnegative integer n, these solutions will be infinite series that converge for |x| < 1. However, it can be shown that they diverge at x = ±1 [10]. If finite solutions are sought at x = ±1, λ must equal n(n+1), for some nonnegative integer n. In this case, either the series in even powers or the one in odd powers terminates, depending on whether n is even or odd, giving a polynomial solution. If c0 or c1 is adjusted so that the solution takes on the value 1 at x = 1, the Legendre polynomials Pn(x) are obtained, the first four of which are P0(x) = 1, P1(x) = x, P2(x) = (3x² − 1)/2, and P3(x) = (5x³ − 3x)/2. For general n,

$$P_n(x) = \sum_{i=0}^{K} \frac{(-1)^i (2n-2i)!}{2^n\, i!\, (n-i)!\, (n-2i)!}\, x^{n-2i}$$

where K = n/2 or K = (n−1)/2, depending on whether n is even or odd. The definition of Pn(x) may be restated as the Rodrigues formula

$$P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n} \sum_{i=0}^{n} \frac{(-1)^i\, n!}{i!\, (n-i)!}\, x^{2n-2i} = \frac{1}{2^n n!} \frac{d^n}{dx^n} \left( x^2 - 1 \right)^n$$
The Legendre polynomials are solutions of the Legendre equation,

$$(1 - x^2)\, y'' - 2x y' + n(n+1)\, y = 0 \qquad (4.4.3)$$

Since they are polynomials, they are finite in any finite interval. They are, therefore, solutions of the equation

$$\left[ (1 - x^2)\, y' \right]' + \lambda y = 0$$

with y(1) and y(−1) finite, for λn = n(n+1).
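The termination argument above is directly computable. The sketch below is illustrative (the function names are my own); it builds Pn from the recurrence c_{m+2} = [m(m+1) − n(n+1)]c_m/((m+2)(m+1)), normalizes so that Pn(1) = 1, and compares with the closed forms quoted above:

```python
def legendre_coeffs(n):
    # With lambda = n(n+1) the recurrence terminates; start from the
    # even series (c0 = 1) or the odd series (c1 = 1) as n dictates,
    # then scale so that P_n(1) = 1.
    c = [0.0] * (n + 1)
    c[n % 2] = 1.0
    for m in range(n % 2, n - 1, 2):
        c[m + 2] = (m * (m + 1) - n * (n + 1)) * c[m] / ((m + 2) * (m + 1))
    scale = sum(c)               # polynomial value at x = 1
    return [ci / scale for ci in c]

def P(n, x):
    return sum(ci * x**i for i, ci in enumerate(legendre_coeffs(n)))

# Compare with the closed forms quoted in the text
assert abs(P(2, 0.5) - (3 * 0.5**2 - 1) / 2) < 1e-12
assert abs(P(3, -0.7) - (5 * (-0.7)**3 - 3 * (-0.7)) / 2) < 1e-12
```

Because the series terminates, no truncation is involved: the coefficients are exact up to floating-point rounding.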
Bessel Functions
A third important second order differential equation is Bessel's equation,

$$y'' + \frac{1}{x}\, y' + \left( 1 - \frac{\nu^2}{x^2} \right) y = 0 \qquad (4.4.4)$$

which may be written as

$$x^2 y'' + x y' + (x^2 - \nu^2)\, y = 0 \qquad (4.4.5)$$

It has a regular singular point at x = 0. If a series solution is sought that is valid in a neighborhood of x = 0, the indicial equation is α² − ν² = 0. The roots are α1 = −ν and α2 = ν. Three distinct cases arise, as follows:
Case 1: If ν is not an integer in Eq. 4.4.5, two independent series solutions exist, of the forms

$$y_1 = \sum_{i=0}^{\infty} c_i x^{i+\nu}, \qquad y_2 = \sum_{i=0}^{\infty} \hat{c}_i x^{i-\nu} \qquad (4.4.6)$$
Substituting y1 into the differential equation,

$$\sum_{i=0}^{\infty} \left[ (i+\nu)^2 - \nu^2 \right] c_i x^{i+\nu} + \sum_{i=0}^{\infty} c_i x^{i+\nu+2} = 0$$

The coefficient of x^{ν+1} is (1 + 2ν)c1. Since this must be zero, c1 = 0 if ν ≠ −1/2. The other coefficients are determined by the recurrence formula

$$c_{i+2} = -\frac{c_i}{(i+2)(i+2+2\nu)}$$

Since c1 = 0, all coefficients with odd subscripts are also zero. With c0 arbitrary, the coefficients with even subscripts are determined in terms of c0 as

$$c_{2n} = \frac{(-1)^n\, c_0}{2^{2n}\, n!\, (\nu+1)(\nu+2) \cdots (\nu+n)}$$
Thus, a solution of Bessel's equation is

$$J_\nu(x) = x^{\nu} \sum_{n=0}^{\infty} c_{2n}\, x^{2n} = x^{\nu} \sum_{n=0}^{\infty} \frac{(-1)^n\, c_0\, x^{2n}}{2^{2n}\, n!\, (\nu+1)(\nu+2) \cdots (\nu+n)} \qquad (4.4.7)$$

where c0 is defined in the literature as c0 = [2^ν Γ(ν+1)]⁻¹ and Γ is the Gamma function [10]. Even with arbitrary c0, Eq. 4.4.7 yields a solution of Bessel's equation. This is called the Bessel function of the first kind of order ν.
If ν is not an integer and ν ≠ 1/2, another solution, corresponding to the root α1 = −ν, is obtained by replacing ν with −ν in Eq. 4.4.7; i.e.,

$$J_{-\nu}(x) = x^{-\nu} \sum_{n=0}^{\infty} \frac{(-1)^n\, \hat{c}_0\, x^{2n}}{2^{2n}\, n!\, (-\nu+1)(-\nu+2) \cdots (-\nu+n)} \qquad (4.4.8)$$

with ĉ0 = [2^{−ν} Γ(−ν+1)]⁻¹. This is the Bessel function of the first kind of order −ν. These two solutions are linearly independent, so the general solution is

$$y = A\, J_\nu(x) + B\, J_{-\nu}(x)$$

where A and B are arbitrary constants.
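With c0 = [2^ν Γ(ν+1)]⁻¹, each term of Eq. 4.4.7 collapses to (−1)^n (x/2)^{2n+ν}/(n! Γ(ν+n+1)). The sketch below is illustrative, not from the text; it sums this form and checks it against the standard closed form J_{1/2}(x) = √(2/(πx)) sin x, the identity underlying Exercise 2 below:

```python
from math import gamma, pi, sin, sqrt

def J(nu, x, nterms=25):
    # Partial sum of Eq. 4.4.7 with the standard normalization
    # c0 = 1 / (2^nu * Gamma(nu + 1)), i.e.
    # J_nu(x) = sum_n (-1)^n (x/2)^{2n+nu} / (n! Gamma(nu + n + 1))
    total = 0.0
    for n in range(nterms):
        total += (-1) ** n * (x / 2.0) ** (2 * n + nu) \
                 / (gamma(n + 1) * gamma(nu + n + 1))
    return total

# Closed form for order 1/2: J_{1/2}(x) = sqrt(2/(pi x)) sin x
for x in (0.5, 1.0, 3.0):
    assert abs(J(0.5, x) - sqrt(2.0 / (pi * x)) * sin(x)) < 1e-12
```

The series converges rapidly for moderate x because of the n! Γ(ν+n+1) factor in the denominator, so 25 terms is far more than needed here.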
Case 2: If ν = 0, then the solutions of Eqs. 4.4.7 and 4.4.8 are the same.
Case 3: If ν = m, m = 1, 2, 3, ..., then α2 − α1 = 2m is an integer and the situation arises in which the Frobenius method with α = α1 may not yield a solution. The recurrence formula is

$$b_k + (k+2)(k+2-2m)\, b_{k+2} = 0$$

When k = 2m − 2, this equation cannot be used to determine b_{2m}. However, suppose b0 = 0. Then, b0 = b2 = ... = b_{2m−2} = 0 and the recurrence formula for k = 2m − 2 is satisfied for arbitrary b_{2m}. Hence,

$$b_{k+2} = -\frac{b_k}{(k+2)(k+2-2m)}, \quad k = 2m, 2m+2, \ldots$$

From this,

$$b_{2m+2n} = \frac{(-1)^n\, b_{2m}}{2^{2n}\, n!\, (m+1)(m+2) \cdots (m+n)}$$
and a solution of Eq. 4.4.5 is

$$y = x^{-m}\, b_{2m} \sum_{n=0}^{\infty} \frac{(-1)^n\, x^{2m+2n}}{2^{2n}\, n!\, (m+1)(m+2) \cdots (m+n)} \qquad (4.4.9)$$
But this solution is proportional to Jm(x). Therefore, if ν = 1, 2, 3, ..., no new independent solution is obtained.
EXERCISES 4.4
1. Find the bounded solution J0(x) of the zeroth-order Bessel equation

$$x y'' + y' + x y = 0$$

that is valid at the origin. Show that the other solution Y0(x) becomes unbounded of the form (2/π) ln x as x approaches zero.
2. Show that the Frobenius method yields two independent solutions of Bessel's equation of order 1/2 that are valid near the origin, even though the roots of the indicial equation differ by an integer. Show that the general solution can be written as y = (A sin x + B cos x)/√x.
3. Show that the Frobenius method fails to give a nontrivial solution of the equation

$$x^3 y'' + x^2 y' + y = 0$$

that is valid near the origin. Note that the singularity at x = 0 is not a regular singularity.
4. Find the general solution of the Legendre differential equation

$$(1 - x^2)\, y'' - 2x y' + n(n+1)\, y = 0$$

with n a nonnegative integer, that is valid near x = 1. Show that it becomes unbounded with the form of a constant times ln|x − 1| as x approaches 1.