  • 7/28/2019 Lecture Aid 2012


    APM2B10

    Introduction to Numerical Analysis

    Lecture aid

    K.D. Anderson

    October 24, 2012


    Contents

2 Series expansion

3 Nonlinear equations

4 Systems of linear equations

5 Approximation methods

6 Numerical Differentiation

7 Numerical Integration (Quadrature)


2 Series expansion

Taylor's theorem

We want to derive Taylor's theorem from the fundamental theorem of calculus, i.e.

    f(b) = f(a) + \int_a^b f'(x) \, dx.    (2.1)

This is done by repeated integration by parts on the integral \int_a^b f'(x) \, dx, for which the formula is

    \int u \, dv = uv - \int v \, du.    (2.2)

Let u = f'(x) and dv = dx. Then

    du = f''(x) \, dx,    v = x,

and substitute into equation (2.2) to obtain

    \int_a^b f'(x) \, dx = x f'(x) \Big|_a^b - \int_a^b x f''(x) \, dx
                         = b f'(b) - a f'(a) - \int_a^b x f''(x) \, dx.

We can now reapply the fundamental theorem of calculus to the term f'(b) in the form

    f'(b) = f'(a) + \int_a^b f''(x) \, dx

to obtain

    \int_a^b f'(x) \, dx = b \left[ f'(a) + \int_a^b f''(x) \, dx \right] - a f'(a) - \int_a^b x f''(x) \, dx
                         = (b - a) f'(a) + \int_a^b (b - x) f''(x) \, dx,

and substitution of this back into equation (2.1) yields

    f(b) = f(a) + (b - a) f'(a) + \int_a^b (b - x) f''(x) \, dx.

Repetition of this process yields Taylor's theorem as it is shown in the textbook.
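For reference, carrying the integration by parts through n times gives the integral (remainder) form of Taylor's theorem, which is the statement the repetition leads to:

```latex
f(b) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(b-a)^k
     + \frac{1}{n!}\int_a^b (b-x)^n f^{(n+1)}(x)\,dx
```

The case n = 1 is exactly the last equation above.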


    Taylor series examples

Students should be somewhat familiar with the methodology of obtaining Taylor series expansions of functions. We list some examples which illustrate this methodology.

Example 1. Find the Taylor series expansion of ln(1 - x) centered around x_0 = 0.

Solution: If we let f(x) = ln(1 - x), then we obtain the following derivatives and their values at x_0 = 0:

    f(x) = \ln(1 - x)                          f(0) = 0
    f'(x) = -(1 - x)^{-1}                      f'(0) = -1
    f''(x) = -(1 - x)^{-2}                     f''(0) = -1
    f'''(x) = -2(1 - x)^{-3}                   f'''(0) = -2
    f^{(4)}(x) = -2 \cdot 3 (1 - x)^{-4}       f^{(4)}(0) = -2 \cdot 3
    ...
    f^{(n)}(x) = -(n-1)! \, (1 - x)^{-n}       f^{(n)}(0) = -(n-1)!

We thus obtain

    f(x) = \frac{f(0)}{0!}(x - 0)^0 + \frac{f'(0)}{1!}(x - 0)^1 + \frac{f''(0)}{2!}(x - 0)^2 + \frac{f'''(0)}{3!}(x - 0)^3 + \ldots
         = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}(x - 0)^n
         = \sum_{n=1}^{\infty} \frac{-(n-1)!}{n!} x^n
         = -\sum_{n=1}^{\infty} \frac{x^n}{n}

(the n = 0 term vanishes since f(0) = 0).

For the representation to be valid (or for the series to converge), we require \lim_{n \to \infty} R_n = 0 and thus we investigate the general term a_n of the infinite sum, where

    a_n = -\frac{x^n}{n}.

It follows that

    \left| \frac{a_n}{a_{n-1}} \right| = \left| \frac{x^n}{n} \cdot \frac{n-1}{x^{n-1}} \right| = \frac{n-1}{n} \, |x|,

for which

    \left| \frac{a_n}{a_{n-1}} \right| \to |x| \quad \text{as} \quad n \to \infty.


Thus we require |x| < 1 for the infinite series to converge, and only when we have this convergence are we actually allowed to represent the function as an infinite series.

Example 2. Given the function

    g(x) = \sin x,

complete the following instructions:

(a) Find the Taylor series expansion about x_0 = 0, and

(b) find the radius and interval of convergence.

Solution: (a) Given g(x) and x_0 = 0, we obtain the following:

    g(x) = \sin x           g(0) = 0
    g'(x) = \cos x          g'(0) = 1
    g''(x) = -\sin x        g''(0) = 0
    g'''(x) = -\cos x       g'''(0) = -1
    g^{(4)}(x) = \sin x     g^{(4)}(0) = 0
    ...

We see that

    g^{(n)}(x) = \pm\sin x \text{ for } n \text{ even}, \quad \pm\cos x \text{ for } n \text{ odd},

and

    g^{(n)}(0) = 0 \text{ for } n \text{ even}, \quad (-1)^{(n-1)/2} \text{ for } n \text{ odd}.

Using this information and the generic formula

    g(x) = \sum_{n=0}^{\infty} \frac{g^{(n)}(0)}{n!} (x - x_0)^n

we obtain

    g(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!} g^{(n)}(0)
         = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \ldots
         = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{(2n+1)!}

(b) It follows that

    a_n = (-1)^n \frac{x^{2n+1}}{(2n+1)!}

and therefore

    \frac{a_n}{a_{n-1}} = (-1)^n \frac{x^{2n+1}}{(2n+1)!} \cdot \frac{(2(n-1)+1)!}{(-1)^{n-1} x^{2(n-1)+1}}
                        = -\frac{x^{2n+1}}{x^{2n-1}} \cdot \frac{(2n-1)!}{(2n+1)!}
                        = -x^2 \cdot \frac{2 \cdot 3 \cdots (2n-1)}{2 \cdot 3 \cdots (2n-1)(2n)(2n+1)}
                        = -\frac{x^2}{2n(2n+1)}


Of course, convergence requires that \lim_{n \to \infty} |a_n / a_{n-1}| < 1, and thus as n \to \infty we have that x^2 / (2n(2n+1)) \to 0, for all x.

It follows that the radius of convergence is infinity, i.e. R_c = \infty, and the interval of convergence is the entire real line, i.e. I_c = \mathbb{R}.

Example 3. Find the radius and interval of convergence of the series expansion of ln(x) about x_0 = 1.

Solution: It is up to the student to verify that the series expansion of ln(x) is given by

    \ln(x) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1} (x-1)^n}{n}.

For convergence, we use the limit test and hence we let

    a_n = \frac{(-1)^{n+1} (x-1)^n}{n}

so that we may calculate |a_n / a_{n-1}|, i.e.

    \frac{a_n}{a_{n-1}} = \frac{(-1)^{n+1} (x-1)^n}{n} \cdot \frac{n-1}{(-1)^{(n-1)+1} (x-1)^{n-1}}
                        = \frac{(-1)^{n+1}}{(-1)^n} \cdot \frac{(x-1)^n}{(x-1)^{n-1}} \cdot \frac{n-1}{n}
                        = -(x-1) \, \frac{n-1}{n}.

Now of course, it follows that

    \lim_{n \to \infty} \left| \frac{a_n}{a_{n-1}} \right| = \lim_{n \to \infty} \left| (x-1) \, \frac{n-1}{n} \right|
                                                           = |x-1| \lim_{n \to \infty} \frac{n-1}{n}
                                                           = |x-1| \lim_{n \to \infty} \left( 1 - \frac{1}{n} \right)
                                                           = |x-1|.

We require that |x-1| < 1 for convergence, or that 0 < x < 2. Hence the radius of convergence is R_c = 1 and the interval of convergence is I_c = (0, 2). Note: What about convergence at the endpoints of the interval?

Example 4. The following problem is number 11 on page 15 of Numerical Analysis by RL Burden & JD Faires (9th edition).

11. Let f(x) = 2x \cos(2x) - (x-2)^2 and x_0 = 0.

(a) Find the third Taylor polynomial P_3(x), and use it to approximate f(0.4).


(b) Use the error formula in Taylor's Theorem to find an upper bound for the error |f(0.4) - P_3(0.4)|. Compute the actual error.

(c) Find the fourth Taylor polynomial P_4(x), and use it to approximate f(0.4).

(d) Use the error formula in Taylor's Theorem to find an upper bound for the error |f(0.4) - P_4(0.4)|. Compute the actual error.

Note that the error term they refer to is the Lagrange estimate of the error term as in your notes.

Solution: (a) The derivatives and their values at x_0 = 0 are as follows:

    f(x) = 2x \cos(2x) - (x-2)^2                    f(0) = -4
    f'(x) = 2\cos(2x) - 4x \sin(2x) - 2(x-2)        f'(0) = 6
    f''(x) = -8\sin(2x) - 8x \cos(2x) - 2           f''(0) = -2
    f'''(x) = 16x \sin(2x) - 24\cos(2x)             f'''(0) = -24

We thus obtain

    P_3(x) = f(0) + x f'(0) + \frac{x^2}{2!} f''(0) + \frac{x^3}{3!} f'''(0)
           = -4 + 6x - x^2 - 4x^3

and it follows that P_3(0.4) = -2.016.

(b) The Lagrange estimate of the error is given by the formula

    R_n = \frac{(x - x_0)^{n+1}}{(n+1)!} f^{(n+1)}(\bar{x}),

where x_0 < \bar{x} < x. For this particular problem, we have n = 3, x_0 = 0 and x = 0.4. Thus we obtain

    R_3 = \frac{x^4}{4!} f^{(4)}(\bar{x}),

with 0 < \bar{x} < 0.4. The reason we haven't substituted x = 0.4 into the equation for the estimate of the error shall become clear when we try to find a bound on the absolute value of R_3. Before we get there, we calculate the fourth derivative of f(x), i.e.

    f^{(4)}(x) = 64\sin(2x) + 32x\cos(2x),

and it follows that

    |R_3| = \left| \frac{x^4}{4!} \left[ 64\sin(2\bar{x}) + 32\bar{x}\cos(2\bar{x}) \right] \right|
          = \left| \frac{x^4 \left[ 64\sin(2\bar{x}) + 32\bar{x}\cos(2\bar{x}) \right]}{24} \right|
          = \left| \frac{4x^4 \left[ 2\sin(2\bar{x}) + \bar{x}\cos(2\bar{x}) \right]}{3} \right|


We apply the inequality |a + b| \le |a| + |b| to the last equation to obtain

    |R_3| \le \frac{8x^4}{3} |\sin(2\bar{x})| + \frac{4x^4\bar{x}}{3} |\cos(2\bar{x})|.

Of course |\sin(2\bar{x})| \le 1 and |\cos(2\bar{x})| \le 1 for all \bar{x}, and since \bar{x} < x we can then write the last equation as

    |R_3| \le \frac{8x^4}{3} |\sin(2\bar{x})| + \frac{4x^4\bar{x}}{3} |\cos(2\bar{x})| \le \frac{8x^4}{3} + \frac{4x^5}{3}

and thus

    |R_3| \le \frac{8x^4}{3} + \frac{4x^5}{3}.

We evaluate this inequality at x = 0.4 to obtain |R_3| \le 0.08192, which is an upper bound on the error. However, the book from which the problem is taken states this upper bound as |R_3| \le 0.05849. Why the difference? We investigate the graph of R_3(x) = \frac{x^4}{24} [64\sin(2x) + 32x\cos(2x)] on the interval [0, 0.4] given by figure 1.

Figure 1: Graph of |R_3(x)| = |\frac{x^4}{24} [64\sin(2x) + 32x\cos(2x)]| versus the bound \frac{8x^4}{3} + \frac{4x^5}{3}.

At x = 0.4 we see from the figure that the actual error term is smaller than the estimate we obtained, even though we used an algebraically sound derivation to estimate the error. The conclusion should then be that we do obtain

some upper bound estimate of the error, even though it might not be the smallest estimate for an upper bound. The reason behind this is the \bar{x} term in the remainder. Theoretically, we know that it lies in the interval (0, 0.4); however, we don't ever determine its actual value in this interval, and hence the estimate. If we were to determine the actual value of \bar{x} in the interval, it would be the same as approximating the actual function via the Taylor polynomial exactly, i.e. the Taylor polynomial and Taylor series agree everywhere on the interval.

(c) We have seen that

    f^{(4)}(x) = 64\sin(2x) + 32x\cos(2x)

and thus f^{(4)}(0) = 0. Thus the fourth Taylor polynomial is exactly the same as the third Taylor polynomial, i.e.

    P_4(x) = -4 + 6x - x^2 - 4x^3.

(d) Now we have n = 4, with x_0 = 0 and x = 0.4 as previously. Thus we obtain

    |R_4| = \left| \frac{x^5}{5!} f^{(5)}(\bar{x}) \right|
          = \left| \frac{x^5}{120} \left( 160\cos(2\bar{x}) - 64\bar{x}\sin(2\bar{x}) \right) \right|,

with 0 < \bar{x} < 0.4. We've chosen to differentiate between x and \bar{x} to avoid confusion between the different approximations and their error estimates. The smallest upper bound on the error estimate here is found to be

    |R_4| \le 0.00795,

which is a lot less than the actual error in the approximation (see part (b) of this question). What is the meaning of this in the context of the approximation?

There is a difference between the Taylor series expansion of a function about a point and the nth Taylor polynomial used as an approximation to a function at a point. Figure 2 shows the graphical representation of \sin x and the seventh Taylor polynomial approximation of \sin x. The series expansion was done around x = 0, and from a previous example we found that the interval of convergence was the entire real line. Why the difference then? The interval of convergence is applicable only to the Taylor series expansion, i.e. the infinite sum, whereas the seventh Taylor polynomial is found after truncating this infinite series expansion at a certain n. As has been stated in the lectures, this act of truncation introduces an error, for which we can determine an upper bound using the Lagrange estimate of the error.


Figure 2: Difference between \sin x and the seventh Taylor polynomial approximation x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} to it.

3 Nonlinear equations

Bisection method

Consider the function

    f(x) = \sqrt{x} - \cos x,    (3.1)

which we'll use to illustrate the different methods covered in the chapter on nonlinear equations. A small C++ program was written to do the calculations as an example of a practical implementation of numerical method algorithms.

From the outset we have no idea where the root(s) of this function may be. Substituting x = 0 into (3.1) yields the value f(0) = -1, and substituting x = 1 into (3.1) yields f(1) = 1 - \cos(1) > 0, so we can conclude that a root lies in the interval [0, 1]. From figure 3 we can see that the root in fact lies somewhere close to x \approx 0.6.

Since the function changes sign on the interval [0, 1] and we have deduced that a root is present, we may also choose this interval as the initial interval for the bisection method. Using this method, we obtain the root to be situated at x_0 \approx 0.6417141 if we choose our tolerance to be \epsilon = 10^{-5} (with this tolerance we get accuracy up to the fifth digit after the decimal); the results are listed in table 1. These results were calculated by the following algorithm used in a C++ program:


Figure 3: f(x) = \sqrt{x} - \cos x.

do {
    x3=(x1+x2)/2;
    if( fabs(f(x3)) < eps ) {
        done=true;
    } else {
        done=false;
        if( f(x3)*f(x1) < 0 ) {
            x2=x3;
        } else if( f(x3)*f(x2) < 0 ) {
            x1=x3;
        } else {
            cout << "no sign change in the interval" << endl;
            break;
        }
    }
    i++;
} while( !done && i < i_max );

  • 7/28/2019 Lecture Aid 2012

    12/33

The results are listed in table 2, and we give the corresponding C++ code used in the calculation. We see that this method obtains the root a lot quicker for the same tolerance level.

    do {

    x3=(x1*y2-x2*y1)/(y2-y1);

    y3=f(x3);

    if( fabs(y3) < eps ) {

    done=true;

    } else {

    done=false;

    x1=x2; y1=y2;

    x2=x3; y2=y3;

    }

    i++;
} while( !done && i < i_max );

Newton's method

To implement Newton's method we have to calculate f'(x), which is straightforward for this example. We obtain the formula

    f'(x) = \frac{1}{2\sqrt{x}} + \sin x.

Again, convergence to the root is quick and we obtain an answer within a few steps that is within the required accuracy. The results are listed in table 3 and were calculated using the following code in a C++ program:

    do {

    x0 = x1;

    x1 = x0 - delta(f,df,x0);

    fabs(x1 - x0) > eps ? done=false : done=true;

    i++;

    } while( !done && i < i_max );

    Figure 4 and figure 5 illustrate the difference in error between the differentmethods.


Figure 4: Comparison of the absolute value of the error between the different methods (bisection-error.dat, linear-interpolation-error.dat, newton-error.dat).


Figure 5: Comparison of the natural logarithm of the absolute value of the error between the different methods (bisection-display-error.dat, linear-interpolation-display-error.dat, newton-display-error.dat).


4 Systems of linear equations

To better understand the Jacobi method, we take another look at the example from the notes (on p. 24). It was given that

    A = \begin{pmatrix} 5 & -3 & -1 \\ 2 & 3 & 0 \\ 6 & 1 & -8 \end{pmatrix},

from which we obtain the following matrices

    D = \begin{pmatrix} 5 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & -8 \end{pmatrix}, \quad L = \begin{pmatrix} 0 & 0 & 0 \\ 2 & 0 & 0 \\ 6 & 1 & 0 \end{pmatrix}, \quad U = \begin{pmatrix} 0 & -3 & -1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.

The Jacobi method is implemented via the vectorial equation

    x^{(k+1)} = D^{-1} \left[ b - (L+U) x^{(k)} \right], \quad k = 1, 2, 3, \ldots

From the information we have at our disposal, it follows that

    D^{-1} = \begin{pmatrix} 1/5 & 0 & 0 \\ 0 & 1/3 & 0 \\ 0 & 0 & -1/8 \end{pmatrix} \approx \begin{pmatrix} 0.2 & 0 & 0 \\ 0 & 0.333 & 0 \\ 0 & 0 & -0.125 \end{pmatrix}

and

    L + U = \begin{pmatrix} 0 & -3 & -1 \\ 2 & 0 & 0 \\ 6 & 1 & 0 \end{pmatrix}.

With b = (-3, 0, 7)^T and the initial guess x^{(1)} = (-2, 2, 2)^T, the first step (k = 1) of the Jacobi method for this example is then

    x^{(2)} = D^{-1} \left[ b - (L+U) x^{(1)} \right]
            = D^{-1} \left[ \begin{pmatrix} -3 \\ 0 \\ 7 \end{pmatrix} - \begin{pmatrix} -8 \\ -4 \\ -10 \end{pmatrix} \right]
            = D^{-1} \begin{pmatrix} 5 \\ 4 \\ 17 \end{pmatrix}
            = \begin{pmatrix} 1.000 \\ 1.332 \\ -2.125 \end{pmatrix}.

  • 7/28/2019 Lecture Aid 2012

    17/33

Next we calculate the residual term for this step,

    r^{(2)} = A x^{(2)} - b
            = \begin{pmatrix} 5 & -3 & -1 \\ 2 & 3 & 0 \\ 6 & 1 & -8 \end{pmatrix} \begin{pmatrix} 1.000 \\ 1.332 \\ -2.125 \end{pmatrix} - \begin{pmatrix} -3 \\ 0 \\ 7 \end{pmatrix}
            = \begin{pmatrix} 3.129 \\ 5.996 \\ 24.332 \end{pmatrix} - \begin{pmatrix} -3 \\ 0 \\ 7 \end{pmatrix}
            = \begin{pmatrix} 6.129 \\ 5.996 \\ 17.332 \end{pmatrix},

and its magnitude is calculated to be

    \| r^{(2)} \| = \sqrt{(6.129)^2 + (5.996)^2 + (17.332)^2} \approx 19.337.

This is of course much larger than any tolerance which we would choose to impose, so we repeat the process. Let k = 2; then

    x^{(3)} = D^{-1} \left[ b - (L+U) x^{(2)} \right]
            = D^{-1} \left[ \begin{pmatrix} -3 \\ 0 \\ 7 \end{pmatrix} - \begin{pmatrix} -1.871 \\ 2.000 \\ 7.332 \end{pmatrix} \right]
            = D^{-1} \begin{pmatrix} -1.129 \\ -2.000 \\ -0.332 \end{pmatrix}
            = \begin{pmatrix} -0.2258 \\ -0.6660 \\ 0.0415 \end{pmatrix}.

The residual term for this step is calculated to be

    r^{(3)} = A x^{(3)} - b
            = \begin{pmatrix} 0.8275 \\ -2.4496 \\ -2.3528 \end{pmatrix} - \begin{pmatrix} -3 \\ 0 \\ 7 \end{pmatrix}
            = \begin{pmatrix} 3.8275 \\ -2.4496 \\ -9.3528 \end{pmatrix},

and its magnitude is calculated to be

    \| r^{(3)} \| = \sqrt{(3.8275)^2 + (2.4496)^2 + (9.3528)^2} \approx 10.398.


We keep on repeating the process until the magnitude of the residual term is less than our imposed tolerance.

Example 5. Find the first two iterations of the Jacobi method for the following linear system, using x^{(0)} = 0:

    4x_1 + x_2 - x_3 = 5
    -x_1 + 3x_2 + x_3 = -4
    2x_1 + 2x_2 + 5x_3 = 1

This problem was taken from [1], pg. 459, problem (1.a).

Solution: We identify

    A = \begin{pmatrix} 4 & 1 & -1 \\ -1 & 3 & 1 \\ 2 & 2 & 5 \end{pmatrix}
      = \underbrace{\begin{pmatrix} 4 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \end{pmatrix}}_{D} + \underbrace{\begin{pmatrix} 0 & 1 & -1 \\ -1 & 0 & 1 \\ 2 & 2 & 0 \end{pmatrix}}_{L+U}

and

    b = \begin{pmatrix} 5 \\ -4 \\ 1 \end{pmatrix}.

The first step (k = 0) of the Jacobi method is calculated as follows:

    x^{(1)} = D^{-1} \left[ b - (L+U) x^{(0)} \right]
            = \begin{pmatrix} 0.25 & 0 & 0 \\ 0 & 0.333 & 0 \\ 0 & 0 & 0.2 \end{pmatrix} \begin{pmatrix} 5 \\ -4 \\ 1 \end{pmatrix}
            = \begin{pmatrix} 1.250 \\ -1.332 \\ 0.200 \end{pmatrix}.

The residual term and its magnitude are calculated next:

    r^{(1)} = A x^{(1)} - b
            = \begin{pmatrix} 4 & 1 & -1 \\ -1 & 3 & 1 \\ 2 & 2 & 5 \end{pmatrix} \begin{pmatrix} 1.250 \\ -1.332 \\ 0.200 \end{pmatrix} - \begin{pmatrix} 5 \\ -4 \\ 1 \end{pmatrix}
            = \begin{pmatrix} -1.532 \\ -1.046 \\ -0.164 \end{pmatrix}

and \| r^{(1)} \| \approx 1.862.


Now we set k = 1 and repeat the process:

    x^{(2)} = D^{-1} \left[ b - (L+U) x^{(1)} \right]
            = \begin{pmatrix} 0.25 & 0 & 0 \\ 0 & 0.333 & 0 \\ 0 & 0 & 0.2 \end{pmatrix} \left[ \begin{pmatrix} 5 \\ -4 \\ 1 \end{pmatrix} - \begin{pmatrix} 0 & 1 & -1 \\ -1 & 0 & 1 \\ 2 & 2 & 0 \end{pmatrix} \begin{pmatrix} 1.250 \\ -1.332 \\ 0.200 \end{pmatrix} \right]
            = \begin{pmatrix} 0.25 & 0 & 0 \\ 0 & 0.333 & 0 \\ 0 & 0 & 0.2 \end{pmatrix} \begin{pmatrix} 6.532 \\ -2.950 \\ 1.164 \end{pmatrix}
            = \begin{pmatrix} 1.633 \\ -0.982 \\ 0.233 \end{pmatrix}.

The residual term and its magnitude for this step are given by

    r^{(2)} = A x^{(2)} - b
            = \begin{pmatrix} 4 & 1 & -1 \\ -1 & 3 & 1 \\ 2 & 2 & 5 \end{pmatrix} \begin{pmatrix} 1.633 \\ -0.982 \\ 0.233 \end{pmatrix} - \begin{pmatrix} 5 \\ -4 \\ 1 \end{pmatrix}
            = \begin{pmatrix} 0.317 \\ -0.346 \\ 1.467 \end{pmatrix}

and \| r^{(2)} \| \approx 1.540.

    5 Approximation methods

    Polynomial interpolation

Example 6. Find the interpolating polynomial if we want to interpolate y(x) = \sin x at the points -\frac{\pi}{2}, 0, \frac{\pi}{2}. Also find a generalised expression for the upper bound on the error.

Solution: To find the polynomial we first identify

    x_0 = -\frac{\pi}{2}, \quad x_1 = 0, \quad x_2 = \frac{\pi}{2},

and thus n = 2. We have to solve the following system of linear equations:

    a_0 - \frac{\pi}{2} a_1 + \frac{\pi^2}{4} a_2 = -1
    a_0 + (0) a_1 + (0) a_2 = 0
    a_0 + \frac{\pi}{2} a_1 + \frac{\pi^2}{4} a_2 = 1

Immediately we see that a_0 = 0 and thus we are left with

    -\frac{\pi}{2} a_1 + \frac{\pi^2}{4} a_2 = -1
    \frac{\pi}{2} a_1 + \frac{\pi^2}{4} a_2 = 1


from which we obtain the solutions

    a_2 = 0 \quad \text{and} \quad a_1 = \frac{2}{\pi}.

We find the interpolating polynomial to be

    p_2(x) = a_0 + a_1 x + a_2 x^2 = \frac{2}{\pi} x.

Figure 6 illustrates the interpolating polynomial compared to the actual function.

Figure 6: Graphs of y(x) = \sin x and p_2(x) = \frac{2}{\pi} x.

An expression for the error is obtained from equation (5.4) in the notes as follows:

    y(x) - p_2(x) = \frac{y'''(\xi(x))}{3!} \prod_{i=0}^{2} (x - x_i)
                  = \frac{-\cos(\xi(x))}{6} \left( x + \frac{\pi}{2} \right) (x - 0) \left( x - \frac{\pi}{2} \right)

We don't know what the value of \xi(x) is; we only know that x_0 < \xi(x) < x_2. To circumvent this, we calculate a generalised upper bound on the error by taking a maximum on the derivative, because

    \max_{[-\frac{\pi}{2}, \frac{\pi}{2}]} |\cos x| = 1


and

    |\cos(\xi(x))| \le \max_{[-\frac{\pi}{2}, \frac{\pi}{2}]} |\cos x| \quad \text{for} \quad -\frac{\pi}{2} < \xi(x)