8/6/2019 2 Roots of Equations
MATH 160 NUMERICAL METHODS
ROOTS OF EQUATIONS
Objectives:
Determine the roots of an equation using the bisection method.
Determine the roots of an equation using the Newton-Raphson method.
Determine the roots of an equation using the secant method.
Determine the roots of an equation using the fixed-point iteration method.
Determine the roots of an equation using the false-position method.
ROOTS OF EQUATIONS
Objectives:
Create algorithms to find the roots of an equation.
Determine the advantages and disadvantages of each algorithm.
ROOTS OF EQUATIONS
A root, or solution, of an equation f(x) = 0 is a value of x for which the equation holds true.
Methods for finding roots of equations are basic numerical methods, and the subject is generally covered in books on numerical methods or numerical analysis.
Numerical methods for finding roots of equations can often be easily programmed and can also be found in general numerical libraries.
ROOTS OF EQUATIONS
Make sure you aren't confused by the terminology. All of these are the same:
Solving a polynomial equation p(x) = 0
Finding roots of a polynomial equation p(x) = 0
Finding zeroes of a polynomial function p(x)
Factoring a polynomial function p(x)
ROOTS OF EQUATIONS
In math, the bisection method is a root-finding algorithm which repeatedly bisects an interval and then selects the subinterval in which a root must lie for further processing.
It is a very simple and robust method, but it is also relatively slow.
ROOTS OF EQUATIONS
The bisection method is applicable when we wish to solve the equation f(x) = 0 for the variable x, where f is a continuous function.
The bisection method requires two initial points a and b such that f(a) and f(b) have opposite signs. This is called a bracket of a root, for by the IVT the continuous function f must have at least one root in the interval (a, b).
ROOTS OF EQUATIONS
Theorem: If f is a continuous function on the interval [a, b] and f(a)f(b) < 0, then the bisection method converges to a root of f.
The absolute error is halved at each step. Thus, the method converges linearly, which is quite slow.
On the other hand, the method is guaranteed to converge if f(a) and f(b) have different signs.
Basis of Bisection Method
Theorem: An equation f(x) = 0, where f(x) is a real continuous function, has at least one root between xl and xu if f(xl) f(xu) < 0.
Figure 1: At least one root exists between the two points if the function is real, continuous, and changes sign.
Basis of Bisection Method
Figure 2: If the function does not change sign between two points, roots of the equation may still exist between the two points.
Basis of Bisection Method
Figure 3: If the function does not change sign between two points, there may not be any roots for the equation between the two points.
Basis of Bisection Method
Figure 4: If the function changes sign between two points, more than one root for the equation may exist between the two points.
Algorithm for Bisection Method
Step 1: Choose xl and xu as two guesses for the root such that f(xl) f(xu) < 0; in other words, f(x) changes sign between xl and xu.
This was demonstrated in Figure 1.
Algorithm for Bisection Method
Step 2: Estimate the root, xm, of the equation f(x) = 0 as the midpoint between xl and xu:
xm = (xl + xu)/2
Algorithm for Bisection Method
Step 3: Check the following:
a) If f(xl) f(xm) < 0, then the root lies between xl and xm; set xu = xm (xl stays the same).
b) If f(xl) f(xm) > 0, then the root lies between xm and xu; set xl = xm (xu stays the same).
c) If f(xl) f(xm) = 0, then the root is xm. Stop the algorithm if this is true.
Algorithm for Bisection Method
Step 4: Find the new estimate of the root
xm = (xl + xu)/2
Find the absolute relative approximate error
|ea| = |(xm(new) − xm(old)) / xm(new)| × 100
where
xm(old) = previous estimate of root
xm(new) = current estimate of root
Algorithm for Bisection Method
Step 5: Compare the absolute relative approximate error with the pre-specified error tolerance.
Note: one should also check whether the number of iterations is more than the maximum number of iterations allowed. If so, one needs to terminate the algorithm and notify the user about it.
Bisection Method
Example: Consider the equation:
How many roots does the equation have?
What are the intervals that contain the roots?
Solve for the roots using the bisection method with an error of less than 1%.
016
xx
Bisection Method
Example: Consider the equation:
How many roots does the equation have?
What are the intervals that contain the roots?
Solve for the roots using the bisection method with an error of less than 1%.
0422sin52 xxx
Bisection Method
Example: Consider the equation:
How many roots does the equation have?
What are the intervals that contain the roots?
Solve for the roots using the bisection method with an error of less than 1%.
0sin1
0133 x
ex
xxx
Bisection Method
Advantages:
Always convergent.
The root bracket gets halved with each iteration, guaranteed.
Drawbacks:
Slow convergence.
If one of the initial guesses is close to the root, the convergence is slower.
Bisection Method
Drawbacks: If a function f(x) just touches the x-axis (a double root), the method will be unable to find the lower and upper guesses.
Bisection Method
Drawbacks: The function may change sign between two points even though no root exists there (for example, across a discontinuity).
Bisection Method
Note: The number of iterations n has to satisfy
n ≥ log2((xu − xl)/ε)
to ensure that the error is smaller than the tolerance ε.
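The iteration count can be checked numerically; the sketch below assumes Python, and the helper name min_iterations is illustrative.

```python
import math

# Smallest n with (xu - xl) / 2**n <= eps, i.e. n >= log2((xu - xl)/eps).
def min_iterations(xl, xu, eps):
    return math.ceil(math.log2((xu - xl) / eps))

n = min_iterations(0.0, 1.0, 1e-6)  # 2**-20 < 1e-6, so n = 20
```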
NEWTON-RAPHSON METHOD
Newton's Method
In numerical analysis, Newton's method (also known as the Newton-Raphson method), named after Isaac Newton and Joseph Raphson, is perhaps the best known method for finding successively better approximations to the zeroes (or roots) of a real-valued function.
Newton's method can often converge remarkably quickly, especially if the iteration begins "sufficiently near" the desired root.
Just how near "sufficiently near" needs to be, and just how quickly "remarkably quickly" can be, depends on the problem.
Newton's Method
Unfortunately, when iteration begins far from the desired root, Newton's method can easily lead an unwary user astray with little warning.
Thus, good implementations of the method embed it in a routine that also detects and perhaps overcomes possible convergence failures.
Newton's Method
Given a function f(x) and its derivative f'(x), we begin with a first guess x0.
Provided the function is reasonably well-behaved, a better approximation x1 is
x1 = x0 − f(x0)/f'(x0)
The process is repeated,
xn+1 = xn − f(xn)/f'(xn)
until a sufficiently accurate value is reached.
Newton's Method
An illustration of one iteration of Newton's method (the function is shown in blue and the tangent line in red).
We see that xn+1 is a better approximation than xn for the root x of the function f.
Newton's Method
The idea of the method is as follows: one starts with an initial guess which is reasonably close to the true root; the function is then approximated by its tangent line, and one computes the x-intercept of this tangent line.
This x-intercept will typically be a better approximation to the function's root than the original guess, and the method can be iterated.
Newton's Method
Derivation: The tangent to f at the point B = (xi, f(xi)) crosses the x-axis at xi+1. From the right triangle ABC,
tan(α) = AB/AC
f'(xi) = f(xi) / (xi − xi+1)
Solving for xi+1,
xi+1 = xi − f(xi)/f'(xi)
Algorithm for Newton's Method
Step 1: Evaluate f'(x) symbolically.
Step 2: Use an initial guess of the root, xi, to estimate the new value of the root, xi+1, as
xi+1 = xi − f(xi)/f'(xi)
Step 3: Find the absolute relative approximate error
|ea| = |(xi+1 − xi) / xi+1| × 100
Algorithm for Newton's Method
Step 4: Compare the absolute relative approximate error with the pre-specified error tolerance.
Note: one should also check whether the number of iterations is more than the maximum number of iterations allowed. If so, one needs to terminate the algorithm and notify the user about it.
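The four steps above can be sketched in Python; the function newton and the test equation x^2 − 2 = 0 are illustrative choices, not part of the slides.

```python
# A sketch of Steps 1-4 of the Newton-Raphson algorithm.
def newton(f, fprime, x0, tol_pct=1e-4, max_iter=50):
    xi = x0
    for _ in range(max_iter):
        xnew = xi - f(xi) / fprime(xi)      # Step 2: Newton update
        ea = abs((xnew - xi) / xnew) * 100  # Step 3: approximate error (%)
        xi = xnew
        if ea < tol_pct:                    # Step 4: tolerance check
            break
    return xi

root = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)  # ~sqrt(2)
```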
Newton's Method
Example: Consider the problem of finding the positive number x with cos(x) = x^3.
We can rephrase that as finding the zero of f(x) = cos(x) − x^3.
We have f'(x) = −sin(x) − 3x^2.
Since cos(x) ≤ 1 for all x and x^3 > 1 for x > 1, we know that our zero lies between 0 and 1.
We try a starting value of x0 = 0.5. (Note that a starting value of 0 will lead to an undefined result.)
Newton's Method
Answer: The iteration converges to the root x ≈ 0.865474.
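The iterates for this example can be replayed with a short script; this is a Python sketch, and the fixed count of 20 iterations is an arbitrary, more-than-sufficient choice.

```python
import math

# Newton's method on f(x) = cos(x) - x**3 with x0 = 0.5.
f = lambda x: math.cos(x) - x**3
df = lambda x: -math.sin(x) - 3 * x**2   # f'(x)

x = 0.5
for _ in range(20):
    x -= f(x) / df(x)                     # x_{i+1} = x_i - f(x_i)/f'(x_i)

# x is now approximately 0.865474
```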
Newton's Method
Example: Consider the equation:
How many roots does the equation have?
What are the intervals that contain the roots?
Solve for the roots using Newton's method with an error of less than 0.000001.
03725.05
xx
Newton's Method
Answer: -2.17769, -0.42909, 2.39690
Newton's Method
Example: Consider the equation:
How many roots does the equation have?
What are the intervals that contain the roots?
Solve for the roots using Newton's method with an error of less than 0.000001.
034
cos2
xxx
Newton's Method
Answer: 0.37995, 2.71298
Newton's Method
Advantages:
Converges fast (quadratic convergence), if it converges.
Requires only one guess.
Newton's Method
Drawbacks: Divergence at inflection point
For f(x) = (x − 1)^3 + 0.512 = 0, the iteration
xi+1 = xi − ((xi − 1)^3 + 0.512) / (3(xi − 1)^2)
is thrown far away by the inflection point at x = 1 before eventually reaching the root x = 0.2:

Iteration Number   xi
0                  5.0000
1                  3.6560
2                  2.7465
3                  2.1084
4                  1.6000
5                  0.92589
6                  -30.119
7                  -19.746
...
18                 0.2000
Newton's Method
Drawbacks: Division by zero
For the equation
f(x) = x^3 − 0.03x^2 + 2.4×10^−6 = 0
the Newton-Raphson method reduces to
xi+1 = xi − (xi^3 − 0.03xi^2 + 2.4×10^−6) / (3xi^2 − 0.06xi)
For x = 0 or x = 0.02, the denominator will equal zero.
Newton's Method
Drawbacks: Oscillations near local maximum and minimum
Results obtained from the Newton-Raphson method may oscillate about a local maximum or minimum without converging on a root, instead converging on the local maximum or minimum.
Eventually, it may lead to division by a number close to zero and may diverge.
For example, the equation f(x) = x^2 + 2 = 0 has no real roots.
Newton's Method
Drawbacks: Oscillations around the local minimum of f(x) = x^2 + 2; the iterates bounce around the minimum at x = 0 without ever finding a root:

Iteration Number   xi         f(xi)    |ea| (%)
0                  -1.0000    3.00
1                  0.5        2.25     300.00
2                  -1.75      5.063    128.571
3                  -0.30357   2.092    476.47
4                  3.1423     11.874   109.66
5                  1.2529     3.570    150.80
6                  -0.17166   2.029    829.88
7                  5.7395     34.942   102.99
8                  2.6955     9.266    112.93
9                  0.97678    2.954    175.96
Newton's Method
Drawbacks: Root Jumping
In some cases where the function f(x) is oscillating and has a number of roots, one may choose an initial guess close to a root. However, the guesses may jump and converge to some other root.
Newton's Method
Drawbacks: Root Jumping Example
For f(x) = sin(x) = 0, choose x0 = 2.4π = 7.539822.
The iteration converges to x = 0 instead of the nearby root x = 2π = 6.2831853.
SECANT METHOD
Secant Method
In numerical analysis, the secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function f.
The first two iterations of the secant method: the red curve shows the function f and the blue lines are the secants.
Secant Method
The secant method is defined by the recurrence relation
xi+1 = xi − f(xi)(xi − xi−1) / (f(xi) − f(xi−1))
As can be seen from the recurrence relation, the secant method requires two initial values, x0 and x1, which should ideally be chosen to lie close to the root.
Secant Method
Derivation: From Newton's method,
xi+1 = xi − f(xi)/f'(xi)
approximate the derivative by the backward difference
f'(xi) ≈ (f(xi) − f(xi−1)) / (xi − xi−1)
Substituting gives the secant method:
xi+1 = xi − f(xi)(xi − xi−1) / (f(xi) − f(xi−1))
Secant Method
Derivation: Similar Triangles
The secant line through B = (xi, f(xi)) and C = (xi−1, f(xi−1)) crosses the x-axis at xi+1. From the similar triangles formed by the secant line and the x-axis (AB/AE = DC/DE),
f(xi) / (xi − xi+1) = f(xi−1) / (xi−1 − xi+1)
Solving for xi+1 gives the secant method:
xi+1 = xi − f(xi)(xi − xi−1) / (f(xi) − f(xi−1))
Algorithm for Secant Method
Step 1: Calculate the next estimate of the root from two initial guesses:
xi+1 = xi − f(xi)(xi − xi−1) / (f(xi) − f(xi−1))
Step 2: Find the absolute relative approximate error
|ea| = |(xi+1 − xi) / xi+1| × 100
Algorithm for Secant Method
Step 3: If the absolute relative approximate error is greater than the pre-specified relative error tolerance, go back to Step 1; otherwise stop the algorithm.
Also check whether the number of iterations has exceeded the maximum number of iterations.
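The three steps above can be sketched in Python; the function secant, the test equation x^2 − 2 = 0, and the two guesses are illustrative choices.

```python
# A sketch of the secant algorithm; the two guesses need not bracket the root.
def secant(f, x0, x1, tol_pct=1e-4, max_iter=50):
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))  # Step 1
        ea = abs((x2 - x1) / x2) * 100                 # Step 2: error (%)
        x0, x1 = x1, x2
        if ea < tol_pct:                               # Step 3
            break
    return x1

root = secant(lambda x: x**2 - 2, 1.0, 2.0)  # ~sqrt(2)
```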
Secant Method
Answer: For cos(x) = x^3, the iterates converge to x ≈ 0.865474.
Secant Method
Example: Consider the equation:
How many roots does the equation have?
What are the intervals that contain the roots?
Solve for the roots using the secant method with an error of less than 0.000001.
016
xx
Secant Method
Advantages:
Converges fast, if it converges.
Requires two guesses that do not need to bracket the root.
Secant Method
Drawbacks: Division by zero. If f(xi) = f(xi−1), the denominator of the secant formula vanishes and the new guess is undefined.
Secant Method
Drawbacks: Root jumping. Because the new guess is the x-intercept of the secant line, it may land far from the previous guesses, and the iteration can converge to a root other than the one intended.
FIXED-POINT ITERATION METHOD
Fixed-point Iteration Method
Start from f(x) = 0 and derive a relation x = g(x).
The fixed-point method is then simply given by the iteration
xi+1 = g(xi)
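A minimal sketch of the iteration in Python follows; the equation x = cos(x) used below is an illustrative stand-in, not the slides' own example.

```python
import math

# Fixed-point iteration x_{i+1} = g(x_i), stopping when successive
# iterates agree to within tol.
def fixed_point(g, x0, tol=1e-8, max_iter=200):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = fixed_point(math.cos, 1.0)  # x = cos(x) has the fixed point ~0.739085
```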
Fixed-point Iteration Method
Example: Compute a zero of the function below.
Answer: Derive a relation x = g(x); the fixed-point method is then simply given by xi+1 = g(xi).
xexf x 24
Fixed-point Iteration Method
When does it converge? The iteration converges locally when |g'(x)| < 1 in a neighborhood of the fixed point, and it generally diverges when |g'(x)| > 1 there.
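The role of |g'(x)| can be seen numerically. In this Python sketch, both functions fix x = 2 (they are rearrangements of the illustrative equation x^2 − x − 2 = 0), but only the one with small derivative at the fixed point settles there.

```python
# g1 and g2 both satisfy g(2) = 2, but |g1'(2)| = 0.25 < 1 while
# |g2'(2)| = 4 > 1, so only the g1 iteration converges to 2.
g1 = lambda x: (x + 2) ** 0.5  # from x = sqrt(x + 2)
g2 = lambda x: x * x - 2       # from x = x^2 - 2

x = 1.5
for _ in range(30):
    x = g1(x)                  # settles at 2.0

y = 1.5
for _ in range(5):
    y = g2(y)                  # wanders away from 2
```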
FALSE-POSITION METHOD
False-position Method
The false position method, or regula falsi method, is a root-finding algorithm that combines features from the bisection method and the secant method.
Like the bisection method, the false position method starts with two points a0 and b0 such that f(a0) and f(b0) are of opposite signs, which implies by the IVT that the function f has a root in the interval [a0, b0].
The method proceeds by producing a sequence of shrinking intervals [ak, bk] that all contain a root of f.
False-position Method
The first two iterations of the false position method: the red curve shows the function f and the blue lines are the secants.
False-position Method
The graph used in this method is shown in the figure.
False-position Method
Advantage: Convergence is faster than the bisection method.
Disadvantages:
1. It requires a and b.
2. The convergence is generally slow.
3. It is only applicable to f(x) of certain fixed curvature in [a, b].
4. It cannot handle multiple zeros.
False-position Method
Alternative formula: At iteration number k, the number
ck = bk − f(bk)(bk − ak) / (f(bk) − f(ak))
is computed. ck is the root of the secant line through (ak, f(ak)) and (bk, f(bk)).
If f(ak) and f(ck) have the same sign, then we set ak+1 = ck and bk+1 = bk; otherwise we set ak+1 = ak and bk+1 = ck.
This process is repeated until the root is approximated sufficiently well.
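The process above can be sketched in Python; the function false_position, its tolerance on |f(c)|, and the test equation x^2 − 3 = 0 are illustrative choices.

```python
# A sketch of regula falsi with the sign-based bracket update.
def false_position(f, a, b, tol=1e-10, max_iter=100):
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)  # root of the secant line
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc > 0:       # same sign: replace a
            a, fa = c, fc
        else:                 # opposite sign: replace b
            b, fb = c, fc
    return c

root = false_position(lambda x: x**2 - 3, 1.0, 2.0)  # ~sqrt(3)
```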
False-position Method
If the initial end-points a0 and b0 are chosen such that f(a0) and f(b0) are of opposite signs, then one of the end-points will converge to a root of f.
Asymptotically, the other end-point will remain fixed for all subsequent iterations while the converging end-point is updated.
As a result, unlike the bisection method, the width of the bracket does not tend to zero.
As a consequence, the linear approximation to f(x), which is used to pick the false position, does not improve in its quality.
False-position Method
One example of this phenomenon is the function
f(x) = 2x^3 − 4x^2 + 3x
on the initial bracket [−1, 1].
The left end, −1, is never replaced, and thus the width of the bracket never falls below 1.
Hence, the right endpoint approaches the root at 0 at a linear rate.
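The stagnating bracket is easy to observe numerically; this is a Python sketch of plain regula falsi on this example, with an arbitrary count of 50 iterations.

```python
# Plain regula falsi on f(x) = 2x^3 - 4x^2 + 3x over [-1, 1]:
# the left end is never replaced, so the bracket width stays above 1.
f = lambda x: 2 * x**3 - 4 * x**2 + 3 * x

a, b = -1.0, 1.0
fa, fb = f(a), f(b)
for _ in range(50):
    c = b - fb * (b - a) / (fb - fa)   # root of the secant line
    fc = f(c)
    if fa * fc > 0:
        a, fa = c, fc
    else:
        b, fb = c, fc

# a is still -1.0; b has crept toward the root at 0 but stays positive
```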
False-position Method
While it is a misunderstanding to think that the method of false position is a good method, it is equally a mistake to think that it is unsalvageable.
The failure mode is easy to detect (the same end-point is retained twice in a row) and is easily remedied by next picking a modified false position, such as computing ck with f(ak)/2 in place of f(ak), or with f(bk)/2 in place of f(bk), down-weighting the retained endpoint's value to force the next ck to occur on that side of the function.
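A sketch of this down-weighting fix, commonly known as the Illinois variant, follows in Python; the bookkeeping via the side variable is one common way to detect a twice-retained endpoint, and the test function is the stagnating example from above.

```python
# Modified false position: when the same endpoint is retained twice in a
# row, halve its stored function value before computing the next c.
def illinois(f, a, b, tol=1e-12, max_iter=100):
    fa, fb = f(a), f(b)
    side = 0
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:        # root in [a, c]: b is replaced, a retained
            b, fb = c, fc
            if side == -1:     # a retained twice in a row
                fa /= 2
            side = -1
        else:                  # root in [c, b]: a is replaced, b retained
            a, fa = c, fc
            if side == 1:      # b retained twice in a row
                fb /= 2
            side = 1
    return c

root = illinois(lambda x: 2 * x**3 - 4 * x**2 + 3 * x, -1.0, 1.0)  # ~0
```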
False-position Method
The factor of 2 above looks like a hack, but it guarantees superlinear convergence (asymptotically, the algorithm will perform two regular steps after any modified step).
There are other ways to pick the rescaling which give even better superlinear convergence rates.
Resources
Numerical Methods Using Matlab, 4th Edition, 2004, by John H. Mathews and Kurtis K. Fink.
Holistic Numerical Methods Institute, by Autar Kaw and Jai Paul.
Numerical Methods for Engineers, by Chapra and Canale.