Root Finding Numerical Methods
staff.fit.ac.cy/com.ps/ACSC285/-02-Root_Finding/Zero... · 2007. 11. 4.
Numerical Methods
Solving Non Linear 1-Dimensional Equations
Root Finding
Given a real valued function f of one variable (say x), the idea is to find an x such that:
f(x) = 0
Root Finding Examples
Find real x such that: x^2 + 4x + 3 = 0
Find real x such that: cos(2x) = 0
To find the roots of the quadratic equation ax^2 + bx + c = 0, use the formula:
x_{1,2} = (−b ± √(b^2 − 4ac)) / (2a)
For many other functions it is not easy to determine the roots analytically. For example, f(x) = e^(−x) − x cannot be solved analytically.
Use approximate solution techniques.
Graphical methods
• If we plot the function f(x) and observe where it crosses the x axis, we can get a rough estimate of the root.
• Limited by the fact that the results are not precise, but useful for obtaining an initial estimate which can then be refined by other methods.
• Graphical methods are also helpful for exploring function behaviour.
Trial and Error
Use trial and error:
• Guess a value of x and evaluate whether f(x) is zero!
• If not (as is almost always the case), make another guess, evaluate f(x) again and determine whether the new value provides a better estimate of the root.
• Repeat the process until a guess is obtained that results in f(x) being close to zero.
An Algorithmic Approach
• Idea: find a sequence of x_1, x_2, x_3, x_4, … so that for some N, x_N is “close” to a root,
• that is, |f(x_N)| < tolerance
• What do we need?
Requirements for this to work
• Initial guess: x_1
• A relationship between x_{n+1} and x_n (and possibly x_{n−1}, x_{n−2}, x_{n−3}, …)
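The two requirements above fit a generic iteration loop. A minimal sketch, assuming a one-point update rule x_{n+1} = g(x_n); the tolerance and iteration cap are illustrative choices, not from the slides:

```python
import math

def iterate_root(g, x1, f, tol=1e-10, max_iter=100):
    """Generic one-point iteration x_{n+1} = g(x_n).

    Stops when |f(x_n)| < tol, i.e. the current guess is close enough
    to a root of f, or gives up after max_iter steps."""
    x = x1
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            return x
        x = g(x)
    raise RuntimeError("no convergence within max_iter iterations")

# Example: solve f(x) = e^(-x) - x = 0 via the rearrangement x = e^(-x).
root = iterate_root(lambda x: math.exp(-x), 1.0, lambda x: math.exp(-x) - x)
```

This particular rearrangement converges because |g'(x)| < 1 near the root, a point the fixed-point examples below make precise.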
Fixed Point Iteration: Example 1 …
The rearrangement x = (x^3 + 3)/7 leads to the iteration
x_{n+1} = (x_n^3 + 3) / 7,  n = 0, 1, 2, 3, ...
For x_0 = 3 the iteration will diverge from the upper root α.

n    x_n
0    3
1    4.28571
2    11.6739
3    227.702
4    1686559
5    6.9E+17

[Figure: plot of y = x and y = (x^3 + 3)/7 showing the iterates x_0, x_1 moving away from the fixed point α]

The iteration diverges because g'(α) > 1.
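The divergence in the table above is easy to reproduce; a small sketch:

```python
def g(x):
    """The rearrangement x = (x^3 + 3)/7 from Example 1."""
    return (x**3 + 3) / 7

x = 3.0
for n in range(1, 6):
    x = g(x)
    print(n, x)
# The iterates blow up (x_5 is about 6.9E+17, matching the table)
# because |g'(alpha)| > 1 at the upper fixed point.
```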
Fixed-point Iteration: Example 2
[Iteration formulas and iterate tables for the rearrangements g1(x), g2(x), g3(x) are not recoverable from the source]
Fixed-point Iteration: Example 2…
g1(x) converges, g2(x) diverges, g3(x) converges very quickly.
Note (see later): g3(x) is the Newton-Raphson iteration function for f(x) = x^3 − x − 2.
Bracketing Methods
• This class of methods exploits the fact that a function changes sign in the vicinity of a root. Recall the following:
– The Intermediate Value Theorem tells us that if a continuous function is positive at one end of an interval and negative at the other end, then there is a root somewhere in the interval.
• Called bracketing methods because two initial guesses for the root are required.
• The two guesses must “bracket” the root, i.e. lie on either side of it.
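A bracket can be found by scanning for a sign change. A minimal sketch; the scan range and step count are arbitrary choices, not from the slides:

```python
import math

def find_bracket(f, lo, hi, steps=100):
    """Scan [lo, hi] in equal steps and return the first subinterval
    [a, b] with f(a)*f(b) <= 0. By the Intermediate Value Theorem a
    continuous f then has a root in [a, b]."""
    h = (hi - lo) / steps
    a = lo
    for _ in range(steps):
        b = a + h
        if f(a) * f(b) <= 0:
            return a, b
        a = b
    return None  # no sign change found in the scanned range

# Example: bracket the root of f(x) = e^(-x) - x inside [0, 1].
bracket = find_bracket(lambda x: math.exp(-x) - x, 0.0, 1.0)
```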
Bracketing Bisection Method: Steps
1) Notice that if f(a)·f(b) <= 0 then there is a root somewhere between a and b.
2) Suppose we are lucky enough to be given a and b so that f(a)·f(b) <= 0.
3) Divide the interval in two and test to see which part of the interval contains the root.
4) Repeat.

Step 1
Even though the left-hand side could have a root in it, we are going to drop it from our search. The right-hand side must contain a root!!! So we are going to focus on it.
• Every time we split the interval we reduce the search interval by a factor of two.
• So the error in the value of the root of the function after n iterations satisfies
ε_n ≤ b_n − a_n ≤ (b_0 − a_0) / 2^n

Bisection Method …
• From this relationship we can also determine the number of iterations needed to satisfy a given error criterion ε: setting ε = (b_0 − a_0) / 2^n and solving for n gives
n = log2((b_0 − a_0) / ε)
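The steps above, together with the predicted iteration count, can be sketched as follows; the test function and interval are illustrative choices:

```python
import math

def bisect(f, a, b, eps=1e-8):
    """Bisection: requires f(a)*f(b) <= 0. Halves the bracket until its
    length is below eps; needs about log2((b - a)/eps) iterations."""
    assert f(a) * f(b) <= 0, "root is not bracketed"
    n = math.ceil(math.log2((b - a) / eps))  # predicted iteration count
    for _ in range(n):
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m  # sign change in [a, m]: keep the left half
        else:
            a = m  # otherwise the root is in [m, b]: keep the right half
    return (a + b) / 2

# Example: the root x = -1 of x^2 + 4x + 3 = 0, bracketed by [-1.5, 0].
r = bisect(lambda x: x * x + 4 * x + 3, -1.5, 0.0)
```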
Regula Falsi Method
• The regula falsi method starts with two points, (a, f(a)) and (b, f(b)), satisfying the condition f(a)·f(b) < 0.
• The straight line through the points (a, f(a)) and (b, f(b)) is
y = f(a) + ((f(b) − f(a)) / (b − a)) · (x − a)
• The next approximation to the zero is the value of x where this straight line crosses the x-axis:
x = a − f(a)·(b − a) / (f(b) − f(a)) = (a·f(b) − b·f(a)) / (f(b) − f(a))
Regula Falsi Method (cont.)
• If there is a zero in the interval [a, c], we leave the value of a unchanged and set b = c.
• On the other hand, if there is no zero in [a, c], the zero must be in the interval [c, b]; so we set a = c and leave b unchanged.
• The stopping condition may test the size of y, the amount by which the approximate solution x has changed on the last iteration, or whether the process has continued too long.
• Typically, a combination of these conditions is used.
Regula Falsi Method: Example
Finding the Cube Root of 2 Using Regula Falsi
• We seek the zero of y = f(x) = x^3 − 2. Since f(1) = −1 and f(2) = 6, we take as our starting bounds on the zero a = 1 and b = 2.
• Our first approximation to the zero is
x = b − f(b)·(b − a) / (f(b) − f(a)) = 2 − 6·(2 − 1) / (6 − (−1)) = 2 − 6/7 = 8/7 ≈ 1.1429
• We then find the value of the function:
y = f(8/7) = (8/7)^3 − 2 ≈ −0.5073
• Since f(a) and f(8/7) are both negative, but f(8/7) and f(b) have opposite signs, we set the new a = 8/7, and b = 2 remains the same.
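Carried to convergence, the example above looks like this; the tolerance and iteration cap are my choices:

```python
def regula_falsi(f, a, b, tol=1e-10, max_iter=100):
    """Regula falsi: keep a bracket [a, b] with f(a)*f(b) < 0 and replace
    the endpoint whose sign matches f at the secant crossing."""
    fa, fb = f(a), f(b)
    for _ in range(max_iter):
        x = a - fa * (b - a) / (fb - fa)  # where the secant crosses y = 0
        fx = f(x)
        if abs(fx) < tol:
            return x
        if fa * fx < 0:
            b, fb = x, fx  # zero lies in [a, x]: set b = x
        else:
            a, fa = x, fx  # zero lies in [x, b]: set a = x
    return x

# Cube root of 2: the first iterate is 8/7 ~ 1.1429, as in the example,
# and b = 2 stays fixed while a creeps up to the root.
r = regula_falsi(lambda x: x**3 - 2, 1.0, 2.0)
```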
Bracketing Methods: Advantages/Disadvantages
• Two advantages are:
1. the procedure always converges; that is, a root will always be found, provided that at least one root exists;
2. there is good information available about the error associated with the result.
• Two disadvantages are:
1. the method converges relatively slowly;
2. many iterations may be required.
Newton-Raphson Method
• Use the function value and its first derivative to extrapolate linearly:
x_{k+1} = x_k − f(x_k) / f'(x_k)
Iterations converge to −2.84, 0.441 and 2.40 respectively (to 3 s.f.)
Newton’s Iteration: Example 1 …

Newton’s Iteration for Finding Square Roots
Assume that A > 0 is a real number and let p_0 > 0 be an initial approximation to √A. Define the sequence {p_k}, k = 0, 1, 2, …, using the recursive rule
p_k = g(p_{k−1}) = (p_{k−1} + A / p_{k−1}) / 2,  k = 1, 2, …
Then this sequence converges to √A; that is, lim_{k→∞} p_k = √A.
Outline of Proof. Start with the function f(x) = x^2 − A, and notice that the roots of f(x) = 0 are ±√A. Now use f(x) and the derivative f'(x) = 2x to write down the Newton-Raphson iteration formula p_k = g(p_{k−1}), where
g(x) = x − f(x)/f'(x) = x − (x^2 − A)/(2x) = (x + A/x)/2
It can be proved that the generated sequence will converge for any starting value p_0 > 0.
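The square-root rule above is easy to check numerically; a short sketch with an arbitrary iteration budget:

```python
def newton_sqrt(A, p0, max_iter=60):
    """Newton iteration p_k = (p_{k-1} + A/p_{k-1}) / 2 for f(x) = x^2 - A.

    Converges to sqrt(A) for any starting value p0 > 0."""
    p = p0
    for _ in range(max_iter):
        p = (p + A / p) / 2
    return p

# Quadratic convergence even from a poor start:
r = newton_sqrt(2.0, 10.0)
```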
Convergence of Newton’s Method
• We will show that the rate of convergence is much faster than the bisection method.
• However – as always, there is a catch. The method uses a local linear approximation, which clearly breaks down near a turning point.
• A small f'(x_n) makes the linear model very flat and will send the search far away …
Say we choose an initial x_1 near a turning point. Then the linear fit shoots off into the distance!
[Figure: the tangent at (x_1, f(x_1)) near a turning point sends x_2 = x_1 − f(x_1)/f'(x_1) far from the root]
Newton’s Method: Advantages/Disadvantages
• The Newton-Raphson method generally provides rapid convergence to the root, provided the initial value x_0 is sufficiently close to the root.
• How close is ‘sufficiently close’? That depends on the characteristics of the function itself. As illustrated by the following examples, certain initial values of x may cause the solution to diverge or fail to produce a valid result.
To solve the equation f(x) = 0, where f(x) = 1/x + 3, use the iteration:
x_{n+1} = x_n − (1/x_n + 3) / (−1/x_n^2) = 2x_n + 3x_n^2,  n = 0, 1, 2, 3, …
To find the only root α, let the initial approximation be x_0 = −1.
x_1 = x_0 − (1/x_0 + 3) / (−1/x_0^2) = −1 − (1/(−1) + 3) / (−1/(−1)^2) = 1
x_2 = x_1 − (1/x_1 + 3) / (−1/x_1^2) = 1 − (1/1 + 3) / (−1/1^2) = 5
x_3 = x_2 − (1/x_2 + 3) / (−1/x_2^2) = 5 − (1/5 + 3) / (−1/5^2) = 85
etc. etc.
The iteration quickly diverges, failing to give the root α.
Failure of Newton’s iteration: Example 1

[Figure: graph of f(x) = 1/x + 3 with the divergent Newton iterates]

n    x(n)
0    −1
1    1
2    5
3    85
4    21845
5    1.43E+09
To solve the equation f(x) = 0, where f(x) = 1/x + 3, use the iteration:
x_{n+1} = x_n − (1/x_n + 3) / (−1/x_n^2) = 2x_n + 3x_n^2,  n = 0, 1, 2, 3, …
To find the only root α, let the initial approximation be x_0 = −1.
The iteration quickly diverges, failing to give the root α = −1/3.

[Figure: y = 1/x + 3 with the iterates x_0, x_1, x_2 moving away from the root α]

Failure of Newton’s iteration: Example 1 ...
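The divergent sequence above can be reproduced with a plain Newton step; a small sketch:

```python
def newton_step(x):
    """Newton step for f(x) = 1/x + 3, f'(x) = -1/x^2:
    x - f(x)/f'(x) simplifies to 2x + 3x^2."""
    return 2 * x + 3 * x * x

x = -1.0
seq = []
for _ in range(5):
    x = newton_step(x)
    seq.append(x)
# seq == [1.0, 5.0, 85.0, 21845.0, 1431655765.0], matching the table:
# from x_0 = -1 the iterates run away from the root at -1/3.
```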
Newton’s iteration for f(x) = x·e^(−x) can produce a divergent sequence.
Failure of Newton’s iteration: Example 2
Newton’s iteration for f(x) = x^3 − x − 3 can produce a cyclic sequence.
Failure of Newton’s iteration: Example 3
Newton’s iteration can produce a cyclic sequence.
Failure of Newton’s iteration: Example 4
Newton’s iteration for f(x) = arctan(x) can produce a divergent oscillating sequence.
Failure of Newton’s iteration: Example 5

Newton’s Method: algorithm so far
• Choose initial guess x_0
• Repeat
x_{k+1} = x_k − f(x_k) / f'(x_k)
• Until Failure / Convergence ← How do we determine this?
Convergence Criteria
• Convergence checking will avoid searching to unnecessary accuracy
• Convergence checking can consider whether two successive approximations to the root are close enough to be considered equal
• Convergence checking can examine whether f(x) is sufficiently close to zero at the current guess
A root-finding procedure needs to
– monitor progress towards the root, and
– stop when the current guess is close enough to the desired root

Convergence Criteria …
On the values of x: |x_{k+1} − x_k| < tol1
On the values of f(x): |f(x_{k+1})| < tol2
BOTH OF THESE CRITERIA NEED TO BE SATISFIED FOR THE ALGORITHM TO BE COMPLETE
Failure to converge
• Check f'(x_k) ≈ 0 before dividing by it
• Put a maximum on the number of iterations so as to prevent the sequence ‘wandering’ (use a for loop instead of repeat-until)
• Check to see if |f(x_{k+1})| or |f(x_{k+1}) − f(x_k)| is growing rather than decreasing
• Check if f(x_k) = NaN or infinity, and whether x_k has overflowed the largest or underflowed the smallest representable number
• Check if f'(x_k) = infinity; then x_{k+1} = x_k and f(x_{k+1}) = f(x_k), so the iteration makes no progress
Software requirements
User of such software should provide:
• The function f(x) and its derivative f'(x)
• An initial iterate x_0
• Convergence tolerance tol (= tol1 = tol2) – a small number
• A maximum number of possible iterations
NOTE: if there are numerous solutions, different choices of x_0 might lead to different correct answers
Newton in Matlab
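The Matlab listing itself did not survive in this transcript. A Python sketch with the same ingredients — both tolerances, an iteration cap, and a guard on the derivative — might look like:

```python
import math

def newton(f, df, x0, tol1=1e-10, tol2=1e-10, max_iter=50):
    """Newton-Raphson with the convergence/failure checks discussed above:
    stop only when BOTH |x_{k+1} - x_k| < tol1 and |f(x_{k+1})| < tol2."""
    x = x0
    for _ in range(max_iter):
        d = df(x)
        if abs(d) < 1e-300:  # guard: f'(x) ~ 0 would blow up the step
            raise ZeroDivisionError("derivative vanished at x = %g" % x)
        x_new = x - f(x) / d
        if abs(x_new - x) < tol1 and abs(f(x_new)) < tol2:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

# Example: the root of f(x) = e^(-x) - x.
r = newton(lambda x: math.exp(-x) - x, lambda x: -math.exp(-x) - 1, 0.0)
```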
Speed of Convergence
Assume that {p_n} converges to p and set E_n = p − p_n for n ≥ 0. If two positive constants A ≠ 0 and R > 0 exist, and
lim_{n→∞} |E_{n+1}| / |E_n|^R = A,
then the sequence is said to converge to p with order of convergence R. The number A is called the asymptotic error constant.
If R = 1, the convergence of {p_n} is called linear (new error proportional to old error).
If R = 2, the convergence of {p_n} is called quadratic (new error proportional to old error squared).
Newton’s Method: Quadratic Convergence at a Simple Root
Example: Start with p_0 = −2.4 and use Newton-Raphson iteration to find the root p = −2 of the polynomial f(x) = x^3 − 3x + 2. The iteration formula for computing {p_k} is
p_{k+1} = p_k − (p_k^3 − 3p_k + 2) / (3p_k^2 − 3)
Checking for quadratic convergence (R = 2), we get the values in the following table [iterate table not recoverable from the source].
Taking a closer look: |E_{k+1}| / |E_k|^2 → A, where A ≈ 2/3.
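The constant A ≈ 2/3 can be checked numerically by tracking the errors E_k = p − p_k and forming the ratio |E_{k+1}|/|E_k|^2; a sketch of that check (the slide's own table is not in the transcript):

```python
f = lambda x: x**3 - 3 * x + 2
df = lambda x: 3 * x**2 - 3

p, root = -2.4, -2.0
errors = [abs(root - p)]
for _ in range(4):
    p = p - f(p) / df(p)       # Newton-Raphson step
    errors.append(abs(root - p))

# Ratios |E_{k+1}| / |E_k|^2 tend to |f''(-2)| / (2 |f'(-2)|) = 12/18 = 2/3.
ratios = [errors[k + 1] / errors[k]**2 for k in range(len(errors) - 1)]
```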
Convergence Rate for Newton-Raphson Iteration
Assume that Newton-Raphson iteration produces a sequence {p_k} that converges to the root p of the function f(x).
If p is a multiple root of order M, convergence is linear and |E_{k+1}| ≈ ((M − 1)/M) · |E_k|.
If p is a simple root, convergence is quadratic and |E_{k+1}| ≈ (|f''(p)| / (2|f'(p)|)) · |E_k|^2.
REMINDER: Assume that f ∈ C^M[a, b] and p ∈ (a, b). We say that f(x) = 0 has a root of order M at x = p if and only if f(p) = 0, f'(p) = 0, …, f^(M−1)(p) = 0 and f^(M)(p) ≠ 0.
A root of order M = 1 is called a simple root; if M > 1 it is called a multiple root.
Convergence in Newton Iteration
• Let x be the exact root, x_i the value in the i-th iteration, and ε_i = x_i − x the error. Then
f(x + ε) = f(x) + ε f'(x) + (1/2) ε^2 f''(x) + O(ε^3),
f'(x + ε) = f'(x) + ε f''(x) + O(ε^2).
• Rate of convergence: since f(x) = 0 and x_{i+1} = x_i − f(x_i)/f'(x_i),
ε_{i+1} = ε_i − f(x_i)/f'(x_i)
= ε_i − (ε_i f'(x) + ε_i^2 f''(x)/2) / (f'(x) + ε_i f''(x))
≈ (f''(x) / (2 f'(x))) ε_i^2
(Quadratic convergence for simple roots)
Secant Method
• f'(x_k) = slope of the tangent line at x_k is approximated by the slope of the secant line L passing through the points (x_{k−1}, f(x_{k−1})) and (x_k, f(x_k)):
Slope of L = (f(x_k) − f(x_{k−1})) / (x_k − x_{k−1})
• The iterate x_{k+1} is the root of this secant line L (i.e. to find x_{k+1}, set y = 0):
x_{k+1} = (x_{k−1} f(x_k) − x_k f(x_{k−1})) / (f(x_k) − f(x_{k−1}))
Finding the Square Root of 3 by the Secant Method
– To find a numerical approximation to √3, we seek the zero of y = f(x) = x^2 − 3.
– Since f(1) = −2 and f(2) = 1, we take as our starting bounds on the zero x_0 = 1 and x_1 = 2.
– Our first approximation to the zero is
x_2 = x_1 − y_1 (x_1 − x_0) / (y_1 − y_0) = 2 − 1·(2 − 1) / (1 − (−2)) = 2 − 1/3 = 5/3 ≈ 1.667
– Calculation of √3 using the secant method.
Secant Method: Example 1

Secant Method: Example 2
Consider the secant method used on f(x) = x^3 + x^2 − x − 1 with x_0 = 2, x_1 = 0.5. Note this function is continuous, with roots +1 and −1.

Secant Method: Advantages and Disadvantages
• The secant method generally provides fairly rapid convergence to the root, but not as rapid as the Newton-Raphson method. However, except for the starting iteration, the secant method requires only one function evaluation per iteration, while the Newton-Raphson method requires two evaluations (the function and its derivative) per iteration.
• Similar to Newton’s method, the secant method may also encounter runaway, flat-spot and cyclic non-convergence behaviour.
• Secant method: (local convergence) less risky, mid-speed (convergence rate R ≈ 1.618, the golden ratio), similar to Newton’s method but avoids the calculation of derivatives.
Global convergence methods: converge starting from anywhere
Local convergence methods: converge if x is sufficiently close to root
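The derivative-free secant iteration described above can be sketched as follows; the tolerance, iteration cap and flat-secant guard are my additions:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant iteration x_{k+1} = x_k - f(x_k)(x_k - x_{k-1})/(f(x_k) - f(x_{k-1})).

    Needs two starting values but no derivative."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:  # flat secant line: cannot divide, stop here
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:  # successive iterates close enough
            break
    return x1

# sqrt(3) as in Example 1: from x_0 = 1, x_1 = 2 the first new iterate is 5/3.
r = secant(lambda x: x * x - 3, 1.0, 2.0)
```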