Halley’s Method: A Cubically Converging
Method of Root Approximation
Gabriel Kramer
Brittany Sypin
December 3, 2011
Abstract
This paper discusses numerical methods for approximating solutions to the
equation f(x) = 0, where, for the purposes of this paper, f(x) = ln(x) + x.
In particular, it covers the bisection method, fixed-point iteration,
Newton’s method, and Halley’s method, and presents a convergence analysis
of each.
1 Introduction
In mathematics it is often desirable to solve the equation f(x) = 0 analytically,
but frequently this cannot be done. Equations of this form arise in algebra,
calculus, physics, differential equations, and chemistry, among other fields.
When analytic solutions are not available, numerical approximation methods
are useful for obtaining approximate solutions.
2 Halley’s Method
Halley’s method is useful for finding a numerical approximation of the roots of
the equation f(x) = 0 when f(x), f′(x), and f′′(x) are continuous. The method
generates iterates by the recursion

\[ x_{n+1} = x_n - \frac{2 f(x_n) f'(x_n)}{2[f'(x_n)]^2 - f(x_n) f''(x_n)} \]

where x_0 is an initial guess.
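As a concrete illustration of this recursion (a Python sketch of our own, not one of the paper’s MATLAB listings; the function names are hypothetical), applied to the paper’s test function f(x) = ln(x) + x:

```python
import math

def halley(f, df, ddf, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - 2 f f' / (2 (f')^2 - f f'')
    until successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, ddfx = f(x), df(x), ddf(x)
        x_new = x - 2 * fx * dfx / (2 * dfx**2 - fx * ddfx)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# The paper's example: f(x) = ln(x) + x, f'(x) = 1 + 1/x, f''(x) = -1/x^2.
root = halley(lambda x: math.log(x) + x,
              lambda x: 1 + 1 / x,
              lambda x: -1 / x**2,
              x0=1.0)
# root ≈ 0.5671432904
```

From the initial guess x_0 = 1, the iteration settles at the root of ln(x) + x = 0 within a few steps.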
2.1 Halley’s Method Derivation
Halley’s method considers the function

\[ g(x) = \frac{f(x)}{\sqrt{|f'(x)|}} \]

and its derivative

\[ g'(x) = \frac{2[f'(x)]^2 - f(x) f''(x)}{2 f'(x) \sqrt{|f'(x)|}} \]
Then applying Newton’s method¹ to g(x) results in Halley’s method.² This works
because an assumption made in constructing g(x) is that the roots of f(x) are
not also roots of f′(x). Moreover, it is easy to see that f(a) = 0 if and only
if g(a) = 0. The usefulness of considering g(x) instead of f(x) comes from the
fact that Newton’s method converges cubically when applied to g(x), as opposed
to quadratically when applied to f(x) directly.
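This equivalence can be checked numerically. The following Python snippet (our own sanity check, not from the paper; it assumes a point where f′(x) > 0 so the absolute values drop out) confirms that one Newton step on g(x) equals one Halley step on f(x):

```python
import math

x = 1.3  # an arbitrary evaluation point; f'(x) = 1 + 1/x > 0 for x > 0

# f(x) = ln(x) + x and its first two derivatives
f, fp, fpp = math.log(x) + x, 1 + 1 / x, -1 / x**2

# g and g' as defined above (|f'| = f' since f' > 0 here)
g = f / math.sqrt(fp)
gp = (2 * fp**2 - f * fpp) / (2 * fp * math.sqrt(fp))

newton_on_g = x - g / gp
halley_step = x - 2 * f * fp / (2 * fp**2 - f * fpp)
# The two update steps agree to machine precision.
```

Algebraically, x − g/g′ simplifies term by term to the Halley recursion, so the agreement is exact up to rounding.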
¹See Section 5.3 for a discussion of Newton’s method.
²This method only works when the roots of the function are not also roots of its derivative (f′(r) ≠ 0 when f(r) = 0).
3 Analytic Cubic Convergence Analysis of Halley’s Method
When using Halley’s method on a function where f′′′(x) is continuous, the
iterates satisfy

\[ |x_{n+1} - a| \le K |x_n - a|^3 \]

where f(a) = 0 and K is a non-negative constant. This will be shown by
conducting a convergence analysis.
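Before the formal argument, the cubic rate can be observed empirically. This Python sketch (our own, not part of the paper; it uses the decimal module for extended precision, since the errors fall below double precision after three steps) tracks |x_n − a| for x_0 = 1:

```python
from decimal import Decimal, getcontext

getcontext().prec = 80  # work beyond double precision to see the cubic rate

def halley_step(x):
    # f(x) = ln(x) + x, f'(x) = 1 + 1/x, f''(x) = -1/x^2
    f = x.ln() + x
    fp = 1 + 1 / x
    fpp = -1 / (x * x)
    return x - 2 * f * fp / (2 * fp * fp - f * fpp)

# High-precision reference root a: iterate well past convergence.
a = Decimal(1)
for _ in range(10):
    a = halley_step(a)

# Track the errors |x_n - a| starting from x_0 = 1.
x = Decimal(1)
errors = []
for _ in range(4):
    errors.append(abs(x - a))
    x = halley_step(x)
# Each error is roughly a constant times the cube of the previous one.
```

In particular, each error is far smaller than the square of the previous error, which is the signature of a super-quadratic (here cubic) method.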
First consider r to be a root, such that f(r) = 0. Then by Taylor’s theorem,
The MATLAB code to approximate the solution of ln(x) + x = 0 using Halley’s
method, given an initial guess p and the number of correct digits k (a
tolerance), is:

function ans=halleytol(p,k)
fx=inline('y+log(y)');
dfx=inline('1+1/y');
ddfx=inline('0-1/y^2');
x=zeros(1,100);
x(1)=p-2*fx(p)*dfx(p)/(2*dfx(p)^2-fx(p)*ddfx(p));
x(2)=x(1)-2*fx(x(1))*dfx(x(1))/(2*dfx(x(1))^2-fx(x(1))*ddfx(x(1)));
i=2;
while abs(x(i)-x(i-1))>.5*10^(-k)
    x(i+1)=x(i)-2*fx(x(i))*dfx(x(i))/(2*dfx(x(i))^2-fx(x(i))*ddfx(x(i)));
    i=i+1;
end
ans=x(i);
From these tables, it is easily observable that the fixed-point iteration and
bisection methods converge rather slowly compared to Halley’s and Newton’s
methods.
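To make this comparison concrete, the following Python sketch (our own, not from the paper; iteration counts are measured against a 10⁻¹⁰ absolute error, with g(x) = e^(−x) for the fixed-point iteration) counts how many steps each method needs from x_0 = 1:

```python
import math

ROOT = 0.5671432904097838  # solution of ln(x) + x = 0
TOL = 1e-10

def count_steps(step, x0):
    """Count iterations of x -> step(x) until |x - ROOT| <= TOL."""
    x, n = x0, 0
    while abs(x - ROOT) > TOL and n < 500:
        x, n = step(x), n + 1
    return n

def newton_step(x):
    return x - (math.log(x) + x) / (1 + 1 / x)

def halley_step(x):
    f, fp, fpp = math.log(x) + x, 1 + 1 / x, -1 / x**2
    return x - 2 * f * fp / (2 * fp**2 - f * fpp)

fpi_step = lambda x: math.exp(-x)  # g(x) = exp(-x), since x = exp(-x) at the root

iters = {name: count_steps(s, 1.0)
         for name, s in [("Halley", halley_step), ("Newton", newton_step), ("FPI", fpi_step)]}
# Halley and Newton finish in a handful of steps; fixed-point iteration takes dozens.
```

The gap widens as the tolerance tightens: each fixed-point step only shrinks the error by a constant factor, while each Newton or Halley step squares or cubes it.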
7 Appendix
7.1 Bisection Method Codes
7.1.1 Code to Approximate the solution of f(x)=0, where r ∈ [a, b]
%Computes approximate solution of f(x)=0
%Input: inline function f; a,b such that f(a)*f(b)<0,
%       and number of iterations k
%Output: Approximate solution xc
function xc = bisectnotol(f,a,b,k)
if sign(f(a))*sign(f(b)) >= 0
    error('f(a)f(b)<0 not satisfied!') %ceases execution
end
fa=f(a);
fb=f(b);
for i=1:k
    c=(a+b)/2;
    fc=f(c);
    if fc == 0 %c is a solution, done
        break
    end
    if sign(fc)*sign(fa)<0 %a and c make the new interval
        b=c;
        fb=fc;
    else %c and b make the new interval
        a=c;
        fa=fc;
    end
end
xc=(a+b)/2; %new midpoint is best estimate
7.1.2 Bisection Method Error Code
%This function displays a table of error ratios as shown in section 1.2
%a is the beginning point of the interval
%b is the ending point of the interval
%k is the desired number of iterations
function ans=errb(a,b,k)
fx=inline('x+log(x)');
for i=1:k
    err(i)=abs(0.5671432904-bisectnotol(fx,a,b,i));
    err(i+1)=abs(0.5671432904-bisectnotol(fx,a,b,i+1));
    ans(i)=err(i+1)/err(i);
end
7.2 Fixed Point Iteration Codes
7.2.1 Code to Approximate the solution of g(x)=x
%Computes approximate solution of g(x)=x
%Input: inline function g, starting guess x0,
%       number of steps k
%Output: Approximate solution xc
function xc=fpi(g,x0,k)
x(1)=x0;
for i=1:k
    x(i+1)=g(x(i));
end
e=x-1.81003792923*ones(1,length(x));
for j=2:k
    s(j)=e(j)/e(j-1);
end
x'; %transpose output to a column
xc=x(k+1);
7.2.2 Fixed Point Iteration Error Code
%This function displays a table of error ratios as shown in section 1.2
%k is the desired number of iterations
function ans=errf(k)
gx=inline('exp(-x)');
for i=1:k
    e(i)=abs(0.56714329040978387300-fpi(gx,1,i));
    e(i+1)=abs(0.56714329040978387300-fpi(gx,1,i+1));
    ans(i)=e(i+1)/e(i);
end
7.3 Halley’s Method Codes
7.3.1 Code to Approximate the Solution of f(x)=0 With Specified
Number of Iterations
%This program approximates the roots of f(x)=0 using Halley's method
%p is an initial guess
%k is the number of desired iterations
function ans=halley(p,k)
fx=inline('y+log(y)');
dfx=inline('1+1/y');
ddfx=inline('0-1/y^2');
x=zeros(1,k+1);
x(1)=p;
for i=1:k
    x(i+1)=x(i)-2*fx(x(i))*dfx(x(i))/(2*dfx(x(i))^2-fx(x(i))*ddfx(x(i)));
end
ans=x(k+1); %return the final iterate so the error code in 7.3.3 can use it
7.3.2 Code to Approximate the Solution of f(x)=0 With a Given
Tolerance
%This program approximates the roots of f(x)=0 using Halley's method
%p is an initial guess
%k is the number of desired correct digits
function ans=halleytol(p,k)
fx=inline('y+log(y)');
dfx=inline('1+1/y');
ddfx=inline('0-1/y^2');
x=zeros(1,100);
x(1)=p-2*fx(p)*dfx(p)/(2*dfx(p)^2-fx(p)*ddfx(p));
x(2)=x(1)-2*fx(x(1))*dfx(x(1))/(2*dfx(x(1))^2-fx(x(1))*ddfx(x(1)));
i=2;
while abs(x(i)-x(i-1))>.5*10^(-k)
    x(i+1)=x(i)-2*fx(x(i))*dfx(x(i))/(2*dfx(x(i))^2-fx(x(i))*ddfx(x(i)));
    i=i+1;
end
ans=x(i); %last iterate, which satisfies the tolerance
7.3.3 Halley’s Method Error Code
%This function displays a table of error ratios as shown in section 1.2
%k is the desired number of iterations
%This program is run with an initial guess of 1
function ans=errh(k)
for i=1:k
    err(i)=abs(0.56714329040978387300-halley(1,i));
    err(i+1)=abs(0.56714329040978387300-halley(1,i+1));
    ans(i)=err(i+1)/err(i);
end
7.4 Newton’s Method Codes
7.4.1 Code to Approximate the Solution of f(x)=0 With a Specified
Number of Iterations
%f is the objective function, which will be input inline
%df is the derivative of f, which will also be input inline
%p is an initial guess of r, where f(r)=0
%k is the number of desired iterations
function ans=newtonnotol(fx,dfx,p,k)
x(1)=p;
for i=1:k
    x(i+1)=x(i)-fx(x(i))/dfx(x(i)); %Newton's iteration step
end
ans=x(k+1);
7.4.2 Code to Approximate the Solution of f(x)=0 With a Given
Tolerance
%f is the objective function, which will be input inline
%df is the derivative of f, which will also be input inline
%x0 is an initial guess of r, where f(r)=0
%tol is the backward tolerance
%max is the maximum number of iterations
function [n, err, x0, fx]=newton(f,df,x0,tol,max)
for n=1:max
    fx=f(x0);
    dfx=df(x0);
    x1=x0-fx/dfx; %Newton's iteration step
    err=abs(x1-x0);
    x0=x1;
    if abs(fx)<tol || err<tol
        break
    end
end
7.4.3 Newton’s Method Error Codes
%This function displays a table of error ratios as shown in section 1.2
%p is an initial guess
%k is the desired number of iterations
function ans=errn(p,k)
for i=1:k
    e(i)=abs(0.56714329040978387300-newtonnotol(inline('x+log(x)'),inline('1+1/x'),p,i));
    e(i+1)=abs(0.56714329040978387300-newtonnotol(inline('x+log(x)'),inline('1+1/x'),p,i+1));
    ans(i)=e(i+1)/e(i);
end
8 References
“Halley’s Method.” Wikipedia. Web. 3 Dec. 2011.