M.Sc. (Mathematics) IV - Semester
311 43
Directorate of Distance Education

NUMERICAL ANALYSIS

ALAGAPPA UNIVERSITY
[Accredited with 'A+' Grade by NAAC (CGPA: 3.64) in the Third Cycle and Graded as Category–I University by MHRD-UGC]
(A State University Established by the Government of Tamil Nadu)
KARAIKUDI – 630 003
All rights reserved. No part of this publication which is material protected by this copyright notice may be reproduced or transmitted or utilized or stored in any form or by any means now known or hereinafter invented, electronic, digital or mechanical, including photocopying, scanning, recording or by any information storage or retrieval system, without prior written permission from the Alagappa University, Karaikudi, Tamil Nadu.
Information contained in this book has been published by VIKAS® Publishing House Pvt. Ltd. and has been obtained by its Authors from sources believed to be reliable and correct to the best of their knowledge. However, the Alagappa University, Publisher and its Authors shall in no event be liable for any errors, omissions or damages arising out of use of this information and specifically disclaim any implied warranties of merchantability or fitness for any particular use.
"The copyright shall be vested with Alagappa University"
Vikas® is the registered trademark of Vikas® Publishing House
Pvt. Ltd.
VIKAS® PUBLISHING HOUSE PVT. LTD.
E-28, Sector-8, Noida - 201301 (UP)
Phone: 0120-4078900, Fax: 0120-4078999
Regd. Office: A-27, 2nd Floor, Mohan Co-operative Industrial Estate, New Delhi 110044
Website: www.vikaspublishing.com, Email: [email protected]
Work Order No. AU/DDE/DE 12-02/Preparation and Printing of Course Materials/2020 Dated 30.01.2020 Copies - 1000
Authors:
Dr. N. Dutta, Professor of Mathematics, Head - Department of Basic Sciences & Humanities, Heritage Institute of Technology, Kolkata: Units (2, 4, 6-8, 10-13)
Vikas® Publishing House: Units (1, 3, 5, 9, 14)
SYLLABI-BOOK MAPPING TABLE
Numerical Analysis

BLOCK - I: POLYNOMIAL EQUATIONS AND EIGEN VALUE PROBLEMS
UNIT - 1: Transcendental and Polynomial Equations: Rate of Convergence of Iterative Methods.
UNIT - 2: Methods for Finding Complex Roots - Polynomial Equations.
UNIT - 3: Birge-Vieta Method, Bairstow's Method, Graeffe's Root Squaring Method.
UNIT - 4: System of Linear Algebraic Equations and Eigen Value Problems: Error Analysis of Direct and Iteration Methods.

BLOCK - II: EIGEN VECTORS, INTERPOLATION, APPROXIMATION, DIFFERENTIATION AND INTEGRATION
UNIT - 5: Finding Eigen Values and Eigen Vectors - Jacobi and Power Methods.
UNIT - 6: Interpolation and Approximation: Hermite Interpolations - Piecewise and Spline Interpolation - Bivariate Interpolation.
UNIT - 7: Approximation - Least Square Approximation and Best Approximations.
UNIT - 8: Differentiation and Integration: Numerical Differentiation - Optimum Choice of Step-Length - Extrapolation Methods.

BLOCK - III: PDE, ODE AND EULER METHODS
UNIT - 9: Partial Differentiation - Methods Based on Undetermined Coefficient - Gauss Methods.
UNIT - 10: Ordinary Differential Equations: Local Truncation Error - Problems.
UNIT - 11: Euler, Backward Euler, Midpoint - Problems.

BLOCK - IV: TAYLOR'S METHOD, R.K METHOD AND STABILITY ANALYSIS
UNIT - 12: Taylor's Method - Related Problems.
UNIT - 13: Second Order Runge Kutta Method - Stability Analysis.
UNIT - 14: Stability Analysis.
Syllabi Mapping in Book

Unit 1: Transcendental and Polynomial Equations (Pages 3-29)
Unit 2: Methods for Finding Complex Roots and Polynomial Equations (Pages 30-53)
Unit 3: Birge-Vieta, Bairstow's and Graeffe's Root Squaring Methods (Pages 54-65)
Unit 4: Solution of Simultaneous Linear Equation (Pages 66-85)
Unit 5: Eigen Values and Eigen Vectors (Pages 86-106)
Unit 6: Interpolation and Approximation (Pages 107-146)
Unit 7: Approximation (Pages 147-171)
Unit 8: Numerical Integration and Numerical Differentiation (Pages 172-220)
Unit 9: Partial Differential Equations (Pages 221-283)
Unit 10: Ordinary Differential Equations (Pages 284-299)
Unit 11: Euler's Method (Pages 300-307)
Unit 12: Taylor's Method (Pages 308-312)
Unit 13: Runge Kutta Method (Pages 313-321)
Unit 14: Stability Analysis (Pages 322-328)
CONTENTS

BLOCK I: POLYNOMIAL EQUATIONS AND EIGEN VALUE PROBLEMS

UNIT 1 TRANSCENDENTAL AND POLYNOMIAL EQUATIONS 1-29
1.0 Introduction
1.1 Objectives
1.2 Transcendental and Polynomial Equations
1.3 Answers to Check Your Progress Questions
1.4 Summary
1.5 Key Words
1.6 Self Assessment Questions and Exercises
1.7 Further Readings

UNIT 2 METHODS FOR FINDING COMPLEX ROOTS AND POLYNOMIAL EQUATIONS 30-53
2.0 Introduction
2.1 Objectives
2.2 Methods for Finding Complex Roots
2.3 Polynomial Equations
2.4 Answers to Check Your Progress Questions
2.5 Summary
2.6 Key Words
2.7 Self Assessment Questions and Exercises
2.8 Further Readings

UNIT 3 BIRGE-VIETA, BAIRSTOW'S AND GRAEFFE'S ROOT SQUARING METHODS 54-65
3.0 Introduction
3.1 Objectives
3.2 Birge-Vieta Method
3.3 Bairstow's Method
3.4 Graeffe's Root Squaring Method
3.5 Answers to Check Your Progress Questions
3.6 Summary
3.7 Key Words
3.8 Self-Assessment Questions and Exercises
3.9 Further Readings

UNIT 4 SOLUTION OF SIMULTANEOUS LINEAR EQUATION 66-85
4.0 Introduction
4.1 Objectives
4.2 System of Linear Equations
    4.2.1 Classical Methods
    4.2.2 Elimination Methods
    4.2.3 Iterative Methods
    4.2.4 Computation of the Inverse of a Matrix by using Gaussian Elimination Method
4.3 Answers to Check Your Progress Questions
4.4 Summary
4.5 Key Words
4.6 Self Assessment Questions and Exercises
4.7 Further Readings
BLOCK II: EIGEN VECTORS, INTERPOLATION, APPROXIMATION, DIFFERENTIATION AND INTEGRATION

UNIT 5 EIGEN VALUES AND EIGEN VECTORS 86-106
5.0 Introduction
5.1 Objectives
5.2 Finding Eigen Values and Eigen Vectors
5.3 Jacobi and Power Methods
5.4 Answers to Check Your Progress Questions
5.5 Summary
5.6 Key Words
5.7 Self Assessment Questions and Exercises
5.8 Further Readings

UNIT 6 INTERPOLATION AND APPROXIMATION 107-146
6.0 Introduction
6.1 Objectives
6.2 Interpolation and Approximation
6.3 Answers to Check Your Progress Questions
6.4 Summary
6.5 Key Words
6.6 Self Assessment Questions and Exercises
6.7 Further Readings

UNIT 7 APPROXIMATION 147-171
7.0 Introduction
7.1 Objectives
7.2 Approximation
7.3 Least Square Approximation
7.4 Answers to Check Your Progress Questions
7.5 Summary
7.6 Key Words
7.7 Self Assessment Questions and Exercises
7.8 Further Readings

UNIT 8 NUMERICAL INTEGRATION AND NUMERICAL DIFFERENTIATION 172-220
8.0 Introduction
8.1 Objectives
8.2 Numerical Integration
8.3 Numerical Differentiation
8.4 Optimum Choice of Step Length
8.5 Extrapolation Method
8.6 Answers to Check Your Progress Questions
8.7 Summary
8.8 Key Words
8.9 Self Assessment Questions and Exercises
8.10 Further Readings

BLOCK III: PDE, ODE AND EULER METHODS

UNIT 9 PARTIAL DIFFERENTIAL EQUATIONS 221-283
9.0 Introduction
9.1 Objectives
9.2 Partial Differential Equation of the First Order Lagrange's Solution
9.3 Solution of Some Special Types of Equations
9.4 Charpit's General Method of Solution and Its Special Cases
9.5 Partial Differential Equations of Second and Higher Orders
    9.5.1 Classification of Linear Partial Differential Equations of Second Order
9.6 Homogeneous and Non-Homogeneous Equations with Constant Coefficients
9.7 Partial Differential Equations Reducible to Equations with Constant Coefficients
9.8 Answers to Check Your Progress Questions
9.9 Summary
9.10 Key Words
9.11 Self Assessment Questions and Exercises
9.12 Further Readings

UNIT 10 ORDINARY DIFFERENTIAL EQUATIONS 284-299
10.0 Introduction
10.1 Objectives
10.2 Ordinary Differential Equations
10.3 Answers to Check Your Progress Questions
10.4 Summary
10.5 Key Words
10.6 Self Assessment Questions and Exercises
10.7 Further Readings

UNIT 11 EULER'S METHOD 300-307
11.0 Introduction
11.1 Objectives
11.2 Euler Method
11.3 Answers to Check Your Progress Questions
11.4 Summary
11.5 Key Words
11.6 Self Assessment Questions and Exercises
11.7 Further Readings
BLOCK IV: TAYLOR'S METHOD, R.K METHOD AND STABILITY ANALYSIS

UNIT 12 TAYLOR'S METHOD 308-312
12.0 Introduction
12.1 Objectives
12.2 Taylor's Method
12.3 Answers to Check Your Progress Questions
12.4 Summary
12.5 Key Words
12.6 Self Assessment Questions and Exercises
12.7 Further Readings

UNIT 13 RUNGE KUTTA METHOD 313-321
13.0 Introduction
13.1 Objectives
13.2 Runge Kutta Method
13.3 Answers to Check Your Progress Questions
13.4 Summary
13.5 Key Words
13.6 Self Assessment Questions and Exercises
13.7 Further Readings

UNIT 14 STABILITY ANALYSIS 322-328
14.0 Introduction
14.1 Objectives
14.2 Stability Analysis
14.3 Answers to Check Your Progress Questions
14.4 Summary
14.5 Key Words
14.6 Self Assessment Questions and Exercises
14.7 Further Readings
INTRODUCTION
Numerical analysis is the study of algorithms that find solutions to problems of continuous mathematics. It helps in obtaining approximate solutions while maintaining reasonable bounds on errors. Although numerical analysis has applications in all fields of engineering and the physical sciences, in the 21st century the life sciences and the arts have also adopted elements of scientific computation. Ordinary differential equations are used for calculating the movement of heavenly bodies, i.e., planets, stars and galaxies. Numerical analysis also evaluates optimization problems occurring in portfolio management and computes stochastic differential equations to solve problems related to medicine and biology. Airlines use sophisticated optimization algorithms to finalize ticket prices, airplane and crew assignments and fuel needs. Insurance companies too use numerical programs for actuarial analysis. The basic aim of numerical analysis is to design and analyse techniques to compute approximate and accurate solutions to difficult problems.
In numerical analysis, two kinds of methods are involved, namely direct and iterative methods. Direct methods compute the solution to a problem in a finite number of steps, whereas iterative methods start from an initial guess and form successive approximations that converge to the exact solution only in the limit. Iterative methods are more common than direct methods in numerical analysis. The study of errors is an important part of numerical analysis. There are different methods to detect and fix errors that occur in the solution of any problem. Round-off errors occur because it is not possible to represent all real numbers exactly on a machine with finite memory. Truncation errors arise when an iterative method is terminated or a mathematical procedure is approximated and the approximate solution differs from the exact solution.
This book, Numerical Analysis, is divided into four blocks that are further divided into fourteen units which will help you understand how to solve transcendental and polynomial equations, the rate of convergence of iterative methods, methods for finding complex roots of polynomial equations, the Birge-Vieta method, Bairstow's method, Graeffe's root squaring method, systems of linear algebraic equations and eigenvalue problems, error analysis of direct and iteration methods, finding eigenvalues and eigenvectors by the Jacobi and power methods, interpolation and approximation, Hermite interpolation, piecewise and spline interpolation, least square approximation and best approximations, differentiation and integration, numerical differentiation, partial differentiation, ordinary differential equations, the Euler, backward Euler and Taylor's methods, second order Runge Kutta methods, and stability analysis.
The book follows the Self-Instruction Mode or SIM format wherein each unit begins with an 'Introduction' to the topic followed by an outline of the 'Objectives'. The content is presented in a simple, organized and comprehensive form interspersed with 'Check Your Progress' questions and answers for better understanding of the topics covered. A list of 'Key Words' along with a 'Summary' and a set of 'Self Assessment Questions and Exercises' is provided at the end of each unit for effective recapitulation. Logically arranged topics, relevant solved examples and illustrations have been included for better understanding of the topics.
NOTES

Self-Instructional Material
BLOCK - I: POLYNOMIAL EQUATIONS AND EIGEN VALUE PROBLEMS

UNIT 1 TRANSCENDENTAL AND POLYNOMIAL EQUATIONS
Structure
1.0 Introduction
1.1 Objectives
1.2 Transcendental and Polynomial Equations
1.3 Answers to Check Your Progress Questions
1.4 Summary
1.5 Key Words
1.6 Self Assessment Questions and Exercises
1.7 Further Readings
1.0 INTRODUCTION
In mathematics, a polynomial is an expression consisting of variables (also called indeterminates) and coefficients, that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponents of variables. An example of a polynomial of a single indeterminate, x, is x² – 4x + 7. An example in three variables is x³ + 2xyz² – yz + 1.
Polynomials appear in many areas of mathematics and science. For example, they are used to form polynomial equations, which encode a wide range of problems, from elementary word problems to complicated scientific problems; they are used to define polynomial functions, which appear in settings ranging from basic chemistry and physics to economics and social science; and they are used in calculus and numerical analysis to approximate other functions. In advanced mathematics, polynomials are used to construct polynomial rings and algebraic varieties, central concepts in algebra and algebraic geometry.
In this unit, you will study about transcendental and polynomial equations, and the rate of convergence of iterative methods.
1.1 OBJECTIVES
After going through this unit, you will be able to:
• Understand linear integral equations and some basic identities
• Reduce initial value problems to Volterra integral equations
• Know the methods of successive approximation and successive substitution to solve Volterra equations of the second kind, iterated kernels and Neumann series for Volterra equations
• Express the resolvent kernel as a series in λ
• Know the Laplace transform method for a difference kernel
• Find the solution of a Volterra equation of the first kind
• Reduce boundary value problems to Fredholm integral equations
• Know the methods of successive approximation and successive substitution to solve Fredholm equations of the second kind
• Know iterated kernels and Neumann series for Fredholm equations
• Express the resolvent kernel as a sum of series and the Fredholm resolvent kernel as a ratio of two series
• Know Fredholm equations with separable kernels, approximation of a kernel by a separable kernel and the Fredholm alternative
1.2 TRANSCENDENTAL AND POLYNOMIAL EQUATIONS
In mathematics, an integral equation is an equation in which an unknown function appears under an integral sign.

An integral equation in u(x) is given by,

u(x) = f(x) + λ ∫_{α(x)}^{β(x)} K(x, t) u(t) dt    …(1.1)

where K(x, t) is called the kernel of the integral Equation (1.1), and α(x) and β(x) are the limits of integration. It can be easily observed that the unknown function u(x) appears under the integral sign. It is to be noted here that both the kernel K(x, t) and the function f(x) in Equation (1.1) are given functions, and λ is a constant parameter. We have to determine the unknown function u(x) that will satisfy Equation (1.1).
An integral equation can be classified as a linear or nonlinear integral equation. The most frequently used integral equations fall under two major classes, namely Volterra and Fredholm integral equations. In this unit we will distinguish the following integral equations:
• Volterra integral equations
• Fredholm integral equations
Volterra Integral Equations

The most standard form of Volterra linear integral equations is of the form

φ(x) u(x) = f(x) + λ ∫_a^x K(x, t) u(t) dt

where the limits of integration are functions of x and the unknown function u(x) appears linearly under the integral sign.

If the function φ(x) = 1, then the equation becomes

u(x) = f(x) + λ ∫_a^x K(x, t) u(t) dt

and this equation is known as the Volterra integral equation of the second kind; whereas if φ(x) = 0, then the equation becomes

f(x) + λ ∫_a^x K(x, t) u(t) dt = 0

which is known as the Volterra equation of the first kind.
Fredholm Integral Equations

The most standard form of the Fredholm linear integral equations is given by,

φ(x) u(x) = f(x) + λ ∫_a^b K(x, t) u(t) dt    …(1.2)

where the limits of integration a and b are constants and the unknown function u(x) appears linearly under the integral sign. If the function φ(x) = 1, then Equation (1.2) becomes,

u(x) = f(x) + λ ∫_a^b K(x, t) u(t) dt

and this equation is called the Fredholm integral equation of the second kind; whereas if φ(x) = 0, then Equation (1.2) gives,

f(x) + λ ∫_a^b K(x, t) u(t) dt = 0

which is called the Fredholm integral equation of the first kind.
Initial Value Problems Reduced to Volterra Integral Equations

Consider the integral equation,
The Laplace transform of f(t) is defined as

L{f(t)} = ∫_0^∞ e^{−st} f(t) dt

Using this definition, the above integral equation can be transformed to
In a similar manner if y( ) = then
This is inverted by convolution theorem to give
If
Then. Using the convolution theorem, we get the Laplace inverse as

Thus the n-fold integrals can be expressed as a single integral as,

∫_0^x ∫_0^{x₁} ⋯ ∫_0^{x_{n−1}} f(t) dt dx_{n−1} ⋯ dx₁ = (1/(n−1)!) ∫_0^x (x − t)^{n−1} f(t) dt
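This reduction of an n-fold repeated integral to a single integral (Cauchy's formula for repeated integration) can be checked symbolically. A minimal sketch with sympy, using an assumed sample integrand f(t) = e^t and n = 3:

```python
import sympy as sp

x, x1, x2, t = sp.symbols('x x1 x2 t')
f = sp.exp(t)        # assumed sample integrand
n = 3

# Left side: the 3-fold repeated integral of f from 0 to x.
inner = sp.integrate(f, (t, 0, x1))
middle = sp.integrate(inner, (x1, 0, x2))
left = sp.integrate(middle, (x2, 0, x))

# Right side: the single-integral form with kernel (x - t)^(n-1)/(n-1)!.
right = sp.integrate((x - t) ** (n - 1) / sp.factorial(n - 1) * f, (t, 0, x))

print(sp.simplify(left - right))   # 0
```

Both sides evaluate to e^x − x²/2 − x − 1, so the difference simplifies to zero.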
Method of Successive Approximation to Solve Volterra Integral Equations of Second Kind

Volterra integral equation of the second kind is of the form,

u(x) = f(x) + λ ∫_a^x K(x, t) u(t) dt

where K(x, t) is the kernel of the integral equation, f(x) a continuous function of x and λ a parameter. Here, f(x) and K(x, t) are the given functions but u(x) is an unknown function that needs to be determined. The limits of integration for the Volterra integral equations are functions of x.
In this method of approximation, we replace the unknown function u(x) under the integral sign of the Volterra equation by any selected real valued continuous function u₀(x), called the zeroth approximation. This substitution gives the first approximation u₁(x) by

u₁(x) = f(x) + λ ∫_a^x K(x, t) u₀(t) dt

It is obvious that u₁(x) is continuous if f(x), K(x, t) and u₀(x) are continuous. The second approximation u₂(x) can be obtained similarly by replacing u₀(x) in the above equation by the u₁(x) obtained above. And we find,

u₂(x) = f(x) + λ ∫_a^x K(x, t) u₁(t) dt
Proceeding in a similar way, we obtain an infinite sequence of functions that satisfies the recurrence relation

uₙ(x) = f(x) + λ ∫_a^x K(x, t) uₙ₋₁(t) dt

for n = 1, 2, 3, . . ., where u₀(x) is any selected real valued function. The most commonly selected functions for u₀(x) are 0, 1 and x. Thus, in the limit, the solution u(x) of the Volterra equation is obtained as,

u(x) = lim_{n→∞} uₙ(x)

so that the resulting solution u(x) is independent of the choice of the zeroth approximation u₀(x). This process of approximation is extremely simple. However, if we follow Picard's successive approximation method, we need to set u₀(x) = f(x), and determine u₁(x) and the other successive approximations as follows:
The last equation is the recurrence relation. Consider

Where,

Thus, it can be easily observed that,

if ψ₀(x) = f(x), and further that

where m = 1, 2, 3, . . ., and hence,
The repeated integrals above may be considered as a double integral over the triangular region; thus interchanging the order of integration, we obtain

Where,

Similarly,

Where the iterated kernels Kₙ(x, t) are defined by the recurrence formula,

K₁(x, t) = K(x, t),   Kₙ(x, t) = ∫_t^x K(x, s) Kₙ₋₁(s, t) ds,   n = 2, 3, . . .

Thus, the solution for uₙ(x) can be written as,

uₙ(x) = f(x) + Σ_{m=1}^{n} λᵐ ∫_a^x Kₘ(x, t) f(t) dt
Resolvent Kernel as a Series in λ

It is also possible that we should be led to the solution of the Volterra equation by means of the sum, if it exists, of the infinite series defined by uₙ(x). Thus, using ψₘ(x), we have

hence it is also possible that the solution of the Volterra equation will be given, as n → ∞, by
Where,

H(x, t; λ) = Σ_{m=1}^{∞} λ^{m−1} Kₘ(x, t)

is known as the resolvent kernel.
Laplace Transform Method for a Difference Kernel

Volterra integral equation of convolution type, such as

u(x) = f(x) + λ ∫_0^x K(x − t) u(t) dt

where the kernel K(x − t) is of convolution type, can very easily be solved using the Laplace transform method. To begin the solution process, we first define the Laplace transform of u(x),

L{u(x)} = ∫_0^∞ e^{−sx} u(x) dx

Using the Laplace transform of the convolution integral, we have

L{∫_0^x K(x − t) u(t) dt} = L{K(x)} L{u(x)}

Thus, taking the Laplace transform of the Volterra integral equation of convolution type, we obtain

L{u(x)} = L{f(x)} + λ L{K(x)} L{u(x)}

and the solution for L{u(x)} is given by

L{u(x)} = L{f(x)} / (1 − λ L{K(x)})

By inverting this transform, we obtain the solution u(x).
Example 1: Solve the following Volterra integral equation of the second kind of the convolution type using (a) the Laplace transform method and (b) the successive approximation method.

Solution: (a) Taking the Laplace transforms, we obtain

and solving for L{u(x)} yields

The Laplace inverse of the above can be written immediately as

where δ(x) is the Dirac delta function.

(b) Solution by successive approximation: Let us assume that the zeroth approximation is u₀(x) = 0. Then the first approximation can be obtained as u₁(x) = f(x). Hence, the second approximation is given by

Proceeding in a similar manner, the third approximation can be obtained as
In the double integration the order of integration is changed to obtain the final result. In a similar manner, the fourth approximation u₄(x) can be at once written as

Now, as n → ∞,

Here, the resolvent kernel is H(x, t; λ) = e^{(1+λ)(x−t)}.
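The resolvent kernel e^{(1+λ)(x−t)} can be checked numerically: for a kernel of the form e^{x−t}, the function u(x) = f(x) + λ∫₀ˣ H(x, t; λ) f(t) dt should satisfy the original equation u(x) = f(x) + λ∫₀ˣ e^{x−t} u(t) dt. A minimal sketch, assuming the sample choices f(x) = 1 and λ = 0.5, with composite Simpson quadrature:

```python
import math

lam = 0.5                    # assumed sample value of the parameter λ
f = lambda x: 1.0            # assumed free term f(x) = 1
H = lambda x, t: math.exp((1 + lam) * (x - t))  # resolvent kernel from the text
K = lambda x, t: math.exp(x - t)                # convolution kernel e^(x-t)

def simpson(g, a, b, n=200):       # composite Simpson rule, n even
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3

def u(x):                    # u(x) = f(x) + λ ∫_0^x H(x,t;λ) f(t) dt
    return f(x) + lam * simpson(lambda t: H(x, t) * f(t), 0, x)

x = 1.0
rhs = f(x) + lam * simpson(lambda t: K(x, t) * u(t), 0, x)
print(abs(u(x) - rhs) < 1e-6)      # the resolvent formula satisfies the equation
```

For this choice of f and λ the closed form is u(x) = 1 + (λ/(1+λ))(e^{(1+λ)x} − 1), which the quadrature reproduces to high accuracy.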
Method of Successive Substitution to Solve Volterra Integral Equations of Second Kind

In this method, we substitute successively for u(x) its value as given by the Volterra integral equation of the second kind. We find that

Where,
is the remainder after n terms. Now,

Accordingly, the general series for u(x) can be written as

Example 2: Solve the integral equation

Solution: By the method of successive substitution, we get
Iterated Kernels and Neumann Series for Volterra Equations

The integral,

where H(x, t; λ) is the resolvent kernel, is the solution of the Volterra integral equation of the second kind given by

u(x) = f(x) + λ ∫_a^x K(x, t) u(t) dt

When both K(x, t) and f(x) are continuous, the resolvent kernel can be constructed in terms of the Neumann series
Where Kₙ(x, t) is the iterated kernel, which is evaluated as

Kₙ(x, t) = ∫_t^x K(x, s) Kₙ₋₁(s, t) ds

and K₁(x, t) = K(x, t).

For showing this, assume the following infinite series form for the solution u(x),

Substituting this in the Volterra integral equation of the second kind and assuming good convergence, which allows the exchange of summation with the integration operation, we get

Equating coefficients of λⁿ on both sides, we have

By successive substitution, we get

And
Similarly,
So we can now write,
as the general term of the
iterated kernel and
Therefore,
Solution of a Volterra Integral Equation of the First Kind

The first kind Volterra equation is usually written as,

∫_a^x K(x, t) u(t) dt = f(x)

If the derivatives exist and are continuous, then the solution of this equation is found by reducing it to its second kind and then proceeding with the methods discussed above.

Differentiating the above Volterra equation and applying the Leibnitz rule, we get

K(x, x) u(x) + ∫_a^x (∂K(x, t)/∂x) u(t) dt = f′(x)

If K(x, x) ≠ 0, then

The second way to obtain the second kind Volterra integral equation from the first kind is by using integration by parts, if we set

Or

By integrating by parts, we have

which reduces to

Giving
Since φ(0) = 0, and dividing out by K(x, x), we have

In this method the function f(x) is not required to be differentiable. But u(x) must finally be calculated by differentiating the function φ(x) given by the formula

where H(x, t; 1) is the corresponding resolvent kernel. To do this, f(x) must be differentiable.
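The differentiation route can be checked symbolically. A minimal sketch with sympy, assuming the sample kernel K(x, t) = e^{x−t} and right-hand side f(x) = x e^x (so f(0) = 0): for this kernel ∂K/∂x = K, so differentiating ∫₀ˣ K u dt = f and subtracting the original equation gives the closed form u(x) = f′(x) − f(x).

```python
import sympy as sp

x, t = sp.symbols('x t')
K = sp.exp(x - t)      # assumed sample kernel
f = x * sp.exp(x)      # assumed right-hand side, with f(0) = 0

# Leibnitz rule: d/dx ∫_0^x K(x,t) u(t) dt = K(x,x) u(x) + ∫_0^x K_x(x,t) u(t) dt.
# Here K_x = K, so the first-kind equation reduces to u(x) = f'(x) - f(x).
u = sp.simplify(sp.diff(f, x) - f)
print(u)                      # exp(x)

# Check that u solves the original first-kind equation.
lhs = sp.integrate(K * u.subs(x, t), (t, 0, x))
print(sp.simplify(lhs - f))   # 0
```

The recovered solution u(x) = e^x indeed reproduces ∫₀ˣ e^{x−t} e^t dt = x e^x.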
Boundary Value Problems Reduced to Fredholm Integral Equations

A boundary value problem can be converted to an equivalent Fredholm integral equation. But this method is complicated and so is rarely used. The method is demonstrated with the help of the following illustration:

Consider the differential equation

with boundary conditions

where α and β are given constants. Make the transformation,

Integrating both sides from a to x, we get

Integrating with respect to x from a to x and applying the given boundary condition at x = a, we get
And using the boundary condition at x = b gives,

And the unknown constant is determined as

Hence the solution can be rewritten as,

Therefore,

from which y(x) can be determined. It is a complicated procedure to determine the solution of a Boundary Value Problem (BVP) by the equivalent Fredholm equation.

If a = 0 and b = 1, i.e., 0 ≤ x ≤ 1, then
y(x) = α + x y′(0) + ∫_0^x ∫_0^t u(s) ds dt

And hence the unknown constant can be determined as

Thus,

Where the kernel is given by,

It can be easily verified that K(x, t) = K(t, x), confirming that the kernel is symmetric. The resulting Fredholm integral equation is in terms of u(x).
Example 3: Consider the boundary value problem,

Solution: Integrating the equation with respect to x from 0 to x two times yields

To determine the unknown constant we use the condition at x = 1, i.e., y(1) = y₁. Hence,

And

Therefore,

Where the kernel is given by,

If we specialize our problem with the simple linear BVP y″(x) = −y(x), 0 < x < 1, with the boundary conditions y(0) = y₀, y(1) = y₁, then y(x) reduces to the second kind Fredholm integral equation,

where F(x) = y₀ + x(y₁ − y₀). It can be easily verified that K(x, t) = K(t, x), confirming that the kernel is symmetric.
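The equivalence can be checked numerically by solving the integral equation on a grid (the Nyström method with trapezoid weights) and comparing against the exact solution of y″ = −y. This sketch assumes the standard symmetric kernel K(x, t) = t(1 − x) for t ≤ x and x(1 − t) for t ≥ x, consistent with the symmetry noted above, together with sample boundary values:

```python
import numpy as np

y0, y1 = 1.0, 2.0                # assumed sample boundary values
n = 201
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1)); w[0] *= 0.5; w[-1] *= 0.5   # trapezoid weights

# Assumed symmetric kernel: K(x,t) = t(1-x) for t <= x, x(1-t) for t >= x.
T, X = np.meshgrid(x, x)         # X[i,j] = x_i, T[i,j] = t_j
K = np.where(T <= X, T * (1 - X), X * (1 - T))
assert np.allclose(K, K.T)       # kernel symmetry K(x,t) = K(t,x)

F = y0 + x * (y1 - y0)           # F(x) = y0 + x(y1 - y0)
# Nystrom discretization of y = F + ∫ K y:  (I - K W) y = F
y = np.linalg.solve(np.eye(n) - K * w, F)

# Exact solution of y'' = -y with y(0) = y0, y(1) = y1.
exact = y0 * np.cos(x) + (y1 - y0 * np.cos(1.0)) / np.sin(1.0) * np.sin(x)
print(np.max(np.abs(y - exact)) < 1e-3)
```

The discretized integral equation reproduces the boundary value problem's solution to within the quadrature error.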
Method of Successive Approximation to Solve Fredholm Equations of Second Kind

The successive approximation method, which was successfully applied to Volterra integral equations of the second kind, can also be applied to the basic Fredholm integral equation of the second kind:

u(x) = f(x) + λ ∫_a^b K(x, t) u(t) dt

We set u₀(x) = f(x). Note that the zeroth approximation can be any selected real valued function u₀(x), a ≤ x ≤ b.

Accordingly, the first approximation u₁(x) of the solution u(x) is defined by

u₁(x) = f(x) + λ ∫_a^b K(x, t) u₀(t) dt

The second approximation u₂(x) of the solution u(x) can be obtained by replacing u₀(x) by the previously obtained u₁(x). Hence we find

This process can be continued in the same manner to obtain the nth approximation. In other words, the various approximations can be put in a recursive scheme given by

uₙ(x) = f(x) + λ ∫_a^b K(x, t) uₙ₋₁(t) dt,   n = 1, 2, 3, . . .

Even though we can select any real valued function for the zeroth approximation u₀(x), the most commonly selected functions for u₀(x) are u₀(x) = 0, 1 or x. With the selection u₀(x) = 0, the first approximation is u₁(x) = f(x). The final solution u(x) is obtained by

u(x) = lim_{n→∞} uₙ(x)

so that the resulting solution u(x) is independent of the choice of u₀(x). This is known as Picard's method. The Neumann series is obtained if we set u₀(x) = f(x) such that
Where,

The second approximation u₂(x) can be obtained as,

Where,

The final solution u(x), known as the Neumann series, can be obtained as,

Where,

Example 4: Solve the Fredholm integral equation

by using the successive approximation method.

Solution: Let us consider the zeroth approximation u₀(x) = 1; then the first approximation can be computed as
Thus,
And
is the solution.
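The recursive scheme uₙ = f + λ∫K uₙ₋₁ is easy to run on a grid. A minimal numerical sketch, assuming the sample equation u(x) = x + λ∫₀¹ x t u(t) dt with λ = 1 (not the book's Example 4), whose exact solution is u(x) = 3x/(3 − λ):

```python
import numpy as np

lam = 1.0                        # assumed sample parameter value
x = np.linspace(0.0, 1.0, 201)
w = np.full(x.size, 1.0 / (x.size - 1)); w[0] *= 0.5; w[-1] *= 0.5  # trapezoid

f = x.copy()                     # assumed free term f(x) = x
K = np.outer(x, x)               # assumed separable kernel K(x,t) = x t

u = np.zeros_like(x)             # zeroth approximation u0(x) = 0
for _ in range(30):              # u_n(x) = f(x) + λ ∫_0^1 K(x,t) u_{n-1}(t) dt
    u = f + lam * K @ (w * u)

# Exact solution of u = x + λ x ∫_0^1 t u(t) dt is u(x) = 3x / (3 - λ).
print(np.max(np.abs(u - 3 * x / (3 - lam))) < 1e-3)
```

The iteration is a contraction here (the factor is λ/3 per step), so thirty iterations are far more than enough for convergence to quadrature accuracy.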
Method of Successive Substitutions to Solve Fredholm Equations of Second Kind

This method is almost analogous to the successive approximation method, except that it concerns the solution of the integral equation in a series form through evaluating single and multiple integrals.

Here K(x, t) ≠ 0 is real and continuous in the rectangle R, for which a ≤ x ≤ b and a ≤ t ≤ b; f(x) ≠ 0 is real and continuous in the interval I, for which a ≤ x ≤ b; and λ is a constant parameter.
Substituting the value of u(t) in this equation, we get

or

Hence,

The unknown function u(x) is replaced by the known function f(x).

Example 5: Use the method of successive substitutions to solve the Fredholm integral equation

Solution: Here, λ = 12, f(x) = cos x, and K(x, t) = sin x
Iterated Kernels and Neumann Series for Fredholm Equations

The Liouville–Neumann series is defined as,

It is a unique continuous solution of a Fredholm integral equation of the second kind, defined as

If the nth iterated kernel is defined as

Then,

And the resolvent kernel is given by

Resolvent Kernel as a Sum of Series

Let the Fredholm equation of the second kind be,

Where the range of the separable kernel, which consists of arbitrary linear combinations of the functions fᵢ, is given by,
Therefore,

To find uⱼ define,

Consider the algebraic problem. If we replace h by a kernel cK instead of the separable K, then the equation becomes,

where I is the identity matrix. Let D(c) be the determinant of the matrix M; then

If the determinant D(c) is not zero, then the matrix M has an inverse,

Then the solution of the algebraic equation becomes

Now by substituting these values of uᵢ in the Fredholm integral equation and then expressing hᵢ in terms of h(x), we get

where H is called the resolvent kernel.
Fredholm Resolvent Kernel as a Ratio of Two Series

If K(x, t) is a continuous kernel, not necessarily real, then the resolvent kernel can be expressed as the ratio of two infinite series of powers of λ, such that both of these series converge for all values of λ. Expressing the resolvent kernel as a ratio,
Where,

And

Here the coefficients Cᵢ and the function can be determined by,

The solution of the equation given by,

now becomes

This method is preferable only when the kernel is separable.
Fredholm's Equations with Separable Kernels

This section deals with the study of the homogeneous Fredholm integral equation with separable kernel given by,

u(x) = λ ∫_a^b K(x, t) u(t) dt

This equation is obtained from the second kind Fredholm equation by setting f(x) = 0. It is easily seen that u(x) = 0 is a solution, which is known as the trivial solution. The homogeneous Fredholm integral equation with a separable kernel may have nontrivial solutions. We shall use the direct computational method to obtain the solution in this case. Without loss of generality, we assume that
So that,

For

We note that λ = 0 gives the trivial solution u(x) = 0. However, to determine the nontrivial solution, we need to determine the value of the parameter λ by considering λ ≠ 0. Therefore,

Or

which gives a numerical value for λ by evaluating the definite integral.
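The direct computational method can be carried out symbolically. A sketch assuming the sample separable kernel K(x, t) = x t on [0, 1]: every solution of u(x) = λ∫₀¹ x t u(t) dt is proportional to x, so substituting u(x) = c x and cancelling c ≠ 0 yields the characteristic value of λ.

```python
import sympy as sp

x, t, lam = sp.symbols('x t lamda')

# Homogeneous equation u(x) = λ ∫_0^1 x t u(t) dt with K(x,t) = x t (assumed).
# Try u(x) = c x and cancel c ≠ 0: the nontrivial-solution condition is
#   1 = λ ∫_0^1 t · t dt
condition = sp.Eq(1, lam * sp.integrate(t * t, (t, 0, 1)))
lam0 = sp.solve(condition, lam)[0]
print(lam0)        # 3

# Verify: with λ = λ0, u(x) = x satisfies the equation identically.
residual = sp.simplify(x - lam0 * sp.integrate(x * t * t, (t, 0, 1)))
print(residual)    # 0
```

Here ∫₀¹ t² dt = 1/3, so the definite integral gives the characteristic value λ = 3 with nontrivial eigenfunction u(x) = x.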
Check Your Progress

1. What is an integral equation?
2. Write the standard form of the Volterra equation.
3. What is the first step in the method of successive approximation to solve Volterra equations?
4. Write the Volterra integral equation of convolution type.
5. What are the two methods to reduce a Volterra integral equation of the first kind to a second kind?
6. Express the resolvent kernel as a ratio of two series.
1.3 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. An integral equation is an equation in which an unknown function appears under an integral sign.
2. The most standard form of Volterra linear integral equations is of the form

φ(x) u(x) = f(x) + λ ∫_a^x K(x, t) u(t) dt

3. In the method of successive approximation, we first replace the unknown function u(x) under the integral sign of the Volterra equation by any selected real valued continuous function u₀(x).

4. Volterra integral equation of convolution type is

u(x) = f(x) + λ ∫_0^x K(x − t) u(t) dt

where the kernel K(x − t) is of convolution type.

5. The two methods to reduce a Volterra integral equation of the first kind to a second kind are differentiating the Volterra equation and applying the Leibnitz rule, and using integration by parts.

6. Expressing the resolvent kernel as a ratio,
6. Expressing the resolvent kernel as a ratio of two series,
Γ(x, t; λ) = D(x, t; λ) / D(λ)
where D(x, t; λ) and D(λ) are power series in λ.
1.4 SUMMARY
An integral equation in u(x) is given by,
u(x) = f(x) + λ ∫_{α(x)}^{β(x)} K(x, t) u(t) dt
where K(x, t) is called the kernel of the integral equation, and α(x) and β(x) are the limits of integration.
The most frequently used integral equations fall under two major classes, namely Volterra and Fredholm integral equations. In the Volterra equation one of the limits of integration is variable, while in the Fredholm equation both the limits are constant.
1.5 KEY WORDS
Integral equation: It is an equation in which an unknown function appears under an integral sign.
1.6 SELF-ASSESSMENT QUESTIONS AND EXERCISES
Short-Answer Questions
1. Write the two kinds of Volterra integral equations.
2. What is the basic difference between Volterra and Fredholm equations?
3. List the methods used to solve Fredholm and Volterra integral equations of the second kind.
4. How can you find the solution of the Volterra integral equation of the first kind?
5. Define iterated kernel for Fredholm and Volterra integral equations.
Long-Answer Questions
1. Reduce the following initial value problem to an equivalent Volterra equation:
2. Solve the following Volterra integral equations using the method of successive approximation with five approximations, with u0(x) = 0:
3. Find the solution of the following Volterra integral equations of the first kind:
(a)
(b)
(c)
(d)
4. Reduce the following initial value problem into an equivalent Fredholm equation:
5. Solve the following linear Fredholm integral equations:
(a)
(b)
(c)
(d)
(e)
1.7 FURTHER READINGS
Jain, M. K., S. R. K. Iyengar and R. K. Jain. 2007. Numerical Methods for Scientific and Engineering Computation. New Delhi: New Age International (P) Limited.
Atkinson, Kendall E. 1989. An Introduction to Numerical Analysis, 2nd Edition. US: John Wiley & Sons.
Jain, M. K. 1983. Numerical Solution of Differential Equations. New Delhi: New Age International (P) Limited.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis: An Algorithmic Approach. New York: McGraw-Hill.
Skeel, Robert D. and Jerry B. Keiper. 1993. Elementary Numerical Computing with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas Publishing House Pvt. Ltd.
-
Methods for Finding Complex Roots and Polynomial Equations
NOTES
Self-Instructional Material
UNIT 2 METHODS FOR FINDING COMPLEX ROOTS AND POLYNOMIAL EQUATIONS
Structure
2.0 Introduction
2.1 Objectives
2.2 Methods for Finding Complex Roots
2.3 Polynomial Equations
2.4 Answers to Check Your Progress Questions
2.5 Summary
2.6 Key Words
2.7 Self Assessment Questions and Exercises
2.8 Further Readings
2.0 INTRODUCTION
In mathematics and computing, a root-finding algorithm is an algorithm for finding zeroes, also called roots, of continuous functions. A zero of a function f, from the real numbers to real numbers or from the complex numbers to the complex numbers, is a number x such that f(x) = 0. As, generally, the zeroes of a function cannot be computed exactly nor expressed in closed form, root-finding algorithms provide approximations to zeroes, expressed either as floating point numbers or as small isolating intervals, or disks for complex roots (an interval or disk output being equivalent to an approximate output together with an error bound).
In this unit, you will study about the methods for finding complex roots and polynomial equations.
2.1 OBJECTIVES
After going through this unit, you will be able to:
Explain the various methods for finding complex roots
Analyse the polynomial equations
2.2 METHODS FOR FINDING COMPLEX ROOTS
In this section, we consider numerical methods for computing the roots of an equation of the form,
f(x) = 0    (2.1)
where f(x) is a reasonably well-behaved function of a real variable x. The function may be in algebraic or polynomial form, given by
f(x) = a_n x^n + a_{n−1} x^{n−1} + ... + a_1 x + a_0    (2.2)
It may also be an expression containing transcendental functions such as cos x, sin x, e^x, etc. First, we would discuss methods to find the isolated real roots of a single equation. Later, we would discuss methods to find the isolated roots of a system of equations, particularly of two real variables x and y, given by,
f(x, y) = 0, g(x, y) = 0    (2.3)
A root of an equation is usually computed in two stages. First, we find the location of a root in the form of a crude approximation of the root. Next we use an iterative technique for computing a better value of the root to a desired accuracy in successive approximations/computations. This is done by using an iterative function.
Methods for Finding Location of Real Roots
The location or crude approximation of a real root is determined by the use of any one of the following methods: (i) graphical and (ii) tabulation.
Graphical Method: In the graphical method, we draw the graph of the function y = f(x) for a certain range of values of x. The abscissae of the points where the graph intersects the x-axis are crude approximations of the roots of Equation (2.1). For example, consider the equation,
f(x) = x^2 + 2x − 1 = 0
From the graph of the function y = f(x), shown in Figure 2.1, we find that it cuts the x-axis between 0 and 1. We may take any point in [0, 1] as the crude approximation for one root. Thus, we may take 0.5 as the location of a root. The other root lies between −2 and −3. We can take −2.5 as the crude approximation of the other root.
Fig. 2.1 Graph of y = x^2 + 2x − 1
In some cases, where it is complicated to draw the graph of y = f(x), we may rewrite the equation f(x) = 0 as f1(x) = f2(x), where the graphs of y = f1(x) and y = f2(x) are standard curves. Then we find the x-coordinate(s) of the point(s) of
intersection of the curves y = f1(x) and y = f2(x), which are the crude approximations of the root(s).
For example, consider the equation,
x^3 − 15.2x − 13.2 = 0
This can be rewritten as,
x^3 = 15.2x + 13.2
where it is easy to draw the graphs of y = x^3 and y = 15.2x + 13.2. Then, the abscissa of the point(s) of intersection can be taken as the crude approximation(s) of the root(s).
Fig. 2.2 Graphs of y = x^3 and y = 15.2x + 13.2
Example 1: Find the location of the root of the equation x log_10 x = 1.
Solution: The equation can be rewritten as
log_10 x = 1/x
Now the curves y = log_10 x and y = 1/x can be easily drawn and are shown in Figure 2.3.
Fig. 2.3 Graphs of y = 1/x and y = log_10 x
The point of intersection of the curves has its x-coordinate approximately equal to 2.5. Thus, the location of the root is 2.5.
Tabulation Method: In the tabulation method, a table of values of f(x) is made for values of x in a particular range. Then, we look for the change in sign in the values of f(x) for two consecutive values of x. We conclude that a real root lies
between these values of x. This is true if we make use of the following theorem on continuous functions.
Theorem 1: If f(x) is continuous in an interval (a, b) and f(a) and f(b) are of opposite signs, then there exists at least one real root of f(x) = 0 between a and b.
Consider for example, the equation f(x) = x^3 − 8x + 5 = 0. Constructing the following table of x and f(x):

x       −4   −3   −2   −1    0    1    2    3
f(x)   −27    2   13   12    5   −2   −3    8

We observe that there is a change in sign of f(x) in each of the sub-intervals (−4, −3), (0, 1) and (2, 3). Thus we can take the crude approximations for the three real roots as −3.2, 0.2 and 2.2.
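The tabulation scan described above is easy to mechanize. The following Python sketch (the function name is illustrative, not from the text) evaluates f at the integer points and reports the sub-intervals on which f changes sign, reproducing the bracketing intervals found for x^3 − 8x + 5:

```python
def sign_change_intervals(f, xs):
    """Return the pairs (xs[i], xs[i+1]) on which f changes sign."""
    values = [f(x) for x in xs]
    return [(xs[i], xs[i + 1])
            for i in range(len(xs) - 1)
            if values[i] * values[i + 1] < 0]

# Tabulate f(x) = x^3 - 8x + 5 over x = -4, ..., 3
intervals = sign_change_intervals(lambda x: x**3 - 8*x + 5, list(range(-4, 4)))
print(intervals)   # -> [(-4, -3), (0, 1), (2, 3)]
```

Each reported pair brackets one of the three real roots, which an iterative method can then refine.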
Methods for Finding the Roots—Bisection and Simple Iteration Methods
Bisection Method: The bisection method involves successive reduction of the interval in which an isolated root of an equation lies. This method is based upon an important theorem on continuous functions as stated below.
Theorem 2: If a function f(x) is continuous in the closed interval [a, b] and f(a) and f(b) are of opposite signs, i.e., f(a) f(b) < 0, then there exists at least one real root of f(x) = 0 between a and b.
The bisection method starts with two guess values x0 and x1 for which f(x0) f(x1) < 0. Then this interval [x0, x1] is bisected by the point x2 = (x0 + x1)/2.
Algorithm: Computation of a root of f(x) = 0 by the bisection method.
Step 0: Define f(x)
Step 1: Read epsilon, the desired accuracy
Step 2: Read x0, x1, two initial guess values of root
Step 3: Compute y0 = f(x0)
Step 4: Compute y1 = f(x1)
Step 5: Check if y0 y1 < 0, then go to Step 6, else go to Step 2
Step 6: Compute x2 = (x0 + x1)/2
Step 7: Compute y2 = f(x2)
Step 8: Check if y0 y2 > 0, then set x0 = x2, else set x1 = x2
Step 9: Check if |(x1 − x0)/x1| > epsilon, then go to Step 3
Step 10: Write x2, y2
Step 11: End
Next, we give the flowchart representation of the above algorithm to get a better understanding of the method. The flowchart also helps in easy implementation of the method in a computer program.
Flow Chart for Bisection Algorithm
Begin → Define f(x) → Read epsilon → Read x0, x1 → Compute y0 = f(x0) → Compute y1 = f(x1) → Is y0 y1 > 0? If yes, read x0, x1 again; if no, compute x2 = (x0 + x1)/2 and y2 = f(x2) → Is y0 y2 > 0? If yes, set x0 = x2; if no, set x1 = x2 → Is |(x1 − x0)/x0| > epsilon? If yes, repeat from the computation of x2; if no, print 'root' = x2 → End
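As a sketch of the algorithm and flowchart above in Python (the function signature is an illustration; the relative-width test of Step 9 is used as the stopping rule, so x1 is assumed to be nonzero):

```python
def bisection(f, x0, x1, epsilon=1e-6, maxit=100):
    """Bisection method: [x0, x1] must bracket a root, i.e., f(x0) f(x1) < 0."""
    y0, y1 = f(x0), f(x1)
    if y0 * y1 > 0:
        raise ValueError("f(x0) and f(x1) must have opposite signs")
    for _ in range(maxit):
        x2 = (x0 + x1) / 2          # Step 6: bisect the current interval
        y2 = f(x2)                  # Step 7
        if y0 * y2 > 0:             # Step 8: root lies in [x2, x1]
            x0, y0 = x2, y2
        else:                       # otherwise the root lies in [x0, x2]
            x1, y1 = x2, y2
        if abs((x1 - x0) / x1) < epsilon:   # Step 9: relative-width test
            break
    return (x0 + x1) / 2

# Smallest positive root of x^3 - 9x + 1 = 0 (cf. Example 2)
root = bisection(lambda x: x**3 - 9*x + 1, 0, 1)
print(round(root, 2))   # -> 0.11
```

Each pass halves the bracketing interval, so roughly 24 iterations suffice for a relative width of 1e-6 on a unit interval.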
Example 2: Find the location of the smallest positive root of the equation x^3 − 9x + 1 = 0 and compute it by bisection method, correct to two decimal places.
Solution: To find the location of the smallest positive root we tabulate the function f(x) = x^3 − 9x + 1 below:

x       0    1    2    3
f(x)    1   −7   −9    1

We observe that the smallest positive root lies in the interval [0, 1]. The computed values for the successive steps of the bisection method are given in the table below.

n      x0         x1        x2           f(x2)
1      0          1         0.5         −3.3750
2      0          0.5       0.25        −1.2344
3      0          0.25      0.125       −0.1230
4      0          0.125     0.0625       0.4377
5      0.0625     0.125     0.09375      0.1571
6      0.09375    0.125     0.109375     0.01693
7      0.109375   0.125     0.1171875   −0.0531

From the above results, we conclude that the smallest root correct to two decimal places is 0.11.
Simple Iteration Method: A root of an equation f(x) = 0 is determined using the method of simple iteration by successively computing better and better approximations of the root, by first rewriting the equation in the form,
x = g(x)    (2.4)
Then, we form the sequence {x_n} starting from the guess value x0 of the root and computing successively,
x1 = g(x0), x2 = g(x1), ..., x_n = g(x_{n−1})
In general, the above sequence may converge to the root ξ as n → ∞, or it may diverge. If the sequence diverges, we shall discard it and consider another form x = h(x), by rewriting f(x) = 0. It is always possible to get a convergent sequence since there are different ways of rewriting f(x) = 0 in the form x = g(x). However, instead of starting computation of the sequence, we shall first test whether the form of g(x) can give a convergent sequence or not. We give below a theorem which can be used to test for convergence.
Theorem 3: If the function g(x) is continuous in the interval [a, b] which contains a root ξ of the equation f(x) = 0, rewritten as x = g(x), and |g′(x)| ≤ l < 1 in this interval, then for any choice of x0 ∈ [a, b], the sequence {x_n} determined by the iterations,
x_{k+1} = g(x_k), for k = 0, 1, 2, ...    (2.5)
converges to the root ξ of f(x) = 0.
Proof: Since x = ξ is a root of the equation x = g(x), we have
ξ = g(ξ)    (2.6)
The first iteration gives
x1 = g(x0)    (2.7)
Subtracting Equation (2.7) from Equation (2.6), we get
ξ − x1 = g(ξ) − g(x0)
Applying the mean value theorem, we can write
ξ − x1 = (ξ − x0) g′(c0), where c0 lies between x0 and ξ
so that |ξ − x1| ≤ l |ξ − x0|. Repeating the argument for successive iterations,
|ξ − x_n| ≤ l^n |ξ − x0| → 0 as n → ∞
since l < 1. Hence the sequence {x_n} converges to ξ.
Algorithm: Computation of a root of f(x) = 0 by simple iteration.
Step 1: Input x0, epsilon, maxit, where x0 is the initial guess of root, epsilon is the accuracy desired and maxit is the maximum number of iterations allowed
Step 2: Set i = 0
Step 3: Set x1 = g(x0)
Step 4: Set i = i + 1
Step 5: Check, if |(x1 − x0)/x1| < epsilon, then print 'root is', x1, else go to Step 6
Step 6: Check, if i < maxit, then set x0 = x1 and go to Step 3
Step 7: Write 'No convergence after', maxit, 'iterations'
Step 8: End
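A Python sketch of the simple-iteration algorithm (names are illustrative; as an assumed example we take the root of x^3 − x − 1 = 0 near x = 1 with the rearrangement x = (1 + x)^(1/3), for which |g′(x)| < 1 near the root, so Theorem 3 guarantees convergence):

```python
def simple_iteration(g, x0, epsilon=1e-6, maxit=100):
    """Iterate x_{k+1} = g(x_k); return the root, or None if no convergence."""
    for _ in range(maxit):
        x1 = g(x0)
        if abs((x1 - x0) / x1) < epsilon:   # Step 5: relative accuracy test
            return x1
        x0 = x1                             # Step 6: continue iterating
    return None                             # no convergence after maxit iterations

root = simple_iteration(lambda x: (1 + x) ** (1 / 3), 1.0)
print(round(root, 4))   # -> 1.3247
```

A divergent rearrangement, such as g(x) = x^3 − 1 for the same equation, would instead exhaust maxit and return None.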
Example 3: In order to compute a real root of the equation x^3 − x − 1 = 0, near x = 1, by iteration, determine which of the following iterative functions can be used to give a convergent sequence.
(i) x = x^3 − 1  (ii) x = 1/x + 1/x^2  (iii) x = (1 + 1/x)^(1/2)
Solution:
(i) For the form x = x^3 − 1, g(x) = x^3 − 1, and g′(x) = 3x^2. Hence, |g′(x)| > 1 for x near 1. So, this form would not give a convergent sequence of iterations.
(ii) For the form x = 1/x + 1/x^2, g(x) = 1/x + 1/x^2. Thus, g′(x) = −1/x^2 − 2/x^3 and |g′(1)| = 3 > 1. Hence, this form also would not give a convergent sequence of iterations.
(iii) For the form x = (1 + 1/x)^(1/2), g(x) = (1 + 1/x)^(1/2), so that g′(x) = −(1/2)(1 + 1/x)^(−1/2) · (1/x^2) and |g′(1)| = 1/(2√2) < 1.
Hence, the form (iii) gives a convergent sequence of iterations.
For the form (i), |g′(x)| > 1 for x in (0, 1). So, this form is not suitable. For the form (ii), |g′(x)| > 1 for all x in (0, 1). Finally, for the form (iii), |g′(x)| < 1 for x in (0, 1), and this form gives a convergent sequence of iterations.
In case of form (ii), |g′(x)| > 1 for x in [2, 4]. In case of form (iii), |g′(x)| < 1, and hence this form gives a convergent sequence of iterations.
x_{n+1} = x_n − f(x_n)/f′(x_n)    (2.13)
If the sequence {x_n} converges, we get the root.
Algorithm: Computation of a root of f(x) = 0 by Newton-Raphson method.
Step 0: Define f(x), f′(x)
Step 1: Input x0, epsilon, maxit [x0 is the initial guess of root, epsilon is the desired accuracy of the root and maxit is the maximum number of iterations allowed]
Step 2: Set i = 0
Step 3: Set f0 = f(x0)
Step 4: Compute df0 = f′(x0)
Step 5: Set x1 = x0 − f0/df0
Step 6: Set i = i + 1
Step 7: Check if |(x1 − x0)/x1| < epsilon, then print 'root is', x1 and stop; else if i < maxit, then set x0 = x1 and go to Step 3
Step 8: Write 'Iterations do not converge'
Step 9: End
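A Python sketch of the Newton-Raphson algorithm above (names are illustrative; the test of Step 7 compares the relative change against epsilon):

```python
def newton_raphson(f, df, x0, epsilon=1e-6, maxit=50):
    """Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n), Equation (2.13)."""
    for _ in range(maxit):
        x1 = x0 - f(x0) / df(x0)            # Steps 3-5
        if abs((x1 - x0) / x1) < epsilon:   # Step 7: relative accuracy test
            return x1
        x0 = x1
    raise RuntimeError("Iterations do not converge")

# Positive root of x^3 - 8x - 4 = 0 with x0 = 3 (cf. Example 7)
root = newton_raphson(lambda x: x**3 - 8*x - 4, lambda x: 3*x**2 - 8, 3.0)
print(round(root, 4))   # -> 3.0514
```

Note that df0 = 0 at some iterate would make the step undefined, which is why a good initial guess near the root matters.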
Example 7: Use Newton-Raphson method to compute the positive root of the equation x^3 − 8x − 4 = 0, correct to five significant digits.
Solution: The Newton-Raphson iterative scheme is given by,
x_{n+1} = x_n − f(x_n)/f′(x_n), for n = 0, 1, 2, ...
For the given equation, f(x) = x^3 − 8x − 4. First we find the location of the root by the method of tabulation. The table for f(x) is,

x       0     1     2     3    4
f(x)   −4   −11   −12   −1   28

Evidently, the positive root is near x = 3. We take x0 = 3 in the Newton-Raphson iterative scheme,
x_{n+1} = x_n − (x_n^3 − 8x_n − 4)/(3x_n^2 − 8)
We get,
x1 = 3 − (27 − 24 − 4)/(27 − 8) = 3 + 1/19 = 3.0526
Similarly, x2 = 3.05138 and x3 = 3.05138. Thus, the positive root is 3.0514, correct to five significant digits.
Example 8: Find a real root of the equation x^3 + 7x^2 + 9 = 0, correct to five significant digits.
Solution: First we find the location of the real root by tabulation. We observe that the real root is negative and since f(−7) = 9 > 0 and f(−8) = −55 < 0, a root lies between −7 and −8.
For computing the root to the desired accuracy, we take x0 = −8 and use the Newton-Raphson iterative formula,
x_{n+1} = x_n − (x_n^3 + 7x_n^2 + 9)/(3x_n^2 + 14x_n), for n = 0, 1, 2, ...
The successive iterations give,
x1 = −7.3125
x2 = −7.17966
x3 = −7.17484
x4 = −7.17483
Hence, the desired root is −7.1748, correct to five significant digits.
Example 9: For evaluating √a, deduce the iterative formula
x_{n+1} = (1/2)(x_n + a/x_n)
by using the Newton-Raphson scheme of iteration. Hence, evaluate √2 using this, correct to four significant digits.
Solution: We observe that √a is the solution of the equation x^2 − a = 0. Now, using f(x) = x^2 − a in the Newton-Raphson iterative scheme, we have
x_{n+1} = x_n − (x_n^2 − a)/(2x_n)
i.e., x_{n+1} = (1/2)(x_n + a/x_n), for n = 0, 1, 2, ...
Now, for computing √2, we assume x0 = 1.4. The successive iterations give,
x1 = (1/2)(1.4 + 2/1.4) = 3.96/2.8 = 1.414
x2 = (1/2)(1.414 + 2/1.414) = 1.41421
Hence, the value of √2 is 1.414, correct to four significant digits.
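The iteration of Example 9 can be sketched directly (the helper name is illustrative):

```python
def sqrt_iterate(a, x0, epsilon=1e-10, maxit=50):
    """Evaluate sqrt(a) by x_{n+1} = (x_n + a/x_n)/2, Newton-Raphson for x^2 - a = 0."""
    x = x0
    for _ in range(maxit):
        x_new = 0.5 * (x + a / x)
        if abs(x_new - x) < epsilon * abs(x_new):   # relative change small enough
            return x_new
        x = x_new
    return x

print(round(sqrt_iterate(2, 1.4), 5))   # -> 1.41421
```

Because the convergence is quadratic, only a handful of iterations are needed even for the tight tolerance used here.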
Example 10: Prove that the kth root of a can be computed by the iterative scheme,
x_{n+1} = (1/k)[(k − 1)x_n + a/x_n^(k−1)]
Hence evaluate the cube root of 2, correct to five significant digits.
Solution: The value a^(1/k) is the positive root of x^k − a = 0. Thus, the Newton-Raphson iterative scheme for evaluating a^(1/k) is,
x_{n+1} = x_n − (x_n^k − a)/(k x_n^(k−1))
or, x_{n+1} = (1/k)[(k − 1)x_n + a/x_n^(k−1)], for n = 0, 1, 2, ...
Now, for evaluating 2^(1/3), we take x0 = 1.25 and use the iterative formula,
x_{n+1} = (1/3)[2x_n + 2/x_n^2]
We have,
x1 = (1/3)[2 × 1.25 + 2/(1.25)^2] = 1.26
x2 = 1.259921, x3 = 1.259921
Hence, 2^(1/3) = 1.2599, correct to five significant digits.
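The kth-root scheme of Example 10 can be sketched in the same way (the helper name is illustrative):

```python
def kth_root(a, k, x0, epsilon=1e-10, maxit=50):
    """Evaluate a**(1/k) by x_{n+1} = ((k-1)x_n + a/x_n**(k-1))/k."""
    x = x0
    for _ in range(maxit):
        x_new = ((k - 1) * x + a / x ** (k - 1)) / k
        if abs(x_new - x) < epsilon * abs(x_new):   # relative change small enough
            return x_new
        x = x_new
    return x

print(round(kth_root(2, 3, 1.25), 4))   # cube root of 2 -> 1.2599
```

Setting k = 2 recovers the square-root formula of Example 9.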
Example 11: Find by Newton-Raphson method the real root of 3x − cos x − 1 = 0, correct to three significant figures.
Solution: The location of the real root of f(x) = 3x − cos x − 1 = 0 is [0, 1], since f(0) = −2 and f(1) > 0.
We choose x0 = 0 and use the Newton-Raphson scheme of iteration,
x_{n+1} = x_n − (3x_n − cos x_n − 1)/(3 + sin x_n), for n = 0, 1, 2, ...
The results for successive iterations are,
x1 = 0.667, x2 = 0.6075, x3 = 0.6071
Thus, the root is 0.607, correct to three significant figures.
Example 12: Find a real root of the equation x^x + 2x − 6 = 0, correct to four significant digits.
Solution: Taking f(x) = x^x + 2x − 6, we have f(1) = −3 < 0 and f(2) = 2 > 0. Thus, a root lies in [1, 2]. Choosing x0 = 2, we use the Newton-Raphson iterative scheme given by,
x_{n+1} = x_n − (x_n^{x_n} + 2x_n − 6)/(x_n^{x_n}(log_e x_n + 1) + 2), for n = 0, 1, 2, ...
The computed results for successive iterations are,
x1 = 1.72238, x2 = 1.72321, x3 = 1.72308
Hence, the root is 1.723, correct to four significant figures.
Order of Convergence: We consider the order of convergence of the Newton-Raphson method given by the formula,
x_{n+1} = x_n − f(x_n)/f′(x_n)
Let us assume that the sequence of iterations {x_n} converges to the root ξ. Then, expanding by Taylor's series about x_n, the relation f(ξ) = 0 gives
f(x_n) + (ξ − x_n) f′(x_n) + (1/2)(ξ − x_n)^2 f″(x_n) + ... = 0
∴ ξ − x_n + f(x_n)/f′(x_n) = −(1/2)(ξ − x_n)^2 f″(x_n)/f′(x_n) + ...
∴ ξ − x_{n+1} ≈ −(1/2)(ξ − x_n)^2 f″(x_n)/f′(x_n)
Taking ε_n as the error in the nth iteration and writing ε_n = x_n − ξ, we have,
ε_{n+1} ≈ (1/2) ε_n^2 f″(ξ)/f′(ξ)    (2.14)
Thus, ε_{n+1} = k ε_n^2, where k is a constant.
This shows that the order of convergence of Newton-Raphson method is 2. In other words, the Newton-Raphson method has a quadratic rate of convergence.
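The error relation (2.14) can be observed numerically. The sketch below tracks the errors of the Newton-Raphson iterates for f(x) = x^2 − 2 starting from x0 = 1.4; the ratio ε_{n+1}/ε_n^2 should approach |f″(ξ)/(2f′(ξ))| = 1/(2√2) ≈ 0.3536:

```python
import math

f, df = lambda x: x * x - 2, lambda x: 2 * x
x, root = 1.4, math.sqrt(2)

errors = []
for _ in range(3):
    x = x - f(x) / df(x)          # one Newton-Raphson step
    errors.append(abs(x - root))  # error of this iterate
print(errors)                     # errors shrink quadratically: ~7e-5, ~2e-9, ~0
print(errors[1] / errors[0] ** 2)   # ≈ 0.3536, i.e., f''(ξ)/(2 f'(ξ))
```

The number of correct digits roughly doubles with each step, which is exactly what second-order convergence means in practice.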
The condition for convergence of Newton-Raphson method can easily be derived by rewriting the Newton-Raphson iterative scheme as x_{n+1} = φ(x_n) with
φ(x) = x − f(x)/f′(x)
Hence, using the condition for convergence of the linear iteration method, we can write
φ′(x) = f(x) f″(x)/[f′(x)]^2
Thus, the sufficient condition for the convergence of Newton-Raphson method is,
|f(x) f″(x)/[f′(x)]^2| < 1
(The line AB meets the x-axis at the next approximation x2.)
Fig. 2.4 Secant Method
Algorithm: To find a root of f(x) = 0 by the secant method.
Step 1: Define f(x)
Step 2: Input x0, x1, error, maxit [x0, x1 are initial guess values, error is the prescribed precision and maxit is the maximum number of iterations allowed]
Step 3: Set i = 1
Step 4: Compute f0 = f(x0)
Step 5: Compute f1 = f(x1)
Step 6: Compute x2 = (x0 f1 − x1 f0)/(f1 − f0)
Step 7: Set i = i + 1
Step 8: Compute accy = |x2 − x1|/|x1|
Step 9: Check if accy < error, then go to Step 14
Step 10: Check if i ≥ maxit, then go to Step 16
Step 11: Set x0 = x1 and f0 = f1
Step 12: Set x1 = x2 and compute f1 = f(x1)
Step 13: Go to Step 6
Step 14: Print 'Root =', x2
Step 15: Go to Step 17
Step 16: Print 'Iterations do not converge'
Step 17: Stop
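A Python sketch of the secant algorithm above (names are illustrative; f0 and f1 are shifted along with x0 and x1 as in Steps 11-12):

```python
def secant(f, x0, x1, error=1e-6, maxit=50):
    """Secant method: the derivative is replaced by the gradient of a chord."""
    f0, f1 = f(x0), f(x1)
    for _ in range(maxit):
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)   # Step 6: chord meets the x-axis
        if abs((x2 - x1) / x1) < error:        # Steps 8-9: accuracy test
            return x2
        x0, f0 = x1, f1                        # Step 11: shift the older point
        x1, f1 = x2, f(x2)                     # Step 12: adopt the new point
    raise RuntimeError("iterations do not converge")

root = secant(lambda x: x**3 - 8*x - 4, 2.0, 3.0)
print(round(root, 4))   # -> 3.0514
```

Unlike bisection and Regula-Falsi, the secant method does not require the initial points to bracket the root, but it may diverge for poor starting values.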
Regula-Falsi Method
Regula-Falsi method is also a bracketing method. As in the bisection method, we start the computation by first finding an interval (a, b) within which a real root lies. Writing a = x0 and b = x1, we compute f(x0) and f(x1) and check if f(x0) and f(x1) are of opposite signs. For determining the approximate root x2, we find the
point of intersection of the chord joining the points (x0, f(x0)) and (x1, f(x1)) with the x-axis, i.e., the curve y = f(x) is replaced by the chord given by,
y − f(x0) = [(f(x1) − f(x0))/(x1 − x0)](x − x0)    (2.16)
Thus, by putting y = 0 and x = x2 in Equation (2.16), we get
x2 = x0 − f(x0)(x1 − x0)/(f(x1) − f(x0))    (2.17)
Next, we compute f(x2) and determine the interval in which the root lies in the following manner. If (i) f(x2) and f(x1) are of opposite signs, then the root lies in (x2, x1). Otherwise, if (ii) f(x0) and f(x2) are of opposite signs, then the root lies in (x0, x2). The next approximate root is determined by replacing x0 by x2 in the first case and x1 by x2 in the second case.
The aforesaid process is repeated until the root is computed to the desired accuracy ε, i.e., until the condition
|(x_{k+1} − x_k)/x_k| < ε
is satisfied.
Regula-Falsi method can be geometrically interpreted by the following Figure 2.5.
Regula-Falsi method can be geometrically interpreted by the
following Figure2.5.
X
Y
x f x1 1, ( )
x f x2 2, ( )
x f x0 2, ( )
O
Fig. 2.5 Regula-Falsi Method
Algorithm: Computing root of an equation by Regula-Falsi method.
Step 1: Define f(x)
Step 2: Read epsilon, the desired accuracy
Step 3: Read maxit, the maximum number of iterations
Step 4: Read x0, x1, two initial guess values of root
Step 5: Compute f0 = f(x0)
Step 6: Compute f1 = f(x1)
Step 7: Check if f0 f1 < 0, then go to the next step, else go to Step 4
Step 8: Compute x2 = (x0 f1 − x1 f0)/(f1 − f0)
Step 9: Compute f2 = f(x2)
Step 10: Check if |f2| < epsilon, then go to Step 18
Step 11: Check if f2 f0 < 0, then go to the next step, else go to Step 15
Step 12: Set x1 = x2
Step 13: Set f1 = f2
Step 14: Go to Step 7
Step 15: Set x0 = x2
Step 16: Set f0 = f2
Step 17: Go to Step 7
Step 18: Write 'root =', x2, f2
Step 19: End
Example 13: Use Regula-Falsi method to compute the positive root of x^3 − 3x − 5 = 0, correct to four significant figures.
Solution: First we find the interval in which the root lies. We observe that f(2) = −3 and f(3) = 13. Thus, the root lies in [2, 3]. For the Regula-Falsi method, we use the formula,
x2 = x0 − f(x0)(x1 − x0)/(f(x1) − f(x0))
With x0 = 2 and x1 = 3, we have
x2 = 2 + [3/(13 + 3)](3 − 2) = 2.1875
Again, since f(x2) = f(2.1875) = −1.095, we consider the interval [2.1875, 3]. The next approximation is x3 = 2.2461. Also, f(x3) = −0.4128. Hence, the root lies in [2.2461, 3].
Repeating the iterations, we get
x4 = 2.2684, f(x4) = −0.1328
x5 = 2.2748, f(x5) = −0.0529
x6 = 2.2773, f(x6) = −0.0316
x7 = 2.2788, f(x7) = −0.0028
x8 = 2.2792, f(x8) = −0.0022
The root correct to four significant figures is 2.279.
Check Your Progress
1. How will you compute the roots of the form f(x) = 0?
2. Define tabulation method.
3. Explain bisection method.
4. How is order of convergence determined?
5. Explain Newton-Raphson method.
6. Define secant method.
7. Explain Regula-Falsi method.
2.3 POLYNOMIAL EQUATIONS
Polynomial equations with real coefficients have some important characteristics regarding their roots. A polynomial equation of degree n is of the form
p_n(x) = a_n x^n + a_{n−1} x^{n−1} + a_{n−2} x^{n−2} + ... + a_2 x^2 + a_1 x + a_0 = 0
(i) A polynomial equation of degree n has exactly n roots.
(ii) Complex roots occur in pairs, i.e., if α + iβ is a root of p_n(x) = 0, then α − iβ is also a root.
(iii) Descartes' rule of signs can be used to determine the number of possible real roots (positive or negative).
(iv) If x1, x2, ..., xn are all real roots of the polynomial equation, then we can express p_n(x) uniquely as,
p_n(x) = a_n(x − x1)(x − x2)...(x − xn)
(v) p_n(x) has a quadratic factor for each pair of complex conjugate roots. If α + iβ and α − iβ are the roots, then {x^2 − 2αx + (α^2 + β^2)} is the quadratic factor.
(vi) There is a special method, known as Horner's method of synthetic substitution, for evaluating the values of a polynomial and its derivatives for a given x.
Descartes' Rule
The number of positive real roots of a polynomial equation is equal to the number of changes of sign in p_n(x), written with descending powers of x, or less by an even number.
Consider for example, the polynomial equation,
x^5 + 3x^4 + 2x^3 − 2x^2 + x − 2 = 0
Clearly there are three changes of sign and hence the number of positive real roots is three or one. Thus, it must have a real root. In fact, every polynomial equation of odd degree has a real root.
We can also use Descartes' rule to determine the number of negative roots by finding the number of changes of sign in p_n(−x). For the above equation,
p_n(−x) = −x^5 + 3x^4 − 2x^3 − 2x^2 − x − 2 = 0
and it has two changes of sign. Thus, it has either two negative real roots or none.
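Counting the sign changes can be sketched in Python (the helper name is illustrative; note that multiplying every coefficient by −1 does not change the number of alternations, so the enumeration trick below counts the sign changes of p(−x)):

```python
def sign_changes(coeffs):
    """Count sign changes in a coefficient list (descending powers; zeros skipped)."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

p = [1, 3, 2, -2, 1, -2]                           # x^5 + 3x^4 + 2x^3 - 2x^2 + x - 2
p_neg = [c * (-1) ** i for i, c in enumerate(p)]   # coefficients of -p(-x)
print(sign_changes(p))       # -> 3 (three or one positive real roots)
print(sign_changes(p_neg))   # -> 2 (two negative real roots or none)
```

This reproduces the counts obtained above for the degree-5 example.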
Check Your Progress
8. Define polynomial equations.
9. Give the statement of Descartes' rule.
2.4 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. We consider numerical methods for computing the roots of an equation of the form
f(x) = 0
where f(x) is a reasonably well-behaved function of a real variable x.
2. In the tabulation method, a table of values of f(x) is made for values of x in a particular range. Then, we look for the change in sign in the values of f(x) for two consecutive values of x. We conclude that a real root lies between these values of x.
3. The bisection method involves successive reduction of the interval in which an isolated root of an equation lies. The sub-interval in which the root lies is again bisected and the above process is repeated until the length of the sub-interval is less than the desired accuracy. The bisection method is also termed as a bracketing method, since the method successively reduces the gap between the two ends of an interval surrounding the real root, i.e., brackets the real root.
4. The order of convergence of an iterative process is determined in terms of the errors e_n and e_{n+1} in successive iterations.
5. Newton-Raphson method is a widely used numerical method for finding a root of an equation f(x) = 0, to the desired accuracy. It is an iterative method which has a faster rate of convergence and is very useful when the expression for the derivative f′(x) is not complicated. To derive the formula for this method, we consider a Taylor's series expansion of f(x0 + h), x0 being an initial guess of a root of f(x) = 0 and h a small correction to the root.
6. Secant method can be considered as a discretized form of the Newton-Raphson method. The iterative formula for this method is obtained from the formula of the Newton-Raphson method on replacing the derivative f′(x0) by the gradient of the chord joining two neighbouring points x0 and x1 on the curve y = f(x).
7. Regula-Falsi method is also a bracketing method. As in the bisection method, we start the computation by first finding an interval (a, b) within which a real root lies. Writing a = x0 and b = x1, we compute f(x0) and f(x1) and check if f(x0) and f(x1) are of opposite signs. For determining the approximate root x2, we find the point of intersection of the chord joining the points (x0, f(x0)) and (x1, f(x1)) with the x-axis, i.e., the curve y = f(x) is replaced by the chord given by,
y − f(x0) = [(f(x1) − f(x0))/(x1 − x0)](x − x0)
8. A polynomial equation of degree n is of the form p_n(x) = a_n x^n + a_{n−1} x^{n−1} + a_{n−2} x^{n−2} + ... + a_2 x^2 + a_1 x + a_0 = 0.
9. The number of positive real roots of a polynomial equation is equal to the number of changes of sign in p_n(x), written with descending powers of x, or less by an even number.
2.5 SUMMARY
A root of an equation is usually computed in two stages. First, we find the location of a root in the form of a crude approximation of the root. Next we use an iterative technique for computing a better value of the root to a desired accuracy in successive approximations/computations.
Tabulation Method: In the tabulation method, a table of values of f(x) is made for values of x in a particular range.
The bisection method involves successive reduction of the interval in which an isolated root of an equation lies.
If a function f(x) is continuous in the closed interval [a, b] and f(a) and f(b) are of opposite signs, i.e., f(a) f(b) < 0, then there exists at least one real root of f(x) = 0 between a and b.
The bisection method is also termed as a bracketing method, since the method successively reduces the gap between the two ends of an interval surrounding the real root, i.e., brackets the real root.
If the function g(x) is continuous in the interval [a, b] which contains a root ξ of the equation f(x) = 0, rewritten as x = g(x), and |g′(x)| ≤ l < 1 in this interval, then for any choice of x0 ∈ [a, b], the sequence {x_n} determined by the iterations,
x_{k+1} = g(x_k), for k = 0, 1, 2, ...
converges to the root of f(x) = 0.
Order of Convergence: The order of convergence of an iterative process is determined in terms of the errors e_n and e_{n+1} in successive iterations. An iterative process is said to have kth order convergence if
lim_{n→∞} |e_{n+1}|/|e_n|^k = M
where M is a finite constant.
3. Define Newton-Raphson method.
4. What is meant by secant method?
5. Explain Regula-Falsi method.
Long-Answer Questions
1. Use graphical method to find the location of a real root of the equation x^3 + 10x − 15 = 0.
2. Draw the graph of the function f(x) = cos x − x in the range [0, π/2) and find the location of the root of the equation f(x) = 0.
3. Compute the root of the equation x^3 − 9x + 1 = 0 which lies between 2 and 3, correct upto three significant digits, using bisection method.
4. Compute the root of the equation x^3 + x^2 − 1 = 0, near 1, by the iterative method, correct upto two significant digits.
5. Use iterative method to find the root near x = 3.8 of the equation 2x − log_10 x = 7, correct upto four significant digits.
6. Compute using Newton-Raphson method the root of the equation e^x = 4x, near 2, correct upto four significant digits.
7. Use an iterative formula to compute 125^(1/7) correct upto four significant digits.
8. Find the real root of x log_10 x − 1.2 = 0 correct upto four decimal places using Regula-Falsi method.
9. Use Regula-Falsi method to find the roots of the following equations correct upto four significant figures:
(i) x^3 − 4x − 1 = 0, the root near x = 2
(ii) x^6 − x^4 − x^3 − 1 = 0, the root between 1.4 and 1.5
10. Compute the positive root of the given equation correct upto four places of decimals using Newton-Raphson method: x + log_e x = 2
2.8 FURTHER READINGS
Jain, M. K., S. R. K. Iyengar and R. K. Jain. 2007. Numerical Methods for Scientific and Engineering Computation. New Delhi: New Age International (P) Limited.
Atkinson, Kendall E. 1989. An Introduction to Numerical Analysis, 2nd Edition. US: John Wiley & Sons.
Jain, M. K. 1983. Numerical Solution of Differential Equations. New Delhi: New Age International (P) Limited.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis: An Algorithmic Approach. New York: McGraw-Hill.
Skeel, Robert D. and Jerry B. Keiper. 1993. Elementary Numerical Computing with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas Publishing House Pvt. Ltd.
-
Birge – Vieta, Bairstow’s and Graeffe’s Root Squaring Methods
NOTES
Self-Instructional Material 54
UNIT 3 BIRGE – VIETA, BAIRSTOW’S AND GRAEFFE’S ROOT SQUARING METHODS
Structure
3.0 Introduction
3.1 Objectives
3.2 Birge – Vieta Method
3.3 Bairstow’s Method
3.4 Graeffe’s Root Squaring Method
3.5 Answers to Check Your Progress Questions
3.6 Summary
3.7 Key Words
3.8 Self-Assessment Questions and Exercises
3.9 Further Readings
3.0 INTRODUCTION
In mathematics, a polynomial is an expression consisting of variables (also called indeterminates) and coefficients that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponents of variables. An example of a polynomial in a single indeterminate x is x² − 4x + 7, while an example in three variables is x³ + 2xyz² − yz + 1. A polynomial equation is, therefore, an equation that has multiple terms made up of numbers and variables. The degree tells us how many roots the equation has. For example, if the highest exponent is 3, then the equation has three roots (counted with multiplicity, over the complex numbers). The roots of the polynomial equation are the values of x where y = 0. Principally, the polynomial equation is an equation of the form f(x) = 0, where f(x) is a polynomial in x.
The sixteenth century French mathematician Francois Vieta was the pioneer in developing methods for finding approximate roots of polynomial equations. Later, several other methods were developed for solving polynomial equations. In numerical analysis, Bairstow’s method is an efficient algorithm for finding the roots of a real polynomial of arbitrary degree. The algorithm first appeared in the appendix of the 1920 book ‘Applied Aerodynamics’ by Leonard Bairstow. The algorithm finds the roots in complex conjugate pairs using only real arithmetic. Graeffe’s root squaring method is a direct method to find the roots of any polynomial equation with real coefficients. Polynomials are used to form polynomial equations, which encode a wide range of problems, from elementary word problems to complicated scientific problems.
In this unit, you will study about the Birge-Vieta method, Bairstow’s method, and Graeffe’s root squaring method.
3.1 OBJECTIVES
After going through this unit, you will be able to:
Discuss the Birge-Vieta method
Understand Bairstow’s method
Elaborate on Graeffe’s root squaring method
3.2 BIRGE – VIETA METHOD
Birge-Vieta method is used for finding the real roots of a polynomial equation. The method builds on ideas of the mathematicians Birge and Vieta. Finding good approximations to all the roots of a polynomial equation is very significant: in the field of science and engineering, there are numerous applications which require the solutions of all roots of a polynomial equation for a particular problem.
The Newton-Raphson method is fundamentally used for finding the root of algebraic and transcendental equations. Since the rate of convergence of this method is quadratic, the Newton-Raphson method can be used to find a root of a polynomial equation, as a polynomial equation is an algebraic equation. The Birge-Vieta method is based on the Newton-Raphson method; that is, it is a modified form of the Newton-Raphson method.
Consider the given polynomial equation of degree n, which has the form,
Pₙ(x) = aₙxⁿ + · · · + a₁x + a₀ = 0.
Let x₀ be an initial approximation to the root. The Newton-Raphson iteration formula for improving this approximation is,
xᵢ₊₁ = xᵢ − Pₙ(xᵢ)/P′ₙ(xᵢ),  i = 0, 1, 2, . . .
To apply this formula, we must evaluate both Pₙ(xᵢ) and P′ₙ(xᵢ) at each xᵢ. The most natural method is to evaluate every term aₖxᵢᵏ separately and sum them.
However, this is the most inefficient way of evaluating a polynomial, because of the amount of computation involved and also because of the possible growth of round-off errors. Thus there must be some proficient and effective method for evaluating Pₙ(x) and P′ₙ(x).
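One such proficient scheme, widely used though not written out above, is Horner’s synthetic division, which produces Pₙ(x) and P′ₙ(x) together with about 2n multiplications. A minimal Python sketch (the function name horner_eval is ours):

```python
def horner_eval(coeffs, x):
    """Evaluate p(x) and p'(x) together by Horner's synthetic division.
    coeffs: [a_n, a_{n-1}, ..., a_0], highest-degree coefficient first."""
    b = coeffs[0]   # running value that becomes p(x)
    c = coeffs[0]   # running value that becomes p'(x)
    for a in coeffs[1:-1]:
        b = b * x + a
        c = c * x + b
    b = b * x + coeffs[-1]
    return b, c

# p(x) = x^3 + x - 3 at x = 1.1 gives p = -0.569 and p' = 4.63
p, dp = horner_eval([1.0, 0.0, 1.0, -3.0], 1.1)
```

Each pass of the loop folds one more coefficient into the running values, so both the polynomial and its derivative come out of a single sweep over the coefficients.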
Vieta’s formulas relate the coefficients of a polynomial to the sums and products of its roots, including the products of the roots taken in groups. Vieta’s formulas thus define the association between the roots of a polynomial and its coefficients. The following example will make clear how to find a polynomial with given roots.
Here we will discuss real-valued polynomials, i.e., polynomials whose coefficients are real numbers.
Consider a quadratic polynomial. If the two given real roots are r₁ and r₂, then find a polynomial having these roots.
Let the polynomial be a₂x² + a₁x + a₀. When the roots are given, we can also write the polynomial in the factored form k(x − r₁)(x − r₂).
Since both expressions denote the same polynomial, we equate them,
a₂x² + a₁x + a₀ = k(x − r₁)(x − r₂) (3.1)
On expanding Equation (3.1), we have the following form of equation,
a₂x² + a₁x + a₀ = kx² − k(r₁ + r₂)x + k r₁r₂
Comparing the coefficients on both sides of the above equation, we have,
For x²: a₂ = k
For x: a₁ = −k(r₁ + r₂)
For the constant term: a₀ = k r₁r₂
Since a₂ = k, therefore,
r₁ + r₂ = −a₁/a₂ (3.2)
r₁ r₂ = a₀/a₂ (3.3)
Equations (3.2) and (3.3) are termed as Vieta’s formulas for a second degree polynomial.
As a general rule, for an nth degree polynomial aₙxⁿ + · · · + a₁x + a₀ with roots x₁, x₂, . . ., xₙ, there are n different Vieta’s formulas, which can be written in a condensed form as,
Σ x_{i₁} x_{i₂} · · · x_{iₖ} = (−1)ᵏ aₙ₋ₖ/aₙ  (sum over all 1 ≤ i₁ < i₂ < · · · < iₖ ≤ n),
for k = 1, 2, . . ., n.
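As a quick numerical sanity check (our own illustration, not part of the original text), formulas (3.2) and (3.3) can be verified for a quadratic with known roots:

```python
# x^2 - 3x + 2 = (x - 1)(x - 2): a2 = 1, a1 = -3, a0 = 2, roots 1 and 2
a2, a1, a0 = 1.0, -3.0, 2.0
r1, r2 = 1.0, 2.0
assert r1 + r2 == -a1 / a2   # sum of the roots equals -a1/a2
assert r1 * r2 == a0 / a2    # product of the roots equals a0/a2
```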
Example 1: Find all the roots of the polynomial equation P₃(x) = x³ + x − 3 = 0, rounded off to three decimal places. Stop the iteration whenever |xᵢ₊₁ − xᵢ| < 0.0001.
Solution: The equation P₃(x) = 0 has three roots. Since there is only one change in the sign of the coefficients, the equation can have at most one positive real root. The equation has no negative real root, since P₃(−x) = 0 has no change of sign in its coefficients. Since P₃(x) = 0 is of odd degree, it has at least one real root. Hence the given equation x³ + x − 3 = 0 has one positive real root and a complex conjugate pair. Since P₃(1) = −1 and P₃(2) = 7, by the intermediate value theorem the equation has a real root lying in the interval (1, 2).
Now we will find the real root using the Birge-Vieta method. Let the initial approximation be x₀ = 1.1.
First Iteration
By synthetic division at x₀ = 1.1 we obtain P₃(1.1) = −0.569 and P′₃(1.1) = 4.63.
Therefore, x₁ = 1.1 − (−0.569)/4.63 = 1.22289. Similarly,
x₂ = 1.21347
x₃ = 1.21341
Since |x₃ − x₂| < 0.0001, we stop the iteration here. Hence the required value of the root is 1.213, rounded off to three decimal places.
Next we will find the deflated polynomial of P₃(x). To obtain the deflated polynomial, we first find the quotient polynomial q₂(x) by dividing P₃(x) synthetically by (x − x₃), using the final approximation x₃ = 1.213.
Here, |P₃(1.213)| = 0.0022, i.e., the magnitude of the error in satisfying P₃(x₃) = 0 is 0.0022.
We then find q₂(x) = x² + 1.213x + 2.4714.
This is a quadratic equation and its roots are given by,
x = [ −1.213 ± √( (1.213)² − 4 × 2.4714 ) ] / 2
  = [ −1.213 ± √(−8.4142) ] / 2
  = −0.6065 ± 1.4505 i
Hence the three roots of the equation are 1.213, −0.6065 + 1.4505 i and −0.6065 − 1.4505 i.
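The whole of Example 1 can be reproduced in a few lines. The following Python sketch (the function name birge_vieta is ours; the stopping tolerance matches the example) performs the Newton iteration with Horner’s scheme and then deflates to obtain the complex pair:

```python
import cmath

def birge_vieta(coeffs, x0, tol=1e-4, itmax=50):
    """Birge-Vieta: Newton's method on a polynomial via Horner's scheme.
    coeffs: [a_n, ..., a_0]. Returns the real root found and the
    coefficients of the deflated (quotient) polynomial."""
    x = x0
    for _ in range(itmax):
        b = c = coeffs[0]
        for a in coeffs[1:-1]:
            b = b * x + a          # builds p(x)
            c = c * x + b          # builds p'(x)
        p = b * x + coeffs[-1]
        x_new = x - p / c          # Newton-Raphson step
        done = abs(x_new - x) < tol
        x = x_new
        if done:
            break
    q = [coeffs[0]]                # deflation: divide by (x - root)
    for a in coeffs[1:-1]:
        q.append(q[-1] * x + a)
    return x, q

root, q = birge_vieta([1.0, 0.0, 1.0, -3.0], 1.1)   # x^3 + x - 3
disc = q[1] ** 2 - 4.0 * q[0] * q[2]
pair = ((-q[1] + cmath.sqrt(disc)) / 2, (-q[1] - cmath.sqrt(disc)) / 2)
```

Running it gives the real root near 1.21341 and a complex pair close to the values computed above (the deflated quadratic differs slightly from the hand computation because it uses the converged root rather than the rounded 1.213).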
3.3 BAIRSTOW’S METHOD
In numerical analysis, Bairstow’s method is an efficient algorithm for finding the roots of a real polynomial of arbitrary degree. The algorithm was formulated by Leonard Bairstow and first appeared in the appendix of the book ‘Applied Aerodynamics’ (1920). The algorithm finds the roots in complex conjugate pairs using only real arithmetic.
Bairstow’s approach is to use Newton’s method to adjust the coefficients u and v in the quadratic x² + ux + v until its roots are also roots of the polynomial being solved. The roots of the quadratic may then be determined, and the polynomial may be divided by the quadratic to eliminate those roots. This process is then iterated until the polynomial becomes quadratic or linear, and all the roots have been determined.
Long division of the polynomial to be solved,
P(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + · · · + a₁x + a₀,
by x² + ux + v yields a quotient,
Q(x) = bₙ₋₂xⁿ⁻² + bₙ₋₃xⁿ⁻³ + · · · + b₁x + b₀,
and a remainder cx + d such that,
P(x) = (x² + ux + v) Q(x) + (cx + d).
A second division of Q(x) by x² + ux + v is performed to yield a quotient,
R(x) = fₙ₋₄xⁿ⁻⁴ + · · · + f₁x + f₀,
and a remainder gx + h with,
Q(x) = (x² + ux + v) R(x) + (gx + h).
The variables c, d, g, h and the {bᵢ}, {fᵢ} are functions of u and v. They can be found recursively as follows (with bₙ = bₙ₋₁ = 0 and fₙ₋₂ = fₙ₋₃ = 0),
bᵢ = aᵢ₊₂ − u bᵢ₊₁ − v bᵢ₊₂,  i = n − 2, . . ., 0,
c = a₁ − u b₀ − v b₁,  d = a₀ − v b₀,
fᵢ = bᵢ₊₂ − u fᵢ₊₁ − v fᵢ₊₂,  i = n − 4, . . ., 0,
g = b₁ − u f₀ − v f₁,  h = b₀ − v f₀.
The quadratic divides the polynomial evenly when c(u, v) = d(u, v) = 0. Values of u and v for which this occurs can be discovered by picking starting values and iterating Newton’s method in two dimensions,
(u, v) ← (u, v) − J⁻¹ (c, d)ᵀ,
where J is the Jacobian matrix of the partial derivatives of c and d with respect to u and v; these derivatives can be expressed in terms of g, h and the bᵢ.
This continues until convergence occurs. The method can thus be easily implemented to find the zeroes of polynomials in a programming language or even a spreadsheet.
Example 2: The task is to determine a pair of roots of the polynomial,
f(x) = 6x⁵ + 11x⁴ − 33x³ − 33x² + 11x + 6.
Solution: As the first quadratic polynomial we can use the normalized polynomial formed from the leading three coefficients of f(x),
x² + (11/6)x − (33/6), i.e., x² + 1.8333x − 5.5.
The iteration then proceeds as shown in the following table.
Iteration Steps of Bairstow’s Method
After eight iterations the method produced a quadratic factor that contains the roots −1/3 and −3 within the represented precision. The step length from the fourth iteration on demonstrates the superlinear speed of convergence.
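The division recurrence and the two-dimensional Newton update described above can be sketched in Python. This is our own illustration rather than Bairstow’s original scheme: for simplicity the Jacobian is approximated by finite differences instead of the classical closed-form expressions in g and h, but the division step follows the recurrence given in the text:

```python
import cmath

def divide(a, u, v):
    """Synthetic division of a polynomial by x^2 + u*x + v.
    a: coefficients [a_n, ..., a_0]; returns (quotient, c, d),
    where the remainder is c*x + d."""
    b1 = b2 = 0.0
    q = []
    for coef in a[:-2]:
        bi = coef - u * b1 - v * b2
        q.append(bi)
        b2, b1 = b1, bi
    c = a[-2] - u * b1 - v * b2
    d = a[-1] - v * b1
    return q, c, d

def bairstow(a, u, v, tol=1e-10, itmax=100):
    """Adjust (u, v) by Newton's method until x^2 + u*x + v divides a."""
    for _ in range(itmax):
        _, c, d = divide(a, u, v)
        if abs(c) + abs(d) < tol:
            break
        eps = 1e-7                     # finite-difference Jacobian
        _, cu, du = divide(a, u + eps, v)
        _, cv, dv = divide(a, u, v + eps)
        j11, j12 = (cu - c) / eps, (cv - c) / eps
        j21, j22 = (du - d) / eps, (dv - d) / eps
        det = j11 * j22 - j12 * j21
        u -= (c * j22 - d * j12) / det
        v -= (d * j11 - c * j21) / det
    s = cmath.sqrt(u * u - 4.0 * v)    # roots of the quadratic factor
    return u, v, ((-u + s) / 2, (-u - s) / 2)

# Example 2: start from the normalized leading coefficients (u, v) = (11/6, -33/6)
a = [6.0, 11.0, -33.0, -33.0, 11.0, 6.0]
u, v, roots = bairstow(a, 11.0 / 6.0, -33.0 / 6.0)
```

On convergence the remainder coefficients c and d are driven to zero, so the two returned roots are roots of f(x) itself; the remaining roots can then be found from the deflated quotient.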
3.4 GRAEFFE’S ROOT SQUARING METHOD
In mathematics, Graeffe’s method or the Dandelin–Lobachevsky–Graeffe method is an algorithm typically used for finding all of the roots of a polynomial. It was developed independently by Germinal Pierre Dandelin in 1826 and Lobachevsky in 1834. In 1837 Karl Heinrich Gräffe also discovered the principal idea of the method. The method separates the roots of a polynomial by squaring them repeatedly. This squaring of the roots is done implicitly, that is, by working only on the coefficients of the polynomial. Finally, Viète’s formulas are used in order to approximate the roots.
Dandelin–Graeffe Iteration
Let p(x) be a polynomial of degree n with roots x₁, . . ., xₙ,
p(x) = (x − x₁)(x − x₂) · · · (x − xₙ).
Then,
p(−x) = (−1)ⁿ (x + x₁)(x + x₂) · · · (x + xₙ).
Let q(x) be the polynomial which has the squares x₁², x₂², . . ., xₙ² as its roots,
q(x) = (x − x₁²)(x − x₂²) · · · (x − xₙ²).
Then we can write,
q(x²) = (x² − x₁²) · · · (x² − xₙ²) = (−1)ⁿ p(x) p(−x).
Thus q(x) can be computed by algebraic operations on the coefficients of the polynomial p(x) alone.
Let,
p(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + · · · + a₀  and  q(x) = bₙxⁿ + bₙ₋₁xⁿ⁻¹ + · · · + b₀.
Then the coefficients are related by,
bₖ = (−1)ⁿ⁻ᵏ ( aₖ² + 2 Σⱼ (−1)ʲ aₖ₋ⱼ aₖ₊ⱼ ),  k = 0, 1, . . ., n,
where the inner sum runs over j = 1, . . ., min(k, n − k).
Graeffe observed that if one separates p(x) into its odd and even parts,
p(x) = pₑ(x²) + x pₒ(x²),
then one obtains a simplified algebraic expression for q(x) of the form,
q(x) = (−1)ⁿ ( pₑ(x)² − x pₒ(x)² ).
This expression involves the squaring of two polynomials of only half the degree, and is therefore used in most implementations of the method.
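The even–odd splitting can be sketched directly (our own illustration; graeffe_step and its convention of ascending coefficients are our choices, not from the text):

```python
def graeffe_step(a):
    """One Dandelin-Graeffe squaring. a: coefficients [a_0, ..., a_n] in
    ascending order; returns the coefficients of q, whose roots are the
    squares of p's roots, via q(x) = (-1)^n * (p_e(x)^2 - x * p_o(x)^2)."""
    n = len(a) - 1
    pe, po = a[0::2], a[1::2]          # even and odd parts of p
    def mul(f, g):                     # plain polynomial multiplication
        r = [0.0] * (len(f) + len(g) - 1)
        for i, fi in enumerate(f):
            for j, gj in enumerate(g):
                r[i + j] += fi * gj
        return r
    pe2 = mul(pe, pe)
    po2 = [0.0] + mul(po, po)          # x * p_o(x)^2
    m = max(len(pe2), len(po2))
    sign = (-1) ** n
    return [sign * ((pe2[k] if k < len(pe2) else 0.0)
                    - (po2[k] if k < len(po2) else 0.0)) for k in range(m)]

# p(x) = (x - 1)(x - 2) = 2 - 3x + x^2  ->  q(x) = (x - 1)(x - 4) = 4 - 5x + x^2
q = graeffe_step([2.0, -3.0, 1.0])
```

The check in the last line squares the roots 1 and 2 into 1 and 4, confirming that only coefficient arithmetic is needed.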
Iterating this procedure several times separates the roots with respect to their magnitudes. Repeating the squaring k times gives a polynomial of degree n,
p⁽ᵏ⁾(y) = yⁿ + a⁽ᵏ⁾ₙ₋₁ yⁿ⁻¹ + · · · + a⁽ᵏ⁾₀,
with roots,
x₁^(2^k), x₂^(2^k), . . ., xₙ^(2^k).
If the magnitudes of the roots of the original polynomial were separated by some factor ρ > 1, that is, |xₘ| ≥ ρ |xₘ₊₁| for every m, then the roots of the k-th iterate are separated by the fast growing factor ρ^(2^k).
Next the Vieta relations are used, as in the classical Graeffe method:
a⁽ᵏ⁾ₙ₋₁ = −( x₁^(2^k) + x₂^(2^k) + · · · + xₙ^(2^k) ),
a⁽ᵏ⁾ₙ₋₂ = x₁^(2^k) x₂^(2^k) + x₁^(2^k) x₃^(2^k) + · · ·, and so on.
If the roots are sufficiently separated, say by a factor ρ > 1, then the iterated powers of the
roots are separated by the factor ρ^(2^k), which quickly becomes very big. The coefficients of the iterated polynomial can then be approximated by their leading term,
a⁽ᵏ⁾ₙ₋ⱼ ≈ (−1)ʲ ( x₁ x₂ · · · xⱼ )^(2^k),
implying,
|xⱼ|^(2^k) ≈ | a⁽ᵏ⁾ₙ₋ⱼ / a⁽ᵏ⁾ₙ₋ⱼ₊₁ |.
Finally, logarithms are used in order to find the absolute values of the roots of the original polynomial,
log |xⱼ| ≈ 2⁻ᵏ ( log |a⁽ᵏ⁾ₙ₋ⱼ| − log |a⁽ᵏ⁾ₙ₋ⱼ₊₁| ).
These magnitudes alone are already useful to generate meaningful starting points for other root-finding methods.
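Putting the pieces together (our own sketch; it repeats the graeffe_step helper so as to be self-contained), a few squarings followed by coefficient ratios recover the root magnitudes, largest first:

```python
def graeffe_step(a):
    # one squaring via q(x) = (-1)^n (p_e(x)^2 - x p_o(x)^2); a ascending
    n = len(a) - 1
    pe, po = a[0::2], a[1::2]
    def mul(f, g):
        r = [0.0] * (len(f) + len(g) - 1)
        for i, fi in enumerate(f):
            for j, gj in enumerate(g):
                r[i + j] += fi * gj
        return r
    pe2, po2 = mul(pe, pe), [0.0] + mul(po, po)
    m = max(len(pe2), len(po2))
    sign = (-1) ** n
    return [sign * ((pe2[k] if k < len(pe2) else 0.0)
                    - (po2[k] if k < len(po2) else 0.0)) for k in range(m)]

def root_magnitudes(a, k=5):
    """Estimate |x_1| >= |x_2| >= ... for a monic polynomial with ascending
    coefficients a = [a_0, ..., a_n], via |x_j|^(2^k) ~ |a_{n-j}/a_{n-j+1}|."""
    for _ in range(k):
        a = graeffe_step(a)
    n = len(a) - 1
    return [abs(a[n - j] / a[n - j + 1]) ** (1.0 / 2 ** k)
            for j in range(1, n + 1)]

# p(x) = (x - 1)(x - 2): the estimates come out close to [2.0, 1.0]
mags = root_magnitudes([2.0, -3.0, 1.0])
```

In practice the squared coefficients grow extremely fast, so serious implementations work with logarithms or scaled coefficients; this sketch keeps k small to stay within floating-point range.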
Check Your Progress
1. Why is the Birge-Vieta method used?
2. How does Bairstow’s approach use Newton’s method to adjust the coefficients u and v in the quadratic x² + ux + v?
3. Explain Graeffe’s method.
3.5 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. Birge-Vieta method is used for finding the real roots of a polynomial equation.
2. Bairstow’s approach is to use Newton’s method to adjust the coefficients u and v in the quadratic x² + ux + v until its roots are also roots of the polynomial being solved. The roots of the quadratic may then be determined, and the polynomial may be divided by the quadratic to eliminate those roots. This process is then iterated until the polynomial becomes quadratic or linear, and all the roots have been determined.
3. Graeffe’s method or the Dandelin–Lobachevsky–Graeffe method is an algorithm typically used for finding all of the roots of a polynomial. It was developed independently by Germinal Pierre Dandelin in 1826 and Lobachevsky in 1834. The method separates the roots of a polynomial by squaring them repeatedly. This squaring of the roots is done implicitly, that is, by working only on the coefficients of the polynomial. Finally, Viète’s formulas are used in order to approximate the roots.
3.6 SUMMARY
Birge-Vieta method is used for finding the real roots of a polynomial equation. The method builds on ideas of the mathematicians Birge and Vieta.
Finding good approximations to all the roots of a polynomial equation is very significant. In the field of science and engineering, there are numerous applications which require the solutions of all roots of a polynomial equation for a particular problem.
The Newton-Raphson method is fundamentally used for finding the root of algebraic and transcendental equations. Since the rate of convergence of this method is quadratic, the Newton-Raphson method can be used to find a root of a polynomial equation, as a polynomial equation is an algebraic equation. The Birge-Vieta method is based on the Newton-Raphson method; that is, it is a modified form of the Newton-Raphson method.
Evaluating a polynomial term by term is the most inefficient method, because of the amount of computation involved and also because of the possible growth of round-off errors. Thus there must be some proficient and effective method for evaluating Pₙ(x) and P′ₙ(x).
Vieta’s formulas relate the coefficients of a polynomial to the sums and products of its roots, including the products of the roots taken in groups; they define the association between the roots of a polynomial and its coefficients.
As a general rule, for an nth degree polynomial there are n different Vieta’s formulas, relating the sum of the products of the roots taken k at a time to the coefficient ratio aₙ₋ₖ/aₙ, for k = 1, 2, . . ., n.
In numerical analysis, Bairstow’s method is an efficient algorithm for finding the roots of a real polynomial of arbitrary degree. The algorithm was formulated by Leonard Bairstow, and it finds the roots in complex conjugate pairs using only real arithmetic.
Bairstow’s approach uses Newton’s method to adjust the coefficients u and v in the quadratic x² + ux + v until its roots are also roots of the polynomial being solved.
The roots of the quadratic may then be determined, and the polynomial may be divided by the quadratic to eliminate those roots. This process is then iterated until the polynomial becomes quadratic or linear, and all the roots have been determined.
In mathematics, Graeffe’s method or the Dandelin–Lobachevsky–Graeffe method is an algorithm typically used for finding all of the roots of a polynomial. It was developed independently by Germinal Pierre Dandelin in 1826 and Lobachevsky in 1834. In 1837 Karl Heinrich Gräffe also discovered the principal idea of the method.
Graeffe’s method separates the roots of a polynomial by squaring them repeatedly. This squaring of the roots is done implicitly, that is, by working only on the coefficients of the polynomial. Finally, Viète’s formulas are used in order to approximate the roots.
In Graeffe’s method, logarithms are used in order to find the absolute values of the roots of the original polynomial. These magnitudes alone are already useful to generate meaningful starting points for other root-finding methods.
3.7 KEY WORDS
Birge-Vieta method: This method is used for finding the real roots of a polynomial equation.
Bairstow’s method: This is an efficient algorithm for finding the roots of a real polynomial of arbitrary degree. The algorithm was formulated by Leonard Bairstow for finding the roots in complex conjugate pairs using only real arithmetic.
Graeffe’s method or Dandelin–Lobachevsky–Graeffe method: It is an algorithm typically used for finding all of the roots of a polynomial.
3.8 SELF-ASSESSMENT QUESTIONS AND EXERCISES
Short-Answer Questions
1. Why is Birge-Vieta metho