Chapter 1
Numerical Algorithms
This opening chapter introduces the basic concepts of numerical algorithms and scientific computing. We begin with a general, brief introduction to the field in Section 1.1. This is followed by the more substantial Sections 1.2 and 1.3. Section 1.2 discusses the basic errors that may be encountered when applying numerical algorithms. Section 1.3 is concerned with essential properties of such algorithms and the appraisal of the results they produce. We get to the "meat" of the material in later chapters.
1.1 Scientific computing

Scientific computing is a discipline concerned with the development and study of numerical algorithms for solving mathematical problems that arise in various disciplines in science and engineering.

Typically, the starting point is a given mathematical model which has been formulated in an attempt to explain and understand an observed phenomenon in biology, chemistry, physics, economics, or any other scientific or engineering discipline. We will concentrate on those mathematical models which are continuous (or piecewise continuous) and are difficult or impossible to solve analytically; this is usually the case in practice. Relevant application areas within computer science and related engineering fields include graphics, vision and motion analysis, image and signal processing, search engines and data mining, machine learning, and hybrid and embedded systems.
In order to solve such a model approximately on a computer, the continuous or piecewise continuous problem is approximated by a discrete one. Functions are approximated by finite arrays of values. Algorithms are then sought which approximately solve the mathematical problem efficiently, accurately, and reliably. This is the heart of scientific computing. Numerical analysis may be viewed as the theory behind such algorithms.

The next step after devising suitable algorithms is their implementation. This leads to questions involving programming languages, data structures, computing architectures, etc. The big picture is depicted in Figure 1.1.
The set of requirements that good scientific computing algorithms must satisfy, which seems elementary and obvious, may actually pose rather difficult and complex practical challenges. The main purpose of this book is to equip you with basic methods and analysis tools for handling such challenges as they arise in future endeavors.
Figure 1.1. Scientific computing.

Problem solving environment
As a computing tool, we will be using MATLAB: this is an interactive computer language, which for our purposes may best be viewed as a convenient problem solving environment. MATLAB is much more than a language based on simple data arrays; it is truly a complete environment. Its interactivity and graphics capabilities make it more suitable and convenient in our context than general-purpose languages such as C++, Java, Scheme, or Fortran 90. In fact, many of the algorithms that we will learn are already implemented in MATLAB. So why learn them at all? Because they provide the basis for much more complex tasks, not quite available (that is to say, not already solved) in MATLAB or anywhere else, which you may encounter in the future.

Rather than producing yet another MATLAB tutorial or introduction in this text (there are several very good ones available in other texts as well as on the Internet) we will demonstrate the use of this language on examples as we go along.
1.2 Numerical algorithms and errors
The most fundamental feature of numerical computing is the inevitable presence of error. The result of any interesting computation (and of many uninteresting ones) is typically only approximate, and our goal is to ensure that the resulting error is tolerably small.
Relative and absolute errors
There are in general two basic types of measured error. Given a scalar quantity u and its approximation v:

• The absolute error in u is |u − v|.

• The relative error (assuming u ≠ 0) is |u − v| / |u|.
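These two definitions take only a line each to compute. As a quick cross-check (in Python rather than the MATLAB used throughout this text; the helper names here are ours), with u = 100 and v = 99 the absolute error is 1 while the relative error is only 0.01:

```python
def absolute_error(u, v):
    # |u - v|
    return abs(u - v)

def relative_error(u, v):
    # |u - v| / |u|, defined only for u != 0
    return abs(u - v) / abs(u)

print(absolute_error(100, 99))   # 1
print(relative_error(100, 99))   # 0.01
```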
The relative error is usually a more meaningful measure. This is especially true for errors in floating point representation, a point to which we return in Chapter 2. For example, we record absolute and relative errors for various hypothetical calculations in the following table:
      u        v       Absolute error   Relative error
      1        0.99    0.01             0.01
      1        1.01    0.01             0.01
     −1.5     −1.2     0.3              0.2
    100       99.99    0.01             0.0001
    100       99       1                0.01
Evidently, when |u| ≈ 1 there is not much difference between absolute and relative error measures. But when |u| ≫ 1, the relative error is more meaningful. In particular, we expect the approximation in the last row of the above table to be similar in quality to the one in the first row. This expectation is borne out by the value of the relative error but is not reflected by the value of the absolute error.
When the approximated value is small in magnitude, things are a little more delicate, and here is where relative errors may not be so meaningful. But let us not worry about this at this early point.
Example 1.1. The Stirling approximation

    v = Sn = √(2πn) · (n/e)^n

is used to approximate u = n! = 1 · 2 · · · n for large n. The formula involves the constant e = exp(1) = 2.7182818... The following MATLAB script computes and displays n! and Sn, as well as their absolute and relative differences, for 1 ≤ n ≤ 10:
    e=exp(1);
    n=1:10;                              % array
    Sn=sqrt(2*pi*n).*((n/e).^n);         % the Stirling approximation
    fact_n=factorial(n);
    abs_err=abs(Sn-fact_n);              % absolute error
    rel_err=abs_err./fact_n;             % relative error
    format short g
    [n; fact_n; Sn; abs_err; rel_err]'   % print out values
Given that this is our first MATLAB script, let us provide a few additional details, though we hasten to add that we will not make a habit out of this. The commands exp, factorial, and abs use built-in functions. The command n=1:10 (along with a semicolon, which simply suppresses screen output) defines an array of length 10 containing the integers 1, 2, ..., 10. This illustrates a fundamental concept in MATLAB of working with arrays whenever possible. Along with it come array operations: for example, in the third line ".*" corresponds to elementwise multiplication of vectors or matrices. Finally, our printing instructions (the last two in the script) are a bit primitive here, a sacrifice made for the sake of simplicity in this, our first program.
The resulting output is

     1            1            0.92214       0.077863     0.077863
     2            2            1.919         0.080996     0.040498
     3            6            5.8362        0.16379      0.027298
     4           24           23.506         0.49382      0.020576
     5          120          118.02          1.9808       0.016507
     6          720          710.08          9.9218       0.01378
     7         5040         4980.4          59.604        0.011826
     8        40320        39902           417.6          0.010357
     9  3.6288e+005   3.5954e+005        3343.1           0.0092128
    10  3.6288e+006   3.5987e+006       30104             0.008296
The values of n! become very large very quickly, and so are the values of the approximation Sn. The absolute errors grow as n grows, but the relative errors stay well behaved and indicate that in fact the larger n is, the better the quality of the approximation is. Clearly, the relative errors are much more meaningful as a measure of the quality of this approximation.
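The tabulated trend can be reproduced outside MATLAB as well. Here is a small cross-check of the Stirling approximation in Python (a sketch of ours, not a script from the text):

```python
import math

def stirling(n):
    # S_n = sqrt(2*pi*n) * (n/e)^n
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

rel_err = [abs(stirling(n) - math.factorial(n)) / math.factorial(n)
           for n in range(1, 11)]
# the relative error decreases monotonically in n, as in the table above
assert all(a > b for a, b in zip(rel_err, rel_err[1:]))
print(round(rel_err[0], 6))   # 0.077863
```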
Error types

Knowing how errors are typically measured, we now move to discuss their source. There are several types of error that may limit the accuracy of a numerical calculation.
1. Errors in the problem to be solved.
These may be approximation errors in the mathematical model. For instance:

• Heavenly bodies are often approximated by spheres when calculating their properties; an example here is the approximate calculation of their motion trajectory, attempting to answer the question (say) whether a particular asteroid will collide with Planet Earth before 11.12.2016.

• Relatively unimportant chemical reactions are often discarded in complex chemical modeling in order to obtain a mathematical problem of a manageable size.
It is important to realize, then, that often approximation errors of the type stated above are deliberately made: the assumption is that simplification of the problem is worthwhile even if it generates an error in the model. Note, however, that we are still talking about the mathematical model itself; approximation errors related to the numerical solution of the problem are discussed below.
Another typical source of error in the problem is error in the input data. This may arise, for instance, from physical measurements, which are never infinitely accurate.

Thus, it may be that after a careful numerical simulation of a given mathematical problem, the resulting solution would not quite match observations on the phenomenon being examined.

At the level of numerical algorithms, which is the focus of our interest here, there is really nothing we can do about the above-described errors. Nevertheless, they should be taken into consideration, for instance, when determining the accuracy (tolerance with respect to the next two types of error mentioned below) to which the numerical problem should be solved.
2. Approximation errors
Such errors arise when an approximate formula is used in place of the actual function to be evaluated. We will often encounter two types of approximation errors:

• Discretization errors arise from discretizations of continuous processes, such as interpolation, differentiation, and integration.

• Convergence errors arise in iterative methods. For instance, nonlinear problems must generally be solved approximately by an iterative process. Such a process would converge to the exact solution in infinitely many iterations, but we cut it off after a finite (hopefully small!) number of such iterations. Iterative methods in fact often arise in linear algebra.
3. Roundoff errors
Any computation with real numbers involves roundoff error. Even when no approximation error is produced (as in the direct evaluation of a straight line, or the solution by Gaussian elimination of a linear system of equations), roundoff errors are present. These arise because of the finite precision representation of real numbers on any computer, which affects both data representation and computer arithmetic.
Discretization and convergence errors may be assessed by an analysis of the method used, and we will see a lot of that in this text. Unlike roundoff errors, they have a relatively smooth structure which may occasionally be exploited. Our basic assumption will be that approximation errors dominate roundoff errors in magnitude in actual, successful calculations.
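A convergence error of the kind just described can be observed in a few lines. The following sketch (ours, in Python; it is not an algorithm from this chapter) runs a simple iteration for √2 and records the error left by cutting it off after each step:

```python
# Newton's iteration for solving x^2 = 2; cutting the iteration off
# after finitely many steps leaves a (rapidly shrinking) convergence error.
exact = 2.0 ** 0.5
x = 1.0
conv_err = []
for k in range(5):
    x = 0.5 * (x + 2.0 / x)      # one iteration step
    conv_err.append(abs(x - exact))
# the earlier we cut off, the larger the convergence error
print(conv_err[0] > conv_err[1] > conv_err[2])   # True
```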
Theorem: Taylor Series.
Assume that f(x) has k + 1 derivatives in an interval containing the points x0 and x0 + h. Then

    f(x0 + h) = f(x0) + h f'(x0) + (h^2/2) f''(x0) + · · · + (h^k/k!) f^(k)(x0) + (h^(k+1)/(k+1)!) f^(k+1)(ξ),

where ξ is some point between x0 and x0 + h.
Discretization errors in action

Let us show an example that illustrates the behavior of discretization errors.
Example 1.2. Consider the problem of approximating the derivative f'(x0) of a given smooth function f(x) at the point x = x0. For instance, let f(x) = sin(x) be defined on the real line −∞ < x < ∞, and set x0 = 1.2. Thus, f(x0) = sin(1.2) ≈ 0.932...

Further, consider a situation where f(x) may be evaluated at any point x near x0, but f'(x0) may not be directly available or is computationally expensive to evaluate. Thus, we seek ways to approximate f'(x0) by evaluating f at x near x0.
A simple algorithm may be constructed using Taylor's series. This fundamental theorem is given on the preceding page. For some small, positive value h that we will choose in a moment, write

    f(x0 + h) = f(x0) + h f'(x0) + (h^2/2) f''(x0) + (h^3/6) f'''(x0) + (h^4/24) f''''(x0) + · · · .

Then

    f'(x0) = (f(x0 + h) − f(x0))/h − ((h/2) f''(x0) + (h^2/6) f'''(x0) + (h^3/24) f''''(x0) + · · ·).

Our algorithm for approximating f'(x0) is to calculate

    (f(x0 + h) − f(x0))/h.

The obtained approximation has the discretization error

    |f'(x0) − (f(x0 + h) − f(x0))/h| = |(h/2) f''(x0) + (h^2/6) f'''(x0) + (h^3/24) f''''(x0) + · · ·|.
Geometrically, we approximate the slope of the tangent at the point x0 by the slope of the chord through neighboring points of f. In Figure 1.2, the tangent is in blue and the chord is in red.
This error is O(h) so long as f''(x) is bounded, and it is Θ(h) if also f''(x0) ≠ 0. In any case, even without knowing f''(x) we expect the discretization error to decrease at least as fast as h when h is decreased.
Note: O and Θ notation. Throughout this text we consider various computational errors depending on a discretization step size h and ask how they decrease as h decreases. In other instances, such as when estimating the efficiency of a particular algorithm, we are interested in a bound on the computational work as a problem parameter n increases unboundedly (e.g., n = 1/h). For an error e depending on h we denote e = O(h^q) if |e| ≤ C h^q for some constant C as h → 0; for work w depending on n we write, e.g., w = O(n log n) if w ≤ C n log n as n → ∞. It will be easy to figure out from the context which of these two meanings is in use. The Θ notation is a stronger relation than the O notation: a function φ(h) is Θ(ψ(h)) if φ is asymptotically bounded both above and below by ψ.
For our particular instance, f(x) = sin(x), we have the exact value f'(x0) = cos(1.2) = 0.362357754476674... Carrying out our short algorithm we obtain for h = 0.1 the approximation f'(x0) ≈ (sin(1.3) − sin(1.2))/0.1 = 0.315... The absolute error thus equals approximately 0.047. The relative error is not qualitatively different here.

This approximation of f'(x0) using h = 0.1 is not very accurate. We therefore apply the same algorithm using several increasingly smaller values of h. The resulting errors are as follows:

    h         Absolute error
    0.1       4.716676e−2
    0.01      4.666196e−3
    0.001     4.660799e−4
    1.e−4     4.660256e−5
    1.e−7     4.619326e−8
Indeed, the error appears to decrease like h. More specifically (and less importantly), using our explicit knowledge of f''(x) = −f(x) = −sin(x), in this case we have that (1/2) f''(x0) ≈ −0.466. The quantity 0.466h is seen to provide a rather accurate estimate for the above-tabulated absolute error values.
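The tabulated errors are straightforward to reproduce. A Python version of the same experiment (the chapter itself uses MATLAB) follows:

```python
import math

x0 = 1.2
exact = math.cos(x0)

def fwd_diff_err(h):
    # absolute error of the forward difference (f(x0+h) - f(x0))/h for f = sin
    return abs(exact - (math.sin(x0 + h) - math.sin(x0)) / h)

for h in [0.1, 0.01, 0.001]:
    # the error shrinks roughly like 0.466*h, matching the table above
    print(h, fwd_diff_err(h))
```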
The damaging effect of roundoff errors

The calculations in Example 1.2, and the ones reported below, were carried out using MATLAB's standard arithmetic. Let us stay for a few more moments with the approximation algorithm featured in that example and try to push the envelope a little further.
Example 1.3. Suppose we wish to compute this approximation to f'(1.2) = cos(1.2) so accurately that

    |cos(1.2) − (sin(1.2 + h) − sin(1.2))/h| < 10^(−10).
Can't we just set h ≤ 10^(−10)/0.466 in our algorithm? Not quite! Let us record results for very small, positive values of h:

    h         Absolute error
    1.e−8     4.361050e−10
    1.e−9     5.594726e−8
    1.e−10    1.669696e−7
    1.e−11    7.938531e−6
    1.e−13    4.250484e−4
    1.e−15    8.173146e−2
    1.e−16    3.623578e−1

A log-log plot¹ of the error versus h is provided in Figure 1.3. We can clearly see that as h is decreased, at first (from right to left in the figure) the error decreases along a straight line, but this trend is altered and eventually reversed. The MATLAB script that generates the plot in Figure 1.3 is given next.
    x0 = 1.2;
    f0 = sin(x0);
    fp = cos(x0);
    i = -20:0.5:0;
    h = 10.^i;
    err = abs(fp - (sin(x0+h) - f0)./h);
    d_err = f0/2*h;
    loglog(h, err, '-*');
    hold on
    loglog(h, d_err, 'r-.');
    xlabel('h')
    ylabel('Absolute error')
Perhaps the most mysterious line in this script is that defining d_err: it calculates (h/2)|f''(x0)|, the leading discretization error term derived in Example 1.2.

¹Graphing error values using a logarithmic scale is rather common in scientific computing, because a logarithmic scale makes it easier to trace values that are close to zero. As you will use such plotting often, let us mention at this early stage the MATLAB commands plot, semilogy, and loglog.
Figure 1.3. The combined effect of discretization and roundoff errors. The solid curve interpolates the computed values of |f'(x0) − (f(x0 + h) − f(x0))/h| for f(x) = sin(x), x0 = 1.2. Also shown in dash-dot style is a straight line depicting the discretization error without roundoff error.
The reason the error "bottoms out" at about h = 10^(−8) in the combined Examples 1.2–1.3 is that the total, measured error consists of contributions of both discretization and roundoff errors. The discretization error decreases in an orderly fashion as h decreases, and it dominates the roundoff error when h is relatively large. But when h gets below approximately 10^(−8) the discretization error becomes very small and roundoff error starts to dominate (i.e., it becomes larger in magnitude).

Roundoff error has a somewhat erratic behavior, as is evident from the small oscillations that are present in the graph in a few places. Moreover, for the algorithm featured in the last two examples, overall the roundoff error increases as h decreases. This is one reason why we want it always dominated by the discretization error when solving problems involving numerical differentiation such as differential equations.
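The "bottoming out" is easy to demonstrate directly. In the sketch below (Python, using the same IEEE double precision arithmetic as MATLAB), for h = 10^(−16) the sum 1.2 + h rounds back to 1.2, the difference quotient vanishes, and the measured error is simply |cos(1.2)| itself:

```python
import math

x0 = 1.2
exact = math.cos(x0)

def fwd_diff_err(h):
    return abs(exact - (math.sin(x0 + h) - math.sin(x0)) / h)

print(fwd_diff_err(1e-4))    # about 4.7e-5: discretization error dominates
print(fwd_diff_err(1e-16))   # about 0.362: 1.2 + 1e-16 rounds back to 1.2,
                             # so the quotient is 0 and roundoff dominates
```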
Popular theorems from calculus

The Taylor Series Theorem, given on page 5, is by far the most cited theorem in numerical analysis. Other popular calculus theorems that we will use in this text are gathered on the following page. They are all elementary and not difficult to prove: indeed, most are special cases of Taylor's Theorem.

Specific exercises for this section: Exercises 1–3.
1.3 Algorithm properties

In this section we briefly discuss performance features that may or may not be expected from a good numerical algorithm, and we define some basic properties, or characteristics, that such an algorithm should have.
Theorem: Useful Calculus Results.

• Intermediate Value: If f ∈ C[a, b] and s is a value such that f(â) ≤ s ≤ f(b̂) for two numbers â, b̂ ∈ [a, b], then there exists a real number c ∈ [a, b] for which f(c) = s.

• Mean Value: If f ∈ C[a, b] and f is differentiable on the open interval (a, b), then there exists a real number c ∈ (a, b) for which f'(c) = (f(b) − f(a))/(b − a).

• Rolle's: If f ∈ C[a, b] and f is differentiable on (a, b), and in addition f(a) = f(b) = 0, then there is a real number c ∈ (a, b) for which f'(c) = 0.
Criteria for assessing an algorithm

An assessment of the quality and usefulness of an algorithm may be based on a number of criteria:

• Accuracy
This issue is intertwined with the issue of error types and was discussed at the start of Section 1.2 and in Example 1.2. (See also Exercise 3.) The important point is that the accuracy of a numerical algorithm is a crucial parameter in its assessment, and when designing numerical algorithms it is necessary to be able to point out what magnitude of error is to be expected when the computation is carried out.

²See page 7 for the O notation.
• Efficiency
A good computation is one that terminates before we lose our patience. A numerical algorithm that features great theoretical properties is useless if carrying it out takes an unreasonable amount of computational time. Efficiency depends on both CPU time and storage space requirements. Details of an algorithm implementation within a given computer language and an underlying hardware configuration may play an important role in yielding code efficiency. Other theoretical properties yield indicators of efficiency, for instance, the rate of convergence. We return to this in later chapters.

Often a machine-independent estimate of the number of elementary operations required, namely additions, subtractions, multiplications, and divisions, gives an idea of the algorithm's efficiency. Normally, a floating point representation is used for real numbers and then the costs of these different elementary floating point operations, called flops, may be assumed to be roughly equal to one another.
Example 1.4. A polynomial of degree n, given as

    p(x) = c0 + c1 x + · · · + cn x^n,

requires O(n^2) operations² to evaluate at a fixed point x, if done in a brute force way without intermediate storing of powers of x. But using the nested form, also known as Horner's rule and given by

    p(x) = (· · · ((cn x + c_{n−1}) x + c_{n−2}) x · · ·) x + c0,

suggests an evaluation algorithm which requires only O(n) elementary operations, i.e., requiring linear (in n) rather than quadratic computation time. A MATLAB script for nested evaluation follows:
    % Assume the polynomial coefficients are already stored
    % in array c such that for any real x,
    % p(x) = c(1) + c(2)x + c(3)x^2 + ... + c(n+1)x^n
    p = c(n+1);
    for j = n:-1:1
      p = p*x + c(j);
    end
The "onion shell" evaluation formula thus unravels quite simply. Note also the manner of introducing comments into the script.
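For comparison, the same nested evaluation reads almost identically in other languages; here is a Python rendering (ours, not from the text), with coefficients stored lowest degree first as in the MATLAB array c:

```python
def horner(c, x):
    # evaluate p(x) = c[0] + c[1]*x + ... + c[n]*x^n using only
    # n multiplications and n additions, i.e., O(n) elementary operations
    p = c[-1]
    for cj in reversed(c[:-1]):
        p = p * x + cj
    return p

# p(x) = 1 + 2x + 3x^2 at x = 2 gives 1 + 4 + 12 = 17
print(horner([1.0, 2.0, 3.0], 2.0))   # 17.0
```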
It is important to note that while operation counts as in Example 1.4 often give a rough idea of algorithm efficiency, they do not give the complete picture regarding execution speed, since they do not take into account the price (speed) of memory access, which may vary considerably. Furthermore, any setting of parallel computing is ignored in a simple operation count as well. Curiously, this is part of the reason the MATLAB command flops, which had been an integral part of this language for many years, was removed from further releases several years ago. Indeed, in modern computers, cache access, blocking and vectorization features, and other parameters are crucial in the determination of execution time. The computer language used for implementation can also affect the comparative timing of algorithm implementations. Those, unfortunately, are much more difficult to assess compared to an operation count. In this text we will not get into the gory details of these issues, despite their relevance and importance.
• Robustness
Often, the major effort in writing numerical software, such as the routines available in MATLAB for solving linear systems of algebraic equations or for function approximation and integration, is spent not on implementing the essence of an algorithm but on ensuring that it would work under all weather conditions. Thus, the routine should either yield the correct result to within an acceptable error tolerance level, or it should fail gracefully (i.e., terminate with a warning) if it does not succeed to guarantee a "correct result."

There are intrinsic numerical properties that account for the robustness and reliability of an algorithm. Chief among these is the rate of accumulation of errors. In particular, the algorithm must be stable; see Example 1.6.
Problem conditioning and algorithm stability

In view of the fact that the problem and the numerical algorithm both yield errors, a natural question arises regarding the appraisal of a given computed solution. Here notions such as problem sensitivity and algorithm stability play an important role. If the problem is too sensitive, or ill-conditioned, meaning that even a small perturbation in the data produces a large difference in the result,³ then no algorithm may be found for that problem which would meet our requirement of solution robustness; see Figure 1.4 for an illustration. Some modification in the problem definition may be called for in such cases.
³Here we refer to intuitive notions of "large" vs. "small" quantities and of values being "close to" vs. "far from" one another. While these notions can be quantified and thus be made more precise, such a move would typically make definitions cumbersome and harder to understand at this preliminary stage of the discussion.
Figure 1.4. An ill-conditioned problem of computing output values y given in terms of input values x by y = g(x): when the input x is slightly perturbed to x̄, the result ȳ = g(x̄) is far from y. If the problem were well-conditioned, we would be expecting the distance between y and ȳ to be more comparable in magnitude to the distance between x and x̄.
I
For instance, the problem of numerical differentiation depicted
in Examples 1.2 and 13 turns »out to be ill—conditioned
when‘extreme accuracy (translating to very small values of h) is
required.
The job of a stable algorithm for a given problem is to yield a
numerical solution which is‘
the exact solution of an only slightly perturbed problem; see
the illustration in Figure 1.5. Thus,‘
if the algorithm is stable and the problem is well-conditioned
(i.e., not ill—conditioned),then the lcomputed result 52is close to
the exact y.
‘1
Figure 1.5. An instance of a stable algorithm for computing y = g(x): the output ȳ is the exact result, ȳ = g(x̄), for a slightly perturbed input, i.e., x̄ which is close to the input x. Thus, if the algorithm is stable and the problem is well-conditioned, then the computed result ȳ is close to the exact y.
Example 1.5. The problem of evaluating the square root function for an argument near the value 1 is well-conditioned, as we show below.

Let g(x) = √(1 + x) and note that g'(x) = 1/(2√(1 + x)). Suppose we fix x so that |x| ...
An algorithm which may come to mind is therefore as follows:

1. Evaluate y0 = ln(11) − ln(10).
2. For n = 1, ..., 30, evaluate y_n = 1/n − 10 y_{n−1}.

However, any error in y_{n−1}, including the roundoff error committed when evaluating y0, gets multiplied by 10 each time the recursion is applied. Thus, there is exponential error growth with c1 = 10. In MATLAB (which automatically employs the IEEE double precision floating point arithmetic; see Section 2.4) we obtain y0 = 9.5310e−02, y18 = −9.1694e+01, y19 = 9.1694e+02, ..., y30 = −9.1694e+13. It is not difficult to see that the exact values all satisfy 0 < y_n < 1, and hence the computed solution, at least for n ≥ 18, is meaningless!
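This exponential error growth is easy to reproduce. The following sketch (ours, in Python, which uses the same IEEE double precision arithmetic) runs the recursion of Example 1.6; the tiny roundoff error in y0 is amplified by a factor of 10 per step, so the computed y30 lands nowhere near the interval (0, 1) containing the exact value:

```python
import math

# y_0 = ln(11) - ln(10); y_n = 1/n - 10*y_{n-1}, as in Example 1.6
y = math.log(11.0) - math.log(10.0)
for n in range(1, 31):
    y = 1.0 / n - 10.0 * y
print(y)   # huge in magnitude, although the exact y_30 satisfies 0 < y_30 < 1
```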
Thankfully, such extreme instances of instability as illustrated in Example 1.6 will not occur in any of the algorithms developed in this text from here on.
Specific exercises for this section: Exercises 4–5.
1.4 Exercises

0. Review questions
(a) What is the difference, according to Section 1.1, between scientific computing and numerical analysis?

(b) Give a simple example where relative error is a more suitable measure than absolute error, and another example where the absolute error measure is more suitable.

(c) State a major difference between the nature of roundoff errors and discretization errors.

(d) Explain briefly why accumulation of roundoff errors is inevitable when arithmetic operations are performed in a floating point system. Under which circumstances is it tolerable in numerical computations?

(e) Explain the differences between accuracy, efficiency, and robustness as criteria for evaluating an algorithm.

(f) Show that nested evaluation of a polynomial of degree n requires only 2n elementary operations and hence has O(n) complexity.

(g) Distinguish between problem conditioning and algorithm stability.

1. Carry out calculations similar to those of Example 1.3 for approximating the derivative of the function f(x) = e^(−2x) evaluated at x0 = 0.5. Observe similarities and differences by comparing your graph against that in Figure 1.3.
2. Carry out derivation and calculations analogous to those in Example 1.2, using the expression

    (f(x0 + h) − f(x0 − h)) / (2h)

for approximating the first derivative f'(x0). Show that the error is O(h^2). More precisely, the leading term of the error is −(h^2/6) f'''(x0) when f'''(x0) ≠ 0.
3. Carry out similar calculations to those of Example 1.3 using the approximation from Exercise 2. Observe similarities and differences by comparing your graph against that in Figure 1.3.
4. Following Example 1.5, assess the conditioning of the problem of evaluating

    g(x) = tanh(cx) = (exp(cx) − exp(−cx)) / (exp(cx) + exp(−cx))

near x = 0 as the positive parameter c grows.
5. Consider the problem presented in Example 1.6. There we saw a numerically unstable procedure for carrying out the task.

(a) Derive a formula for approximately computing these integrals based on evaluating y_{n−1} given y_n.

(b) Show that for any given value ε > 0 and positive integer n0, there exists an integer n1 ≥ n0 such that taking y_{n1} = 0 as a starting value will produce integral evaluations ȳ_n with an absolute error smaller than ε for all 0 < n ≤ n0.

(c) Explain why your algorithm is stable.

(d) Write a MATLAB function that computes the value of y20 within an absolute error of at most 10^(−5). Explain how you choose n1 in this case.
1.5 Additional notes

... play with experiment and theory. On the one hand, improvements in computing power allow for ... computers are still not (and may never be) powerful enough to handle.

A potentially surprising amount of attention has been given throughout the years to the definitions of scientific computing and numerical analysis. An interesting account of the evolution of this seemingly esoteric but nevertheless important issue can be found in Trefethen and Bau [70].

The concept of problem conditioning is both fundamental and tricky to discuss so early in the game. If you feel a bit lost somewhere around Figures 1.4 and 1.5, then rest assured that these concepts eventually will become clearer as we gain experience, particularly in the more specific contexts of Sections 5.8, 8.2, and 14.4.

Many computer science theory books deal extensively with O and Θ notations and complexity issues. One widely used such book is Graham, Knuth, and Patashnik [31].

There are many printed books and Internet introductions to MATLAB. Check out Wikipedia and what's in Mathworks. One helpful survey of some of those can be found at http://www.cs.ubc.ca/~mitchell/matlabResources.html.