29 Approximation 3.1 Sources of approximation error
3 Approximation in Scientific Computing
3.1 Sources of approximation error
3.1.1 Error sources that are under our control
MODELLING ERRORS – some physical entities in the model are simplified or even not taken into account at all (for example: air resistance, viscosity, friction, etc.)
(Usually this is OK, but sometimes not...) (You may want to look at: http://en.wikipedia.org/wiki/Spherical_cow :-)
MEASUREMENT ERRORS – laboratory equipment has limited precision
Errors also come out of
• random measurement deviation
• background noise
As an example, the Newton and Planck constants are used with 8-9 decimal places, while laboratory measurements are performed with much less precision!
THE EFFECT OF PREVIOUS CALCULATIONS – the input for a calculation is often already the output of some previous calculation, carrying its own computational errors
3.1.2 Errors created during the calculations
Discretisation
As an example:
• replacing derivatives with finite differences
• finite sums used instead of infinite series
• etc
Round-off errors – errors created during the calculations due to the limited precision with which the calculations are performed
Example 4.1
Suppose a computer program can find the function value f(x) for arbitrary x.
Task: find an algorithm for calculating an approximation to the derivative f′(x).
Algorithm: choose a small h > 0 and approximate:

f′(x) ≈ [f(x+h) − f(x)]/h

The discretisation error is:

T := |f′(x) − [f(x+h) − f(x)]/h|.

Using a Taylor series, we get the estimate:

T ≤ (h/2)‖f′′‖∞. (1)
The computational error is created by using finite precision arithmetic, approximating the real f(x) by an approximation f̂(x). The computational error C is:

C = |[f(x+h) − f(x)]/h − [f̂(x+h) − f̂(x)]/h| = |([f(x+h) − f̂(x+h)] − [f(x) − f̂(x)])/h|,

which gives the estimate:

C ≤ (2/h)‖f − f̂‖∞. (2)

The resulting error is

|f′(x) − [f̂(x+h) − f̂(x)]/h|,

which can be estimated using (1) and (2):

T + C ≤ (h/2)‖f′′‖∞ + (2/h)‖f − f̂‖∞. (3)
=⇒ if h is large, the discretisation error dominates; if h is small, the computational error starts to dominate.
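This trade-off can be sketched numerically; a minimal illustration (assuming f(x) = sin x, so that the exact derivative f′(x) = cos x is available for comparison):

```python
import math

def fd_error(h, x=1.0):
    # Absolute error of the forward difference [f(x+h) - f(x)]/h for f = sin.
    approx = (math.sin(x + h) - math.sin(x)) / h
    return abs(math.cos(x) - approx)

# For large h the discretisation error ~ (h/2)*|f''| dominates;
# for very small h the round-off error ~ (2/h)*eps starts to dominate.
for h in (1e-1, 1e-4, 1e-7, 1e-12):
    print("h = %g   error = %g" % (h, fd_error(h)))
```

The printed errors first shrink roughly linearly in h and then grow again once h gets close to the square root of machine precision, in line with estimate (3).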
3.1.3 Forward error (error of the result) and backward error (error of the input data)
Consider computing y = f(x). Usually we can only compute an approximation of y; we denote the approximately calculated value by ŷ. We can observe two measures of the error associated with this computation.
Forward error
The forward error is a measure of the difference between the approximation ŷ and the true value y: |ŷ − y|.
The forward error would be a natural quantity to measure, but usually (since we do not know the actual value of y) we can only get an upper bound on it. Moreover, tight upper bounds on it can be very difficult to obtain.
Backward error
The question we might want to ask: for what input data did we actually perform the calculations? We would like to find the smallest ∆x for which

ŷ = f(x+∆x)

– here ŷ is the exact value of f(x+∆x). The value |∆x| (or |∆x|/|x|) is called the backward error. That is, the backward error is the error we have in the input (just as the forward error is the error we observe in the output of the calculation or algorithm).
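A toy numerical illustration (the function f(x) = √x and the approximate value ŷ = 1.4 are my own choices for this sketch, not from the notes):

```python
import math

x = 2.0
y = math.sqrt(x)   # true value y = f(x)
y_hat = 1.4        # an approximately calculated value

# Forward error: the error observed in the output.
forward_error = abs(y_hat - y)

# Backward error: for which perturbed input would y_hat be exact?
# y_hat = sqrt(x + dx)  =>  x + dx = y_hat**2
dx = y_hat**2 - x
backward_error = abs(dx)

print(forward_error, backward_error)
```

Here ŷ = 1.4 is the exact square root of 1.96, so the backward error is 0.04 while the forward error is about 0.0142.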
Condition number – an upper limit for their ratio:

forward error ≤ condition number × backward error

From (2) it follows that in Example 4.1 the value of the condition number is 2/h.
In these calculations all values are absolute: the actual magnitudes of the approximated entities are not taken into account. The relative forward error and relative backward error are in this case:

C/|(f(x+h) − f(x))/h|   and   ‖f − f̂‖∞/‖f‖∞.
Assuming that minx |f′(x)| > 0, it follows easily from (2) that:

C/|(f(x+h) − f(x))/h| ≤ {2‖f‖∞/(h minx |f′(x)|)} · ‖f − f̂‖∞/‖f‖∞.
The value in the brackets {·} is called the relative condition number of the problem. In general:
• If the (absolute or relative) condition number is small,
– then an (absolute or relative) error in the input data can produce only a small error in the result.
• If the condition number is large,
– then a large error in the result can be caused even by a small error in the input data;
– such problems are said to be ill-conditioned.
• Sometimes, in the case of finite precision arithmetic:
– the backward error is much simpler to estimate than the forward error
– the backward error combined with the condition number makes it possible to estimate the forward error (absolute or relative)
Example 4.2 (one of the key problems in Scientific Computing). Consider solving the system of linear equations:

Ax = b, (4)

where the input consists of
• a nonsingular n×n matrix A
• b ∈ Rn
The task is to calculate an approximate solution x̂ ∈ Rn.
Suppose that instead of the exact matrix A we are given its approximation Â = A+δA, but (for simplicity) b is known exactly. The solution x̂ = x+δx satisfies the system of equations

(A+δA)(x+δx) = b. (5)
Then from (4), (5) it follows that

(A+δA)δx = −(δA)x.

Multiplying by (A+δA)⁻¹ and taking norms, we estimate

‖δx‖ ≤ ‖(A+δA)⁻¹‖‖δA‖‖x‖.

It follows that if x ≠ 0 and A ≠ 0, we have:

‖δx‖/‖x‖ ≤ ‖(A+δA)⁻¹‖‖A‖ · ‖δA‖/‖A‖ ≅ ‖A⁻¹‖‖A‖ · ‖δA‖/‖A‖, (6)

where the approximation on the right holds for δA sufficiently small.
• =⇒ for the calculation of x an important factor is the relative condition number κ(A) := ‖A⁻¹‖‖A‖.
– It is usually called the condition number of the matrix A.
– It depends on the norm ‖ · ‖.
Therefore, common practice for forward error estimation is to:
• find an estimate of the backward error
• use the estimate (6)
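A sketch of how estimate (6) behaves in practice (the nearly singular 2×2 matrix below is a made-up example, not from the notes):

```python
import numpy as np

# A nearly singular matrix: kappa(A) = ||A^-1|| * ||A|| is large.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])          # exact solution is x = (1, 1)

kappa = np.linalg.cond(A)            # condition number in the 2-norm
x = np.linalg.solve(A, b)

# Perturb A slightly and re-solve.
dA = np.array([[0.0, 0.0],
               [0.0, 1e-6]])
x_pert = np.linalg.solve(A + dA, b)

rel_in  = np.linalg.norm(dA) / np.linalg.norm(A)          # relative input error
rel_out = np.linalg.norm(x_pert - x) / np.linalg.norm(x)  # relative output error
print(kappa, rel_in, rel_out)
```

The tiny relative perturbation of A (about 5×10⁻⁷) is amplified into a relative change of the solution around 10⁻², which stays below κ(A) times the input error, as (6) predicts.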
3.2 Floating-Point Numbers
The number −3.1416 in scientific notation is −0.31416 × 10¹ or (as computer output) -0.31416E01.
[Diagram: in −.31416 × 10¹, the leading "−" is the sign, ".31416" is the mantissa, 10 is the base and 1 is the exponent.]
– floating-point numbers in computer notation. Usually the base is 2 (with a few exceptions: the IBM 370 had base 16; most hand-held calculators use base 10; an ill-fated Russian computer used base 3).
For example, 0.10101₂ × 2³ = 5.25₁₀.
(-: There are 10 kinds of people in the world – those who understand binary – and those who don’t :-)
Formally, a floating-point number system F is characterised by four integers:
• Base (or radix) β > 1
• Precision p > 0
• Exponent range [L,U ]: L < 0 <U
43 Approximation 3.2 Floating-Point Numbers
Any floating-point number x ∈ F has the form

x = ±{d0 + d1β⁻¹ + ... + d(p−1)β^(1−p)} β^E, (7)

where the integers di satisfy

0 ≤ di ≤ β − 1, i = 0, ..., p − 1,

and E ∈ [L,U] (E is a positive, zero or negative integer). The number E is called the exponent and the part in the brackets {·} is called the mantissa.
Example. In arithmetic with precision 4 and base 10 the number 2347 is represented as

{2 + 3×10⁻¹ + 4×10⁻² + 7×10⁻³} × 10³.

Is it possible to represent 2347 with precision 3 and base 10? Note that an exact representation of 2347 with precision 3 and base 10 is not possible!
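A small sketch of rounding to p significant base-10 digits (round_sig is a hypothetical helper written for this illustration, not part of any library):

```python
import math

def round_sig(x, p):
    # Round x to p significant digits in base 10.
    e = math.floor(math.log10(abs(x)))   # exponent E with x = m * 10**E
    scale = 10.0 ** (e - p + 1)
    return round(x / scale) * scale

print(round_sig(2347, 4))   # exactly representable with p = 4
print(round_sig(2347, 3))   # with p = 3 the last digit is lost: 2350
```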
A number is normalised if d0 > 0.
Example. The number 0.10101₂ × 2³ is normalised, but 0.010101₂ × 2⁴ is not.
Floating-point systems are usually normalised because:
• the representation of each number is then unique
• no digits are wasted on leading zeros
• in a normalised binary (β = 2) system, the leading bit is always 1 =⇒ no need to store it!
The smallest positive normalised number of form (7) is 1×β^L – the underflow threshold. (In case of underflow, the result is smaller than the smallest representable normalised floating-point number.)
• One bit for the sign, 11 bits for the exponent and 52 bits for the mantissa:

| 1 | 11 | 52 |

– a 64-bit word
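These fields can be inspected directly; a sketch using Python's struct module to unpack the IEEE 754 double 5.25 (= 1.0101₂ × 2²):

```python
import struct

# Reinterpret the 64-bit double as an unsigned integer.
bits = struct.unpack('>Q', struct.pack('>d', 5.25))[0]

sign     = bits >> 63                     # 1 bit
exponent = (bits >> 52) & 0x7FF           # 11 bits, stored with bias 1023
fraction = bits & ((1 << 52) - 1)         # 52 bits; the leading 1 is not stored

# 5.25 = 1.3125 * 2**2, so the unbiased exponent is 2 and the full
# mantissa (with the implicit leading bit restored) is 1.3125.
print(sign, exponent - 1023, 1 + fraction / 2.0**52)
```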
• The IEEE arithmetic standard rounds towards the nearest element of F.
• (If the result is exactly halfway between two elements of F, it is rounded towards the one whose least significant bit equals 0 – rounding towards the closest even number.)
IEEE subnormal numbers – unnormalised numbers with the minimal possible exponent.
• They lie between 0 and the smallest normalised floating-point value.
• They guarantee that fl(x − y) (the result of the operation x − y in floating-point arithmetic) is never zero when x ≠ y – avoiding underflow in such situations.
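A sketch of this guarantee in double precision (gradual underflow makes the difference of two distinct nearby numbers representable as a subnormal):

```python
import sys

tiny = sys.float_info.min       # smallest positive normalised double (~2.2e-308)
x = tiny * (1 + 2 ** -52)       # the next representable double after tiny
y = tiny

# x != y, and thanks to subnormals x - y is a nonzero (subnormal) number:
print(x != y, x - y)            # True 5e-324
```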
IEEE symbols Inf and NaN – Inf (±∞), NaN (Not a Number)
• Inf – in case of overflow
– x/±∞ = 0 for arbitrary finite floating-point x
– +∞ + ∞ = +∞, etc.
• NaN is returned when an operation does not have a well-defined finite or infinite value, for example 0/0 or ∞ − ∞
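A brief sketch with Python's math module (note that plain 0.0/0.0 in Python raises ZeroDivisionError instead of returning NaN, so Inf − Inf is used here to produce one):

```python
import math

inf = math.inf

print(1.0 / inf)          # 0.0 : a finite x divided by infinity
print(inf + inf)          # inf

nan = inf - inf           # no well-defined value -> NaN
print(math.isnan(nan))    # True
print(nan == nan)         # False: NaN compares unequal even to itself
```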
G = G + text('Blue - original line -2*x+3', (0.7, 3.5), color='blue')
G = G + text('Red - line fitted to data', (0.3, 0.5), color='red')
show(G)  # note: retype the "'" symbols when copy-pasting code into Sage
4 Python in SC

4.1 Numerical Python (NumPy)
Resulting plot: [figure not shown]
4.1.1 NumPy: making arrays

>>> from numpy import *
>>> n = 4
>>> a = zeros(n) # one-dim. array of length n
>>> print a # str(a), float (C double) is default type
[ 0. 0. 0. 0.]
>>> a # repr(a)
array([ 0., 0., 0., 0.])
>>> p = q = 2
>>> a = zeros((p,q,3)) # p*q*3 three-dim. array
>>> print a
[[[ 0. 0. 0.]
  [ 0. 0. 0.]]

 [[ 0. 0. 0.]
  [ 0. 0. 0.]]]
>>> a.shape # a's dimensions
(2, 2, 3)
4.1.2 NumPy: making float, int, complex arrays

>>> a = zeros(3)
>>> print a.dtype # a's data type
float64
>>> a = zeros(3, int)
>>> print a, a.dtype
[0 0 0] int64
(or int32, depending on architecture)
>>> a = zeros(3, float32) # single precision
>>> print a
[ 0. 0. 0.]
>>> print a.dtype
float32
>>> a = zeros(3, complex); a
array([ 0.+0.j, 0.+0.j, 0.+0.j])
>>> a.dtype
dtype('complex128')
• Given an array a, make a new array of the same dimension and data type:

>>> x = zeros(a.shape, a.dtype)
4.1.3 Array with a sequence of numbers
• linspace(a, b, n) generates n uniformly spaced coordinates, starting with a and ending with b

>>> x = linspace(-5, 5, 11)
>>> print x
[-5. -4. -3. -2. -1. 0. 1. 2. 3. 4. 5.]
• arange works like range

>>> x = arange(-5, 5, 1, float)
>>> print x # upper limit 5 is not included
[-5. -4. -3. -2. -1. 0. 1. 2. 3. 4.]
4.1.4 Warning: arange is dangerous
• arange’s upper limit may or may not be included (due to round-off errors)
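A sketch of the pitfall and a safer alternative (the exact endpoint behaviour of arange with a float step depends on rounding, so the element count below is deliberately not pinned down):

```python
from numpy import arange, linspace

# With a float step, the element count depends on (stop - start) / step,
# which is sensitive to round-off:
a = arange(0.1, 0.4, 0.1)
print(len(a))              # may be 3 or 4, depending on rounding

# linspace fixes the number of points explicitly, so it is predictable:
b = linspace(0.1, 0.4, 4)
print(len(b))              # always 4
```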
4.1.5 Array construction from a Python list
array(list, [datatype]) generates an array from a list:

>>> pl = [0, 1.2, 4, -9.1, 5, 8]
>>> a = array(pl)
• The array elements are of the simplest possible type:

>>> z = array([1, 2, 3])
>>> print z # int elements possible
[1 2 3]
>>> z = array([1, 2, 3], float)
>>> print z
[ 1. 2. 3.]

• A two-dim. array from two one-dim. lists:

>>> x = [0, 0.5, 1]; y = [-6.1, -2, 1.2] # Python lists
>>> a = array([x, y]) # form array with x and y as rows
• From array to list:

alist = a.tolist()
4.1.6 From “anything” to a NumPy array

• Given an object a,

a = asarray(a)

converts a to a NumPy array (if possible/necessary)

• Arrays can be ordered as in C (default) or Fortran:

a = asarray(a, order='Fortran')
isfortran(a) # returns True if a's order is Fortran
• Use asarray to, e.g., allow flexible arguments in functions: