Numerical Analysis, lecture 11: Approximation
(textbook sections 9.1-3)
• Problem formulation
• Least squares fitting
Numerical Analysis, lecture 11, slide 2
How to approximate f by a simple continuous function? (p. 261-263)

We want to determine (the parameters of) f* so that f* is “close to” f.

two curve-fitting problems:
• discrete: fit f* to sampled values of f
• continuous: fit f* to the function f itself
We’ll need a few abstract concepts (p. 264)

f is an element of a linear space:
• R^m, the space of m-dimensional vectors
• C[a,b], the space of continuous functions on [a,b]

f* is an element of an (n+1)-dimensional subspace:
• span{φ_0,…,φ_n} ⊆ R^m (n < m): linear combinations of a set of n+1 vectors
• span{φ_0,…,φ_n} ⊆ C[a,b]: linear combinations of a set of n+1 functions,
  e.g. Π_n = polynomials of degree ≤ n, or Π_n|_G = polynomials evaluated at G = {x_1,…,x_m}
The norm of a linear space element (p. 265-266)

4 axioms:  ‖f‖ ≥ 0;  ‖f‖ = 0 ⇒ f = 0;  ‖αf‖ = |α|·‖f‖;  ‖f + g‖ ≤ ‖f‖ + ‖g‖

euclidean norm on C[a,b]:  ‖f‖ = ( ∫_a^b w(x) (f(x))² dx )^{1/2}, where w > 0 in (a,b)
(usually we use w(x) ≡ 1)

euclidean norm on R^m:  ‖f‖ = ( Σ_{i=1}^m w_i f_i² )^{1/2}, where w_i > 0 for i = 1,…,m
(usually we use w_i ≡ 1)

approximation problem: Find f* in the subspace that minimizes ‖f − f*‖
The euclidean norm is associated to a scalar product (p. 267, 271)

3 axioms: A scalar product is a real-valued function of f and g [denoted (f,g)] such that
(f,g) = (g,f),  (f, αh + βg) = α(f,h) + β(f,g),  f ≠ 0 ⇒ (f,f) > 0

a scalar product in C[a,b]:  (f,g) = ∫_a^b w(x) f(x) g(x) dx

a scalar product in R^m:  (f,g) = Σ_{i=1}^m w_i f_i g_i = fᵀWg,
where W is the diagonal matrix with w_i on the diagonal

2 facts:
• The euclidean norm ‖f‖ is defined as √(f,f)
• (f,g) = 0 ⇒ ‖f + g‖² = ‖f‖² + ‖g‖²  (pythagorean law)
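These definitions are easy to verify numerically. A quick sketch in Python (the weights and vectors below are made-up illustrations, not from the lecture):

```python
import numpy as np

def scalar_product(f, g, w):
    """Discrete weighted scalar product (f, g) = sum_i w_i f_i g_i."""
    return float(np.sum(w * f * g))

def norm(f, w):
    """Euclidean norm induced by the scalar product: ||f|| = sqrt((f, f))."""
    return np.sqrt(scalar_product(f, f, w))

w = np.array([0.5, 1.0, 1.0, 1.0, 0.5])    # positive weights
f = np.array([1.0, 2.0, 0.0, -1.0, 3.0])
g = np.array([2.0, -0.5, 1.0, 0.0, 0.0])   # chosen so that (f, g) = 0

# matrix form f' W g agrees with the weighted sum
assert np.isclose(scalar_product(f, g, w), f @ np.diag(w) @ g)
# pythagorean law: (f, g) = 0  =>  ||f + g||^2 = ||f||^2 + ||g||^2
assert np.isclose(norm(f + g, w)**2, norm(f, w)**2 + norm(g, w)**2)
```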
The best fit’s error is orthogonal to the subspace (p. 269-270)

theorem: f* is the unique element of span{φ_0,…,φ_n} that is closest to f
if and only if (f − f*, φ_k) = 0 for k = 0,1,2,…,n.

proof (⇐): g = Σ_{j=0}^n c_j φ_j  ⇒  (f − f*, f* − g) = 0
⇒ ‖f − g‖² = ‖(f − f*) + (f* − g)‖² = ‖f − f*‖² + ‖f* − g‖² ≥ ‖f − f*‖²

proof (⇒): 0 = ∂/∂c_k ‖f − f*‖² = ∂/∂c_k ‖f − Σ_{j=0}^n c_j φ_j‖² = 2(f* − f, φ_k)
The approximation’s functions should be linearly independent (p. 268-269)

linear independence definition:  Σ_{j=0}^n c_j φ_j = 0  ⇒  c_0 = ⋯ = c_n = 0

the standard polynomial basis set is linearly independent:
If φ_i(x) = x^i (x ∈ [a,b]) then {φ_0, φ_1,…, φ_n} is a linearly independent set in C[a,b]

the sampled standard polynomial basis set is linearly independent:
If φ_i(x) = x^i (x ∈ G = {x_1,…,x_m}) and n < |G|,
then {φ_0|_G, φ_1|_G,…, φ_n|_G} is a linearly independent set of R^m vectors
(|G| is the number of distinct x_i)
Linear independence ensures that the normal equations have a unique solution (p. 270-271)

theorem: If φ_0,…,φ_n are linearly independent then there is a unique [c_0,…,c_n]
such that (f − Σ_{j=0}^n c_j φ_j, φ_k) = 0 for k = 0,1,2,…,n.

proof: 0 = (f − Σ_{j=0}^n c_j φ_j, φ_k) = (f, φ_k) − Σ_{j=0}^n c_j (φ_j, φ_k)

normal equations:

  [ (φ_n,φ_n)  ⋯  (φ_0,φ_n) ] [c_n]   [ (f,φ_n) ]
  [     ⋮      ⋱      ⋮     ] [ ⋮ ] = [    ⋮    ]
  [ (φ_n,φ_0)  ⋯  (φ_0,φ_0) ] [c_0]   [ (f,φ_0) ]

The normal equations’ coefficient matrix is non-singular when the φ’s are linearly independent:
if the matrix maps some c to zero, then
Σ_k c_k Σ_j c_j (φ_j, φ_k) = ‖Σ_{j=0}^n c_j φ_j‖² = 0  ⇒  Σ_{j=0}^n c_j φ_j = 0  ⇒  c_0 = ⋯ = c_n = 0
The least squares approximation can be found by solving the normal equations (p. 272)

example (p. 272):  f ∈ C[0,1], f(x) = sin(πx/2), f* ∈ Π_1, w ≡ 1

i.e. “find c_0 and c_1 to minimize ‖f − c_0 φ_0 − c_1 φ_1‖² = ∫_0^1 ( sin(πx/2) − c_0 − c_1 x )² dx”

normal equations:

  [ 1/3  1/2 ] [c_1]   [ 4/π² ]        [c_1]   [ 1.0437 ]
  [ 1/2   1  ] [c_0] = [ 2/π  ]   ⇒   [c_0] = [ 0.1148 ]

⇒ f*(x) = 0.1148 + 1.0437x
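The 2×2 system can be reproduced in a few lines; a sketch in Python (the right-hand-side entries 4/π² and 2/π are the closed-form integrals (f,φ_1) and (f,φ_0) from the slide):

```python
import numpy as np

# Gram matrix of phi_1 = x, phi_0 = 1 on [0,1] with w ≡ 1:
# (x, x) = 1/3, (x, 1) = 1/2, (1, 1) = 1
G = np.array([[1/3, 1/2],
              [1/2, 1.0]])
# (f, phi_1) = ∫ x sin(pi x/2) dx = 4/pi^2,  (f, phi_0) = ∫ sin(pi x/2) dx = 2/pi
rhs = np.array([4/np.pi**2, 2/np.pi])

c1, c0 = np.linalg.solve(G, rhs)
print(round(c1, 4), round(c0, 4))   # 1.0437 0.1148
```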
The discrete least squares approximation can be found by solving the normal equations (p. 273)

normal equation:  AᵀWA c = AᵀW f,  where

  c = [c_n; … ; c_0],  f = [f(x_1); … ; f(x_m)],  W = diag(w_1,…,w_m),

  A = [ φ_n(x_1)  ⋯  φ_0(x_1) ]
      [    ⋮             ⋮    ]
      [ φ_n(x_m)  ⋯  φ_0(x_m) ]

(this is interpolation when m = n + 1)

example:  G = {0, 1/4, 1/2, 3/4, 1},  f ∈ C[0,1]|_G,  f* ∈ Π_1|_G,  w_{1:5} = (1/4)[1/2, 1, 1, 1, 1/2]

i.e. “find c_1, c_0 to minimize Σ_{i=1}^5 w_i ( sin(πx_i/2) − c_1 x_i − c_0 )²”

  f = [sin(0); sin(π/8); sin(π/4); sin(3π/8); sin(π/2)],  W = (1/4) diag[1/2, 1, 1, 1, 1/2],

  A = [  0   1 ]
      [ 1/4  1 ]
      [ 1/2  1 ]
      [ 3/4  1 ]
      [  1   1 ]

  [ 0.3438  0.5 ] [c_1]   [ 0.4105 ]        [c_1]   [ 1.0275 ]
  [  0.5     1  ] [c_0] = [ 0.6284 ]   ⇒   [c_0] = [ 0.1147 ]

⇒ f*(x) = 0.1147 + 1.0275x
The discrete least squares polynomial fit in Matlab

here’s the previous slide’s example:

>> m = 5;
>> n = 1;
>> x = linspace(0,1,m)';
>> f = sin(pi*x/2);
>> A = vander(x); A = A(:,end-n:end)
>> w = [0.5 ones(1,m-2) 0.5]'; W = diag(w);
>> c = (A'*W*A)\(A'*W*f)
c =
    1.0275
    0.1147

f*(x) = 1.0275x + 0.1147
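For readers without Matlab, the same computation transcribed to NumPy (a sketch; np.vander mirrors Matlab’s vander, with the highest power first):

```python
import numpy as np

m, n = 5, 1
x = np.linspace(0, 1, m)
f = np.sin(np.pi * x / 2)
A = np.vander(x, n + 1)                 # columns [x, 1]
w = np.concatenate(([0.5], np.ones(m - 2), [0.5]))
W = np.diag(w)

# normal equations  A' W A c = A' W f
c = np.linalg.solve(A.T @ W @ A, A.T @ W @ f)
print(np.round(c, 4))                   # [1.0275 0.1147]
```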
what happened, what’s next

• we seek to minimize ‖f − f*‖
• continuous and discrete euclidean norms
• ‖f − f*‖ is minimized when (f − f*, φ_k) = 0 for k = 0,…,n
• the discrete polynomial normal equation is nonsingular if degree < number of distinct x-values

Next lecture: orthogonal polynomials (§9.4-6)
Numerical Analysis, lecture 12: Approximation II
(textbook sections 9.4-6)
• orthogonal functions
• orthogonal polynomials

[figure: the polynomials P0(x), P1(x), P2(x), P3(x) on −1 ≤ x ≤ 1]
Numerical Analysis, lecture 12, slide 2
The best least-squares fit is determined by orthogonality (normal equations)

approximation problem: Given f, find f* ∈ span{φ_0,…,φ_n}
that minimizes ‖f − f*‖ = √(f − f*, f − f*),
where (f,g) = ∫_a^b w(x) f(x) g(x) dx  or  (f,g) = Σ_{i=1}^m w_i f_i g_i

its solution: If φ_0,…,φ_n are linearly independent then f* = Σ_{j=0}^n c_j φ_j
where (f − Σ_{j=0}^n c_j φ_j, φ_k) = 0, i.e.

  [ (φ_n,φ_n)  ⋯  (φ_0,φ_n) ] [c_n]   [ (f,φ_n) ]
  [     ⋮      ⋱      ⋮     ] [ ⋮ ] = [    ⋮    ]
  [ (φ_n,φ_0)  ⋯  (φ_0,φ_0) ] [c_0]   [ (f,φ_0) ]
Discrete LS-fitting with repeated x-values

example:

  k    x_k    f_k
  1    0.8     7.97
  2    1.6    10.2
  3    2.4    14.2
  4    2.4    14.1
  5    3.2    16.0
  6    4.0    21.2
  7    4.0    21.2

f* ∈ Π_1|_G,  w_{1:7} = [1, 1, …, 1]

  [ 56.96  18.4 ] [c_1]   [ 311.416 ]
  [ 18.4    7   ] [c_0] = [ 104.870 ]

c_1 = 4.1606, c_0 = 4.0449  ⇒  f*(x) = 4.1606x + 4.0449

f need not be a “sampled C[a,b] function” — the normal equation is nonsingular if |G| > n
(i.e. if the number of distinct nodes exceeds the degree of the fitting polynomial)
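A sketch in Python confirming that the repeated nodes cause no trouble here (|G| = 5 > 1):

```python
import numpy as np

xk = np.array([0.8, 1.6, 2.4, 2.4, 3.2, 4.0, 4.0])   # x = 2.4 and 4.0 repeated
fk = np.array([7.97, 10.2, 14.2, 14.1, 16.0, 21.2, 21.2])

A = np.vander(xk, 2)                    # columns [x, 1], unit weights
c1, c0 = np.linalg.solve(A.T @ A, A.T @ fk)
print(round(c1, 4), round(c0, 4))       # 4.1606 4.0449
```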
The standard polynomial basis gives an ill-conditioned Gram matrix (p. 275)

example (p. 275):  f ∈ C[0,1], f* ∈ Π_n, w ≡ 1
i.e. find c_0,…,c_n to minimize ∫_0^1 ( c_0 + ⋯ + c_n x^n − f(x) )² dx

  (φ_i, φ_j) = ∫_0^1 x^i x^j dx = 1/(i + j + 1)    (the Hilbert matrix)

>> hilb(4)
ans =
    1.0000    0.5000    0.3333    0.2500
    0.5000    0.3333    0.2500    0.2000
    0.3333    0.2500    0.2000    0.1667
    0.2500    0.2000    0.1667    0.1429
>> cond(hilb(4))
ans = 1.5514e+04
>> cond(hilb(10))
ans = 1.6025e+13
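The same experiment outside Matlab; a sketch that builds the Gram matrix 1/(i+j+1) directly:

```python
import numpy as np

def hilbert(n):
    # Gram matrix of the standard basis on [0,1]: (phi_i, phi_j) = 1/(i+j+1)
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

print(f"{np.linalg.cond(hilbert(4)):.4e}")    # ~1.55e+04
print(f"{np.linalg.cond(hilbert(10)):.4e}")   # ~1.6e+13
```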
The standard polynomial basis functions are “nearly” linearly dependent

[figure: the monomials x^0, x^1, x^2, x^3 on 0 ≤ x ≤ 1]
Least-squares approximation is best done with orthogonal basis functions (p. 275)

orthogonality:  i ≠ j ⇒ (φ_i, φ_j) = 0

best LS approximation: If φ_0,…,φ_n are nonzero & orthogonal then f* = Σ_{j=0}^n c_j φ_j,
where c_j = (f, φ_j)/(φ_j, φ_j)   (the Fourier coefficients)

normal eqns (now diagonal):

  [ (φ_n,φ_n)   0      ⋯       0     ] [c_n]   [ (f,φ_n) ]
  [     0       ⋱               ⋮    ] [ ⋮ ]   [    ⋮    ]
  [     ⋮          (φ_1,φ_1)    0    ] [c_1] = [ (f,φ_1) ]
  [     0      ⋯      0    (φ_0,φ_0) ] [c_0]   [ (f,φ_0) ]
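With an orthogonal basis each coefficient is computed independently, with no linear system to solve. A small discrete sketch (the basis below, 1 and x − x̄, is my illustration, orthogonal under unit weights; it is not from the lecture):

```python
import numpy as np

x = np.linspace(0, 1, 5)
f = np.sin(np.pi * x / 2)

phi0 = np.ones_like(x)
phi1 = x - x.mean()        # (phi0, phi1) = sum_i (x_i - mean) = 0

# Fourier coefficients c_j = (f, phi_j) / (phi_j, phi_j), no system to solve
c = np.array([f @ p / (p @ p) for p in (phi0, phi1)])

# same minimizer as the full least-squares solve
ref, *_ = np.linalg.lstsq(np.column_stack([phi0, phi1]), f, rcond=None)
assert np.allclose(c, ref)
```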
Orthogonal basis = set of smallest monic polynomials = Gram-Schmidt method (p. 283-284)

(A set {P_0,…,P_n} of polynomials with deg(P_k) = k is orthogonal if and only if each P_k is orthogonal to all polynomials of degree < k.)

theorem (p. 283): For k = 0,1,…,n, let P_k be the smallest monic polynomial of degree k.
Then {P_0,…,P_n} is orthogonal.

proof: P_k is the smallest monic polynomial of degree k
⇒ P_k is the monic polynomial of degree k that minimizes ‖x^k − (x^k − P_k)‖
⇒ x^k − P_k is the polynomial of degree ≤ k−1 that is closest to x^k
⇒ P_k = x^k − (x^k − P_k) is orthogonal to every polynomial of degree ≤ k−1

The polynomial of degree ≤ k−1 that is closest to x^k is Σ_{j=0}^{k−1} (P_j, x^k)/(P_j, P_j) · P_j   (Gram-Schmidt),
and so P_k = x^k − Σ_{j=0}^{k−1} (P_j, x^k)/(P_j, P_j) · P_j
Orthogonal polynomials can be found using the Gram-Schmidt procedure

If φ_0,…,φ_n are linearly independent then P_0,…,P_n defined by

  P_k = φ_k − Σ_{j=0}^{k−1} (P_j, φ_k)/(P_j, P_j) · P_j    (k = 0,1,…,n)

are orthogonal and span{P_0,…,P_n} = span{φ_0,…,φ_n}.

example: For the inner product (f,g) = ∫_{−1}^1 f(x) g(x) dx,

P_0(x) = 1

P_1(x) = x − c_0 P_0(x),  c_0 = (x, P_0)/(P_0, P_0) = 0  ⇒  P_1(x) = x

P_2(x) = x² − ( c_0 P_0(x) + c_1 P_1(x) ),
c_0 = (x², P_0)/(P_0, P_0) = 1/3,  c_1 = (x², P_1)/(P_1, P_1) = 0
⇒ P_2(x) = x² − 1/3

P_3(x) = x³ − ( c_0 P_0(x) + c_1 P_1(x) + c_2 P_2(x) ),
c_0 = (x³, P_0)/(P_0, P_0) = 0,  c_1 = (x³, P_1)/(P_1, P_1) = (2/5)/(2/3) = 3/5,  c_2 = (x³, P_2)/(P_2, P_2) = 0
⇒ P_3(x) = x³ − (3/5)x
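The example can be automated; a sketch of the same Gram-Schmidt computation using NumPy's polynomial arithmetic (coefficients stored lowest power first):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def inner(p, q):
    # (p, q) = integral of p(x) q(x) over [-1, 1]
    r = P.polyint(P.polymul(p, q))
    return P.polyval(1.0, r) - P.polyval(-1.0, r)

def monic_orthogonal(n):
    """Gram-Schmidt on 1, x, ..., x^n; returns monic orthogonal polynomials."""
    polys = []
    for k in range(n + 1):
        pk = np.zeros(k + 1)
        pk[-1] = 1.0                      # start from x^k
        for pj in polys:
            pk = P.polysub(pk, inner(pj, pk) / inner(pj, pj) * pj)
        polys.append(pk)
    return polys

polys = monic_orthogonal(3)
print(polys[2])    # coefficients of P2 = x^2 - 1/3
print(polys[3])    # coefficients of P3 = x^3 - (3/5) x
```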
Legendre polynomials are orthogonal on [-1,1] with unit weight (p. 281-283)

Orthogonality:

  ∫_{−1}^1 P_k(x) P_n(x) dx = 0 for k ≠ n,  2/(2n+1) for k = n

Rodrigues’ formula:

  P_n(x) = 1/(2^n · n!) · d^n/dx^n (x² − 1)^n

Recurrence:

  P_0(x) = 1,  P_1(x) = x,
  P_{n+1}(x) = (2n+1)/(n+1) · x P_n(x) − n/(n+1) · P_{n−1}(x),  n = 1, 2, …

First five Legendre polynomials:

  P_0(x) = 1,  P_1(x) = x,  P_2(x) = (3x² − 1)/2,
  P_3(x) = (5x³ − 3x)/2,  P_4(x) = (35x⁴ − 30x² + 3)/8
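A quick check that the recurrence reproduces the listed polynomials; a Python sketch (coefficients lowest power first):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def legendre_polys(nmax):
    """Build P_0..P_nmax from the three-term recurrence."""
    polys = [np.array([1.0]), np.array([0.0, 1.0])]     # P0 = 1, P1 = x
    for n in range(1, nmax):
        term = (2*n + 1) / (n + 1) * P.polymulx(polys[n])
        polys.append(P.polysub(term, n / (n + 1) * polys[n - 1]))
    return polys[:nmax + 1]

polys = legendre_polys(4)
print(polys[3])    # coefficients of P3 = (5x^3 - 3x)/2
print(polys[4])    # coefficients of P4 = (35x^4 - 30x^2 + 3)/8
```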
Orthogonal polynomials can also be generated by a three-term recurrence formula (p. 277-279)

formula (p. 277): Given a scalar product and the leading coefficients A_0, A_1, …, the polynomials P_k(x) = A_k x^k + ⋯ constructed by the recurrence

  P_0(x) = A_0
  P_1(x) = (α_0 x − β_0) P_0(x)
  P_{k+1}(x) = (α_k x − β_k) P_k(x) − γ_k P_{k−1}(x),  k = 1, 2, …,

where

  α_k = A_{k+1}/A_k,                                       k = 0, 1, 2, …,
  β_k = α_k (x P_k, P_k)/(P_k, P_k),                       k = 0, 1, 2, …,
  γ_k = α_k (P_k, P_k) / ( α_{k−1} (P_{k−1}, P_{k−1}) ),   k = 1, 2, …,

form an orthogonal system. In the discrete case, with the grid x_1, x_2, …, x_m, the last polynomial in the sequence is P_{m−1}.

Transformation of variable between a ≤ x ≤ b and −1 ≤ t ≤ 1:

  t = (2x − (b + a))/(b − a),  x = (b − a)t/2 + (a + b)/2

A_k = 1 gives monic polynomials.
Discretely-orthogonal polynomials can be computed using the 3-term recurrence (p. 279)

example: Find monic polynomials of degree 0, 1, 2 that are orthogonal with respect to

  (f,g) = ½f(0)g(0) + f(¼)g(¼) + f(½)g(½) + f(¾)g(¾) + ½f(1)g(1)

and use them to approximate f(x) = sin(πx/2) on [0,1].

P_0 = 1

(P_0,P_0) = 4,  (xP_0,P_0) = 2,  β_0 = 1/2,  P_1(x) = x − 1/2

(P_1,P_1) = 3/8,  (xP_1,P_1) = 3/16,  β_1 = 1/2,  γ_1 = 3/32,  P_2(x) = (x − 1/2)(x − 1/2) − 3/32

c_0 = (f,P_0)/(P_0,P_0) = 0.6284,  c_1 = (f,P_1)/(P_1,P_1) = 1.0275,  c_2 = (f,P_2)/(P_2,P_2) = −0.8248

f*(x) = 0.6284 + 1.0275(x − 0.5) − 0.8248( (x − 0.5)² − 0.09375 )
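The hand computation above can be replayed step by step; a Python sketch of the same discrete inner product:

```python
import numpy as np

x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
w = np.array([0.5, 1.0, 1.0, 1.0, 0.5])
f = np.sin(np.pi * x / 2)

def ip(u, v):
    # (u, v) = (1/2)u(0)v(0) + u(1/4)v(1/4) + ... + (1/2)u(1)v(1)
    return float(np.sum(w * u * v))

P0 = np.ones_like(x)
beta0 = ip(x * P0, P0) / ip(P0, P0)          # = 1/2
P1 = (x - beta0) * P0
beta1 = ip(x * P1, P1) / ip(P1, P1)          # = 1/2
gamma1 = ip(P1, P1) / ip(P0, P0)             # = 3/32
P2 = (x - beta1) * P1 - gamma1 * P0

c = [ip(f, Pk) / ip(Pk, Pk) for Pk in (P0, P1, P2)]
print(np.round(c, 4))    # [0.6284 1.0275 -0.8248]
```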
Discretely-orthogonal polynomials in Matlab (p. 279-280)

function [b,g,c] = orthpolfit(x,y,w,n)
x = x(:); y = y(:); w = w(:);
m = length(x);
b = zeros(1,max(1,n)); g = b;
P = [zeros(m,1) ones(m,1)];
s = [sum(w) zeros(1,n)];
c = [sum(w.*y)/s(1) zeros(1,n)];
for k = 1:n
    b(k) = sum( w .* x .* P(:,2).^2 ) / s(k);
    if k == 1
        g(k) = 0;
    else
        g(k) = s(k)/s(k-1);
    end
    P = [P(:,2) (x-b(k)).*P(:,2)-g(k)*P(:,1)];
    s(k+1) = sum( w .* P(:,2).^2 );
    c(k+1) = sum( w .* P(:,2) .* y ) / s(k+1);
end

>> m = 5;
>> x = linspace(0,1,m);
>> f = sin(pi*x/2);
>> w = [0.5 ones(1,m-2) .5];
>> [b,g,c] = orthpolfit(x,f,w,2)
b = 0.5000    0.5000
g = 0    0.09375
c = 0.6284    1.0275   -0.8248

f*(x) = 0.6284 + 1.0275(x − 0.5) − 0.8248( (x − 0.5)² − 0.09375 )
The linear combination of orthogonal polynomials can be efficiently computed (p. 280-281)

Clenshaw’s algorithm:

  c_0 + Σ_{k=1}^n c_k P_k(x) = u_0,  where u_n = c_n,  u_{n−1} = c_{n−1} + (x − β_{n−1}) u_n,
  and u_j = c_j + (x − β_j) u_{j+1} − γ_{j+1} u_{j+2}  for j = n−2, n−3, …, 0.

function u = orthpolval(b,g,c,x)
n = length(c)-1;
u = c(end)*ones(size(x));
if n > 0
    ujp1 = u;
    u = c(end-1) + (x-b(n)).*ujp1;
    for j = n-2:-1:0
        ujp2 = ujp1;
        ujp1 = u;
        u = c(j+1) + (x-b(j+1)).*ujp1 - g(j+2)*ujp2;
    end
end

example (cont’d):

>> t = linspace(0,1,50);
>> plot(t,sin(pi*t/2)-orthpolval(b,g,c,t))

[figure: the error sin(πt/2) − f*(t) on [0,1], staying within about ±0.02]
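For comparison, the same Clenshaw recurrence as a Python sketch, checked against direct evaluation of the slide-11 example (β = [1/2, 1/2], γ_1 = 3/32, c = [0.6284, 1.0275, −0.8248]):

```python
import numpy as np

def clenshaw(b, g, c, x):
    """Evaluate sum_k c[k] P_k(x) for monic P_k defined by
    P_{k+1}(x) = (x - b[k]) P_k(x) - g[k] P_{k-1}(x)   (g[0] unused)."""
    x = np.asarray(x, dtype=float)
    n = len(c) - 1
    u = np.full_like(x, c[n])
    if n > 0:
        ujp1 = u
        u = c[n - 1] + (x - b[n - 1]) * ujp1
        for j in range(n - 2, -1, -1):
            ujp2, ujp1 = ujp1, u
            u = c[j] + (x - b[j]) * ujp1 - g[j + 1] * ujp2
    return u

b = [0.5, 0.5]
g = [0.0, 3/32]
c = [0.6284, 1.0275, -0.8248]
t = np.linspace(0, 1, 9)
direct = c[0] + c[1]*(t - 0.5) + c[2]*((t - 0.5)**2 - 3/32)
assert np.allclose(clenshaw(b, g, c, t), direct)
```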
what happened, what’s next

• least-squares approximation is best done with orthogonal basis functions
  ‣ the standard polynomial basis gives a normal equation with an ill-conditioned full matrix
• orthogonal polynomials can be found using
  ‣ Gram-Schmidt, or
  ‣ the 3-term recurrence

Next lecture: solving differential equations (§10.1-4)