
Solution Manual of Adaptive Filter Theory by Simon Haykin 4th edition eBook

Feb 17, 2022


https://gioumeh.com/product/adaptive-filter-theory-solution/

-------------------------------------------------------------------------
Authors: Simon Haykin
 Published: Prentice Hall, 2001
 Edition: 4th
 Pages: 338
 Type: pdf
 Size: 1MB
The autocorrelation functions of the processes u(n) and y(n) are defined by

    r_u(k) = E[u(n) u*(n-k)]                                   (1)
    r_y(k) = E[y(n) y*(n-k)]                                   (2)

and the process y(n) is itself defined by

    y(n) = u(n+a) - u(n-a)                                     (3)

Hence, substituting Eq. (3) into (2), and then using Eq. (1), we get

    r_y(k) = E[(u(n+a) - u(n-a)) (u*(n+a-k) - u*(n-a-k))]
           = 2 r_u(k) - r_u(2a+k) - r_u(-2a+k)

1.2 We know that the correlation matrix R is Hermitian; that is,

    R^H = R

Given that the inverse matrix R^{-1} exists, we may write

    R^{-1} R = I

where I is the identity matrix. Taking the Hermitian transpose of both sides:

    R^H (R^{-1})^H = I

Hence,

    (R^{-1})^H = (R^H)^{-1} = R^{-1}

That is, the inverse matrix R^{-1} is itself Hermitian.

1.3 For the case of a two-by-two matrix, we may write

    R_u = R + σ^2 I
        = [r_11, r_12; r_21, r_22] + [σ^2, 0; 0, σ^2]

The condition for the positive definiteness, and hence nonsingularity, of R_u is

    (r_11 + σ^2)(r_22 + σ^2) - r_12 r_21 > 0

With r_12 = r_21 for real data, this condition reduces to

    (r_11 + σ^2)(r_22 + σ^2) - r_12^2 > 0

Since this is quadratic in σ^2, we may impose the following condition on σ^2 for nonsingularity of R_u:

    σ^2 > (1/2) [sqrt((r_11 - r_22)^2 + 4 r_12^2) - (r_11 + r_22)]
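The threshold on σ^2 can be checked numerically. The following sketch uses illustrative values for r_11, r_12, r_22 (my assumptions, not values from the text) and verifies that det(R + σ^2 I) vanishes exactly at the positive root of the quadratic and is positive above it:

```python
import numpy as np

# Illustrative 2-by-2 symmetric R (assumed values, not from the text)
r11, r12, r22 = 1.0, 2.0, 0.5
r21 = r12  # real data: r12 = r21

# Positive root of s^2 + (r11 + r22)s + (r11*r22 - r12^2) = 0, with s = sigma^2
threshold = 0.5 * (np.sqrt((r11 - r22)**2 + 4 * r12**2) - (r11 + r22))

def det_Ru(sigma2):
    """Determinant of R_u = R + sigma^2 I for the 2-by-2 case."""
    R = np.array([[r11, r12], [r21, r22]])
    return np.linalg.det(R + sigma2 * np.eye(2))

print(det_Ru(threshold + 0.1) > 0)     # above the threshold, R_u is nonsingular
print(abs(det_Ru(threshold)) < 1e-9)   # determinant vanishes at the root
```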
(Positive definiteness is stronger than nonnegative definiteness.)
But the matrix R is singular, because its determinant is zero.
Hence, it is possible for a matrix to be nonnegative definite and yet singular.
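A concrete instance of this point (my example; the specific matrix used in the problem is not reproduced here): R = [1, 1; 1, 1] satisfies a^T R a = (a_1 + a_2)^2 ≥ 0 for every a, yet it has determinant zero.

```python
import numpy as np

# Example matrix (an assumption for illustration, not the problem's matrix)
R = np.array([[1.0, 1.0],
              [1.0, 1.0]])

# Nonnegative definite: all eigenvalues are >= 0 (here they are 2 and 0)
eigvals = np.linalg.eigvalsh(R)
print(np.all(eigvals >= -1e-12))        # True

# Singular: the determinant is zero, so R has no inverse
print(np.isclose(np.linalg.det(R), 0))  # True
```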
1.5 (a) We express the (M+1)-by-(M+1) correlation matrix R_{M+1} in the partitioned form

    R_{M+1} = [r(0), r^H; r, R_M]                              (1)

Let

    R_{M+1}^{-1} = [a, b^H; b, C]                              (2)

where the scalar a, the vector b, and the matrix C are to be determined. Multiplying (1) by (2):

    R_{M+1} R_{M+1}^{-1} = I_{M+1}

where I_{M+1} is the identity matrix. Therefore,

    r(0) a + r^H b = 1                                         (3)
    r a + R_M b = 0                                            (4)
    r b^H + R_M C = I_M                                        (5)
    r(0) b^H + r^H C = 0^T                                     (6)

From Eq. (4):

    b = -a R_M^{-1} r                                          (7)

Substituting Eq. (7) into (3) and solving for a:

    a = 1 / (r(0) - r^H R_M^{-1} r)                            (8)

Correspondingly,

    b^H = -a r^H R_M^{-1}                                      (9)

and, from Eq. (5),

    C = R_M^{-1} + a R_M^{-1} r r^H R_M^{-1}                   (10)

As a check, the results of Eqs. (9) and (10) should satisfy Eq. (6). Indeed,

    r(0) b^H + r^H C = [1 - a (r(0) - r^H R_M^{-1} r)] r^H R_M^{-1} = 0^T

We have thus shown that

    R_{M+1}^{-1} = [0, 0^T; 0, R_M^{-1}] + a [1; -R_M^{-1} r] [1, -r^H R_M^{-1}]

where the scalar a is defined by Eq. (8), and

    b = -a R_M^{-1} r
(b) Alternatively, we may express the correlation matrix R_{M+1} in the partitioned form

    R_{M+1} = [R_M, r^{B*}; r^{BT}, r(0)]                      (11)

Let

    R_{M+1}^{-1} = [D, e; e^H, f]                              (12)

where the matrix D, the vector e, and the scalar f are to be determined. Multiplying (11) by (12):

    R_{M+1} R_{M+1}^{-1} = I_{M+1}

Therefore,

    R_M D + r^{B*} e^H = I_M                                   (13)
    R_M e + r^{B*} f = 0                                       (14)
    r^{BT} e + r(0) f = 1                                      (15)
    r^{BT} D + r(0) e^H = 0^T                                  (16)

From Eq. (14):

    e = -f R_M^{-1} r^{B*}                                     (17)

Substituting Eq. (17) into (15) and solving for f:

    f = 1 / (r(0) - r^{BT} R_M^{-1} r^{B*})                    (18)

Correspondingly,

    e^H = -f r^{BT} R_M^{-1}                                   (19)

and, from Eq. (13),

    D = R_M^{-1} + f R_M^{-1} r^{B*} r^{BT} R_M^{-1}           (20)

As a check, the results of Eqs. (19) and (20) must satisfy Eq. (16). Thus

    r^{BT} D + r(0) e^H = [1 - f (r(0) - r^{BT} R_M^{-1} r^{B*})] r^{BT} R_M^{-1} = 0^T

We have thus shown that

    R_{M+1}^{-1} = [R_M^{-1}, 0; 0^T, 0] + f [-R_M^{-1} r^{B*}; 1] [-r^{BT} R_M^{-1}, 1]

where the scalar f is defined by Eq. (18).
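Both partitioned-inverse formulas can be verified numerically. The sketch below is my construction, not part of the solution: it builds a random Hermitian positive definite matrix (the block identities require only the partitioned structure, not Toeplitz correlation structure) and checks that the assembled blocks reproduce the full inverse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Hermitian positive definite matrix standing in for R_{M+1}
M = 4
A = rng.standard_normal((M + 1, M + 1)) + 1j * rng.standard_normal((M + 1, M + 1))
R = A @ A.conj().T + (M + 1) * np.eye(M + 1)

# --- Partition (a): R = [r(0), r^H; r, R_M] ---
r0 = R[0, 0].real
r = R[1:, 0].reshape(-1, 1)
R_M = R[1:, 1:]
R_M_inv = np.linalg.inv(R_M)

a = 1.0 / (r0 - (r.conj().T @ R_M_inv @ r).item().real)   # Eq. (8)
b = -a * R_M_inv @ r                                      # Eq. (7)
C = R_M_inv + a * R_M_inv @ r @ r.conj().T @ R_M_inv      # Eq. (10)
R_inv_a = np.vstack([np.hstack([np.array([[a]]), b.conj().T]),
                     np.hstack([b, C])])
print(np.allclose(R_inv_a, np.linalg.inv(R)))  # True

# --- Partition (b): R = [R_M, rB; rB^H, r(0)] ---
R_Mb = R[:-1, :-1]
rB = R[:-1, -1].reshape(-1, 1)
r0b = R[-1, -1].real
R_Mb_inv = np.linalg.inv(R_Mb)

f = 1.0 / (r0b - (rB.conj().T @ R_Mb_inv @ rB).item().real)  # Eq. (18)
e = -f * R_Mb_inv @ rB                                       # Eq. (17)
D = R_Mb_inv + f * R_Mb_inv @ rB @ rB.conj().T @ R_Mb_inv    # Eq. (20)
R_inv_b = np.vstack([np.hstack([D, e]),
                     np.hstack([e.conj().T, np.array([[f]])])])
print(np.allclose(R_inv_b, np.linalg.inv(R)))  # True
```

This is the standard Schur-complement block inversion; a and f are the reciprocals of the two Schur complements.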
1.6 (a) We express the difference equation describing the first-order AR process u(n) as

    u(n) = v(n) + w_1 u(n-1)

where w_1 = -a_1. Solving this equation by repeated substitution, we get

    u(n) = v(n) + w_1 v(n-1) + w_1^2 u(n-2)
         = v(n) + w_1 v(n-1) + w_1^2 v(n-2) + w_1^3 u(n-3)
         = ...
         = v(n) + w_1 v(n-1) + w_1^2 v(n-2) + ... + w_1^{n-1} v(1)   (1)

where we have used the initial condition u(0) = 0. Taking the expected value of both sides of Eq. (1) and using

    E[v(n)] = μ for all n,

we get the geometric series

    E[u(n)] = μ (1 + w_1 + w_1^2 + ... + w_1^{n-1})

This result shows that if |w_1| ≥ 1, then E[u(n)] is a function of time n. Accordingly, the AR process u(n) is not stationary. If, however, the AR parameter satisfies the condition

    |a_1| < 1 or, equivalently, |w_1| < 1,

then

    E[u(n)] ≈ μ / (1 - w_1) for large n

Under this condition, we say that the AR process u(n) is asymptotically stationary to order one.
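The geometric-series result for the mean can be checked deterministically: taking expectations of the recursion gives m(n) = μ + w_1 m(n-1) with m(0) = 0, whose solution is exactly the series above. A minimal sketch, with μ and w_1 as assumed example values:

```python
# Mean recursion m(n) = mu + w1*m(n-1), m(0) = 0, implied by taking
# expectations of u(n) = v(n) + w1*u(n-1).
mu, w1 = 2.0, -0.8   # example values (assumed for illustration); |w1| < 1

m = 0.0
for n in range(1, 201):
    m = mu + w1 * m

# Closed form of the geometric series mu*(1 + w1 + ... + w1^(n-1))
closed_form = mu * (1 - w1**200) / (1 - w1)
limit = mu / (1 - w1)   # asymptotic mean for |w1| < 1

print(abs(m - closed_form) < 1e-12)  # recursion matches the series sum
print(abs(m - limit) < 1e-12)        # for large n, the mean settles at mu/(1 - w1)
```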
(b) The variance of the white noise process v(n) is

    var[v(n)] = σ_v^2

and the variance of u(n) is, by definition,

    var[u(n)] = E[(u(n) - E[u(n)])^2]                           (2)

Substituting Eq. (1) into (2), and recognizing that for the white noise process the samples v(n) are uncorrelated with each other, we get

    var[u(n)] = σ_v^2 (1 + w_1^2 + w_1^4 + ... + w_1^{2(n-1)})  (3)

When |a_1| < 1 or, equivalently, |w_1| < 1, then

    var[u(n)] ≈ σ_v^2 / (1 - w_1^2) for large n

(c) The autocorrelation function of the AR process u(n) equals E[u(n)u(n-k)]. Substituting Eq. (1) into this formula, and using Eq. (3), we get

    r(k) = E[u(n) u(n-k)]
         = σ_v^2 w_1^{|k|} (1 + w_1^2 + w_1^4 + ...)

For |a_1| < 1 or |w_1| < 1, we may therefore express this autocorrelation function as

    r(k) ≈ σ_v^2 w_1^{|k|} / (1 - w_1^2) for large n
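The variance series in Eq. (3) can likewise be checked deterministically via the recursion P(n) = σ_v^2 + w_1^2 P(n-1) with P(0) = 0, which sums that series term by term. A minimal sketch with assumed example values:

```python
# Variance recursion P(n) = sigma_v^2 + w1^2 * P(n-1), P(0) = 0,
# summing sigma_v^2 * (1 + w1^2 + w1^4 + ...).
sigma_v2, w1 = 1.5, 0.6   # example values (assumed for illustration); |w1| < 1

P = 0.0
for n in range(1, 201):
    P = sigma_v2 + w1**2 * P

steady_state = sigma_v2 / (1 - w1**2)
print(abs(P - steady_state) < 1e-10)  # for large n, variance settles at the limit

# Asymptotic autocorrelation r(k) ~ sigma_v^2 * w1^|k| / (1 - w1^2):
# successive lags scale by a factor of w1.
r = lambda k: sigma_v2 * w1**abs(k) / (1 - w1**2)
print(abs(r(1) / r(0) - w1) < 1e-12)
```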
Case 1: 0 < a_1 < 1
In this case, w_1 = -a_1 is negative, so r(k) alternates in sign as it decays with increasing lag k.

Case 2: -1 < a_1 < 0
In this case, w_1 = -a_1 is positive, so r(k) decays geometrically with increasing lag k, with no change of sign.
1.7 (a) The second-order AR process u(n) is described by the difference equation

    u(n) = u(n-1) - 0.5 u(n-2) + v(n)

Hence,

    w_1 = 1,   w_2 = -0.5

or, equivalently,

    a_1 = -1,   a_2 = 0.5

(b) Accordingly, we write the Yule-Walker equations as

    [r(0), r(1); r(1), r(0)] [1; -0.5] = [r(1); r(2)]

Solving the first relation for r(1):

    r(0) - 0.5 r(1) = r(1)
    r(1) = (2/3) r(0)                                          (1)

Solving the second relation for r(2):

    r(2) = r(1) - 0.5 r(0) = (1/6) r(0)                        (2)

(c) Since the noise v(n) has zero mean, so will the AR process u(n). Hence,

    var[u(n)] = E[u^2(n)] = r(0)

We know that

    σ_v^2 = r(0) + a_1 r(1) + a_2 r(2)                         (3)

Substituting (1) and (2) into (3), and solving for r(0), we get

    σ_v^2 = r(0) (1 - 2/3 + 1/12) = (5/12) r(0)
    r(0) = 2.4 σ_v^2
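The numbers derived above can be confirmed by plugging them back into the Yule-Walker equations and the noise-variance relation (σ_v^2 is an assumed example value):

```python
import numpy as np

sigma_v2 = 1.0               # example noise variance (assumed)

# Correlations derived above
r0 = 2.4 * sigma_v2          # r(0) = 2.4 sigma_v^2
r1 = (2.0 / 3.0) * r0        # Eq. (1)
r2 = (1.0 / 6.0) * r0        # Eq. (2)

w1, w2 = 1.0, -0.5

# Yule-Walker: [r(0), r(1); r(1), r(0)] [w1; w2] = [r(1); r(2)]
R = np.array([[r0, r1], [r1, r0]])
lhs = R @ np.array([w1, w2])
print(np.allclose(lhs, [r1, r2]))  # True

# Noise-variance relation: sigma_v^2 = r(0) - w1*r(1) - w2*r(2)
print(np.isclose(r0 - w1 * r1 - w2 * r2, sigma_v2))  # True
```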
1.8 By definition,

    var[u(n)] = E[u^2(n)] = r(0)