Hindawi Publishing Corporation
Mathematical Problems in Engineering
Volume 2013, Article ID 601623, 7 pages
http://dx.doi.org/10.1155/2013/601623

Research Article
Subband Adaptive Filtering with $\ell_1$-Norm Constraint for Sparse System Identification

Young-Seok Choi
Department of Electronic Engineering, Gangneung-Wonju National University, Gangneung 210-702, Republic of Korea

Correspondence should be addressed to Young-Seok Choi; [email protected]

Received 27 September 2013; Revised 26 November 2013; Accepted 26 November 2013

Academic Editor: Yue Wu

Copyright © 2013 Young-Seok Choi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper presents a new approach to the normalized subband adaptive filter (NSAF) that directly exploits the sparsity of an underlying system for sparse system identification. The proposed NSAF integrates a weighted $\ell_1$-norm constraint into the cost function of the NSAF algorithm. To obtain the optimum solution of the weighted $\ell_1$-norm regularized cost function, subgradient calculus is employed, resulting in a stochastic gradient based update recursion of the weighted $\ell_1$-norm regularized NSAF. Two distinct choices of the weighting lead to two versions of the $\ell_1$-norm regularized NSAF. Numerical results clearly indicate the superior convergence of the $\ell_1$-norm regularized NSAFs over the classical NSAF, especially when identifying a sparse system.
1. Introduction
Over the past few decades, the relative simplicity and good performance of the normalized least mean square (NLMS) algorithm have made it a popular tool for adaptive filtering applications. However, its convergence performance deteriorates significantly for correlated input signals [1, 2]. As a popular solution, adaptive filtering in subbands has recently been developed, referred to as the subband adaptive filter (SAF) [3–7]. Its distinct feature rests on the property that LMS-type adaptive filters converge faster for white input signals than for colored ones [1, 2]. Thus, by prewhitening colored input signals, the SAF achieves accelerated convergence compared to LMS-type adaptive filters. Recently, the use of a multiple-constraint optimization criterion in the formulation of the cost function has resulted in the normalized SAF (NSAF), whose computational complexity is close to that of the NLMS algorithm [6, 7].
In the context of system identification, the unknown system to be identified is sparse in common scenarios, such as echo paths [8] and digital TV transmission channels [9]; that is, the unknown system consists of many near-zero coefficients and a small number of large ones. However, adaptive filtering algorithms suffer from poor convergence when identifying a sparse system [8]. Indeed, the capability of the NSAF fades in a sparse system identification scenario. To deal with this issue, a variety of proportionate adaptive algorithms have been presented for the NSAF, which assign proportionate step sizes to distinct filter taps [10–12]. However, these algorithms do not exploit the sparsity of the underlying system.
Recently, motivated by the compressive sensing framework [13, 14] and the least absolute shrinkage and selection operator (LASSO) [15], a number of adaptive filtering algorithms that make use of the sparsity of an underlying system have been developed [16–20]. The core idea behind this approach is to incorporate the sparsity of the underlying system by imposing a sparsity-inducing constraint term. Adding a sparsity constraint based on the $\ell_0$- or $\ell_1$-norm to the cost function makes the least relevant weights of the filter shrink to zero. However, to the best of the author's knowledge, adaptive filtering in subbands that exploits the sparsity condition has not been studied yet.
In this regard, this paper presents a novel approach, the sparsity-regularized NSAFs, which incorporates the sparsity of the system directly into the cost function via a sparsity-inducing constraint term. This is carried out by adding a weighted $\ell_1$-norm of the filter weight estimate
[Figure 1 here: block diagram of the subband structure.]

Figure 1: Subband structure with the analysis filters, the synthesis filters, the subband desired signals, the subband filter outputs, and the subband error signals.
to the cost function. Considering two choices of the weighted $\ell_1$-norm regularization, two stochastic gradient-based $\ell_1$-norm regularized NSAF algorithms are derived. First, the $\ell_1$-norm NSAF ($\ell_1$-NSAF) is obtained by using the identity matrix as the weighting matrix. Second, the reweighted $\ell_1$-norm NSAF ($\ell_1$-RNSAF), which uses the current estimate of the system to form the $\ell_1$-norm weights, is developed. Through numerical simulations, the resulting sparsity-regularized NSAFs have proven their superiority over the classical NSAF, especially when the sparsity of the underlying system becomes severe.

The remainder of the paper is organized as follows. Section 2 introduces the classical NSAF, followed by the derivation of the proposed sparsity-regularized NSAFs in Section 3. Section 4 illustrates the computer simulation results, and Section 5 concludes this study.
2. Conventional NSAF

Consider a desired signal $d(n)$ that arises from the system identification model
\[
d(n) = \mathbf{u}(n)\mathbf{w}^{o} + v(n), \tag{1}
\]
where $\mathbf{w}^{o}$ is a column vector for the impulse response of the unknown system that we wish to estimate, $v(n)$ accounts for measurement noise with zero mean and variance $\sigma_v^2$, and $\mathbf{u}(n)$ denotes the $1 \times M$ input vector,
\[
\mathbf{u}(n) = \left[\, u(n) \;\; u(n-1) \;\; \cdots \;\; u(n-M+1) \,\right]. \tag{2}
\]
Figure 1 shows the structure of the NSAF, where the desired signal $d(n)$ and input signal $u(n)$ are partitioned into $N$ subbands by the analysis filters $H_0(z), H_1(z), \ldots, H_{N-1}(z)$. The resulting subband signals, $d_i(n)$ and $y_i(n)$ for $i = 0, 1, \ldots, N-1$, are critically decimated to a lower sampling rate commensurate with their bandwidth. Here, the variable $n$ indexes the original sequences and $k$ indexes the decimated sequences for all signals. The decimated filter output signal at each subband is then defined as $y_{i,D}(k) = \mathbf{u}_i(k)\mathbf{w}(k)$, where $\mathbf{u}_i(k)$ is the $1 \times M$ row vector
\[
\mathbf{u}_i(k) = \left[\, u_i(kN),\; u_i(kN-1),\; \ldots,\; u_i(kN-M+1) \,\right], \tag{3}
\]
and $\mathbf{w}(k) = \left[\, w_0(k), w_1(k), \ldots, w_{M-1}(k) \,\right]^T$ denotes an estimate of $\mathbf{w}^{o}$ with length $M$. Thus, the decimated subband error signal is given by
\[
e_{i,D}(k) = d_{i,D}(k) - y_{i,D}(k) = d_{i,D}(k) - \mathbf{u}_i(k)\mathbf{w}(k), \tag{4}
\]
where $d_{i,D}(k) = d_i(kN)$ is the decimated desired signal at each subband.

In [6], the authors formulated the Lagrangian-based multiple-constraint optimization problem
\[
J_{\mathrm{NSAF}}(k) = \left\|\mathbf{w}(k+1) - \mathbf{w}(k)\right\|^2 + \sum_{i=0}^{N-1} \lambda_i \left[ d_{i,D}(k) - \mathbf{u}_i(k)\mathbf{w}(k+1) \right], \tag{5}
\]
where $\lambda_i$, $i = 0, 1, \ldots, N-1$, denote the Lagrange multipliers. Solving the cost function (5), the update recursion of the NSAF algorithm is given by [6, 7]
\[
\mathbf{w}(k+1) = \mathbf{w}(k) + \mu \sum_{i=0}^{N-1} \frac{\mathbf{u}_i^T(k)}{\left\|\mathbf{u}_i(k)\right\|^2}\, e_{i,D}(k), \tag{6}
\]
where $\mu$ is the step-size parameter.
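As a concrete illustration, the NSAF recursion (6) can be sketched in NumPy as follows. The function name, the argument layout, and the small stabilizing constant `delta` added to $\|\mathbf{u}_i(k)\|^2$ are assumptions for this sketch, not part of the original algorithm description.

```python
import numpy as np

def nsaf_update(w, U, d_D, mu=0.5, delta=1e-8):
    """One NSAF iteration, recursion (6).

    w   : (M,)  current weight estimate w(k)
    U   : (N, M) stacked decimated subband input rows u_i(k)
    d_D : (N,)  decimated subband desired signals d_{i,D}(k)
    """
    e_D = d_D - U @ w                     # subband errors e_{i,D}(k), eq. (4)
    norms = np.sum(U**2, axis=1) + delta  # ||u_i(k)||^2 per subband
    # sum of normalized per-subband corrections
    return w + mu * (U.T @ (e_D / norms))
```

With a single subband ($N = 1$), the correction collapses to the familiar NLMS form, consistent with the relationship to the NLMS algorithm noted in [6].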
3. Weighted $\ell_1$-Norm Regularized NSAF

3.1. Derivation of the Proposed Algorithm. To reflect the sparsity of the true system $\mathbf{w}^{o}$, a weighted $\ell_1$-norm of the filter weight estimate is used to regularize the cost function of the NSAF:
\[
J_{\ell_1\text{-}\mathrm{NSAF}}(k) = \left\|\mathbf{w}(k+1) - \mathbf{w}(k)\right\|^2 + \sum_{i=0}^{N-1} \lambda_i \left[ d_{i,D}(k) - \mathbf{u}_i(k)\mathbf{w}(k+1) \right] + \gamma \left\|\boldsymbol{\Pi}\mathbf{w}(k+1)\right\|_1, \tag{7}
\]
where $\left\|\boldsymbol{\Pi}\mathbf{w}(k+1)\right\|_1$ accounts for the weighted $\ell_1$-norm of the filter weight vector $\mathbf{w}(k+1)$ and is written as
\[
\left\|\boldsymbol{\Pi}\mathbf{w}(k+1)\right\|_1 = \sum_{m=0}^{M-1} \pi_m \left| w_m(k+1) \right|, \tag{8}
\]
where $\boldsymbol{\Pi}$ is an $M \times M$ weighting matrix whose diagonal elements are $\pi_m$ and whose other elements are zero, and $w_m(k+1)$ denotes the $m$th tap weight of $\mathbf{w}(k+1)$, for $m = 0, 1, \ldots, M-1$. In addition, $\gamma$ is a positive parameter that balances the error-related term and the weighted $\ell_1$-norm regularization on the right-hand side of (7).

To find the optimal weight vector $\mathbf{w}(k+1)$ that minimizes the cost function (7), the derivative of (7) with respect to $\mathbf{w}(k+1)$ is taken and set to zero. Note that the weighted $\ell_1$-norm regularization term $\left\|\boldsymbol{\Pi}\mathbf{w}(k+1)\right\|_1$ is not differentiable at points where $w_m(k+1) = 0$. To address this issue, subgradient calculus [21] is employed. Taking the derivative of (7) with respect to the weight vector $\mathbf{w}(k+1)$ and setting it to zero leads to
\[
\mathbf{w}(k+1) = \mathbf{w}(k) + \frac{1}{2} \sum_{i=0}^{N-1} \lambda_i \mathbf{u}_i^T(k) - \frac{\gamma}{2}\, \nabla^{s}_{\mathbf{w}} \left\|\boldsymbol{\Pi}\mathbf{w}(k+1)\right\|_1, \tag{9}
\]
where $\nabla^{s}_{\mathbf{w}} f(\cdot)$ denotes a subgradient vector of a function $f(\cdot)$ with respect to $\mathbf{w}(k+1)$. An available subgradient vector $\nabla^{s}_{\mathbf{w}} \left\|\boldsymbol{\Pi}\mathbf{w}(k+1)\right\|_1$ is obtained as [21]
\[
\nabla^{s}_{\mathbf{w}} \left\|\boldsymbol{\Pi}\mathbf{w}(k+1)\right\|_1 = \boldsymbol{\Pi}^T \operatorname{sgn}\!\left(\boldsymbol{\Pi}\mathbf{w}(k+1)\right) = \boldsymbol{\Pi} \operatorname{sgn}\!\left(\mathbf{w}(k+1)\right), \tag{10}
\]
since $\boldsymbol{\Pi}$ is assumed to be a diagonal matrix with positive-valued elements, where $\operatorname{sgn}(\cdot)$ is the componentwise sign function defined by
\[
\operatorname{sgn}(x) = \begin{cases} \dfrac{x}{|x|}, & x \neq 0, \\[4pt] 0, & \text{elsewhere.} \end{cases} \tag{11}
\]
Substituting (10) into (9) and assuming $\operatorname{sgn}\!\left[\mathbf{w}(k+1)\right] \approx \operatorname{sgn}\!\left[\mathbf{w}(k)\right]$ gives
\[
\mathbf{w}(k+1) = \mathbf{w}(k) + \frac{1}{2} \sum_{i=0}^{N-1} \lambda_i \mathbf{u}_i^T(k) - \frac{\gamma}{2}\, \boldsymbol{\Pi} \operatorname{sgn}\!\left(\mathbf{w}(k)\right). \tag{12}
\]
Substituting (12) into the multiple constraints of the NSAF, that is, $d_{i,D}(k) = \mathbf{u}_i(k)\mathbf{w}(k+1)$ for $i = 0, 1, \ldots, N-1$, and rewriting in matrix form leads to
\[
\boldsymbol{\Lambda} = 2\left[\mathbf{U}(k)\mathbf{U}^T(k)\right]^{-1} \mathbf{e}_D(k) + \gamma \left[\mathbf{U}(k)\mathbf{U}^T(k)\right]^{-1} \mathbf{U}(k)\, \boldsymbol{\Pi} \operatorname{sgn}\!\left(\mathbf{w}(k)\right), \tag{13}
\]
where $\boldsymbol{\Lambda} = \left[\lambda_0, \lambda_1, \ldots, \lambda_{N-1}\right]^T$ is the $N \times 1$ Lagrange vector and
\[
\mathbf{U}(k) = \begin{bmatrix} \mathbf{u}_0(k) \\ \vdots \\ \mathbf{u}_{N-1}(k) \end{bmatrix}, \qquad
\mathbf{e}_D(k) = \begin{bmatrix} e_{0,D}(k) \\ \vdots \\ e_{N-1,D}(k) \end{bmatrix}. \tag{14}
\]
By neglecting the off-diagonal elements of $\mathbf{U}(k)\mathbf{U}^T(k)$ [6], the components of $\boldsymbol{\Lambda}$ in (13) can be simplified to
\[
\lambda_i = 2\, \frac{e_{i,D}(k)}{\left\|\mathbf{u}_i(k)\right\|^2} + \gamma\, \frac{\mathbf{u}_i(k)}{\left\|\mathbf{u}_i(k)\right\|^2}\, \boldsymbol{\Pi} \operatorname{sgn}\!\left(\mathbf{w}(k)\right), \tag{15}
\]
for $i = 0, 1, \ldots, N-1$.

Consequently, combining (12) and (15), the update recursion of the sparsity-regularized NSAF is given by
\[
\mathbf{w}(k+1) = \mathbf{w}(k) + \mu \sum_{i=0}^{N-1} \left[ \frac{\mathbf{u}_i^T(k)}{\left\|\mathbf{u}_i(k)\right\|^2}\, e_{i,D}(k) + \frac{\gamma}{2}\, \frac{\mathbf{u}_i(k)\, \boldsymbol{\Pi} \operatorname{sgn}\!\left(\mathbf{w}(k)\right)}{\left\|\mathbf{u}_i(k)\right\|^2}\, \mathbf{u}_i^T(k) \right] - \frac{\mu\gamma}{2}\, \boldsymbol{\Pi} \operatorname{sgn}\!\left(\mathbf{w}(k)\right), \tag{16}
\]
where $\mu$ is the step-size parameter.
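The general recursion (16) can be sketched in NumPy as follows, with the diagonal of $\boldsymbol{\Pi}$ passed as a vector. The function and argument names, as well as the stabilizing constant `delta`, are assumptions for this sketch.

```python
import numpy as np

def weighted_l1_nsaf_update(w, U, d_D, pi_diag, mu=0.5, gamma=5e-5, delta=1e-8):
    """One iteration of the sparsity-regularized NSAF, recursion (16).

    pi_diag : (M,) diagonal elements pi_m of the weighting matrix Pi.
    """
    s = pi_diag * np.sign(w)              # Pi sgn(w(k))
    e_D = d_D - U @ w                     # subband errors e_{i,D}(k)
    norms = np.sum(U**2, axis=1) + delta  # ||u_i(k)||^2 per subband
    data_term = U.T @ (e_D / norms)       # first bracketed term of (16)
    cross_term = U.T @ ((U @ s) / norms)  # second bracketed term of (16)
    return w + mu * (data_term + 0.5 * gamma * cross_term) - 0.5 * mu * gamma * s
```

Setting $\gamma = 0$ recovers the conventional NSAF update (6); the final term is the zero-attraction component discussed below.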
3.2. Determination of the Weighted $\ell_1$-Norm Regularization. By choosing the weighting matrix $\boldsymbol{\Pi}$, two versions of the sparsity-regularized NSAF are developed. First, the use of the identity matrix as the weighting matrix, that is, $\boldsymbol{\Pi} = \mathbf{I}_M$, results in the update recursion
\[
\mathbf{w}(k+1) = \mathbf{w}(k) + \mu \sum_{i=0}^{N-1} \left[ \frac{\mathbf{u}_i^T(k)}{\left\|\mathbf{u}_i(k)\right\|^2}\, e_{i,D}(k) + \frac{\gamma}{2}\, \frac{\mathbf{u}_i(k) \operatorname{sgn}\!\left(\mathbf{w}(k)\right)}{\left\|\mathbf{u}_i(k)\right\|^2}\, \mathbf{u}_i^T(k) \right] - \frac{\mu\gamma}{2} \operatorname{sgn}\!\left(\mathbf{w}(k)\right), \tag{17}
\]
which is referred to as the $\ell_1$-norm NSAF ($\ell_1$-NSAF), the unweighted case. The $\ell_1$-NSAF uniformly attracts the
Table 1: Computational complexity ($M$: filter length, $N$: number of subbands, $L$: length of the analysis and synthesis filters).

                    NSAF         $\ell_1$-NSAF   $\ell_1$-RNSAF
Multiplications     $3M + 3NL$   $6M + 3NL$      $7M + 3NL$
Divisions           $1$          $2$             $2 + M/N$
tap coefficients of $\mathbf{w}(k)$ to zero. This zero-attraction process leads to the improved convergence of the $\ell_1$-NSAF when the majority of the entries of a system are zero, that is, when the system is sparse.

Second, to approximate the actual sparsity condition of an underlying system, that is, the $\ell_0$-norm of the system, the weights of $\boldsymbol{\Pi}$ are chosen inversely proportional to the magnitudes of the actual coefficients of the system:
\[
\pi_m = \begin{cases} \dfrac{1}{\left| w_m^{o} \right|}, & w_m^{o} \neq 0, \\[4pt] \infty, & w_m^{o} = 0, \end{cases} \tag{18}
\]
where $w_m^{o}$ denotes the $m$th coefficient of the system $\mathbf{w}^{o}$. However, since the actual coefficients of the system are unavailable, the current filter weight estimates are used instead of the actual weights, which is referred to as the reweighting scheme [22]:
\[
\pi_m(k) = \frac{1}{\left| w_m(k) \right| + \epsilon} \quad \text{for } m = 0, 1, \ldots, M-1, \tag{19}
\]
where $w_m(k)$ denotes the $m$th tap weight of $\mathbf{w}(k)$ and $\epsilon$ is a small positive value that avoids singularity when $\left|w_m(k)\right| = 0$. The weighting matrix $\boldsymbol{\Pi}$ then has the values $\pi_m(k)$ as its $m$th diagonal elements and is thus time-varying. Finally, the update recursion is given by
\[
\mathbf{w}(k+1) = \mathbf{w}(k) + \mu \sum_{i=0}^{N-1} \left[ \frac{\mathbf{u}_i^T(k)}{\left\|\mathbf{u}_i(k)\right\|^2}\, e_{i,D}(k) + \frac{\gamma}{2}\, \frac{\bar{\mathbf{u}}_i(k) \operatorname{sgn}\!\left(\mathbf{w}(k)\right)}{\left\|\mathbf{u}_i(k)\right\|^2}\, \mathbf{u}_i^T(k) \right] - \frac{\mu\gamma}{2}\, \frac{\operatorname{sgn}\!\left(\mathbf{w}(k)\right)}{\left|\mathbf{w}(k)\right| + \epsilon}, \tag{20}
\]
where $\bar{\mathbf{u}}_i(k) = \mathbf{u}_i(k)\boldsymbol{\Pi}$ and the vector division in the last term is componentwise. This recursion is called the reweighted $\ell_1$-norm NSAF ($\ell_1$-RNSAF).
Table 1 lists the number of multiplications and divisions per iteration of the NSAF [6], the $\ell_1$-NSAF, and the $\ell_1$-RNSAF. As shown in Table 1, the use of the $\ell_1$-norm constraint leads to an acceptable increase in computation.
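Combining the reweighting (19) with the recursion (20), one $\ell_1$-RNSAF iteration can be sketched as follows. The names and the stabilizing constant `delta` are assumptions of this sketch.

```python
import numpy as np

def l1_rnsaf_update(w, U, d_D, mu=0.5, gamma=5e-5, eps=0.01, delta=1e-8):
    """One l1-RNSAF iteration: recursion (20) with reweighting (19)."""
    pi_diag = 1.0 / (np.abs(w) + eps)     # pi_m(k), eq. (19)
    s = pi_diag * np.sign(w)              # Pi sgn(w(k)) = sgn(w)/(|w| + eps)
    e_D = d_D - U @ w                     # subband errors e_{i,D}(k)
    norms = np.sum(U**2, axis=1) + delta  # ||u_i(k)||^2 per subband
    data_term = U.T @ (e_D / norms)
    cross_term = U.T @ ((U @ s) / norms)  # u_i(k) Pi sgn(w(k)) realized via s
    return w + mu * (data_term + 0.5 * gamma * cross_term) - 0.5 * mu * gamma * s
```

Because $\pi_m(k)$ grows as $\left|w_m(k)\right|$ shrinks, small taps are attracted to zero much more strongly than large ones, which is the intended approximation of the $\ell_0$-norm.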
4. Numerical Results
Theperformance of the proposed sparsity-regularizedNSAFsis
validated by carrying out computer simulations in a system
identification scenario in which the unknown channel israndomly
generated. The lengths of the unknown system are𝑀 = 128 and 512 in
experiments where 𝑆 of them arenonzero. The nonzero filter weights
are positioned randomlyand their values are taken from a Gaussian
distributionN(0, 1/𝑆). Here, 𝑆 = 4 is used in the simulations
exceptFigure 5 in which various 𝑆 values are considered.
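A sparse system of the kind just described can be generated as in the following sketch; the function name and the RNG handling are assumptions, not part of the paper's setup.

```python
import numpy as np

def random_sparse_system(M=128, S=4, rng=None):
    """S nonzero taps at random positions, values drawn from N(0, 1/S)."""
    rng = np.random.default_rng() if rng is None else rng
    w_o = np.zeros(M)
    positions = rng.choice(M, size=S, replace=False)  # random tap positions
    w_o[positions] = rng.normal(0.0, np.sqrt(1.0 / S), size=S)
    return w_o
```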
The adaptive filter and the unknown system are assumed to have the same number of taps. The input signals are obtained by filtering a white, zero-mean, Gaussian random sequence through a first-order system $G(z) = 1/(1 - 0.9z^{-1})$. The signal-to-noise ratio (SNR) is calculated by
\[
\mathrm{SNR} = 10 \log_{10} \left( \frac{E\left[ y^2(n) \right]}{E\left[ v^2(n) \right]} \right), \tag{21}
\]
where $y(n) = \mathbf{u}(n)\mathbf{w}^{o}$. The measurement noise $v(n)$ is added to $y(n)$ such that $\mathrm{SNR} = 10$, $20$, and $30\,$dB. To compare the convergence performance, the normalized mean square deviation (MSD),
\[
\text{Normalized MSD} = E\left[ \frac{\left\|\mathbf{w}^{o} - \mathbf{w}(k)\right\|^2}{\left\|\mathbf{w}^{o}\right\|^2} \right], \tag{22}
\]
is computed and averaged over 50 independent trials. The cosine-modulated filter banks [23] with $N = 4$ subbands are used in the simulations, with a prototype filter of length $L = 32$. For comparison purposes, the proportionate NSAF (PNSAF) [12], which was developed for sparse system identification, is considered. The step size is set to $\mu = 0.5$ for the SAF algorithms, except for the PNSAF, where the step sizes $\mu = 0.6$ (Figure 2) and $\mu = 0.65$ (Figure 6) are chosen to achieve a steady-state MSD similar to that of the $\ell_1$-RNSAF. For the $\ell_1$-RNSAF, $\epsilon = 0.01$ is chosen. In addition, $\rho = 0.05$ is used for the PNSAF. The $\gamma$ values are obtained by repeated trials so as to minimize the steady-state MSD.
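The per-trial quantity inside the expectation of (22), expressed in dB as plotted in the figures, can be computed as in this sketch (names are illustrative; the average over independent trials is taken outside):

```python
import numpy as np

def normalized_msd_db(w_true, w_est):
    """Normalized squared deviation ||w_o - w(k)||^2 / ||w_o||^2, in dB."""
    msd = np.sum((w_true - w_est) ** 2) / np.sum(w_true ** 2)
    return 10.0 * np.log10(msd)
```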
Figure 2 shows the normalized MSD curves of the NLMS, NSAF, $\ell_1$-NSAF, and $\ell_1$-RNSAF for $N = 4$ and $\mathrm{SNR} = 30\,$dB. For the $\ell_1$-NSAF and $\ell_1$-RNSAF, $\gamma = 3 \times 10^{-5}$ is chosen. As shown in Figure 2, not only does the $\ell_1$-RNSAF outperform the conventional NLMS, NSAF, PNSAF, and $\ell_1$-NSAF, but the $\ell_1$-NSAF also performs better than the other conventional algorithms, in terms of both the convergence rate and the steady-state misalignment.
In Figure 3, to verify the effect of $\gamma$ on convergence performance, the normalized MSD curves of the $\ell_1$-RNSAF for different $\gamma$ values are illustrated for $N = 4$ and $\mathrm{SNR} = 30\,$dB. For the different $\gamma$ values ($\gamma = 1 \times 10^{-4}$, $1 \times 10^{-5}$, $5 \times 10^{-5}$, and $1 \times 10^{-6}$), the $\ell_1$-RNSAF is not excessively sensitive to $\gamma$. The analysis of an optimal $\gamma$ value remains future work.

Next, the performance of the proposed $\ell_1$-norm regularized NSAFs is compared to that of the original NSAF under different SNR conditions. Figure 4 depicts the normalized MSD curves of the NSAF, $\ell_1$-NSAF, and $\ell_1$-RNSAF under $\mathrm{SNR} = 10$ and $20\,$dB, respectively. The $\gamma$ value for the $\ell_1$-NSAF and $\ell_1$-RNSAF is set to $5 \times 10^{-5}$. It is clear that both the $\ell_1$-NSAF and $\ell_1$-RNSAF are superior to the NSAF under the several SNR cases. Furthermore, the $\ell_1$-RNSAF performs well compared to the $\ell_1$-NSAF.
[Figure 2 here: normalized mean square deviation (dB) versus number of iterations.]

Figure 2: Normalized MSD curves of the NLMS, NSAF, PNSAF, $\ell_1$-NSAF, and $\ell_1$-RNSAF ($N = 4$).
[Figure 3 here: normalized mean square deviation (dB) versus number of iterations, for $\gamma = 1 \times 10^{-3}$, $1 \times 10^{-4}$, $1 \times 10^{-5}$, $5 \times 10^{-5}$, and $7 \times 10^{-5}$.]

Figure 3: Normalized MSD curves of the $\ell_1$-RNSAF for various $\gamma$ values ($N = 4$).
In Figure 5, the convergence properties of the NSAF and $\ell_1$-RNSAF are compared under various sparsity conditions of the underlying system. With the same system length, $M = 128$, different sparsity conditions ($S = 4$, $8$, $16$, and $32$) are considered under $\mathrm{SNR} = 30\,$dB. The value of $\gamma$ is set to $3 \times 10^{-5}$ for the $\ell_1$-RNSAF. Figure 5 shows that the NSAF is insensitive to the sparsity condition. On the other hand, the results indicate that the sparser the underlying system, the better the $\ell_1$-RNSAF performs.

The comparison of the performance of the NSAF, $\ell_1$-NSAF, and $\ell_1$-RNSAF with a long system, here the filter length $M =$
[Figure 4 here: normalized mean square deviation (dB) versus number of iterations, for (a) NSAF at $\mathrm{SNR} = 10\,$dB, (b) $\ell_1$-NSAF at $\mathrm{SNR} = 10\,$dB, (c) $\ell_1$-RNSAF at $\mathrm{SNR} = 10\,$dB, (d) NSAF at $\mathrm{SNR} = 20\,$dB, (e) $\ell_1$-NSAF at $\mathrm{SNR} = 20\,$dB, and (f) $\ell_1$-RNSAF at $\mathrm{SNR} = 20\,$dB.]

Figure 4: Normalized MSD curves of the NSAF, $\ell_1$-NSAF, and $\ell_1$-RNSAF under various SNR conditions ($\mathrm{SNR} = 10$, $20$, and $30\,$dB, $N = 4$).
[Figure 5 here: normalized mean square deviation (dB) versus number of iterations, for the NSAF and $\ell_1$-RNSAF with $S = 4$, $8$, $16$, and $32$.]

Figure 5: Normalized MSD curves of the NSAF and $\ell_1$-RNSAF under various sparsity conditions ($S = 4$, $8$, $16$, and $32$, $N = 4$).
512, is presented in Figure 6. For the $\ell_1$-NSAF and $\ell_1$-RNSAF, $\gamma = 5 \times 10^{-5}$ is chosen. A result similar to that of Figure 2 is observed in Figure 6.

Finally, the tracking capabilities of the algorithms under a sudden change in the system are tested for $N = 4$ and $\mathrm{SNR} = 30\,$dB. Figure 7 shows the results when the unknown system is right-shifted by 20 taps. The same value of $\gamma$ as in Figure 2 is used. As can be seen, the $\ell_1$-NSAF and $\ell_1$-RNSAF track the weight change without losing either the convergence rate or
[Figure 6 here: normalized mean square deviation (dB) versus number of iterations.]

Figure 6: Normalized MSD curves of the NLMS, NSAF, PNSAF, $\ell_1$-NSAF, and $\ell_1$-RNSAF for a long system of $M = 512$ ($N = 4$).
[Figure 7 here: normalized mean square deviation (dB) versus number of iterations.]

Figure 7: Normalized MSD curves of the NSAF, PNSAF, $\ell_1$-NSAF, and $\ell_1$-RNSAF in the case of a time-varying unknown system ($N = 4$). The system is right-shifted by 20 taps at 1500 iterations.
the steady-state misalignment, compared to the conventional NLMS, NSAF, and PNSAF. Specifically, the $\ell_1$-RNSAF achieves better performance than the $\ell_1$-NSAF in terms of both the convergence rate and the steady-state misalignment.
5. Conclusion

A new family of NSAFs that takes into account the sparsity of an underlying system has been presented by incorporating a weighted $\ell_1$-norm constraint on the filter weights into the cost function. The update recursion is obtained by applying subgradient calculus to the weighted $\ell_1$-norm constraint term. Subsequently, two sparsity-regularized NSAFs, the unweighted $\ell_1$-NSAF and the $\ell_1$-RNSAF, have been developed. The numerical results indicate that the proposed $\ell_1$-NSAF and $\ell_1$-RNSAF achieve highly improved convergence performance over the conventional algorithms for sparse system identification.
Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This research was supported by the new faculty research program funded by Gangneung-Wonju National University (2013100162).
References

[1] S. Haykin, Adaptive Filter Theory, Prentice Hall, Upper Saddle River, NJ, USA, 4th edition, 2002.
[2] A. H. Sayed, Fundamentals of Adaptive Filtering, Wiley, New York, NY, USA, 2003.
[3] A. Gilloire and M. Vetterli, "Adaptive filtering in subbands with critical sampling: analysis, experiments, and application to acoustic echo cancellation," IEEE Transactions on Signal Processing, vol. 40, no. 8, pp. 1862–1875, 1992.
[4] M. De Courville and P. Duhamel, "Adaptive filtering in subbands using a weighted criterion," IEEE Transactions on Signal Processing, vol. 46, no. 9, pp. 2359–2371, 1998.
[5] S. S. Pradhan and V. U. Reddy, "A new approach to subband adaptive filtering," IEEE Transactions on Signal Processing, vol. 47, no. 3, pp. 655–664, 1999.
[6] K. A. Lee and W. S. Gan, "Improving convergence of the NLMS algorithm using constrained subband updates," IEEE Signal Processing Letters, vol. 11, no. 9, pp. 736–739, 2004.
[7] K. A. Lee and W. S. Gan, "Inherent decorrelating and least perturbation properties of the normalized subband adaptive filter," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4475–4480, 2006.
[8] D. L. Duttweiler, "Proportionate normalized least-mean-squares adaptation in echo cancelers," IEEE Transactions on Speech and Audio Processing, vol. 8, no. 5, pp. 508–518, 2000.
[9] W. F. Schreiber, "Advanced television systems for terrestrial broadcasting: some problems and some proposed solutions," Proceedings of the IEEE, vol. 83, no. 6, pp. 958–981, 1995.
[10] S. L. Gay, "Efficient, fast converging adaptive filter for network echo cancellation," in Proceedings of the 32nd Asilomar Conference on Signals, Systems & Computers, pp. 394–398, November 1998.
[11] H. Deng and M. Doroslovački, "Improving convergence of the PNLMS algorithm for sparse impulse response identification," IEEE Signal Processing Letters, vol. 12, no. 3, pp. 181–184, 2005.
[12] M. S. E. Abadi, "Proportionate normalized subband adaptive filter algorithms for sparse system identification," Signal Processing, vol. 89, no. 7, pp. 1467–1474, 2009.
[13] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[14] E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[15] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society B, vol. 58, pp. 267–288, 1996.
[16] Y. Chen, Y. Gu, and A. O. Hero, "Sparse LMS for system identification," in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP '09), pp. 3125–3128, April 2009.
[17] Y. Gu, J. Jin, and S. Mei, "$\ell_0$ norm constraint LMS algorithm for sparse system identification," IEEE Signal Processing Letters, vol. 16, no. 9, pp. 774–777, 2009.
[18] J. Jin, Y. Gu, and S. Mei, "A stochastic gradient approach on compressive sensing signal reconstruction based on adaptive filtering framework," IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 409–420, 2010.
[19] E. M. Eksioglu and A. K. Tanc, "RLS algorithm with convex regularization," IEEE Signal Processing Letters, vol. 18, no. 8, pp. 470–473, 2011.
[20] N. Kalouptsidis, G. Mileounis, B. Babadi, and V. Tarokh, "Adaptive algorithms for sparse system identification," Signal Processing, vol. 91, no. 8, pp. 1910–1919, 2011.
[21] D. P. Bertsekas, Convex Analysis and Optimization, Athena Scientific, Cambridge, Mass, USA, 2003.
[22] E. J. Candès, M. B. Wakin, and S. P. Boyd, "Enhancing sparsity by reweighted $\ell_1$ minimization," The Journal of Fourier Analysis and Applications, vol. 14, no. 5-6, pp. 877–905, 2008.
[23] P. P. Vaidyanathan, Multirate Systems and Filter Banks, Prentice-Hall, Englewood Cliffs, NJ, USA, 1993.