
An Improved Gain Vector to Enhance Convergence Characteristics of Recursive Least Squares Algorithm

Anum Ali¹, Anis-ur-Rehman¹, Rana Liaqat Ali¹
¹COMSATS Institute of Information Technology, Islamabad, Pakistan.

[email protected]

Abstract

The Recursive Least Squares (RLS) algorithm is renowned for its rapid convergence, but in some scenarios it fails to show the swiftness required by several applications. Such failure may result from different limiting conditions. The gain vector plays an essential role in the performance of the RLS algorithm. This paper proposes a modification of the gain vector that makes the RLS algorithm perform much better in terms of convergence, without adding significant complexity. Simulation results validating the finding are presented, along with a comparison against the conventional RLS algorithm.

Keywords: Gain Vector, Adaptive Array Signal Processing, Convergence Rate

1. Introduction

Adaptive filtering is employed in numerous applications. RLS is well known for its superiority over the Least Mean Squares (LMS) algorithm in misadjustment and convergence [1]. The computational complexity of the RLS algorithm is O(N²) operations per iteration, where N is the number of elements in the data array. Owing to its good convergence and small mean square error (MSE), a number of modified and extended RLS algorithms have also been presented [4–7]. One method perturbs the covariance matrix whenever a change is detected [5, 6]. In another approach, a data-weighting window is applied to the input data sequence [12, 13] to adjust the effective memory of the algorithm; however, adjusting the window is not easy. Control of the forgetting factor as a means of adjustment has also been used [8–11, 14]. Although these techniques provide significant improvements in the performance of the RLS algorithm, the cost paid is increased complexity.

This paper presents a technique to improve the convergence rate of the RLS algorithm. The gain vector plays a critical role in the performance of RLS, so a modification of the gain vector is proposed which leads to a significant improvement in the convergence rate of the algorithm. The mathematical formulation of the modification and its constraint is presented, and simulation results compare the performance of the proposed scheme with that of the RLS algorithm. The added complexity is minimal: only two operations beyond the conventional RLS scheme, namely one conditional operator and one division.

The paper is organized as follows: Section 2 presents the RLS algorithm, Section 3 gives the mathematical formulation of the gain vector and a summary of the algorithm, Section 4 shows the simulation results, and Section 5 concludes the paper.

International Journal of Hybrid Information Technology Vol. 4, No. 2, April, 2011


2. RLS Algorithm

The M-by-M autocorrelation matrix of the tap-input vector u(n) is given by

Φ(n) = Σ_{i=1}^{n} λ^{n-i} u(i) u^H(i) + δ λ^n I    (1)

where δ is the regularization parameter and λ is a positive constant called the forgetting factor. The M-by-1 cross-correlation vector between the tap-input vector u(n) and the desired response d(n) is given by

z(n) = Σ_{i=1}^{n} λ^{n-i} u(i) d*(i)    (2)

For the recursive least squares problem, the tap-weight vector w(n) satisfies the normal equations Φ(n) w(n) = z(n), so that

w(n) = Φ^{-1}(n) z(n)    (3)

The matrix inversion lemma is usually deployed to avoid the computationally inefficient direct calculation of Φ^{-1}(n). The resulting recursion is

Φ^{-1}(n) = λ^{-1} Φ^{-1}(n-1) − [λ^{-2} Φ^{-1}(n-1) u(n) u^H(n) Φ^{-1}(n-1)] / [1 + λ^{-1} u^H(n) Φ^{-1}(n-1) u(n)]    (4)

The M-by-1 vector k(n) is called the gain vector; it is the tap-input vector u(n) transformed by the inverse of the correlation matrix:

k(n) = Φ^{-1}(n) u(n)    (5)

The weight-update equation of RLS is obtained as

w(n) = Φ^{-1}(n-1) z(n-1) + Φ^{-1}(n) u(n) d*(n) − k(n) u^H(n) Φ^{-1}(n-1) z(n-1)    (6)

which, after simplification, yields

w(n) = w(n-1) + k(n) e*(n)    (7)

where the error is calculated as

e(n) = d(n) − y(n)    (8)

with y(n) denoting the filter output.
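The recursion of Eqs. (4)–(8) amounts to one inexpensive update per sample. As a minimal sketch (not the authors' code: a generic NumPy implementation of one conventional RLS iteration, with function and variable names of my choosing):

```python
import numpy as np

def rls_step(w, P, u, d, lam=0.98):
    """One conventional RLS iteration.

    w   : weight vector w(n-1), shape (M,)
    P   : inverse correlation matrix Phi^{-1}(n-1), shape (M, M)
    u   : tap-input vector u(n), shape (M,)
    d   : desired response d(n), scalar
    lam : forgetting factor lambda
    """
    pi = P @ u                                       # pi(n) = P(n-1) u(n)
    k = pi / (lam + np.conj(u) @ pi)                 # gain vector
    e = d - np.conj(w) @ u                           # a priori error, Eq. (8)
    w_new = w + k * np.conj(e)                       # weight update, Eq. (7)
    P_new = (P - np.outer(k, np.conj(u) @ P)) / lam  # Riccati update from Eq. (4)
    return w_new, P_new, e
```

Starting from w(0) = 0 and P(0) = δ^{-1} I, repeated calls drive w toward the least-squares solution.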

3. Mathematical Formulation

For simplicity of implementation, the gain vector of the RLS algorithm is written as

k(n) = π(n) / (λ + u^H(n) π(n))    (9)

where π(n) is given by

π(n) = P(n-1) u(n)    (10)

with P(n) denoting Φ^{-1}(n).


Now the proposed gain vector takes into consideration the value of |1/ξ(n)|, where ξ(n) is the a priori estimation error. Depending on whether |1/ξ(n)| is less than or equal to 1, or greater than 1, one of two forms of the gain vector is used, as shown in Eq. (11):

k(n) = π(n) / (λ + u^H(n) π(n))              for |1/ξ(n)| ≤ 1
k(n) = π(n) / (|ξ(n)| (λ + u^H(n) π(n)))     for |1/ξ(n)| > 1    (11)
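As an illustration (again not the authors' code), the two-branch gain of Eq. (11) reduces to computing the conventional gain and rescaling it by 1/|ξ(n)| whenever |ξ(n)| < 1:

```python
import numpy as np

def proposed_gain(pi, u, xi, lam):
    """Piecewise gain vector of Eq. (11).

    pi  : pi(n) = P(n-1) u(n)
    u   : tap-input vector u(n)
    xi  : a priori error xi(n)
    lam : forgetting factor lambda
    """
    k = pi / (lam + np.conj(u) @ pi)  # conventional gain, Eq. (9)
    if abs(xi) < 1.0:                 # the |1/xi(n)| > 1 branch
        k = k / abs(xi)
    return k
```

Note that only one comparison and one division are added on top of Eq. (9), matching the complexity claim made in the introduction.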

Fig. 1 shows the signal flow diagram, and a summary of the presented algorithm is given in Table 1.

[Figure 1. Signal Flow Graph — the graph realizes the update w(n) = w(n-1) + k(n) ξ*(n), with k(n) given by the piecewise gain vector of Eq. (11)]

The algorithm is initialized by setting

w(0) = 0,  P(0) = δ^{-1} I

where

δ = small positive constant for high SNR; large positive constant for low SNR.

For each time instant n = 1, 2, 3, …, compute

π(n) = P(n-1) u(n)
ξ(n) = d(n) − w^H(n-1) u(n)
k(n) = π(n) / (λ + u^H(n) π(n))              for |1/ξ(n)| ≤ 1
k(n) = π(n) / (|ξ(n)| (λ + u^H(n) π(n)))     for |1/ξ(n)| > 1
w(n) = w(n-1) + k(n) ξ*(n)
P(n) = λ^{-1} P(n-1) − λ^{-1} k(n) u^H(n) P(n-1)

Table 1. Summary of Proposed Algorithm
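The steps of Table 1 can be assembled into a complete loop. The following is a NumPy sketch, not the paper's implementation: function and variable names are mine, and the signal model used to exercise it is an assumed system-identification setup rather than the paper's simulation scenario.

```python
import numpy as np

def proposed_rls(U, d, lam=0.98, delta=0.01):
    """Run the proposed algorithm of Table 1 over N snapshots.

    U     : (N, M) array; row n is the tap-input vector u(n)
    d     : (N,) desired responses d(n)
    lam   : forgetting factor lambda
    delta : regularization parameter, so P(0) = delta^{-1} I
    Returns the final weight vector and |xi(n)| per iteration.
    """
    N, M = U.shape
    w = np.zeros(M)
    P = np.eye(M) / delta                  # P(0) = delta^{-1} I
    err = np.empty(N)
    for n in range(N):
        u = U[n]
        pi = P @ u                         # pi(n) = P(n-1) u(n)
        xi = d[n] - np.conj(w) @ u         # xi(n) = d(n) - w^H(n-1) u(n)
        k = pi / (lam + np.conj(u) @ pi)   # conventional gain, Eq. (9)
        if abs(xi) < 1.0:                  # |1/xi(n)| > 1: scaled branch of Eq. (11)
            k = k / abs(xi)
        w = w + k * np.conj(xi)            # w(n) = w(n-1) + k(n) xi*(n)
        P = (P - np.outer(k, np.conj(u) @ P)) / lam
        err[n] = abs(xi)
    return w, err
```

In an identification run (d generated by a fixed weight vector plus noise), the a priori error magnitude should drop from its initial level as the weights adapt.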


4. Simulation Results

The performance of the proposed scheme has been studied with the help of MATLAB® simulations. Results of the RLS algorithm are also obtained for comparison purposes. Simulations are run for 150 iterations and the results are presented in Figs. 2, 3 and 4. Results of the RLS algorithm and the proposed scheme are compared for different values of Signal-to-Noise Ratio (SNR) and forgetting factor λ.

Fig. 2 shows the performance of the proposed scheme at a high SNR of 15 dB. Considering an error of 5% to be acceptable, we can see in Fig. 2(a) that at λ = 0.999 the proposed scheme reaches the 5% level in 100 iterations, whereas the RLS algorithm fails to do so in over 150 iterations. As λ is reduced from 0.999 to 0.98, as shown in Fig. 2(b), the presented algorithm brings the error below 5% in fewer than 60 iterations, while RLS does so in over 100 iterations. When λ is reduced to 0.97, the proposed algorithm achieves the desired level within 40 iterations, whereas the conventional scheme takes over 80 iterations, as shown in Fig. 2(c).

Results at an SNR of 8 dB are shown in Fig. 3. At λ = 0.999 the proposed scheme takes 80 iterations to achieve the desired level, as shown in Fig. 3(a), while the conventional algorithm fails to do so in over 150 iterations. The number of iterations taken by the proposed scheme in Fig. 3(a) is greater than in Fig. 2(a), as expected from the reduced SNR. The results at the reduced value λ = 0.98 are shown in Fig. 3(b): the proposed scheme takes 70 iterations, compared with up to 140 iterations for the RLS algorithm. Again, more iterations are needed in Fig. 3(b) than in Fig. 2(b) because of the reduced SNR. With λ = 0.97, the presented scheme achieves the 5% target in 55 iterations, whereas RLS takes 110 iterations, as shown in Fig. 3(c). Both RLS and the proposed scheme take more iterations in Fig. 3(c) than in Fig. 2(c) because the SNR is 7 dB lower.

Fig. 4 presents the comparison at a relatively low SNR of 0 dB. Fig. 4(a) shows the results at λ = 0.999: the proposed algorithm reaches the desired response in fewer than 100 iterations, whereas the RLS algorithm fails to do so. The number of iterations at 0 dB exceeds that at 8 dB and 15 dB. The results at λ = 0.98 are presented in Fig. 4(b); the proposed scheme and RLS take 60 and 80 iterations, respectively. Fig. 4(c) shows the results at λ = 0.97: the presented scheme takes 40 iterations while the conventional algorithm takes more than 80. Although performance is, as expected, perturbed in low-SNR conditions, the proposed scheme still provides fast convergence.

Irrespective of the values of SNR and λ, the proposed scheme performs better than the conventional algorithm and shows rapid convergence. The results show that the presented algorithm is less perturbed in low-SNR conditions than the RLS algorithm; hence the proposed technique is a promising choice for low-SNR conditions where most conventional techniques fail to give acceptable results. The proposed scheme also converges faster than the RLS algorithm over a wide range of forgetting-factor values. Both algorithms show better results at lower λ values, as expected for a stationary environment, but the convergence speed of the proposed scheme surpasses that of RLS at every value tested.


[Figure 2. Performance Comparison at SNR=15dB — Mean Squared Error versus iterations for RLS and Proposed-RLS: (a) λ = 0.999, (b) λ = 0.98, (c) λ = 0.97]


[Figure 3. Performance Comparison at SNR=8dB — Mean Squared Error versus iterations for RLS and Proposed-RLS: (a) λ = 0.999, (b) λ = 0.98, (c) λ = 0.97]


[Figure 4. Performance Comparison at SNR=0dB — Mean Squared Error versus iterations for RLS and Proposed-RLS: (a) λ = 0.999, (b) λ = 0.98, (c) λ = 0.97]


5. Conclusion

In this paper an improved gain vector is presented and analyzed. The mathematical formulation of the gain vector is given and a summary of the proposed scheme is presented. A comparative study of RLS and the proposed scheme in terms of convergence speed is carried out with the help of MATLAB® simulations for different SNR and forgetting-factor values. It is shown that the proposed scheme provides fast convergence for a wide range of SNR values and forgetting factors, and that the added complexity needed to achieve these performance enhancements is insignificant. Hence it is concluded that the proposed scheme is a promising candidate for low-SNR applications.

References

[1] S. Haykin, Adaptive Filter Theory, 3rd ed., Prentice Hall, 1996.
[2] S. Haykin, A. H. Sayed, J. Zeidler, P. Yee, P. Wei, "Tracking of Linear Time-Variant Systems," Proc. MILCOM, pp. 602-606, San Diego, November 1995.
[3] R. L. Ali, A. Ali, A. Rehman, S. A. Khan, S. A. Malik, "Adaptive Beamforming Algorithms for Anti-Jamming," International Journal of Signal Processing, Image Processing and Pattern Recognition, Vol. 4, No. 1, March 2011.
[4] H. S. Yazdi, M. S. Yazdi, M. R. Mohammadi, "A Novel Forgetting Factor Recursive Least Square Algorithm Applied to the Human Motion Analysis," International Journal of Applied Mathematics and Computer Sciences, pp. 128-135, 2009.
[5] J. Jiang, R. Cook, "Fast Parameter Tracking RLS Algorithm with High Noise Immunity," Electronics Letters 28, pp. 2043-2045, October 1992.
[6] D. J. Park, B. E. Jun, "Self-Perturbing RLS Algorithm with Fast Tracking Capability," Electronics Letters 28, pp. 558-559, March 1992.
[7] J. M. Cioffi, T. Kailath, "Fast, Fixed-Order, Least Squares Algorithms for Adaptive Filtering," ICASSP 83, Boston, 1983.
[8] C. F. So, S. C. Ng, S. H. Leung, "Gradient Based Variable Forgetting Factor RLS Algorithm," Signal Processing 83, pp. 1163-1175, 2003.
[9] T. R. Fortescue, L. S. Kershenbaum, B. E. Ydstie, "Implementation of Self-Tuning Regulators with Variable Forgetting Factors," Automatica 17, pp. 831-835, 1981.
[10] D. J. Park, et al., "Fast Tracking RLS Algorithm Using Novel Variable Forgetting Factor with Unity Zone," Electronics Letters 27, pp. 2150-2151, November 1991.
[11] S. Song, et al., "Gauss Newton Variable Forgetting Factor Recursive Least Squares for Time Varying Parameters Tracking," Electronics Letters 36, pp. 988-990, May 2000.
[12] D. T. M. Slock and T. Kailath, "Fast Transversal Filters with Data Sequence Weighting," IEEE Trans. Acoust., Speech, Signal Process., vol. 33, no. 3, pp. 346-359, March 1989.
[13] B. Toplis and S. Pasupathy, "Tracking Improvements in Fast RLS Algorithms Using a Variable Forgetting Factor," IEEE Trans. Acoust., Speech, Signal Process., vol. 36, no. 2, pp. 206-227, Feb. 1988.
[14] Shu-Hung Leung, C. F. So, "Gradient-Based Variable Forgetting Factor RLS Algorithm in Time-Varying Environments," IEEE Trans. Signal Processing, vol. 53, no. 8, pp. 3141-3150, Aug. 2005.


Authors

Anum Ali: is a final-year student in the Electrical Engineering Department at COMSATS Institute of Information Technology, Islamabad, pursuing a Bachelor of Science in Electrical (Telecommunication) Engineering. His research interests include adaptive signal processing and adaptive filtering. He has a publication on the application of adaptive beamforming to anti-jamming.

Anis-ur-Rehman: is a final-year student in the Electrical Engineering Department at COMSATS Institute of Information Technology, Islamabad, pursuing a Bachelor of Science in Electrical (Telecommunication) Engineering.

Rana Liaqat Ali: received the M.Sc. in Electronics and the MS in Electrical Engineering from Quaid-i-Azam University and Air University, Islamabad, Pakistan, in 1999 and 2006, respectively. He is currently pursuing his Ph.D. at the Department of Electrical Engineering, COMSATS, Islamabad. He worked as a System Engineer at PANASONIC in 1999, after which he joined the university and is currently working as an Assistant Professor in the CIIT Electrical Engineering Department, Islamabad, Pakistan. His research interests include array processing, beamforming, microphone arrays and smart arrays. He is working with the DigiSys and RF Systems research groups under the supervision of Professor Dr. Shahid A. Khan, Dean of the Faculty of Engineering, COMSATS, Islamabad, Pakistan.
