8/23/2019 Duidamscriptie
1/72
The design and implementation of an adaptive broadband feedback controller on a
six-degrees-of-freedom vibration isolation set-up
A.P. Duindam
Master's thesis
February 2005
University of Twente
The Netherlands
Faculty of Engineering Technology
Department of Mechanical Engineering
Mechanical Automation Laboratory
Master's thesis in Mechanical Engineering
February 2005
University of Twente
Faculty of Engineering Technology
Department of Mechanical Engineering
Mechanical Automation Laboratory
P.O. Box 217
7500 AE Enschede
The Netherlands
Report no. WA-978
Committee:
Chairman: Prof. dr. ir. J.B. Jonker
Mentor: Dr. ir. J. van Dijk
Second mentor: Ir. G. Nijsse
External member: Dr. ir. A.P. Berkhoff
Typeset by the author with the LaTeX 2ε document preparation system.
Printed in The Netherlands.
Copyright © 2005, University of Twente, Enschede, The Netherlands.
All rights reserved. No part of this report may be used or reproduced in any form or by any means, or stored in a database or retrieval system, without prior written permission of the university, except in the case of brief quotations embodied in critical articles and reviews.
Summary
As a result of structure-borne vibrations, an unwanted noise signal may be emitted. The vibrations are usually caused by a certain source structure and are carried over to a receiving structure, which emits the unwanted disturbance signal. Besides passive control, which may sufficiently attenuate high-frequency disturbance signals, active control is applied in order to attenuate low-frequency noise. Active control can be achieved by fixed-gain as well as adaptive control.

The topic of this report is to investigate the performance obtained by controlling broadband (0-1 kHz) disturbances using adaptive control. The performance obtained by adaptive control was compared to the performance obtained by the design of an equivalent fixed-gain controller. To this end, a hybrid vibration isolation setup has been built at our laboratory. The setup consists of a source structure inducing the disturbance, connected by six hybrid isolation mounts to a receiver structure. The source structure is isolated from the receiver structure by minimizing the signals from six acceleration sensor outputs and by steering six piezo-electric actuator inputs (which serve as hybrid isolation mounts).

First, the performance obtained by implementation of the controller in a feedforward arrangement was investigated. Adaptive control was applied using the AdjointLMS algorithm, which is known to be a computationally efficient algorithm. To speed up the convergence of the adaptive algorithm, the postconditioning technique was applied. The controller was regularized in order to prevent saturation of the piezo-actuator inputs. It is shown that a reduction of the disturbance signal of 9.4 dB could be achieved in real time. After the performance of feedforward control was investigated, an adaptive feedback controller was designed using the internal model arrangement. This adaptive controller was also designed using the postconditioned AdjointLMS algorithm. The controller has to be stabilized using regularization techniques, with which a disturbance rejection of 3.5 dB, measured by the six acceleration sensors, was achieved.
Preface
During the past year I have worked on my graduation assignment at the Mechanical Automation Laboratory at the Faculty of Mechanical Engineering, University of Twente. It was an intensive and very instructive period. What attracted me most in choosing this assignment was the combination of the theoretical design and simulation of active vibration control with its real-time implementation on an experimental setup.

I want to thank Ph.D. student Ir. G. Nijsse, my direct supervisor, for his enthusiastic approach and the fruitful discussions we had, which often helped me solve problems by considering them in a less complicated way. Furthermore, I want to thank my fellow students at the laboratory, who gave me a pleasant time working on my master's assignment during the past year.

By finishing this master's thesis, I also conclude my study of Mechanical Engineering. I especially want to thank my parents and brothers for their continuing support during my study and for giving me the opportunity to stand where I am now.
Arjen Duindam
Enschede, 17th February 2005
Contents
Summary v
Preface vii
Contents x
1 Introduction 1
2 Optimal control 5
2.1 Introduction . . . . . 5
2.2 General optimal filter problem . . . . . 5
2.3 Optimal feedforward control . . . . . 7
2.3.1 Introduction . . . . . 7
2.3.2 Time domain controller . . . . . 8
2.3.3 Frequency domain controller . . . . . 11
2.4 Optimal feedback control . . . . . 13
2.4.1 Introduction . . . . . 13
2.4.2 Internal model control . . . . . 13
2.4.3 Time domain controller . . . . . 15
2.4.4 Transform domain controller . . . . . 15
2.4.5 Stability . . . . . 16
2.5 Summary . . . . . 17
3 Adaptive control 19
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2 Adaptive feedforward control . . . . . 20
3.2.1 Introduction . . . . . 20
3.2.2 General LMS control problem . . . . . 20
3.2.3 Presenting the secondary path: FxLMS algorithm . . . . . 23
3.2.4 Reducing the computational load and increasing the convergence speed: IO factorization . . . . . 25
3.2.5 Decreasing the steering signals: regularized solution . . . . . 27
3.2.6 Reducing the computational load: adjointLMS algorithm . . . . . 30
3.3 Adaptive feedback control . . . . . 34
3.3.1 Introduction . . . . . 34
3.3.2 Design of the adaptive controller . . . . . 34
3.3.3 Stability and convergence properties of the adaptive feedback controller 36
3.3.4 Postconditioning . . . . . 37
3.4 Summary . . . . . 39
4 Experimental results 41
4.1 Introduction . . . . . 41
4.2 Identification . . . . . 41
4.3 Feedforward control . . . . . 42
4.3.1 Simulation results . . . . . 42
4.3.2 Realtime results . . . . . 46
4.4 Feedback control . . . . . 49
4.4.1 Simulation results . . . . . 49
4.4.2 Realtime results . . . . . 51
4.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5 Conclusions & recommendations 55
5.1 Conclusions . . . . . 55
5.2 Recommendations . . . . . 56
A Inner-outer factorization 57
A.1 Inner-outer factorization . . . . . 57
A.2 Outer-inner factorization . . . . . 59
Chapter 1
Introduction
As digital signal processors have become more powerful over the past decades, the possibility to actively control unwanted noise and vibrations has gained an increasing amount of interest.
The impact an earthquake can have is a clear example of the effect of unwanted vibrations. On a smaller scale, the vibrations generated by a ship's engine, propagated through the body of the ship to the passengers inside and resulting in a disturbing noise, can be seen as another example of unwanted vibrations. In applications where a high level of accuracy is needed, like miniaturized manufacturing applications or high-precision sensing systems, vibrations propagated through the surface or the air can seriously influence the functioning of the specific application. In general it can be said that, in order to reduce the effect of the vibrations induced by a certain source structure, those vibrations need to be isolated from the receiving structure. For example, a suspension system can be implemented between the ship body and the engine of the ship. This type of vibration control is usually called passive control, because only passive elements are used to reduce the vibrations.
Using passive control, high-frequency disturbances can successfully be attenuated. The isolation of lower-frequency disturbances by passive control is often accompanied by practical problems, like a minimum stiffness requirement of the suspension system. A more suitable way of attenuating low-frequency disturbances is to make use of an active control system. The basic idea of active control is that the vibrations which need to be attenuated are measured at an appropriate place. From these measured signals a steering signal is calculated using a suitable algorithm. This steering signal is sent to the actuators, inducing a vibration which counteracts the disturbance signal. If the system is assumed to be linear, use can be made of the superposition principle, which states that if a wave A interacts with a wave B, the resulting wave C is simply the sum of waves A and B. The idea of active control is to isolate a disturbance signal by determining a steering signal which produces a secondary anti-disturbance signal with the same magnitude and opposite phase, as is shown in figure 1.1. The net result is that the residual disturbance signal cancels out and the receiving body is isolated from the source body.
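As a small numerical illustration of this principle (with invented signal values, not measurements from the set-up), a secondary wave of equal magnitude and opposite phase drives the residual to zero:

```python
import numpy as np

fs = 1000                       # sample rate in Hz (illustrative)
t = np.arange(0, 1, 1 / fs)     # one second of samples

primary = np.sin(2 * np.pi * 50 * t)   # primary disturbance: a 50 Hz tone
secondary = -primary                   # same magnitude, opposite phase
residual = primary + secondary         # superposition of both waves

print(np.max(np.abs(residual)))        # 0.0: the disturbance is cancelled
```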
An effective way to reduce high-frequency as well as low-frequency disturbances is to combine both control methods, which is commonly referred to as hybrid control.

[Figure 1.1: Principle of superposition; the primary disturbance signal plus the secondary anti-disturbance signal yields the residual disturbance signal.]

A simple schematic view of a hybrid control system for a ship's engine can be seen in figure 1.2. The engine of the ship is placed on hybrid mounts, which consist of both passive and active elements, thereby attenuating the vibrations carried over from the engine to the body of the ship.
[Figure 1.2: Ship's engine placed on hybrid mounts; the schematic shows the engine, coupling, gearbox, bearing, screw propeller, raft and hybrid mounts.]

In active control a distinction can be made between fixed-gain and adaptive controllers. Fixed-gain controllers are computationally efficient, but they are only optimized for disturbances whose statistics are time independent. For example, if the engine power of a ship is varied over a certain amount of time, the statistics of the vibrations produced by the engine can also change. A time-independent controller would give a less than optimal attenuation in this situation. A better approach is therefore to use a time-adaptive controller, which can produce optimal results even if the disturbance statistics change. An important characteristic of an adaptive controller is the time it needs to adapt effectively to the time-varying signals.
Another important question in designing a suitable controller is which information is available at what time. A prerequisite for every controller is the availability of a signal which represents the signal to be attenuated. Often the residual signal is used for this. This signal is measured and fed back to the controller to calculate an appropriate steering signal. However, the controller performance can be increased significantly if some kind of time-advanced signal is available. For instance, if the signal that drives the engine of the ship is available, it can be fed forward to the controller, with which the controller can make a prediction of the disturbance signal in the time to come. The former type of controller arrangement is called a feedback arrangement; the latter is called a feedforward arrangement.
For research and development a six-degrees-of-freedom (6DOF) vibration isolation set-up
has been built at our laboratory [13]. A photograph of the setup is depicted in figure
1.3 on the left; a schematic picture is given on the right. The set-up consists of three
[Figure 1.3: The experimental setup; (a) photograph of the six-degrees-of-freedom hybrid vibration isolation setup; (b) schematic representation showing the shaker, spring, source plate (A), receiver plate (B), actuators (C), sensors (D) and mounts.]
mounts carrying a source plate (A). The source plate is excited with a disturbance by an
electro-dynamic shaker. The source plate is connected to the three mounts by six ceramic
piezo-electric actuators (C) (two actuators per mount), which serve as hybrid isolation
mounts. The three mounts are attached to a receiver plate (B) and every mount has two
acceleration sensors (D) on top. The objective of the set-up is to investigate if the receiver
plate can be isolated from the source plate by the six hybrid isolation mounts, such that
disturbances induced by the source plate are isolated from the receiver plate. Isolation is established by minimizing the signals from the six acceleration sensor outputs and by steering the six ceramic piezo-electric actuator inputs with a controller [7].
The topic of active vibration isolation control (AVIC) is being researched at the Mechanical Automation group in co-operation with an industrial partner (TNO) and another research group at the Faculty of Engineering Technology at the University of Twente. Former research on AVIC on a one-degree-of-freedom setup has been carried out by [11]. On the six-degrees-of-freedom setup, the AVIC problem with broadband disturbances has been investigated in simulation by [4], and a real-time implementation of the AVIC problem with tonal disturbances was carried out by [12]. The availability of a computationally fast dSPACE system makes it possible to implement a broadband controller on the experimental six-degrees-of-freedom setup. The objective of this thesis can be summarized as follows: design and implement a broadband adaptive feedforward and feedback controller on the six-degrees-of-freedom setup, taking into consideration the need for a stable and computationally efficient algorithm that attains a high performance in combination with a short learning curve.
This thesis is outlined as follows: in chapter 2 the general optimal active control problem will be treated in a feedforward as well as in a feedback configuration. The adaptive counterpart will be treated in chapter 3, where several ways to improve the convergence speed and to decrease the computational load are also described. Special attention will be given to the stability of the adaptive feedback algorithm. In chapter 4 the identification procedure will first be described briefly. Subsequently, the results obtained on the 6-DOF setup will be presented for the various algorithms, analyzed and compared with simulation results. Finally, in chapter 5 conclusions will be drawn and several recommendations will be made.
Chapter 2
Optimal control
2.1 Introduction
The main objective of this report is the design and implementation of an adaptive controller. However, to gain insight into the performance of the adaptive controller, its performance properties will be compared with those of the fixed-gain controller, which functions as a benchmark for the fully converged adaptive controller. The fixed-gain controller can also be used to determine the stability properties of the adaptive controller, especially in a feedback arrangement.

Therefore, in this chapter the design of the fixed-gain controller will be described. First, in section 2.2, the general filter problem will be presented. This general problem will be worked out in the following section for a feedforward arrangement, and subsequently for a feedback arrangement. In both sections the controller will be described in the time domain as well as in the transform domain. Special attention will be given to the stability properties of the controller in a feedback arrangement. A summary concludes this chapter.
2.2 General optimal filter problem
In this section the general optimal filter problem will be described. A schematic overview of the basic filter problem can be seen in figure 2.1. The block diagram consists of the transfer paths P and S. The primary transfer path P is defined as the transfer path from the reference signal x(n) to the disturbance signal d(n). The secondary path S is defined as the transfer from the steering signal u(n) to the output signal y(n).

[Figure 2.1: Block diagram of the general optimal filter problem.]

If the example of the ship is considered, the systems P and S can be physically represented as the ship's engine producing the disturbance vibration and the actuators producing the anti-disturbance signal, respectively.

The systems P and S of the experimental setup are considered linear, stable and time invariant. The time-invariance property implies that the models of the systems can be considered constant. The property of linearity implies that the superposition principle holds, which simplifies the model design significantly. If a discrete model of the system is transformed to the z-domain [2], the stability requirement is guaranteed if the poles of the transformed discrete model lie inside the unit circle of the z-plane. Every physical system is required to be causal. The causality requirement in the time domain implies that the output of the system is not influenced by future input values.

The controller is represented by W, which generates the actuator input u(n) from the reference signal x(n). According to the superposition principle, the residual error measured at time instant n by the sensors is the sum of the output signal y(n) and the disturbance signal d(n):
e(n) = y(n) + d(n) (2.1)
The models obtained of the physical systems, as well as the controller, can be described in several ways. One way is to represent them as a finite impulse response (FIR) filter:

y(n) = \sum_{l=0}^{L-1} h(l) u(n-l)   (2.2)

where the input signal u(n) is filtered by the coefficients h(l) of the filter. The coefficients of the FIR filter can be seen as the response of the system to a unit impulse input. Equation 2.2 can alternatively be written using the unit delay operator q^{-1}, which is defined as follows:

q^{-l} u(n) = u(n-l)   (2.3)

which results in:

y(n) = H(q^{-1}) u(n)   (2.4)
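As a minimal sketch of the FIR filter of equation 2.2 (with arbitrary illustrative coefficients, not a model of the set-up), the fragment below computes the convolution sum directly and checks it against a library convolution:

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])            # FIR coefficients h(0), ..., h(L-1), illustrative
u = np.array([1.0, 0.0, 0.0, 2.0, 1.0])  # input sequence u(n)

# Direct implementation of y(n) = sum_{l=0}^{L-1} h(l) u(n-l), with u(n) = 0 for n < 0.
y = np.zeros_like(u)
for n in range(len(u)):
    for l in range(len(h)):
        if n - l >= 0:
            y[n] += h[l] * u[n - l]

# np.convolve evaluates the same sum; here y = [0.5, 0.3, 0.2, 1.0, 1.1]
assert np.allclose(y, np.convolve(h, u)[:len(u)])
```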
An advantage of a FIR filter is that it is inherently stable, because all its poles lie in the origin. However, if a lightly damped system is described by a FIR filter, the filter may require many coefficients to describe the system sufficiently. Therefore, a system can alternatively be described by means of a state-space model (SSM):

H :   x(n+1) = A x(n) + B u(n),   y(n) = C x(n) + D u(n)   (2.5)
In this way the state of the system at time instant n is described by the state vector x(n). The state matrices A, B, C and D indicate the contribution of the states and the input signal to the future states and the output signal. A state-space description is a numerically robust and computationally efficient way of describing the model of a system [8].
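A minimal simulation sketch of the state-space recursion 2.5, using an illustrative single-state model rather than one identified from the set-up:

```python
import numpy as np

# Illustrative stable model: x(n+1) = A x(n) + B u(n), y(n) = C x(n) + D u(n)
A = np.array([[0.9]])
B = np.array([[1.0]])
C = np.array([[0.5]])
D = np.array([[0.0]])

x = np.zeros((1, 1))          # initial state
u = [1.0, 0.0, 0.0, 0.0]      # unit impulse input
y = []
for n in range(len(u)):
    y.append((C @ x + D * u[n]).item())   # output equation
    x = A @ x + B * u[n]                  # state update

print(y)   # impulse response D, CB, CAB, CA^2 B = [0.0, 0.5, 0.45, 0.405]
```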
Throughout this report, in the time domain H(q^{-1}) denotes a model described as a FIR filter, whereas a state-space description is denoted as H, without the unit delay operator. In the transform domain this distinction will not be made, and the type of model description of H(z) should follow logically from the text. Furthermore, time domain signals are denoted in italic, whereas transform domain signals are denoted in regular type.
Having identified the different signals and systems of block diagram 2.1, the objective is now to find the controller that maximally attenuates the error signal. Ideally, the optimal controller which entirely cancels the error signal would be of the form:

W = -S^{-1} P   (2.6)

However, a physical realization of W can only be given if the controller is stable and causal. When equation 2.6 is used, the resulting controller will likely be unstable due to the non-minimum phase zeros of S.¹ In that case the controller W will also be unstable, provided no zeros of P cancel out the poles of S^{-1} outside the unit circle. Therefore another way of determining the controller is needed. A widely used approach is to base the controller W on the minimisation of a predefined criterion J. A suitable criterion appears to be the expected value of the squared error signal. This criterion can be seen as a measure of the energy contained in the residual error signal [2]:

J = E[e^T(n) e(n)]   (2.7)

Minimisation of this criterion over all possible causal, stable controllers leads to the optimal controller W:

W_opt = arg min_W J(W)   (2.8)

In the following sections an expression for the optimal controller will be derived in the time domain as well as in the transform domain, using criterion 2.8.
2.3 Optimal feedforward control
2.3.1 Introduction
A feedforward arrangement implies that signals upstream in the system are fed forward to the controller to better predict the disturbance signal. In section 2.2 a general way of finding the optimal controller was presented. In this section an expression for the fixed-gain controller in a feedforward arrangement will be derived. First this will be done in the time domain; subsequently the transform domain will be treated. The purpose and effect of regularisation will be discussed in the appropriate subsections.
¹ A system is said to be minimum phase if all its poles and zeros lie within the unit circle.
2.3.2 Time domain controller
A general schematic overview of the optimal feedforward controller can be seen in figure 2.2. In this figure x(n) ∈ R^K, u(n) ∈ R^M and d(n), y(n), e(n) ∈ R^L represent the reference, steering, disturbance, anti-disturbance and error signals respectively. The primary path P has K inputs and L outputs, the secondary path S has M inputs and L outputs, and the controller W(q^{-1}) has K inputs and M outputs, where M ≥ K and L ≥ M.

[Figure 2.2: Block diagram of the feedforward configuration with optimal control.]

To derive the time domain controller according to the defined criterion 2.7, we will assume
that the controller can be represented by a FIR filter. As was mentioned in the previous subsection, the output of the controller u(n) will be a weighted sum of a finite number of present and previous input values. The transfer of the k-th input signal x_k to the m-th output signal u_m can be written as:

u_m(n) = W_{m,k}(q^{-1}) x_k(n)   (2.9)

where the k-th to m-th FIR filter is described by I coefficients:

W_{m,k}(q^{-1}) = [w_{m,k}^{(0)}, w_{m,k}^{(1)} q^{-1}, \ldots, w_{m,k}^{(i)} q^{-i}, \ldots, w_{m,k}^{(I-1)} q^{-I+1}]   (2.10)

The coefficients w_{m,k}^{(i)} can be stacked as follows:

w_{m,k} = [w_{m,k}^{(0)}, w_{m,k}^{(1)}, \ldots, w_{m,k}^{(i)}, \ldots, w_{m,k}^{(I-1)}] ∈ R^{1 × I}   (2.11)

w_m = [w_{m,1}, w_{m,2}, \ldots, w_{m,k}, \ldots, w_{m,K}] ∈ R^{1 × KI}   (2.12)

w = [w_1, w_2, \ldots, w_m, \ldots, w_M]^T ∈ R^{MKI × 1}   (2.13)

where w_{m,k}^{(i)} denotes the i-th filter coefficient of the FIR filter from input x_k to output u_m. Subsequently, a notation for the matrix of regression vectors X will be introduced:

x_k(n) = [x_k(n), x_k(n-1), \ldots, x_k(n-i), \ldots, x_k(n-I+1)] ∈ R^{1 × I}   (2.14)

x(n) = [x_1(n), x_2(n), \ldots, x_k(n), \ldots, x_K(n)]^T ∈ R^{KI × 1}   (2.15)

X(n) = \begin{bmatrix} x(n) & 0 & \cdots & 0 \\ 0 & x(n) & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & x(n) \end{bmatrix}^T ∈ R^{M × MKI}   (2.16)
The vector of delayed input signals x(n) is called the regression vector. The anti-disturbance signal y(n) can be calculated as follows:

y(n) = S W(q^{-1}) x(n)   (2.17)
     = S [X(n) w]   (2.18)

Assuming time-invariant models, the multiplication order of S and W(q^{-1}) may be interchanged:

y(n) = S [I_M ⊗ x^T(n)] w   (2.19)

where ⊗ denotes the Kronecker tensor product [5], and

R(n) = S [I_M ⊗ x^T(n)] ∈ R^{L × MKI}   (2.20)

is the matrix with past reference signals x(n) filtered by the secondary path S. It consists of L·M·K sequences of filtered reference signals r_{l,mk}:

r_{l,mk}(n) = S_{lm} x_k(n)   (2.21)

Using equation 2.19, the error signal can now be expressed in terms of the filtered reference signals:

e(n) = R(n) w + d(n)   (2.22)
where R(n)w denotes a matrix-vector product. Subsequently, the criterion J can be expressed in terms of the coefficients of the controller:

J = E[e^T(n) e(n)] = E[(w^T R^T(n) + d^T(n)) (R(n) w + d(n))]
  = w^T E[R^T(n) R(n)] w + 2 w^T E[R^T(n) d(n)] + E[d^T(n) d(n)]   (2.23)

This is a quadratic expression in each of the FIR filter coefficients w_{m,k}^{(i)} of the controller, and it is minimized with respect to each of the filter coefficients by setting the derivative of the criterion J to the corresponding coefficient to zero:

∂J / ∂w_{m,k}^{(i)} = 0   (2.24)

which leads to the following expression for the optimal controller in the time domain:

w_opt = -E[R^T(n) R(n)]^{-1} E[R^T(n) d(n)]   (2.25)

The matrix E[R^T(n) R(n)] is the autocorrelation matrix of the filtered reference signals and may be written as R_rr. If this autocorrelation matrix is positive definite, the minimisation of equation 2.23 leads to a unique global minimum. The autocorrelation matrix will be positive definite if the filtered reference signal persistently excites the control filter W. This means that the signal should have at least half as many spectral components as there are filter coefficients. The cross-correlation matrix between the filtered reference signal and the disturbance signal can be written as R_rd, which leads, compared to equation 2.25, to the compact notation:

w_opt = -R_rr^{-1} R_rd   (2.26)

where R_rr and R_rd are defined as:

R_rr = E[R^T(n) R(n)]   (2.27)

R_rd = E[R^T(n) d(n)]   (2.28)
Regularisation
Criterion 2.7 is optimal in the sense that it minimises the energy of the error signal; it does not take into account the energy of the steering signals. In practical situations this steering signal can be limited for various reasons: the actuator output can be restricted to a maximum output level, and, to comply with the linearity assumption, the displacements caused by the actuators can be limited. In general it is therefore often desirable to modify the cost function by adding an effort weighting term to the error weighting term. Such a cost function can be defined as follows:

J = E[e^T(n) e(n) + β u^T(n) u(n)]   (2.29)

However, using this cost function to determine the filter coefficients in an adaptive way turns out to be computationally rather inefficient, as will be mentioned in the next chapter. Using the fixed-gain solution only for analysis purposes, the following cost function will be introduced:

J = E[e^T(n) e(n)] + β w^T w   (2.30)

This cost function additionally weighs the squared filter coefficients, multiplied by a weighting factor β. The factor β determines how strongly the coefficients are weighed compared to the error signal. If the reference signal is a white noise sequence, cost functions 2.29 and 2.30 can be considered the same. Combining equations 2.30 and 2.23 leads to the following expression for the optimal filter coefficients:

w_opt = -{R_rr + β I}^{-1} R_rd   (2.31)

The new optimal solution can simply be computed by adding a term β to the main diagonal of the autocorrelation matrix. Another advantage of this regularized solution is that adding a small term to the main diagonal increases the eigenvalues, and therefore makes a potentially poorly conditioned matrix easier to invert. The performance of the regularized solution will be less than that of the optimal solution without regularization. It turns out, however, that the introduction of a small value β has only a slight impact on the performance of the controller, while a significant reduction in effort energy can be obtained. Furthermore, it turns out that regularization of the filter coefficients also has a positive influence on the stability and convergence properties of the adaptive solution, as will be discussed in the next chapter.
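The effect of β in solution 2.31 can be sketched with the same kind of invented toy problem used above: a small β shrinks the coefficient (and hence effort) energy at the cost of a slightly larger residual.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10000
x = rng.standard_normal(N)                 # white reference signal
d = np.zeros(N)
d[1:] = 0.8 * x[:-1]                       # illustrative primary path

R = np.column_stack([x, np.concatenate([[0.0], x[:-1]])])
Rrr = R.T @ R / N
Rrd = R.T @ d / N

def w_reg(beta):
    # Equation 2.31: w_opt = -(Rrr + beta I)^{-1} Rrd
    return -np.linalg.solve(Rrr + beta * np.eye(len(Rrr)), Rrd)

w0, w1 = w_reg(0.0), w_reg(0.1)

# Regularization reduces the coefficient energy ...
assert w1 @ w1 < w0 @ w0
# ... while the residual error energy increases only slightly.
e0, e1 = R @ w0 + d, R @ w1 + d
assert np.mean(e0**2) <= np.mean(e1**2)
```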
2.3.3 Frequency domain controller
In the previous subsections the fixed-gain controller was derived in the time domain. In the following subsections an expression for the fixed-gain controller in the frequency domain will be given. The minimization of the cost function in the time domain can be viewed as the minimization of the H2-norm of the system in the frequency domain. To illustrate this, the cost function is first represented in the frequency domain by use of Parseval's theorem:

J = E[e^T(n) e(n)] = tr E[e(n) e^T(n)]   (2.32)
  = \frac{T}{2\pi} tr \int_{-\pi/T}^{\pi/T} S_{ee}(e^{jωT}) dω   (2.33)

where T denotes the sample time and S_{ee}(e^{jωT}) denotes the power spectral density of the error signal:

S_{ee}(e^{jωT}) = E[e(e^{jωT}) e^T(e^{-jωT})]
= E[(d(e^{jωT}) + y(e^{jωT})) (d(e^{-jωT}) + y(e^{-jωT}))^T]
= E[(P(e^{jωT}) x(e^{jωT}) + S(e^{jωT}) W(e^{jωT}) x(e^{jωT})) (P(e^{-jωT}) x(e^{-jωT}) + S(e^{-jωT}) W(e^{-jωT}) x(e^{-jωT}))^T]
= (P(e^{jωT}) + S(e^{jωT}) W(e^{jωT})) E[x(e^{jωT}) x^T(e^{-jωT})] (P^T(e^{-jωT}) + W^T(e^{-jωT}) S^T(e^{-jωT}))   (2.34)

If the reference signal is a white noise sequence with unit variance, then the power spectral density of the reference signal is equal to unity:

E[x(e^{jωT}) x^T(e^{-jωT})] = I   (2.35)

By defining the H2-norm of a system H(z) as:

‖H(z)‖_2 = \sqrt{ \frac{T}{2\pi} tr \int_{-\pi/T}^{\pi/T} H(e^{jωT}) H^T(e^{-jωT}) dω }   (2.36)

the cost function 2.7 can alternatively be written as the square of the H2-norm of the system P(z) + S(z)W(z):

E[e^T(n) e(n)] = \frac{T}{2\pi} tr \int_{-\pi/T}^{\pi/T} (P(e^{jωT}) + S(e^{jωT}) W(e^{jωT})) (P^T(e^{-jωT}) + W^T(e^{-jωT}) S^T(e^{-jωT})) dω   (2.37)
= ‖P(z) + S(z) W(z)‖_2^2   (2.38)

Because solving the optimization problem according to 2.7 is equivalent to minimizing the squared H2-norm of the system, the minimization problem can now be stated as follows:

W(z) = arg min_W ‖P(z) + S(z) W(z)‖_2^2   (2.39)
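By Parseval's theorem, the squared H2-norm equals the energy of the impulse response, which gives a quick numerical check of this equivalence for an illustrative first-order system H(z) = 0.5/(1 - 0.9 z^{-1}) (sample time normalized to T = 1; not a model from the set-up):

```python
import numpy as np

# Time domain: ||H||_2^2 = sum_n h(n)^2 with impulse response h(n) = 0.5 * 0.9^n
n = np.arange(2000)
h = 0.5 * 0.9**n
h2_time = np.sum(h**2)                    # analytically 0.25 / (1 - 0.81)

# Frequency domain: (1/2pi) int_{-pi}^{pi} |H(e^jw)|^2 dw, as a Riemann sum
w = np.linspace(-np.pi, np.pi, 100000, endpoint=False)
H = 0.5 / (1 - 0.9 * np.exp(-1j * w))
h2_freq = np.mean(np.abs(H)**2)

print(h2_time, h2_freq)   # both close to 0.25/0.19 ≈ 1.3158
```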
The derivation of the solution, which is known as the model-based causal Wiener solution, can be found in [3]; the solution is given directly:

W(z) = -S_o^{-1}(z) \left\{ S_i^{T}(z^{-1}) P(z) \right\}_{+}   (2.40)

The solution is obtained by performing an inner-outer factorization on the secondary path. The inner-outer factorization is defined in appendix A.1. Because the outer factor has the property that it is minimum phase, it has a stable inverse. The anticausal 2 transposed system, also known as the adjoint system, of the causal inner factor is denoted by S_i^{T}(z^{-1}). The causality operator is denoted by {}_+, which is defined as taking the causal part of the total system between the brackets. The advantage of this solution, in contrast to the solution described by correlation matrices, is that a purely model-based solution is obtained. The latter has the disadvantage that an autocorrelation matrix, with potentially huge dimensions, has to be inverted. Another disadvantage is that an error is made by using a finite data length to calculate the correlation matrix. However, in order to obtain the model-based solution, models of the primary and secondary path must be available.
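The causality operator {}_+ can be illustrated on a finite, two-sided impulse response: all taps at negative time indices (the anticausal part) are discarded. A minimal sketch in Python; the helper name causal_part is ours, not from the text:

```python
import numpy as np

def causal_part(h, k0):
    """Return the causal part {H(z)}_+ of a two-sided impulse response.

    h  : taps h[k0], h[k0+1], ... as a 1-D sequence
    k0 : time index of the first tap (negative means anticausal taps exist)
    """
    h = np.asarray(h, dtype=float)
    if k0 < 0:
        return h[-k0:], 0     # drop every tap with time index k < 0
    return h, k0              # already causal: nothing to discard

# Two-sided example sequence with taps at k = -2, -1, 0, 1, 2
h_plus, k_start = causal_part([0.1, 0.3, 1.0, 0.5, 0.2], -2)
# {H(z)}_+ keeps only the taps at k = 0, 1, 2
```

In equation 2.40 the anticausal taps come from the adjoint inner factor; the operator simply truncates them before the outer-factor inverse is applied.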
Regularisation
To regularize the solution, the cost function can be extended as in equation 2.30:

J = E\left[ e^{T}(n) e(n) + \beta\, u^{T}(n) u(n) \right]   (2.41)
  = \frac{1}{2\pi} \int_{-\pi}^{\pi} \mathrm{tr}\left\{ \left[ P(e^{j\omega T}) + S(e^{j\omega T}) W(e^{j\omega T}) \right] \left[ P^{T}(e^{-j\omega T}) + W^{T}(e^{-j\omega T}) S^{T}(e^{-j\omega T}) \right] + \sqrt{\beta} I_M\, W(e^{j\omega T}) W^{T}(e^{-j\omega T})\, \sqrt{\beta} I_M \right\} d\omega T   (2.42)
This can be written, using the H2-norm, as:

W(z) = \arg\min_{W} \| P_{aug}(z) + S_{aug}(z) W(z) \|_2^2   (2.43)

with:

P_{aug}(z) = \begin{bmatrix} P(z) \\ 0_{M \times K} \end{bmatrix}, \qquad S_{aug}(z) = \begin{bmatrix} S(z) \\ \sqrt{\beta}\, I_{M \times M} \end{bmatrix}   (2.44)

the augmented (L + M) x K primary and augmented (L + M) x M secondary path. The structure of the optimization problem 2.43 is now the same as that of the general optimization problem shown in equation 2.39. Therefore the regularized transform-domain fixed-gain solution can be written directly using the general structure of the solution of the general optimization problem:

W(z) = -S_{aug,o}^{-1}(z) \left\{ S_{aug,i}^{T}(z^{-1}) P_{aug}(z) \right\}_{+}   (2.45)
2 A system H(z^{-1}) is said to be anticausal if its output depends on future input values.
The above equation can be simplified using the fact that the inner factor of the augmented secondary path can be split into two parts:

S_{aug,i}(z) = \begin{bmatrix} S_{aug,i,1}(z) \\ S_{aug,i,2}(z) \end{bmatrix}   (2.46)

with S_{aug,i,1}(z) an L x M system and S_{aug,i,2}(z) an M x M system. The product inside the causality operator can now be rewritten as:

\begin{bmatrix} S_{aug,i,1}^{T}(z^{-1}) & S_{aug,i,2}^{T}(z^{-1}) \end{bmatrix} \begin{bmatrix} P(z) \\ 0_{M \times K} \end{bmatrix} = S_{aug,i,1}^{T}(z^{-1}) P(z)   (2.47)
This shows that the primary path does not have to be augmented, which leads to a simplified and computationally more efficient way of determining the regularized controller:

W(z) = -S_{aug,o}^{-1}(z) \left\{ S_{aug,i,1}^{T}(z^{-1}) P(z) \right\}_{+}   (2.48)
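The effect of the augmentation in equation 2.44 can be checked numerically at a single frequency: stacking P over a zero block and S over the scaled identity makes the squared Frobenius norm of P_aug + S_aug W split exactly into the error term and the regularization term. A small sketch with random matrices (channel counts and weight chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
L, M, K, beta = 3, 2, 2, 0.1    # arbitrary channel counts and weight

# Frequency responses at a single frequency (random complex matrices)
P = rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K))
S = rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M))
W = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))

# Augmented primary and secondary paths, equation 2.44
P_aug = np.vstack([P, np.zeros((M, K))])
S_aug = np.vstack([S, np.sqrt(beta) * np.eye(M)])

lhs = np.linalg.norm(P_aug + S_aug @ W, "fro") ** 2
rhs = np.linalg.norm(P + S @ W, "fro") ** 2 + beta * np.linalg.norm(W, "fro") ** 2
# lhs equals rhs: the augmentation carries the regularisation term
```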
2.4 Optimal feedback control
2.4.1 Introduction
In the previous section the fixed-gain feedforward controller was derived. It was assumed that the reference signal is known a priori. However, if this knowledge is not available, the controller can be arranged in a feedback configuration. By using the principle of internal model control (IMC) an estimate of the disturbance signal can be obtained, which drives the controller like a newly obtained reference signal. The principle of IMC will be discussed in the next section. Following the same structure as in the previous section, expressions for the fixed-gain controller in the time domain and frequency domain will subsequently be derived. Because a feedback structure does not have the property of being inherently stable, as was the case for a feedforward structure, a special section will cover the stability properties of the feedback solution. In this section the regularized solutions will also be treated.
2.4.2 Internal model control
A blockdiagram of a feedback configuration using IMC is shown in figure 2.3.In this configuration the disturbance signal d(n) is estimated by substracting an esti-
mation of the output signal y(n) from the error signal. This estimated output signal is
obtained by filtering the steering signal u(n) by the internal model S which is an estima-
tion of the real secondary path. By this way an estimation of the disturbance signal is
achieved:
y(n) = Su(n) (2.49)
d(n) = e(n) y(n) (2.50)This estimated disturbance signal functions as a newly obtained reference signal and is
used as the input for the controller W(q1). By defining the feedback path H as shown
Figure 2.3: Block diagram of the feedback configuration in an IMC arrangement
in figure 2.3, the feedback arrangement can be represented in the simplified form shown in figure 2.4, where the transfer function from the error signal to the steering signal is defined as:

H(z) = W(z) \left[ I + \hat{S}(z) W(z) \right]^{-1}   (2.51)

Figure 2.4: Simplified block diagram of the feedback configuration in an IMC arrangement

Using expression 2.51, the resulting sensitivity function G(z) of this arrangement is given by:

G(z) = \frac{e(z)}{d(z)} = \left[ I + \hat{S}(z) W(z) \right] \left[ I - \left( S(z) - \hat{S}(z) \right) W(z) \right]^{-1}   (2.52)
If perfect plant knowledge is assumed (Ŝ = S), the sensitivity function reduces to a form which is linear in the filter coefficients:

G(z) = I + S(z) W(z)   (2.53)

and criterion 2.7, which is quadratic in the filter coefficients, can be applied. Minimization of the criterion with respect to each of its coefficients leads to a global minimum, giving the coefficients
of the optimal controller according to criterion 2.7. So using IMC and assuming perfect plant knowledge leads to an equivalent feedforward minimization problem, which is also shown in figure 2.5: the block diagram of the feedback system is now given by an entirely feedforward structure.

Figure 2.5: Block diagram of feedback configuration with perfect plant knowledge assumed
The strategy of using IMC and assuming perfect plant knowledge will be applied in the
next sections to calculate the optimal feedback controller.
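A minimal sketch of the IMC estimation loop of equations 2.49 and 2.50, for a SISO case with an arbitrary FIR controller and a perfect internal model (all signals and filter values below are made up for illustration). With Ŝ = S the estimated disturbance d̂(n) recovers d(n) exactly, which is what makes the equivalent feedforward view possible:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samp = 400
s = np.array([0.0, 0.8, 0.3])     # secondary path FIR (one sample delay)
w = np.array([-0.5, 0.2])         # some fixed controller FIR
d = rng.standard_normal(n_samp)   # disturbance at the error sensor

u = np.zeros(n_samp)              # steering signal
d_hat = np.zeros(n_samp)          # estimated disturbance
e = np.zeros(n_samp)

for n in range(n_samp):
    # true plant output y(n) = (s * u)(n)
    y = sum(s[k] * u[n - k] for k in range(len(s)) if n - k >= 0)
    e[n] = d[n] + y
    y_hat = y                     # internal model output, here exact (eq. 2.49)
    d_hat[n] = e[n] - y_hat       # estimated disturbance (eq. 2.50)
    if n + 1 < n_samp:            # controller acts on the estimate, one step later
        u[n + 1] = sum(w[k] * d_hat[n - k] for k in range(len(w)) if n - k >= 0)
```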
2.4.3 Time domain controller
Using the IMC arrangement discussed in the previous subsection, the optimal feedback time-domain controller is derived in the same way as discussed in section 2.3.2. Assuming perfect plant knowledge, the estimated disturbance signal equals the original disturbance signal, which plays the role of the reference signal in the feedforward case. The matrix R(n) is now obtained from the Kronecker tensor product between the secondary path S and the disturbance signal d(n):

d_l(n) = \left[ d_l(n), d_l(n-1), \ldots, d_l(n-i), \ldots, d_l(n-I) \right] \in \mathbb{R}^{1 \times I}   (2.54)
d(n) = \left[ d_1(n), d_2(n), \ldots, d_l(n), \ldots, d_L(n) \right]^{T} \in \mathbb{R}^{LI \times 1}   (2.55)
R(n) = S \otimes d^{T}(n) \in \mathbb{R}^{L \times MLI}   (2.56)

Having specified the matrix R(n), the optimal time-domain controller, consisting of the L · M FIR filters, can be obtained using equation 2.25:

W_{opt} = -\left( E\left[ R^{T}(n) R(n) \right] \right)^{-1} E\left[ R^{T}(n) d(n) \right]   (2.57)
If perfect plant knowledge may not be assumed, the controller designed above may be suboptimal. A better controller may then be derived by using the estimated disturbance signal, which actually acts as the input of the controller, instead of the real disturbance signal. However, by designing the optimal controller using the estimated disturbance, the dependence of the estimated disturbance on the coefficients is ignored in order to obtain a quadratic minimization problem. It is therefore not guaranteed to give a better performance than the controller obtained assuming perfect plant knowledge.
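For a SISO case, equation 2.57 reduces to an ordinary least-squares problem on the reference signal filtered by the secondary path. A sketch under that simplification (the paths p and s below are arbitrary FIR examples, sharing a one-sample delay so that the disturbance is cancellable):

```python
import numpy as np

rng = np.random.default_rng(2)
N, I = 5000, 8
x = rng.standard_normal(N)              # white signal playing the role of d_hat
p = np.array([0.0, 1.0, -0.4, 0.1])     # primary path FIR (delayed)
s = np.array([0.0, 0.9, 0.2])           # secondary path FIR (delayed, min. phase)

d = np.convolve(x, p)[:N]               # disturbance signal
r = np.convolve(x, s)[:N]               # filtered reference r = S x

# Data matrix: row n holds [r(n), r(n-1), ..., r(n-I+1)]
R = np.column_stack([np.concatenate([np.zeros(i), r[:N - i]]) for i in range(I)])

# SISO form of equation 2.57: w = -(R^T R)^{-1} R^T d
w = -np.linalg.solve(R.T @ R, R.T @ d)

e = d + R @ w                           # residual error
reduction = np.mean(e ** 2) / np.mean(d ** 2)
```

Because the secondary path here is minimum phase apart from its delay, an 8-tap controller already reduces the error power by several orders of magnitude.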
2.4.4 Transform domain controller
Also a transform domain solution can be obtained using the internal model control implementation and assuming perfect plant knowledge. Referring to cost function 2.33, the
power spectral density function of the error signal can be written as:

S_{ee}(e^{j\omega T}) = E\left[ e(e^{j\omega T})\, e^{T}(e^{-j\omega T}) \right]
  = E\left\{ \left[ d(e^{j\omega T}) + y(e^{j\omega T}) \right] \left[ d(e^{-j\omega T}) + y(e^{-j\omega T}) \right]^{T} \right\}
  = \left[ P(e^{j\omega T}) + S(e^{j\omega T}) W(e^{j\omega T}) P(e^{j\omega T}) \right] E\left[ x(e^{j\omega T})\, x^{T}(e^{-j\omega T}) \right] \left[ P^{T}(e^{-j\omega T}) + P^{T}(e^{-j\omega T}) W^{T}(e^{-j\omega T}) S^{T}(e^{-j\omega T}) \right]   (2.58)

The problem can now be restated as minimizing the following H2-norm:

W(z) = \arg\min_{W} \| P(z) + S(z) W(z) P(z) \|_2^2   (2.59)
which has the following solution [3]:

W(z) = -S_o^{-1}(z) \left\{ S_i^{T}(z^{-1}) P(z) P_{ci}^{T}(z^{-1}) \right\}_{+} P_{co}^{\dagger}(z)   (2.60)

where † denotes the left inverse operation, and P_{co} and P_{ci} are the co-outer and co-inner factor of the primary path respectively, as explained in appendix A.1. Noting that:

P(z) P_{ci}^{T}(z^{-1}) = P_{co}(z) P_{ci}(z) P_{ci}^{T}(z^{-1}) = P_{co}(z)   (2.61)

expression 2.60 can be simplified, leading to the following solution for the optimal Wiener feedback controller:

W(z) = -S_o^{-1}(z) \left\{ S_i^{T}(z^{-1}) P_{co}(z) \right\}_{+} P_{co}^{\dagger}(z)   (2.62)
2.4.5 Stability
By assuming perfect plant knowledge, the feedback loop is completely ignored. In fact this feedback loop is still present, as can be seen in the block diagram of figure 2.6, which is just a rearranged version of the block diagram shown in figure 2.3.

Figure 2.6: Block diagram of the IMC arrangement with internal feedback loop present

In this figure the loop transfer path can be defined as:

L(z) = W(z) \left[ S(z) - \hat{S}(z) \right]   (2.63)
In order to have a stable feedback transfer path it is required that the loop gain, which may be expressed by the infinity-norm of the loop transfer path, is less than unity:

\| L(e^{j\omega T}) \| < 1 \quad \text{for all } \omega T   (2.64)

Intuitively it may be seen that the gain of the controller should be reduced at frequencies where the loop gain is high, in order to stabilize an otherwise unstable feedback system. To reduce the peak values of the controller gain, the method of regularization can be applied in the same way as mentioned in the section on feedforward control. The time-domain regularized solution is then obtained using equation 2.33, using the estimated filtered disturbance signal as defined in 2.56 instead of the filtered reference signal.
To obtain the transform-domain solution, one could refer to equations 2.45 and 2.58. Using Parseval's theorem and combining the referred equations leads to the following expression for the cost function in the frequency domain:

J = E\left[ e^{T}(n) e(n) + \beta\, u^{T}(n) u(n) \right]   (2.65)
  = \frac{1}{2\pi} \int_{-\pi}^{\pi} \mathrm{tr}\left\{ \left[ P + SWP \right] \left[ P + SWP \right]^{*} + \sqrt{\beta} I_M\, W W^{*} \sqrt{\beta} I_M \right\} d\omega T   (2.66)

where the argument (e^{j\omega T}) has been omitted for clarity and * denotes the complex conjugate transpose. Because the controller W is prefactorized with the primary path in the first term, the two terms in the integral cannot be combined. Therefore the transform-domain cost function cannot be obtained in the same general form as derived in equation 2.38, and the solution cannot easily be generalized. For analysis purposes only the time-domain regularized solution will be used.
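The small-gain condition 2.64 can be checked numerically by evaluating the loop transfer path of equation 2.63 on a frequency grid and taking the largest magnitude. A SISO sketch with arbitrary example filters:

```python
import numpy as np

w_fir = np.array([0.4, -0.2, 0.1])      # controller W (example FIR taps)
s_true = np.array([0.0, 0.9, 0.25])     # true secondary path S
s_model = np.array([0.0, 0.85, 0.2])    # internal model (slightly wrong)

omega = np.linspace(0.0, np.pi, 512)    # normalised frequency grid
z = np.exp(1j * omega)

def freq_resp(h):
    """Frequency response of an FIR filter h on the grid z."""
    return sum(hk * z ** (-k) for k, hk in enumerate(h))

# Loop transfer path L(z) = W(z)[S(z) - S_model(z)], equation 2.63
loop = freq_resp(w_fir) * (freq_resp(s_true) - freq_resp(s_model))
loop_gain = np.max(np.abs(loop))        # infinity-norm over the grid
stable = bool(loop_gain < 1.0)          # small-gain condition 2.64
```

Because the model error is small here, the loop gain stays far below unity and the IMC arrangement is guaranteed stable for this controller.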
2.5 Summary
In this chapter a general method was presented to attenuate unwanted vibrations. This method was mainly based on the design of a time-independent controller, which used the minimization of the energy contained in the error signal to determine the optimal controller. A static time-domain controller, commonly denoted in the literature as the class of Wiener controllers, was derived using statistical properties of the reference and measured signals. By reformulating the minimization of a criterion as the minimization of an H2-norm, a transform-domain solution could be derived. After the feedforward solution was derived, a method was described to reduce the steering signals, which are often constrained to a certain maximum level. Subsequently the feedback problem was dealt with. By using an internal model arrangement, it was shown that the feedback problem can be treated as an equivalent feedforward problem, provided perfect plant knowledge is available. The time- and transform-domain feedback controllers were therefore derived using standard feedforward control laws. Finally it was shown how imperfect plant knowledge may lead to stability problems. With regularization, a method was presented to improve the stability of the feedback system.
Chapter 3
Adaptive control
3.1 Introduction
The fixed-gain Wiener controller derived in the previous chapter is optimized only for input signals whose statistical properties are stationary, and can become less optimal if these properties change over time. This change can be caused by a slowly varying primary path. A change in the statistics of the reference signal may also result in a less optimal controller. Under these conditions an adaptive controller may lead to a better reduction of the disturbance signal, because it has the ability to adapt towards a more optimal controller when the correlation properties of the input signals change over time. It requires, however, the calculation of a new set of control filters every sample period, and therefore an important design criterion will be the efficiency of the algorithm. Another important requirement of an adaptive controller is that it has to converge to a stable controller, where fast convergence is desirable so that the disturbance signal may be attenuated over a short time period. The actuators are driven by the output of the controller, yet the output energy of the actuators available to attenuate the disturbance signal is often restricted. This leads to the requirement that the adaptive controller should also be efficient in the sense that maximal reduction of the disturbance energy is achieved with a minimal amount of control effort. These demands lead to the design of an adaptive controller, arranged in a feedforward or a feedback structure, which is the subject of this chapter.
The organisation of this chapter is as follows. First, a basic structure for an adaptive controller will be laid out, and principles to determine the stability and convergence speed will be discussed. Subsequently the adaptive feedforward controller will be derived, starting with the FxLMS algorithm. It will be shown how the computational efficiency can be enhanced, together with an increase in convergence speed, by using the principle of inner-outer factorization. In the following subsection it will be shown how the robustness to model uncertainties can be enhanced by using regularization. In the final subsection the adjoint LMS algorithm is introduced, leading to a further decrease of the computational load. In the following section the adaptive controller in feedback arrangement is derived. The design can largely be generalized from the feedforward arrangement. Yet, referring to the design of the fixed-gain feedback controller, special attention will be given to the stability requirements to
Figure 3.1: Block diagram of the general adaptive filter problem
guarantee a stable convergence of the adaptive controller. A summary will conclude this
chapter.
3.2 Adaptive feedforward control
3.2.1 Introduction
In this section first the general feedforward adaptive problem will be discussed, and the constraints to guarantee a stable adaptive controller will be mentioned. Subsequently several methods will be introduced to increase the convergence speed and to reduce the computational load. Finally a way to enhance the stability will be presented, which simultaneously reduces the maximum actuator outputs.
3.2.2 General LMS control problem
In figure 3.1 the basic block diagram of an adaptive control problem is shown. The reference signal x(n) is fed through the adaptive controller W(q^{-1}, n) in order to produce an anti-disturbance signal which is used to attenuate the disturbance signal d(n). The residual error signal may then be written as:

e(n) = W(q^{-1}, n)\, x(n) + d(n)   (3.1)

where W(q^{-1}, n) is described using an FIR filter structure.

In the previous chapter it was mentioned that the optimal fixed-gain controller described by an FIR structure could be derived by minimizing a certain cost function J with respect to the coefficients of the control filter. The coefficients of the optimal control filter could then be obtained by setting the derivative of the cost function with respect to the coefficients to zero:

\frac{\partial J}{\partial \mathbf{w}} = 0   (3.2)
Figure 3.2: Graphical representation of the MSE as a function of the filter coefficients.
provided the cost function is a quadratic function of the coefficients. In adaptive control essentially the same procedure takes place, except that the minimum of the cost function is not determined at once, but is instead determined iteratively by letting the control filter coefficients change each sample period in the direction of the global minimum of the cost function. The size and direction in which the coefficients adapt are then determined by the negative derivative of the cost function with respect to the coefficients. This type of adaptive gradient algorithm is commonly known as a steepest descent algorithm, because the coefficients are updated in the direction of the steepest descent of the cost function. When the cost function is plotted against the filter coefficients, considering only two coefficients, this may be visualized as in figure 3.2. The adaptation process may then be imagined as a path winding over the surface, from a certain initial position to a certain region around the point defined by the set of coefficients where the cost function has its minimum value. When a quadratic function is used, the negative derivative points towards the global minimum of the cost function. The set of new control filter coefficients can then be defined by the following update step:

\mathbf{w}(\text{new}) = \mathbf{w}(\text{old}) - \frac{\alpha}{2} \frac{\partial J}{\partial \mathbf{w}}   (3.3)

where α denotes the stepsize, which is used to adjust the speed of the adaptation process. Using the mean square error given by equation 2.7 as the cost function, the general update equation can be written as a function of the reference and error signals:

\mathbf{w}(\text{new}) = \mathbf{w}(\text{old}) - \alpha\, E\left[ X^{T}(n) e(n) \right]   (3.4)

Because of the expectation operator involved, the computational load of the calculation of the derivative in the update equation may be rather large. Therefore the use of an instantaneous estimate of the derivative is proposed. The update equation for the control filter coefficients can then be given as:

\mathbf{w}(\text{new}) = \mathbf{w}(\text{old}) - \alpha\, X^{T}(n) e(n)   (3.5)
The adaptation algorithm associated with this update equation is simple and numerically robust, and is commonly known as the least mean squares (LMS) algorithm. Compared to an actual steepest descent method it differs in that the instantaneous gradient may deviate from the gradient according to the MSE criterion, so the path over the error surface may differ from the one obtained using the MSE criterion. However, it can be shown that when the adaptation process takes place slowly over time, i.e. when the change in the filter coefficients is small during the time the filter's impulse response needs to decay sufficiently, the coefficients of the adaptive filter converge in the mean to the coefficients of the optimal Wiener solution [2].
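A minimal SISO sketch of the LMS recursion of equations 3.1 and 3.5 (signal lengths, stepsize and the path p below are arbitrary choices). The adaptive filter converges towards w_opt = -p, and the residual error decays to a small value:

```python
import numpy as np

rng = np.random.default_rng(3)
N, I, mu = 20000, 4, 0.01
x = rng.standard_normal(N)              # white reference signal
p = np.array([0.5, -0.3, 0.2, 0.1])     # path shaping the disturbance
d = np.convolve(x, p)[:N]               # disturbance d(n)

w = np.zeros(I)                         # adaptive FIR coefficients
e = np.zeros(N)
for n in range(I, N):
    xn = x[n - np.arange(I)]            # [x(n), x(n-1), ..., x(n-I+1)]
    e[n] = w @ xn + d[n]                # residual error, equation 3.1
    w -= mu * xn * e[n]                 # LMS update, equation 3.5

final_mse = np.mean(e[-2000:] ** 2)     # converged: w is close to -p
```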
Stability condition
The stability of the LMS algorithm can be conveniently analyzed by considering the averaged behaviour of the algorithm. This is expressed by taking the mean value of the different terms of the update step of the algorithm over a number of trials:

E[\mathbf{w}(n+1)] = E[\mathbf{w}(n)] - \alpha E\left[ X^{T}(n) d(n) \right] - \alpha E\left[ X^{T}(n) X(n) \mathbf{w}(n) \right]   (3.6)

When the reference signal is considered statistically independent of the filter coefficients, the last term may be split into two factors:

E\left[ X^{T}(n) X(n) \mathbf{w}(n) \right] = E\left[ X^{T}(n) X(n) \right] E[\mathbf{w}(n)]   (3.7)

This independence assumption is only valid for a slowly varying filter, i.e. the coefficients of the filter can be considered constant over the length of the filter. By defining the normalized coefficients as:

\tilde{\mathbf{w}}(n) = E[\mathbf{w}(n)] - \mathbf{w}_{opt}   (3.8)

and recalling the expression 2.26 for the optimal filter coefficients, where R_{xx} is defined as the autocorrelation matrix of the reference signal, equation 3.6 can be substituted into equation 3.8, yielding:

\tilde{\mathbf{w}}(n+1) = \left[ I - \alpha R_{xx} \right] \tilde{\mathbf{w}}(n)   (3.9)

which represents a set of coupled equations describing the evolution of the normalized filter coefficients over time. By using an eigenvalue decomposition of the autocorrelation matrix:

R_{xx} = Q \Lambda Q^{T}   (3.10)

equation 3.9 can be written as:

\mathbf{v}(n+1) = \left[ I - \alpha \Lambda \right] \mathbf{v}(n)   (3.11)

forming a set of I independent equations:

v_i(n+1) = (1 - \alpha \lambda_i)\, v_i(n)   (3.12)
where the ith normalized, averaged, rotated filter coefficient v_i(n) is defined through:

\mathbf{v}(n) = Q^{T} \left[ E[\mathbf{w}(n)] - \mathbf{w}_{opt} \right]   (3.13)

The independent coefficients v_i(n) are also referred to as the different modes in which the adaptive algorithm converges. To guarantee stable convergence, every independent mode has to converge to zero. This leads to the following condition on the stepsize for each independent mode i:

| 1 - \alpha \lambda_i | < 1 \quad \Rightarrow \quad 0 < \alpha < 2 / \lambda_i   (3.14)

From this condition it becomes clear that the mode associated with the highest eigenvalue λ_max will be the first mode to become unstable. Therefore the maximum stepsize is bounded by the highest eigenvalue, and the condition can be written more specifically as:

0 < \alpha < 2 / \lambda_{max}   (3.15)

In practical situations, however, the independence assumption 3.7 is often not valid because of a fast-evolving filter. In that case a smaller value of the stepsize is required to obtain a stable adaptation process.
Convergence speed
The speed of convergence of each independent mode can be described by an exponential decay [2] with time constant:

\tau_i = \frac{1}{2 \alpha \lambda_i} \quad \text{(samples)}   (3.16)

The speed of the overall convergence process is determined by the mode with the largest time constant, which corresponds to the mode with the smallest eigenvalue. The convergence process is also influenced by the stepsize: a larger stepsize results in faster convergence, but the maximum stepsize is bounded by the largest eigenvalue. A good measure of the overall convergence behaviour therefore is the eigenvalue spread, the ratio λ_max/λ_min between the largest and smallest eigenvalue. A fast overall convergence requires a small eigenvalue spread.
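The bound 3.15 and the eigenvalue spread can be estimated from data by forming the sample autocorrelation matrix of the reference vector. A sketch with an arbitrarily coloured reference signal (filter and sizes are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
N, I = 100_000, 4
white = rng.standard_normal(N)
x = np.convolve(white, [1.0, 0.9], mode="same")   # coloured reference

# Sample autocorrelation matrix of the reference vector [x(n), ..., x(n-I+1)]
X = np.column_stack([x[i:N - I + 1 + i] for i in range(I)])
Rxx = (X.T @ X) / X.shape[0]

lam = np.linalg.eigvalsh(Rxx)
mu_max = 2.0 / lam.max()        # stability bound, equation 3.15
spread = lam.max() / lam.min()  # eigenvalue spread governing convergence speed
```

For this colouring filter the spread is roughly nine, so the slowest mode converges about nine times more slowly than the fastest one.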
3.2.3 Presenting the secondary path: FxLMS algorithm
In the previous subsection the general LMS adaptive filter problem was discussed. It was assumed that the output of the control filter affects the disturbance signal measured at the error sensors without any change in gain or delay, i.e. the influence of the secondary path was neglected. This assumption is not valid for most practical situations, where there will be a noticeable transfer path between the output of the control filter and the place where its effect is measured. Therefore the effect of the secondary path has to be incorporated in the LMS algorithm.

In the derivation of the controller, the assumption is made that the control filter W(q^{-1}, n) changes only slowly compared to the timescale of the system dynamics of the secondary
Figure 3.3: Block diagram of the FxLMS adaptive filter problem
path. By making this assumption, the filter operations of the secondary path and the control filter may be interchanged while still giving an accurate output. Referring to equation 2.22, the error signal may then be written as:

e(n) = R(n)\, \mathbf{w}(n) + d(n)   (3.17)

where R(n) represents the matrix of filtered reference signals and w(n) the vector of time-dependent control filter coefficients, as defined in the previous chapter. Defining again the mean square error as the cost function, and using the instantaneous gradient, the update step for the control filter may be written as:

\mathbf{w}(n+1) = \mathbf{w}(n) - \alpha\, R^{T}(n) e(n)   (3.18)

From this update equation it can be seen that, compared to the update equation of the general LMS algorithm mentioned in the previous subsection, an additional operation needs to be performed in order to determine the matrix of filtered reference signals R(n). When a SISO problem is considered, this operation reduces to a simple filtering operation between the reference signal and the secondary path. Therefore this algorithm is commonly known as the filtered-reference LMS (FxLMS) algorithm. When multiple inputs or outputs are concerned, the filtering of the reference signal involves a Kronecker tensor product, although the result will still be referred to as a filtered reference signal in the following text. An extended block diagram with the secondary path added is shown in figure 3.3.
Expression 3.18 used to update the filter coefficients assumes perfect knowledge of the physical plant modelled by Ŝ. Usually the actual secondary path S cannot be modelled exactly, and an estimated version, denoted by Ŝ, is used. The use of a model of the secondary path which does not exactly represent the actual plant has the implication that the instantaneous gradient will point in a slightly different direction, leading to a different convergence path of the filter coefficients, described by the following update equation:

\mathbf{w}(n+1) = \mathbf{w}(n) - \alpha\, \hat{R}^{T}(n) e(n)   (3.19)

where R̂(n) denotes the matrix of reference signals filtered by the model Ŝ of the secondary path.
Provided the algorithm is stable, the implication of using the modified update equation is that the coefficients will in the mean converge to a suboptimal solution described by:

\mathbf{w}_{\infty} = -\left( E\left[ \hat{R}^{T}(n) R(n) \right] \right)^{-1} E\left[ \hat{R}^{T}(n) d(n) \right]   (3.20)

which differs from the optimal Wiener solution described by equation 2.26.
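A SISO sketch of the FxLMS recursion 3.19 with a deliberately imperfect model Ŝ (all filters below are arbitrary examples): the reference is filtered by the model, the error is formed through the true path, and the algorithm still converges to a useful, if suboptimal, controller:

```python
import numpy as np

rng = np.random.default_rng(5)
N, I, mu = 30000, 8, 0.002
s_true = np.array([0.0, 0.8, 0.3])      # true secondary path S
s_hat = np.array([0.0, 0.75, 0.32])     # imperfect model of S
x = rng.standard_normal(N)
d = np.convolve(x, [0.0, 1.0, -0.5, 0.2])[:N]   # primary disturbance

r = np.convolve(x, s_hat)[:N]           # reference filtered by the MODEL
w = np.zeros(I)
u = np.zeros(N)
e = np.zeros(N)
for n in range(I, N):
    xn = x[n - np.arange(I)]
    u[n] = w @ xn                                      # controller output
    y = sum(s_true[k] * u[n - k] for k in range(3))    # through the TRUE path
    e[n] = d[n] + y
    rn = r[n - np.arange(I)]
    w -= mu * rn * e[n]                 # FxLMS update, equation 3.19

reduction = np.mean(e[-3000:] ** 2) / np.mean(d[-3000:] ** 2)
```

Because the model error is small (and its phase error well within 90 degrees), the adaptation remains stable and the converged controller still removes most of the disturbance power.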
The stability criterion for the FxLMS algorithm can be derived in a similar way as described for the general LMS algorithm, leading to an expression for the theoretical maximum stepsize:

0 < \alpha < \frac{2\, \mathrm{Re}(\lambda_{max})}{| \lambda_{max} |^2}   (3.21)

where the potentially complex eigenvalues are taken from the cross-correlation matrix E[ \hat{R}^{T}(n) R(n) ] of the reference signal filtered by the estimated and by the real secondary path. It may also be noted that if at least one of the eigenvalues has a negative real part, the associated independent mode v_i described by equation 3.12 will grow exponentially, leading to an unstable adaptation process. If perfect plant knowledge is assumed, the matrix E[ R^{T}(n) R(n) ] is guaranteed to be positive definite, having only positive eigenvalues, provided the reference signal persistently excites the filter.

A sufficient condition to guarantee stability if perfect plant knowledge is not assumed [2] may then be given by:

\mathrm{eig}\left[ \hat{S}^{H}(e^{j\omega T}) S(e^{j\omega T}) + S^{H}(e^{j\omega T}) \hat{S}(e^{j\omega T}) \right] > 0 \quad \text{for all } \omega T   (3.22)

where H denotes the complex conjugate transpose operator. Besides using a more accurate plant model, another way to stabilize the controller is to make use of regularization. This will be discussed in section 3.2.5.
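In the SISO case condition 3.22 reduces to 2 Re{Ŝ*(e^{jωT}) S(e^{jωT})} > 0, i.e. the phase error of the model must stay within plus or minus 90 degrees at every frequency. A numerical check on a frequency grid (the example filters are arbitrary):

```python
import numpy as np

s_true = np.array([0.0, 0.8, 0.3])      # true secondary path S
s_model = np.array([0.0, 0.75, 0.32])   # model of S

omega = np.linspace(0.0, np.pi, 1024)
z = np.exp(1j * omega)
S = sum(c * z ** (-k) for k, c in enumerate(s_true))
S_hat = sum(c * z ** (-k) for k, c in enumerate(s_model))

# SISO form of condition 3.22: 2*Re{conj(S_hat) * S} > 0 at every frequency,
# i.e. the model phase error stays within +/- 90 degrees.
cond = 2.0 * np.real(np.conj(S_hat) * S)
ok = bool(np.all(cond > 0.0))
```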
3.2.4 Reducing the computational load and increasing the convergence
speed: IO factorization
In the previous subsection it was shown that the convergence properties of the FxLMS algorithm depend on the size of the eigenvalue spread of the cross-correlation matrix of the reference signal filtered by the estimated and by the real secondary path. If this eigenvalue spread approaches unity, faster convergence of the adaptive filter coefficients to the fixed-gain solution is achieved; a low eigenvalue spread is therefore desirable for fast convergence. Using the FxLMS algorithm, the eigenvalue spread is limited by the dynamic range of the power spectrum of the reference signal combined with the dynamic range of the frequency response of the secondary path [2]. Considering a single-input single-output system and assuming perfect plant knowledge, this may be written as:

\frac{\lambda_{max}}{\lambda_{min}} \leq \frac{\max\left[ | S(e^{j\omega T}) |^2\, S_{xx}(e^{j\omega T}) \right]}{\min\left[ | S(e^{j\omega T}) |^2\, S_{xx}(e^{j\omega T}) \right]}   (3.23)

If the reference signal is assumed to be a white noise sequence, it has a power spectrum of unity over the whole frequency range. The eigenvalue spread is then bounded only by the
Figure 3.4: Block diagram of the FxLMS adaptive filter problem with postconditioning applied
ratio of the maximum and minimum values of the gain of the secondary path. So in order to have fast convergence it is desirable to keep this ratio as small as possible.

Now let the output signal of the adaptive controller be prefiltered by the inverse of the outer factor of the secondary path. Referring to the properties of the inner and outer factors explained in appendix A.1, this is allowed because the outer factor is a minimum phase system and thus has a stable inverse. The error signal may then be expressed as:

e(n) = S S_o^{-1} W(q^{-1}, n)\, x(n) + d(n)   (3.24)
     = S_i W(q^{-1}, n)\, x(n) + d(n)   (3.25)
The block diagram of the FxLMS algorithm using IO-factorization, which is also referred to as postconditioning, is shown in figure 3.4. The resulting secondary path may now be recognized as the inner factor of the original secondary path. Because the inner factor of a system is by definition an all-pass system, it has a frequency response of unity gain over the whole frequency range, and therefore the power spectral density of the reference signal filtered with the inner factor remains unaffected. Consequently, when inner-outer factorization is applied, the eigenvalue spread of the filtered reference signal is limited by the dynamic range of the power spectrum of the reference signal only.

The price to pay for using the postconditioned FxLMS algorithm is that the output signal of the adaptive filter has to be filtered by the inverse of the outer factor, which is an extra operation. However, since the order of the inner factor equals the number of zeros of the secondary path outside the unit circle, the order of the model used to filter the reference signal may be greatly reduced. When the computational load of the FxLMS algorithm is examined, it appears that the Kronecker tensor product between the reference signal and the secondary path, S ⊗ x(n), requires the major part. Filtering the reference signal using a reduced-order secondary path may therefore lead to a far more efficient algorithm compared to the traditional FxLMS algorithm.
A comparison of the number of additions and multiplications, referred to as floating point operations (flops), between the two algorithms is given in table 3.1. The total number of flops per sample required by each version of the FxLMS algorithm consists of the flops of a number of separate operations, which can be categorized as:

1. The filtering of the reference signal by the adaptive control filter.

2. The calculation of the Kronecker tensor product between the reference signal and the secondary path.

3. The calculation of the new vector of filter coefficients.

4. For the postconditioned FxLMS algorithm, the filtering of the output signal of the adaptive control filter by the inverse of the outer factor.

In the table, N denotes the order of the secondary path and Ni the order of the inner factor; the order of the outer factor is by definition equal to the order of the system itself. K, M and L denote respectively the number of reference, steering and error signals. Further, the number of taps of the control filter is denoted by I.

Table 3.1: Comparison of the number of floating point operations required by the FxLMS algorithm with and without postconditioning

Operation                                     | Traditional FxLMS             | FxLMS with postconditioning
Filtered reference signal                     | LMK[2N^2 + 3N + 1]            | LMK[2Ni^2 + 3Ni + 1]
Filter update                                 | 2MKIL + L                     | 2MKIL + L
Filtering controller output by outer factor   | 0                             | 2N[N + M + L] + 2ML - N - L
Filtering reference signal w. adaptive filter | 2IKM - M                      | 2IKM - M
Total number of flops                         | LMK[2N^2 + 3N + 2I + 1]       | LMK[2Ni^2 + 3Ni + 2I + 1]
                                              |   + 2MKI - M + L              |   + 2MKI - M + 2N[N + M + L] + 2ML - N

To get an idea of the practical reduction in the number of flops, an inner-outer factorization is performed on an identified secondary path of the 6-DOF hybrid vibration isolation setup. The identified system has an order of 100, yielding an inner factor of order 25, and the adaptive control filter is assumed to contain I = 200 taps. The total number of flops of the FxLMS algorithm without inner-outer factorization equals 747636. Using postconditioning yields a total of 86902 flops, which is less than 12 percent of the number of flops required by the traditional FxLMS algorithm.
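The totals of table 3.1 can be reproduced from the formulas. The channel counts are not stated in this passage; assuming K = 1 reference signal and M = L = 6 steering and error signals (our assumption, matching the six-mount setup), the quoted numbers follow exactly:

```python
# Channel counts below are our assumption: K = 1 reference signal,
# M = 6 steering and L = 6 error signals, as in the 6-DOF setup.
K, M, L = 1, 6, 6
N, Ni, I = 100, 25, 200    # system order, inner-factor order, controller taps

def flops_traditional():
    # Total per sample for the traditional FxLMS algorithm (table 3.1)
    return L * M * K * (2 * N**2 + 3 * N + 2 * I + 1) + 2 * M * K * I - M + L

def flops_postconditioned():
    # Total per sample for the postconditioned FxLMS algorithm (table 3.1)
    return (L * M * K * (2 * Ni**2 + 3 * Ni + 2 * I + 1)
            + 2 * M * K * I - M
            + 2 * N * (N + M + L) + 2 * M * L - N)

trad = flops_traditional()        # 747636, as quoted in the text
post = flops_postconditioned()    # 86902, as quoted in the text
ratio = post / trad               # below 12 percent
```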
3.2.5 Decreasing the steering signals: regularized solution
The energy required by the actuators to obtain optimal reduction of the disturbance signal can be considerably high when the adaptive controller is designed to minimize the mean square error only. Therefore, as discussed in the previous chapter, it may be desirable to make use of a regularized solution, which also tries to minimize the output of the controller. In this subsection the implications of regularization for the adaptive controller will be discussed.
Regularization of the controller output
In order to derive the regularized adaptive controller, first the cost function described by equation 2.29 is considered. In addition to the mean squared error, a term proportional to the mean squared actuator input is to be minimized. First the adaptive controller without postconditioning is considered. The error and actuator input signals may then be written as:
e(n) = R(n)w(n) + d(n) (3.26)
u(n) = X(n)w(n) (3.27)
Taking the instantaneous derivative of the mentioned cost function with respect to each of the filter coefficients yields the following update equation for the vector of filter coefficients:

w(n + 1) = w(n) − μ[R^T(n)e(n) + βX^T(n)u(n)]  (3.28)
This results in an extra 3MKI flops per sample time compared to the non-regularized solution. Next the regularized solution using IO-factorization is derived. The filtered reference signal is obtained by taking the Kronecker tensor product of the input signal and the inner factor of the secondary path, as defined in the previous subsection. The input signal can now be defined as follows:

u(n) = S_o^{−1} X(n) w(n)  (3.29)
     = Q(n) w(n),  with Q(n) = S_o^{−1} ⊗ x^T(n)  (3.30)
Taking again the instantaneous derivative of the cost function with respect to each of the filter coefficients, the update equation is now given by:

w(n + 1) = w(n) − μ[R^T(n)e(n) + βQ^T(n)u(n)]  (3.31)

The problem arises with the calculation of the Kronecker tensor product, which incorporates an extra LMK[2N^2 + 3N + 1] multiplications and additions and thereby leads to a relatively large increase of the computational load.
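The effort-weighted recursion of equations 3.26-3.28 can be sketched on a small frozen problem. All dimensions, data, step size and weighting below are hypothetical, not those of the thesis set-up; the sketch illustrates that the regularized recursion converges to a solution with smaller steering effort:

```python
import numpy as np

# Effort-weighted LMS sketch (equations 3.26-3.28); all data are hypothetical.
rng = np.random.default_rng(1)
R = rng.standard_normal((6, 24))   # filtered reference matrix R(n), held fixed
X = rng.standard_normal((6, 24))   # reference matrix X(n), held fixed
d = rng.standard_normal(6)         # disturbance at the error sensors
mu = 2e-3

def converge(beta, steps=5000):
    w = np.zeros(24)
    for _ in range(steps):
        e = R @ w + d                               # equation 3.26
        u = X @ w                                   # equation 3.27
        w = w - mu * (R.T @ e + beta * X.T @ u)     # equation 3.28
    return w

u_plain = X @ converge(beta=0.0)
u_reg = X @ converge(beta=0.5)
# the regularized solution trades some error reduction for smaller steering
print(np.linalg.norm(u_plain), np.linalg.norm(u_reg))
```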
Regularization of the control filter coefficients

Therefore another approach is taken when applying postconditioning. Instead of weighing the controller output signals, weighing the sum of the squared coefficients of the adaptive controller in the cost function is proposed. By doing so, the coefficients of the adaptive FIR filter of the controller are minimized in addition to the mean square error, according to the following cost function:

J = E[e^T(n)e(n) + βw^T(n)w(n)]  (3.32)

which results in the following expression for the update equation of the filter coefficients:

w(n + 1) = [1 − μβ]w(n) − μR^T(n)e(n)  (3.33)
where the factor [1 − μβ] is called the leakage factor. This algorithm is also known as the leaky LMS algorithm, because the coefficients leak away when the error signal approaches zero. The extra flops involved with the leaky LMS algorithm are just MKI + 2 operations.
When a stability analysis similar to that of section 3.2.2 is performed, it appears that the stability and convergence properties of the algorithm now depend on the eigenvalues of the matrix E[R^T(n)R(n) + βI], which effectively means that the term β is added to each of the eigenvalues. So besides decreasing the controller output, another advantage of introducing a coefficient weighting factor is that eigenvalues that would otherwise have a small negative real part may become positive. In general, adding a small value to each eigenvalue increases the smallest eigenvalue by a relatively large amount and therefore reduces the eigenvalue spread. The leaky LMS algorithm can therefore be used to decrease the steering signals and to increase the robustness as well as the convergence speed of the FxLMS algorithm. The drawback of using a coefficient weighting factor in the cost function is that the solution converges to equation 2.31, which gives a suboptimal performance compared with the optimal solution given by equation 2.26. However, experimental results have shown that even a small value of β can strongly decrease the steering signals while only slightly decreasing the performance.
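A minimal sketch of the leaky update of equation 3.33, on the same kind of hypothetical frozen problem as before, makes the trade-off visible: the leakage shrinks the coefficient vector at the cost of a slightly larger residual error:

```python
import numpy as np

# Leaky LMS sketch (equation 3.33); sizes, data, mu and beta are hypothetical.
rng = np.random.default_rng(2)
R = rng.standard_normal((6, 24))   # filtered reference matrix R(n), held fixed
d = rng.standard_normal(6)
mu, beta = 2e-3, 1.0

w_leaky = np.zeros(24)
w_plain = np.zeros(24)
for _ in range(5000):
    e = R @ w_leaky + d
    w_leaky = (1 - mu * beta) * w_leaky - mu * R.T @ e   # leaky update
    e = R @ w_plain + d
    w_plain = w_plain - mu * R.T @ e                     # standard update (beta = 0)

# leakage yields smaller coefficients but a larger residual error
print(np.linalg.norm(w_leaky), np.linalg.norm(w_plain))
print(np.linalg.norm(R @ w_leaky + d), np.linalg.norm(R @ w_plain + d))
```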
Regularization of the outer factor

It should be mentioned that using regularization in combination with postconditioning only minimizes the adaptive part of the controller. This may be seen by considering the complete controller as consisting of two systems. The first system is the adaptive controller represented by the FIR filter W(q^{−1}, n); the second part equals the inverse outer factor of the secondary path, S_o^{−1}. The complete controller may then be represented by the following expression:

W_combined = S_o^{−1} W(q^{−1}, n)  (3.34)

where S_o^{−1} is the fixed part and W(q^{−1}, n) the adaptive part.
When a regularization term is included in the cost function, the sum of squared coefficients of the adaptive filter is minimized together with the mean square error. As a result the gain of the adaptive part of the controller decreases at its peak values. The gain of the fixed part of the controller, however, remains unaffected, which may still lead to a considerably large gain of the combined controller at the frequencies corresponding to the peak values of the frequency response of the inverse outer factor.
If it is desired to decrease the gain of the total controller at those frequencies as well, the outer factor may also be regularized. This is done by adding a small value to the frequency response of the outer factor, which may be expressed as [2]:

S̃_o^T(z^{−1}) S̃_o(z) = S_o^T(z^{−1}) S_o(z) + βI  (3.35)
where I denotes an M × M identity matrix and M denotes the number of input signals of S(z). The gain of the inverse of the outer factor is largest at the frequencies where the gain of the outer factor is smallest. By adding a small value to the gain of the outer factor, it is raised most where its value is smallest, thereby decreasing the gain of the inverse outer factor most at its peak values. It was observed that by lowering the gain at the peak values of the inverse outer factor, the gain of the adaptive part of the controller was increased at the same frequencies. The gain at those frequencies may then be reduced more efficiently by the adaptive controller using the standard regularization according to equation 3.32.
Instead of filtering the error signal by S_i(z), the error signal now needs to be filtered by S(z)S̃_o^{−1}(z). The additional computational load is determined by the increased length of the impulse response of the latter model compared to the impulse response of S_i(z). Still, the advantage of this solution over, for example, introducing a frequency dependent weighing term in the cost function is that the additional computational load may be much smaller.
The convergence properties are determined by the eigenvalues of the autocorrelation matrix obtained by filtering the reference signal by the combined model S(z)S̃_o^{−1}(z). Because this system is no longer an all-pass system, an increase in the eigenvalue spread will result.
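The effect of equation 3.35 can be illustrated with a hypothetical scalar outer factor: adding β to the squared magnitude caps the gain of the inverse at roughly 1/√β, precisely at the frequencies where |S_o| is small:

```python
import numpy as np

# Scalar illustration of equation 3.35; the outer factor below is hypothetical.
w = np.linspace(0, np.pi, 512)
S_o = 1.0 - 0.98 * np.exp(-1j * w)   # frequency response with a deep dip near w = 0
beta = 0.01

inv_gain = 1.0 / np.abs(S_o)                            # unregularized inverse
inv_gain_reg = 1.0 / np.sqrt(np.abs(S_o) ** 2 + beta)   # regularized inverse

print(inv_gain.max(), inv_gain_reg.max())  # peak drops to about 1/sqrt(beta) = 10
```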
3.2.6 Reducing the computational load: adjoint LMS algorithm

A disadvantage of the FxLMS algorithm is that its computational load is relatively high, especially when multiple inputs and outputs are considered. As was mentioned earlier, this is mainly caused by the Kronecker tensor product involved in filtering the reference signal by the secondary path, which requires a relatively large number of flops. In this subsection it will be shown that a computationally much more efficient algorithm, called the adjoint LMS algorithm, can be obtained by filtering the error signal instead of the reference signal.
The algorithm will be derived using a time averaged approach. First consider the general cost function:

J = E[e^T e]  (3.36)

Taking the derivative of this cost function with respect to the filter coefficients results in:

∂J/∂w = 2E[R^T(n)e(n)]  (3.37)
Writing out the expectation operation, this derivative can be written for each coefficient separately as:

∂J/∂w_{m,k}^{(i)} = lim_{N→∞} (2/N) Σ_{n=−N}^{N} Σ_{l=1}^{L} [S_{l,m} x_k](n − i) e_l(n)  (3.38)
8/23/2019 Duidamscriptie
41/72
3.2. Adaptive feedforward control 31
When approximating the state space model S_{l,m} by a FIR model with a sufficient number J of coefficients s_{l,m}^{(j)}, the above derivative can be written as:

∂J/∂w_{m,k}^{(i)} = lim_{N→∞} (2/N) Σ_{n=−N}^{N} Σ_{l=1}^{L} Σ_{j=0}^{J−1} s_{l,m}^{(j)} x_k(n − i − j) e_l(n)  (3.39)
By introducing the dummy variable n′ = n − j, the derivative becomes:

∂J/∂w_{m,k}^{(i)} = lim_{N→∞} (2/N) Σ_{n′+j=−N}^{N} Σ_{l=1}^{L} Σ_{j=0}^{J−1} s_{l,m}^{(j)} e_l(n′ + j) x_k(n′ − i)  (3.40)
Because taking the mean of the derivative from n′ = −N − j to n′ = N − j is, as N goes to infinity, the same as taking the mean from −N to N, n′ + j may be replaced by n in the expectation operation:

∂J/∂w_{m,k}^{(i)} = lim_{N→∞} (2/N) Σ_{n=−N}^{N} Σ_{l=1}^{L} Σ_{j=0}^{J−1} s_{l,m}^{(j)} e_l(n + j) x_k(n − i)  (3.41)
By defining the filtered error signal as

f_m(n) = Σ_{l=1}^{L} Σ_{j=0}^{J−1} s_{l,m}^{(j)} e_l(n + j)  (3.42)
the gradient may be regarded as a multiplication of the reference signal and the filtered error signal:

∂J/∂w_{m,k}^{(i)} = lim_{N→∞} (2/N) Σ_{n=−N}^{N} f_m(n) x_k(n − i)  (3.43)
The time averaged behaviour of an adaptation algorithm using the gradient based on a filtered error signal will be the same as that of a steepest descent algorithm based on a filtered reference signal [15, 2].
When taking the instantaneous version of the gradient defined by equation 3.43 and examining the expression for the filtered error, it becomes apparent that this expression is not causal, because a time advanced error signal is required. To make the expression causal, a delay of J − 1 samples is introduced in the error and reference paths, and by defining j′ = J − 1 − j the filtering of the error signal can be written as

f_m(n − J + 1) = Σ_{l=1}^{L} Σ_{j′=0}^{J−1} s_{l,m}^{(J−1−j′)} e_l(n − j′)  (3.44)
leading to the following expression for the update step:

w(n + 1) = w(n) − (μ/2) ∂J/∂w(n)  (3.45)
         = w(n) − μ X^T(n − J + 1) f(n − J + 1)  (3.46)
Figure 3.5: Block diagram of the adaptive filter problem using the adjoint LMS algorithm; ∆ denotes J − 1 samples
where X(n) is defined as in equation 2.16 and f(n) is defined as the M × 1 vector of filtered error signals. It may be noticed that the filtering of the error signal actually occurs with a delayed, time reversed impulse response of the secondary path model, whose z-transform can be written as:

z^{−J+1} S_{l,m}(z^{−1}) = Σ_{j=0}^{J−1} s_{l,m}^{(j)} z^{j−J+1}  (3.47)
The filter S^T(z^{−1}) is called the adjoint of S(z) and is defined as its anticausal transposed counterpart; therefore this algorithm is also known as the adjoint LMS algorithm. In order to implement a stable and causal approximation of the adjoint of the secondary path, the adjoint system is described by a finite impulse response. The length of the filter is governed by the number of samples within which the impulse response of the secondary path is contained.
The block diagram of the adjoint LMS algorithm is shown in figure 3.5. The stability behaviour of the FxLMS algorithm was described for a slowly varying filter. Because the gradient estimate of the adjoint LMS algorithm is similar to the gradient of the FxLMS algorithm in the limit of a slow adaptation process, the stability conditions described for the FxLMS algorithm also apply to the adjoint LMS algorithm [2, 1]. Yet the convergence behaviour will be somewhat slower because of the delay introduced to make the adjoint filter causal. Because both algorithms converge to the same optimal solution, the reduction ultimately obtained at the disturbance signal will be similar for both algorithms.
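The gradient equivalence that underlies the derivation can be checked numerically in the single-channel case: filtering the reference by the secondary path and correlating with the error (as in equation 3.39) gives the same time average as filtering the error anticausally by the secondary path and correlating with the reference (equations 3.42-3.43). The impulse response and signals below are hypothetical:

```python
import numpy as np

# Single-channel sketch of the FxLMS / adjoint LMS gradient equivalence.
rng = np.random.default_rng(3)
s = np.array([0.5, 0.3, -0.2])   # secondary-path FIR model, J = 3 taps (hypothetical)
x = rng.standard_normal(4000)    # reference signal
e = rng.standard_normal(4000)    # error signal
i = 2                            # coefficient index w^(i)

# FxLMS-style gradient: r(n) = sum_j s_j x(n - j), average of r(n - i) e(n)
r = np.convolve(x, s)[: len(x)]
g_fxlms = np.mean(r[: len(x) - i] * e[i:])

# adjoint-style gradient: f(n) = sum_j s_j e(n + j), average of f(n) x(n - i)
f = np.convolve(e, s[::-1])[len(s) - 1 :][: len(e)]
g_adjoint = np.mean(f[i:] * x[: len(x) - i])

print(abs(g_fxlms - g_adjoint))  # small: the two estimates agree up to end effects
```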
Having specified the algorithm, the most important reason for using the adjoint LMS algorithm is that it is much more efficient. The reason is that the Kronecker tensor product incorporated in the filtering of the reference signal by the FxLMS algorithm can be avoided. This may save a large number of flops per sample, especially when multiple input and output channels are involved. The difference in the number of flops between the traditional FxLMS and the adjoint LMS algorithm is denoted in table 3.2, where the symbols are defined as in section 3.2.4 and J denotes the number of taps by
Table 3.2: Comparison of the number of floating point operations required by the FxLMS algorithm and the adjoint LMS algorithm

Operation                        FxLMS algorithm                  Adjoint LMS algorithm
Filtered reference/error signal  LMK[2N^2 + 3N + 1]               2MLJ − M
Filter update                    2MKIL + L                        2MKI + M
Filtering reference signal       2IKM − M                         2IKM − M
with adaptive filter
Total number of flops            LMK[2N^2 + 3N + 2I + 1]          M[2JI + 4KI − 1]
                                 + 2MKI − M + L
which the impulse response of the secondary path is contained. Using the practical example mentioned before, and taking J as 200 taps, the total number of flops of the FxLMS algorithm is 747636. The adjoint LMS algorithm requires a total of 484794 flops, which is only 65 percent of that of the traditional FxLMS algorithm. Finally it should be noted that when the adjoint LMS algorithm, like
Figure 3.6: Block diagram of the adaptive filter problem using the adjoint LMS algorithm with postconditioning applied; ∆ denotes J − 1 samples
the FxLMS algorithm, is arranged with postconditioning as shown in figure 3.6, it takes advantage of the more compact impulse response of the inner factor of the secondary path compared with the impulse response of the secondary path itself. However, the reduction in the number of flops between the FxLMS and the adjoint LMS algorithm is less pronounced when postconditioning is used. The inner factor of the secondary path may now be sufficiently described by a FIR filter containing 20 taps. The postconditioned FxLMS algorithm requires 86902 flops, compared to 75160 flops for the postconditioned adjoint LMS algorithm.
3.3 Adaptive feedback control
3.3.1 Introduction
In the previous section the design of an adaptive feedforward controller was discussed. It was mentioned in the chapter on the design of the fixed gain controller that a feedback arrangement can be considered as a feedforward arrangement by using the principle of internal model control and assuming perfect plant knowledge. In this way the optimal feedback controller could be obtained using standard feedforward design theory. This strategy will also be applied in the design of the adaptive feedback controller, as will be shown in the first subsection of this section. In the following subsection the influence of model uncertainty on the stability and convergence properties of the adaptive controller will be described. Finally the adaptive controller using postconditioning is presented, which concludes this section on adaptive feedback control.
3.3.2 Design of the adaptive controller
Following closely the discussion on the design of the fixed gain feedback controller in section 2.4, the adaptive feedback controller is designed according to the adjoint LMS algorithm discussed in the previous section.
First, perfect plant knowledge is not assumed; the corresponding block diagram of the adaptive feedback controller is shown in figure 3.7. As a reference signal, the estimated
Figure 3.7: Block diagram of the adaptive feedback filter problem using the adjoint LMS algorithm; ∆ denotes J − 1 samples
disturbance signal d̂(n) is used. The error signal can then be written as:

e(n) = S W(q^{−1}, n) d̂(n) + d(n)  (3.48)

The estimated disturbance signal is reconstructed by filtering the steering signal by an internal model