NONLINEAR DESIGN OF 3-AXES AUTOPILOT FOR
SHORT RANGE SKID-TO-TURN
SURFACE-TO-SURFACE HOMING MISSILES
By
Abhijit Das
SUBMITTED IN PARTIAL FULFILLMENT OF THE
REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE (BY RESEARCH)
IN
ELECTRICAL ENGINEERING
AT
INDIAN INSTITUTE OF TECHNOLOGY
KHARAGPUR
MAY 2006
Certificate
This is to certify that the thesis entitled “Nonlinear Design of 3-axes Autopilot for
Short Range Skid-to-Turn Surface-to-Surface Homing Missiles” submitted by
Abhijit Das for the award of the degree of Master of Science (by research) is a
record of bonafide research work carried out by him under our guidance and supervision
during the period 2003-2006. The results embodied in this thesis have not been submitted
to any other University or Institute for the award of any degree or diploma.
IIT, Kharagpur
1st May, 2006
Siddhartha Mukhopadhyay
Professor,
Department of EE
Indian Institute of Technology
Kharagpur -721 302, INDIA
Amit Patra
Professor,
Department of EE
Indian Institute of Technology
Kharagpur -721 302, INDIA
INDIAN INSTITUTE OF TECHNOLOGY
Date: May 2006
Author: Abhijit Das
Title: Nonlinear Design of 3-Axes Autopilot for Short
Range Skid-to-Turn Surface-to-Surface Homing
Missiles
Department: Electrical Engineering
Degree: M.S. Convocation: May Year: 2007
Permission is herewith granted to Indian Institute of Technology to circulate and to have copied for non-commercial purposes, at its discretion, the above title upon the request of individuals or institutions.
Signature of Author
THE AUTHOR RESERVES OTHER PUBLICATION RIGHTS, AND NEITHER THE THESIS NOR EXTENSIVE EXTRACTS FROM IT MAY BE PRINTED OR OTHERWISE REPRODUCED WITHOUT THE AUTHOR'S WRITTEN PERMISSION.
THE AUTHOR ATTESTS THAT PERMISSION HAS BEEN OBTAINED FOR THE USE OF ANY COPYRIGHTED MATERIAL APPEARING IN THIS THESIS (OTHER THAN BRIEF EXCERPTS REQUIRING ONLY PROPER ACKNOWLEDGEMENT IN SCHOLARLY WRITING) AND THAT ALL SUCH USE IS CLEARLY ACKNOWLEDGED.
iii
To the loving memory of ’MA’.
From the album of my memory, I remember those early days of my life, when a naughty boy always tried to get rid of his mother's domination. Now I can understand that without her domination, I might not have been able to see these days.
To 'BABA', who had the arduous task of raising that incorrigible boy.
Acknowledgments
I would like to thank Prof. Siddhartha Mukhopadhyay and Prof. Amit Patra, my su-
pervisors, for their many suggestions and constant support during this research. I am
also thankful to Mr. Ranajit Das, Sc. "C", DRDL Hyderabad, and Mr. Sourav Patra,
research scholar, Electrical Engineering, for their help through the early years of chaos
and confusion.
Abhijit Das
Systems and Information Lab
Dept. of Electrical Engineering
Indian Institute of Technology
Kharagpur-721302
Abstract
Traditionally, missile autopilots have been designed using linear control approaches with gain scheduling. Autopilot design is typically carried out in the frequency domain, and the plant is linearized around various operating points on the trajectory. Moreover, three single-axis autopilots are usually designed without considering the interaction among the motion axes, i.e., the autopilots in each of the three axes are designed independently of each other. Such designs cannot handle the coupling among the pitch, yaw and roll channels, especially under the high angles of attack occurring in high maneuver zones. In the last decade, the design of missile autopilots has been extensively studied using modern control design
Aerodynamic coefficients (variation in %): … ±10; Cnη ±12; Cmη ±12
Thrust misalignment (variation in %): TmX ±5; TmY ±2; TmZ ±3

Table 3.1: Variation in aerodynamic coefficients and thrusts in x−y−z directions
control law has been applied. The same behaviour can be seen for the yaw and roll channels in Figures 3.10 and 3.11 respectively. The last three plots, in Figure 3.12, describe the control input requirements in the pitch, yaw and roll channels for the robust control law. We can see that the control input requirement is higher for FBLC and that it varies rapidly. Figure 3.13 shows the rate of control surface deflection, and we can observe that, except in the maneuvering zone, the rate is well below the maximum limit (normalized to 1).
Note that we have not implemented this H∞ control law with closed loop guidance and seeker. One main reason is that finding suitable weighting functions W1 and W3 requires extensive effort. We have left this part as future work.
Chapter 3. H∞ Control of Feedback Linearized Inner Rate Loop Dynamics
Figure 3.9: Comparison of FBLC and robust controller in pitch plane (qdot and v1, normalised, vs. normalised time)
Figure 3.10: Comparison of FBLC and robust controller in yaw plane (rdot and v2, normalised, vs. normalised time)
Figure 3.11: Comparison of FBLC and robust controller in roll plane (pdot and v3, normalised, vs. normalised time)
Figure 3.12: Control deflection comparison (pitch, yaw and roll deflections for RC and FBLC vs. normalised time)
Figure 3.13: Control deflection rate (normalised pitch, yaw and roll deflection rates for RC and FBLC vs. normalised time)
3.3 Comments
This chapter has demonstrated the use of robust feedback linearization for tackling the aerodynamic uncertainties in a short range surface-to-surface homing missile. The centralized design of the multivariable controller has been formulated as a multiobjective optimization problem in the LMI framework, and the solution is sought numerically with an LMI solver. The performance robustness of the designed controller has been verified on a realistic and practical 6-DOF simulation platform. Feedback linearization combined with a robust control law performs considerably better than a nominally designed feedback linearizing controller, and thus constitutes a robust feedback linearization approach.
Chapter 4
Sliding Mode control after Feedback
Linearization
4.1 Introduction
In this chapter, a robust control structure known as sliding mode control, or variable structure control, is applied to the design of the rate loop controller. A variable structure system is one whose structure can be changed or switched abruptly according to a certain switching logic, whose aim is to produce a desired overall behavior of the system. The simplest examples of variable structure systems are relay or on-off systems, in which the control input can take only two values, on or off. As in the previous Chapter 3, the sliding mode controller has been formulated for the feedback linearized plant. We have already discussed that the success of the feedback linearization approach hinges on the availability of an accurate description of the model [14]. Indeed, severe model uncertainty, mainly due to the aero-coefficients, may degrade the performance of the feedback linearization approach. In this regard, some robust scheme [28], such as sliding mode control, is required. Figure 4.1 describes the proposed control structure, and one can observe that this structure is similar to the control structure proposed in Figures 2.1 and 3.1. The only difference is that instead of the linear controller block 'LC' in Figure 2.1 and the robust control block 'RC' in
Figure 4.1: Block diagram of the system representing robust feedback linearization for outputs q, r and p
Figure 3.1, a new sliding mode control block 'SMC' has been introduced. In Section 3.2 we discussed how uncertainties and various disturbances affect the nominally feedback linearized plant. From (2.10) and (3.20), we can write the linearized input-output relation of the plant model, with q, r and p as outputs, as
\[
\begin{bmatrix} \dot{y}_1 \\ \dot{y}_2 \\ \dot{y}_3 \end{bmatrix} =
\begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} \tag{4.1}
\]
In Chapter 3 we designed an H∞ robust control law for the new inputs v = [v1, v2, v3]^T to tolerate the model uncertainties and disturbances. In the next sections we discuss the formulation of a sliding mode control law, instead of an H∞ control law, for designing v.
4.2 Formulation of sliding mode controller
In this section a very short discussion on sliding modes, a particular approach to the design of variable structure systems, is given. Developed in the Soviet Union more than 40 years ago, sliding mode controllers differ from simpler relay controllers in that they rely on extremely high speed switching among the control values. As discussed already, the total design has been carried out in two steps.
• Performing input-output linearization of the nominal plant.
• Formulating a robust sliding mode control law for that feedback linearized plant.
We consider the system described by
\[
\begin{aligned}
\dot{x}(t) &= f(x(t)) + g(x(t))\,u(t)\\
y(t) &= h(x(t)),
\end{aligned} \tag{4.2}
\]
where x(t) is the n-dimensional plant state, u is the m-dimensional plant input, y is the m-dimensional plant output, and f : R^n → R^n, g : R^n → R^{n×m} and h : R^n → R^m are smooth functions. A sliding mode controller for the system (4.2) can be designed by the following steps.
• STEP I : Performing input-output feedback linearization
• STEP II: Formulation of sliding mode control law for designing v
STEP I has already been carried out in Section 2.2.1, and the relevant equations used in this chapter are exactly the same as in Section 2.2.1.
4.2.1 Step II: Formulation of sliding mode control law for designing v
The main idea behind sliding mode control is to choose a suitable surface in state space,
typically a linear hypersurface, called the switching surface, and switch the control input
on this surface. The control input is then chosen to guarantee that the trajectories near
the sliding surface are directed toward the surface. Ideally then, any control input will
suffice so long as the resulting trajectories are pointing toward the surface. Once the
system is trapped on the surface, the closed loop dynamics are completely governed by
the equations that define the surface. In this way, since the parameters defining the surface
are chosen by the designer, the closed loop dynamics of the system will be independent
of perturbations in the parameters of the system and robustness is achieved. The design
of sliding mode control can be broken down into two steps:
• Specifying a suitable sliding surface
• Achieving the sliding condition and designing system dynamics on the surface
4.2.1.1 Specifying sliding surfaces
Let e_i = y_i − r_i, with r_i the reference trajectory, be the tracking error for the output y_i, and let
\[
\mathbf{e}_i = \begin{bmatrix} e_i & \dot{e}_i & \cdots & e_i^{(r_i-1)} \end{bmatrix}^T
\]
be the tracking error vector. Furthermore, let us define a time-varying surface S_i(t) in the state space by the scalar equation s_i(y_i; t) = 0, where
\[
s_i(y_i; t) = \Big(\frac{d}{dt} + k\Big)^{r_i-1} e_i \tag{4.3}
\]
and k is a strictly positive constant. Equivalently, (4.3) can be written as
\[
s_i(t) = e_i^{(r_i-1)} + k_{i(r_i-1)}\,e_i^{(r_i-2)} + \cdots + k_{i2}\,e_i^{(1)} + k_{i1}\,e_i^{(0)} + k_{i0}\int e_i\,dt \tag{4.4}
\]
In this way we can define m sliding surfaces s_i, i = 1, …, m, based on the input-output linearized system given in (4.1). The coefficients k_{i(r_i−1)}, …, k_{i0} are chosen such that
\[
\lambda^{r_i} + k_{i(r_i-1)}\lambda^{r_i-1} + \cdots + k_{i1}\lambda + k_{i0} \tag{4.5}
\]
is a Hurwitz polynomial. For the tracking task to be achievable using a finite control law v, the initial state r_i(0) must be such that
ri(0) = yi(0) (4.6)
Given initial condition (4.6), the problem of tracking ri = yi is equivalent to that of
remaining on the surface Si(t)∀t ≥ 0; indeed si ≡ 0 represents a linear differential equation
whose unique solution is ei = 0, given initial condition (4.6). Thus the problem of tracking
the n-dimensional vector ri can be reduced to that of keeping the scalar quantity s at
zero.
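The invariance argument above can be checked numerically. The sketch below is an assumed toy example, not the missile model: for a relative-degree-2 output with k_i0 = 0, (4.4) reduces to s = ė + k₁e, and holding the trajectory on s = 0 forces the error to decay exponentially at rate k₁.

```python
import numpy as np

# Toy illustration (assumed example): with k_i0 = 0 and relative degree 2,
# the surface (4.4) is s = e_dot + k1*e.  On s = 0 the error obeys
# e_dot = -k1*e, so e(t) decays exponentially at rate k1.
k1, dt, T = 2.0, 1e-3, 1.0
e = 1.0                           # initial tracking error, already on s = 0
for _ in range(int(T / dt)):
    e_dot = -k1 * e               # staying on the surface: s = 0
    e += e_dot * dt               # forward-Euler integration
print(e)                          # close to exp(-k1*T) = exp(-2) ~ 0.135
```

This is exactly why the coefficients in (4.5) are required to be Hurwitz: on the surface, the error dynamics are governed entirely by the designer-chosen polynomial.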
4.2.1.2 Achieving sliding condition
The closed-loop system is said to satisfy the sliding condition if the following holds [27]:
\[
\frac{1}{2}\,\frac{d s_i^2}{dt} \le -\eta_i\,|s_i|, \qquad \eta_i > 0, \tag{4.7}
\]
where η_i, i = 1, …, m are positive numbers. Note that integrating (4.7) gives |s_i(t)| ≤ |s_i(0)| − η_i t, so the sliding condition drives s_i(t) to zero in finite time, reached in at most |s_i(0)|/η_i. Since s_i = 0 is a stable differential equation in the tracking error, holding s_i(t) = 0 in turn leads to asymptotic tracking, e_i(t) → 0.
Let
\[
\dot{s} \stackrel{\mathrm{def}}{=} \begin{bmatrix} \dot{s}_1 \\ \vdots \\ \dot{s}_m \end{bmatrix}, \qquad
y^{(\rho)} = \begin{bmatrix} y_1^{(r_1)} \\ \vdots \\ y_m^{(r_m)} \end{bmatrix}, \tag{4.8}
\]
\[
\mathrm{sgn}(s) = \big[\mathrm{sgn}(s_1), \cdots, \mathrm{sgn}(s_m)\big]^T
\]
where sgn(·) is the signum function. Now it has been reported in [36] that a control law that achieves the sliding condition (4.7) is given by
\[
u = E^{-1}\Big(\big(\dot{s} - y^{(\rho)}\big) - M - \lambda\,\mathrm{sgn}(s)\Big) \tag{4.9}
\]
where λ = diag[λ_1, …, λ_m], with λ_i a positive number greater than the given positive number η_i. Note that
\[
\big(\dot{s}_i - y_i^{(r_i)}\big) = -r_i^{(r_i)} + k_{i(r_i-1)}\,e_i^{(r_i-1)} + \cdots + k_{i2}\,e_i^{(2)} + k_{i1}\,e_i^{(1)} + k_{i0}\,e_i
\]
so the quantity (ṡ − y^{(ρ)}) does not depend on u. The integral term in (4.4) can be omitted by setting k_{i0} = 0. Since the sliding condition also implies s_i(t) = 0, asymptotic tracking can still be achieved by the control law (4.9) as long as, for i = 1, …, m, the coefficients k_{i(r_i−1)}, …, k_{i1} are such that
\[
\lambda^{r_i-1} + k_{i(r_i-1)}\lambda^{r_i-2} + \cdots + k_{i1}
\]
is Hurwitz. In practice the implementation of variable structure controllers results in control chattering. The ideal behavior of the sliding mode controller is achieved in the theoretical limit as the switching frequency becomes infinite. In practice the small, but nonzero, delay in control switching causes the trajectory to slightly overshoot or undershoot the switching surface each time the control is switched; this is known as chattering. It has been observed that the larger the model uncertainties, the more severe the chattering becomes. The approach taken in this thesis to overcome the undesirable chattering is to introduce what is known as a boundary layer around the sliding surface and to approximate the switching control law by a continuous control inside this boundary layer. Thus the discontinuous control law sgn(s_i) is often replaced by the saturation function sat(s_i/ε_i), where
\[
\mathrm{sat}(x) = \begin{cases} x, & |x| \le 1 \\ \mathrm{sgn}(x), & |x| > 1 \end{cases} \tag{4.10}
\]
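A minimal scalar sketch makes the effect of (4.10) visible. The plant, gain and initial condition below are assumptions chosen for illustration, not taken from the missile loop: the uncertain plant is ẏ = ay + v with sliding surface s = y, the discontinuous law v = −λ sgn(s) chatters once the surface is reached, while v = −λ sat(s/ε) settles smoothly inside the boundary layer.

```python
import numpy as np

# Assumed toy plant y_dot = a*y + v with uncertain a; sliding surface s = y.
def simulate(switch, a=0.8, lam=2.0, dt=1e-3, T=2.0):
    y, traj = 1.0, []
    for _ in range(int(T / dt)):
        v = -lam * switch(y)            # sliding mode law v = -lam*switch(s)
        y += (a * y + v) * dt
        traj.append(y)
    return np.array(traj)

eps = 0.05
sgn_traj = simulate(np.sign)                                  # discontinuous law
sat_traj = simulate(lambda s: np.clip(s / eps, -1.0, 1.0))    # boundary layer

# Both laws drive s toward zero, but the sgn law chatters: in steady state
# its trajectory crosses the surface on (almost) every integration step.
sgn_flips = int(np.sum(np.diff(np.sign(sgn_traj[-200:])) != 0))
sat_flips = int(np.sum(np.diff(np.sign(sat_traj[-200:])) != 0))
print(sgn_flips, sat_flips)   # many sign flips vs. none
```

The trade-off is the usual one: inside the boundary layer the sat law only guarantees convergence to a neighbourhood of s = 0 of width ε, not to the surface itself.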
4.2.2 Application to the STT missile model
As we have already applied the input-output feedback linearization approach to the nominal missile model in Section 2.2.1, let us start from the linearized input-output relation obtained in (2.10),
\[
\begin{bmatrix} \dot{y}_1 \\ \dot{y}_2 \\ \dot{y}_3 \end{bmatrix} =
\begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} \tag{4.11}
\]
and we can easily observe that r_1 = 1, r_2 = 1, r_3 = 1. So, referring to equation (4.4), we can say that in this particular case there will be three sliding surfaces, namely s_1 = e_1, s_2 = e_2, s_3 = e_3, where
\[
e_1 = q - q_d, \qquad e_2 = r - r_d, \qquad e_3 = p - p_d. \tag{4.12}
\]
As discussed earlier, q_d, r_d, p_d can be obtained from the outer loop, or lateral acceleration error, dynamics. So from (4.8) we can write
\[
\big(\dot{s} - y^{(\rho)}\big) =
\begin{bmatrix} \dot{s}_1 - \dot{q} \\ \dot{s}_2 - \dot{r} \\ \dot{s}_3 - \dot{p} \end{bmatrix} =
\begin{bmatrix} -k_{11}\,\dot{q}_d \\ -k_{21}\,\dot{r}_d \\ -k_{31}\,\dot{p}_d \end{bmatrix}
\]
Now the control input can easily be found from (4.9), with a boundary layer vector
\[
\varepsilon = \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \end{bmatrix}
= \begin{bmatrix} 0.1 \\ 0.2 \\ 0.2 \end{bmatrix}
\]
The design parameters are given by k_11, k_21 and k_31, and they have been chosen by trial and error. It has been observed for this particular case that with higher values of k_ij the system becomes unstable and shows more chattering, while with lower values of k_ij the system response lags the nominal FBLC by a few seconds. Thus we have chosen k_ij values for which the system performance is more or less satisfactory. The 6-DOF simulation results are shown in the next section to check the system performance under the sliding condition.
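Under the decoupled dynamics (4.11) the resulting three-channel law is easy to sketch. In the fragment below the switching gains and the sinusoidal reference signals are assumptions for illustration only (in the thesis q_d, r_d, p_d come from the outer loop); the boundary layers are the ones quoted above, and each channel applies the smoothed law v_i = ṙ_i − λ_i sat(s_i/ε_i).

```python
import numpy as np

# Sketch of the three-channel smoothed SMC on the feedback-linearized rate
# loop y_i_dot = v_i from (4.11), with surfaces s_i = e_i = y_i - ref_i.
eps = np.array([0.1, 0.2, 0.2])                 # boundary layers from the text
lam = np.array([5.0, 5.0, 5.0])                 # assumed switching gains
dt, T = 1e-3, 3.0
t = np.arange(0.0, T, dt)
ref = np.vstack([np.sin(t), 0.5 * np.cos(t), np.zeros_like(t)])      # assumed q_d, r_d, p_d
ref_dot = np.vstack([np.cos(t), -0.5 * np.sin(t), np.zeros_like(t)])

y = np.zeros(3)                                  # body rates q, r, p
for k in range(len(t)):
    e = y - ref[:, k]                            # tracking errors = surfaces
    v = ref_dot[:, k] - lam * np.clip(e / eps, -1.0, 1.0)
    y += v * dt                                  # decoupled integrators

print(np.abs(y - ref[:, -1]))                    # final tracking errors are small
```

Because the feedback-linearized channels are decoupled integrators, each surface can be driven independently, which is precisely what makes the three-surface design above tractable.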
4.2.2.1 Simulation Results
The simulation results shown for the sliding mode control are based on the detailed 6-DOF model given by (1.3) and (1.4). First we tried the same variations in aerodynamic coefficients given in Table 3.1, but the control law (4.9) failed to handle that much variation and the system became unstable, with heavy chattering. In the second stage we decreased the variations in the aerodynamic coefficients. The simulations have been performed for open loop guidance as well as for closed loop
Aerodynamic coefficients (variation in %): CL ±30; CS ±5; CN ±5; Cm ±5; Cn ±5; Clζ ±5; Cnη ±5
Thrust misalignment (variation in %): TmX ±1

Table 4.1: Variation in aerodynamic coefficients and thrusts in x−y−z directions
guidance with seeker. Open loop simulations have been performed for comparison with the H∞ robust control performance presented in Chapter 3. Figures 4.2, 4.3 and 4.4 show the input-output linearization with the sliding mode controller (FBLCSM) in comparison with FBLC with a linear controller (FBLCL), with guidance and seeker in open loop, as done before for the H∞ controller. It can be seen, however, that FBLCSM brings no great improvement.
Figure 4.5 shows the input deflections for the sliding mode controller and FBLC. Here also we can see that in some cases the sliding mode controller demands more input for almost the same performance. So if we compare the open loop performances of SMC and H∞ control, we can see that the latter can handle larger aerodynamic variations than the former. As stated before, the sliding mode control law has also been tested on the FORTRAN 6-DOF platform with
closed loop guidance and seeker. Some of these results are explained below. We will see that the open loop and closed loop simulation results give almost the same performance for the sliding mode controller in comparison with FBLC. Figure 4.6 shows the pitch, yaw and roll rates for FBLC and the sliding mode controller; here we can see that FBLC and the sliding mode controller behave similarly.
Figures 4.7 and 4.8 show almost similar responses for α, β and the gimbal angles for both FBLCL and FBLCSM. Figure 4.9 shows the pitch and yaw latax profiles, from which it can be concluded that performance is similar for the small aerodynamic perturbations shown in Table 4.1. For heavy perturbations in the aerodynamic coefficients, the FBLCL as well as the sliding mode controller become unstable. Figures 4.10 and 4.11 show the pitch-yaw-roll channel deflections and the fin distributions respectively. As expected, the sliding mode controller demands more fin deflection than the FBLCL. The rates of fin deflection are shown in Figure 4.12, and the rate stays below the maximum level.
Figure 4.13 represents the variation of the aerodynamic coefficients; for small perturbations in the aerodynamic coefficients the responses for FBLC and the sliding mode control remain more or less the same. Thus we have seen that the sliding mode control implemented here can tolerate only very small perturbations. Performance wise, FBLCL and FBLCSM are more or less similar, and we have not found the significant improvements claimed in [36].
Figure 4.2: Linearization in pitch channel (qdot and v1, normalised, vs. normalised time, for the FBLC and SMC controllers)
Figure 4.3: Linearization in yaw channel (rdot and v2, normalised, vs. normalised time, for the FBLC and SMC controllers)
Figure 4.4: Linearization in roll channel (pdot and v3, normalised, vs. normalised time, for the FBLC and SMC controllers)
Figure 4.5: Control deflection comparison (pitch, yaw and roll deflections for SMC and FBLC vs. normalised time)
Figure 4.6: Pitch, yaw and roll rates (normalised, for sliding mode and FBLC, comparative 6-DOF simulation)
Figure 4.7: α and β (normalised, body axes, for sliding mode and FBLC, comparative 6-DOF simulation)
Figure 4.8: Gimbal angle profile (normalised elevation and azimuth gimbal angles vs. normalised time)
Figure 4.9: Pitch and yaw latax (normalised, for sliding mode and FBLC, comparative 6-DOF simulation)
Figure 4.10: Pitch, yaw and roll deflections (effective deflections δPB, δYB, δR for sliding mode and FBLC)
Figure 4.11: Fin demands (normalised deflections of fins 1 to 4 for sliding mode and FBLC)
Figure 4.12: Fin deflection rate (normalised deflection rates of fins 1 to 4 for sliding mode and FBLC)
Figure 4.13: Force and moment coefficients (rolling moment coefficient CL, side force coefficient CS and yawing moment coefficient Cn vs. normalised time)
4.3 Comments
The above simulation results describe the performance of the sliding mode controller for a feedback linearized skid-to-turn homing missile. The main idea in designing the sliding mode controller is to tackle the model uncertainties which are inherently present in the system. The simulation results presented here constitute a case study of robust feedback linearization where the parametric uncertainties are small. In comparison with Chapter 3, one can say that the sliding mode control law does not give a very satisfactory performance even under the small aerodynamic perturbations shown in Table 4.1; it has, however, been tested successfully on the closed loop, seeker based, detailed 6-DOF model. On the other hand, the H∞ robust control law, which gives satisfactory performance under the large aerodynamic uncertainties shown in Table 3.1, has been tested in open loop guidance only.
Chapter 5
Conclusions
In this thesis, an approach has been presented for controlling a highly maneuverable short range surface-to-surface missile. Our choices have been oriented by the industrial as well as the academic context. The necessity of increasing missile performance at reduced flight range has motivated new investigations to estimate not only the potential of recent theoretical control methods, but also their readiness for application, in order to meet industrial demands.
In this thesis, the application of input-output linearization has been presented for a short range surface-to-surface skid-to-turn homing missile. Here, a state dependent nonlinear feedback control law has been designed to improve the overall missile performance in view of reduced range, smaller control effort, etc. Aerodynamic missiles generally suffer from coupling effects among the pitch, yaw and roll axes. This coupling results in large variations in the aerodynamic coefficients, side-slip, control effort in yaw, etc. These in turn may cause fin saturation and gimbal angle limit violations, which make the missile lose its track. The conventional three loop autopilot generally fails to solve this coupling problem. To get rid of this problem, a nonlinear multivariable approach has been proposed. In this approach, the whole missile model has been feedback linearized, and then a linear proportional controller has been designed to obtain the tracking performance in the inner rate loop. Thus the pitch-yaw-roll dynamics, specifically the rate dynamics, of the missile become linear and decoupled, which is the most essential requirement for most of the
aerodynamic missiles. The simulation results show the evidence of linearization and de-
coupling and the improved performance of the Feedback Linearizing Control (FLC) law
over the traditional three loop autopilot.
Although input-output linearization is successful in decoupling and linearizing nonlinear dynamics, it fails when the plant dynamics become uncertain. Generally, short range aerodynamic missiles, in which the boost time is small, suffer from uncertainties in the aerodynamic coefficients. This problem becomes more acute when the missile tries to turn rapidly to track the target. In view of this problem, two different robust control laws have been formulated; in both cases the nominal missile dynamics has been input-output linearized as before. In the first case, a linear matrix inequality approach to H∞ control has been presented along with its application to the uncertain missile dynamics. It has been shown through open loop MATLAB simulation results that up to 100% uncertainty in the aerodynamic coefficients can be tolerated with this method.
In the second case, the alternative approach of sliding mode control has been considered. Here also, the sliding mode control law has been applied to an input-output linearized plant. Simulation results have been presented with closed loop guidance and seeker, and it has been seen that this method is not so useful when the system poses very high uncertainties in the aerodynamic coefficients.
All of the control laws presented in this thesis require full state measurement. But only some of the states are measured (the rates in the three channels) and the rest are unmeasured (the velocity components in the three channels), which means the unmeasured states have to be estimated in order to realize the control laws practically. For that purpose, a nonlinear Luenberger observer has been used. The forward velocity of the missile is assumed to be known from the thrust profile, and the observer equations have been formulated in terms of α and β as inputs to the observer, i.e., the outputs of the system are direct functions of α and β. Thus α and β are first estimated, and the velocity components are then derived easily from standard mathematical relations. Simulation results show good tracking performance in the presence of sensor noise.
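The last step, recovering the velocity components from the angle estimates, can be illustrated with the standard wind-angle relations. This is a hedged sketch: the convention assumed here is α = atan(w/u) and β = asin(v/V), which may differ in sign or definition from the convention used in the thesis.

```python
import math

# Recover body-axis velocity components (u, v, w) from total speed V and
# the estimated incidence angles, under the assumed convention
# alpha = atan2(w, u), beta = asin(v / V).
def body_velocities(V, alpha, beta):
    u = V * math.cos(alpha) * math.cos(beta)
    v = V * math.sin(beta)
    w = V * math.sin(alpha) * math.cos(beta)
    return u, v, w

V = 300.0                                   # assumed forward speed for the example
u, v, w = body_velocities(V, math.radians(5.0), math.radians(2.0))

# Round trip: the recovered components reproduce the angles and total speed.
print(math.degrees(math.atan2(w, u)),       # ~5.0 degrees
      math.degrees(math.asin(v / V)),       # ~2.0 degrees
      math.sqrt(u*u + v*v + w*w))           # ~300.0
```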
The main contributions of the thesis can be summarized as follows:
1. Minimized roll rate even during high angle of attack maneuver.
2. Good decoupling among the three axes during pitch maneuver.
3. Estimation of the unmeasured states of the system using a nonlinear observer re-
quired for the computation of nonlinear feedback.
4. Design of a robust H∞ control law that retains performance even with aerodynamic
uncertainties.
5. Design of a robust sliding mode controller to tolerate the uncertainties caused due
to aerodynamic coefficients.
Some recommendations can be drawn in view of the overall work done in this thesis.
• Outer loop feedback linearization along with the rate loop feedback linearization
will make the whole plant dynamics linear and decoupled.
• Some more advanced robust control laws, such as µ-synthesis and H∞ loop shaping, should be tested in order to cater for larger aerodynamic uncertainties and disturbances.
• The observer gain should be made adaptive and robust across different flight conditions.
The analysis and numerical results presented in this thesis amply demonstrate the feasibility of designing nonlinear control systems for the next generation of high-performance missile autopilots. Nonlinear design methods have the potential to enhance missile performance while simplifying the design process. This can result in a lighter and more accurate missile system.
Appendix A
A Brief Theory of Feedback
Linearization
A.1 Introduction
A single-input single-output (SISO) control-affine system obeys the following state and output equations:

ẋ = f(x) + g(x)u
y = h(x),   (A.1)

where x ∈ R^n is the state vector, u is a scalar input, y is a scalar output, f : R^n → R^n and g : R^n → R^n are vector fields, and h : R^n → R is a scalar function. Suppose that f(·), g(·) and h(·) are differentiable.
Obviously y does not depend explicitly on u since u is not an argument of function
h. If u is changed instantaneously, there will be no immediate change in y. The change
comes gradually via x. To check the corresponding behavior of y, we can differentiate the output equation [54]:

ẏ = (∂h/∂x) ẋ = (∂h/∂x)(f(x) + g(x)u) = (∂h/∂x) f(x) + (∂h/∂x) g(x)u   (A.2)

This shows that ẏ depends directly on u if and only if (∂h/∂x) g(x) ≠ 0. We say that the system has relative degree 1 if (∂h/∂x) g(x) ≠ 0.
Now, let us assume that (∂h/∂x) g(x) = 0. Differentiating the output y once more, we obtain:

ÿ = ∂/∂x[(∂h/∂x) f(x)] f(x) + ∂/∂x[(∂h/∂x) f(x)] g(x)u   (A.3)

The system is said to have relative degree 2 if:

(∂h/∂x) g(x) = 0   and   ∂/∂x[(∂h/∂x) f(x)] g(x) ≠ 0   (A.4)
The idea of relative degree can be easily generalized with the help of the following notation.
Definition 1 (Lie Derivative) Let λ : R^n → R be a differentiable function and f a vector field, both defined on an open subset U of R^n. The derivative of λ along f, or Lie derivative of λ along f, is given by the inner product

⟨∂λ/∂x, f(x)⟩ = (∂λ/∂x) f(x)   (A.5)

The Lie derivative of λ along f is usually denoted by L_f λ, so that:

L_f λ(x) = Σ_{i=1}^{n} (∂λ/∂x_i) f_i(x)   (A.6)
If L_f λ is again differentiated along another vector field g, the following is obtained:

L_g L_f λ(x) = (∂(L_f λ)/∂x) g(x)   (A.7)

This operation can be applied recursively along the same vector field. L_f^k λ indicates that λ is differentiated k times along f, such that

L_f^k λ(x) = (∂(L_f^{k-1} λ)/∂x) f(x),   L_f^0 λ(x) = λ(x)   (A.8)
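The Lie-derivative machinery above is easy to exercise numerically. The following sketch (illustrative Python, not part of the thesis; the toy field f and output h are made-up examples) approximates L_f h and L_f² h with central finite differences and checks them against the hand-computed values for f(x) = (x₂, −x₁), h(x) = x₁, where analytically L_f h = x₂ and L_f² h = −x₁.

```python
# Illustrative sketch (not thesis code): Lie derivatives via central
# finite differences for the made-up system f(x) = (x2, -x1), h(x) = x1.

def grad(func, x, eps=1e-6):
    """Central-difference gradient of a scalar function."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((func(xp) - func(xm)) / (2 * eps))
    return g

def lie(func, field, x):
    """Lie derivative L_f h(x) = <dh/dx, f(x)>, as in Equation A.5."""
    return sum(gi * fi for gi, fi in zip(grad(func, x), field(x)))

f = lambda x: [x[1], -x[0]]   # drift vector field
h = lambda x: x[0]            # output function

x0 = [1.0, 2.0]
print(lie(h, f, x0))          # L_f h = x2, so close to 2.0
# Second Lie derivative L_f^2 h = L_f(x2) = -x1, so close to -1.0
print(lie(lambda x: lie(h, f, x), f, x0))
```

The nested call illustrates the recursion of Equation A.8: L_f² h is simply the Lie derivative of L_f h.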
Definition 2 (Relative Degree) The SISO system given by Equation A.1 is said to have relative degree r at a point x° if

1. L_g L_f^{k-1} h(x) = 0 for all x in a neighborhood of x° and all k = 1, ..., r − 1;

2. L_g L_f^{r-1} h(x°) ≠ 0.

The following formula describing the time derivatives of the output is an immediate consequence of Definition 2:

d^k y / dt^k = y^(k) = L_f^k h(x),   k = 1, ..., r − 1
y^(r) = L_f^r h(x) + L_g L_f^{r-1} h(x) u   (A.9)
Remark 1 For SISO linear systems

ẋ = Ax + Bu
y = Cx   (A.10)

the relative degree is the difference between the degrees of the denominator and the numerator of the equivalent transfer function:

G(s) = C(sI − A)^{-1} B   (A.11)

Equivalently, the relative degree of a SISO linear system is the positive integer r such that:

L_g L_f^{r-1} h = CA^{r-1}B ≠ 0   (A.12)
Definition 3 (Strong Relative Degree) A system is said to have a strong relative degree if its relative degree is r for all x° ∈ R^n.
A.1.1 Input-output linearisation
Consider a system with a strong relative degree r. Then the rth derivative of the output is given by:

y^(r) = L_f^r h + L_g L_f^{r-1} h · u,   L_g L_f^{r-1} h ≠ 0   (A.13)

This suggests an interesting possibility. Introduce the feedback law:

u = (1 / L_g L_f^{r-1} h) (v − L_f^r h),   (A.14)

where v is a reference signal. The resulting relationship between v and y^(r) is:

y^(r) = v   (A.15)

which is a linear system. Taking the Laplace transform:

y(s) = (1/s^r) v(s)   (A.16)

By using the feedback law A.14, we have obtained a system that is linear from the reference signal v to the output y.
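As a concrete illustration of the law A.14 (a hedged Python sketch, not thesis code; the plant below is a made-up example), consider x₁' = x₂, x₂' = −x₁³ + u with y = x₁. This system has relative degree 2 with L_g L_f h = 1, so u = v − L_f² h = v + x₁³ reduces the v → y map to the double integrator y'' = v:

```python
# Illustrative sketch (toy plant, not from the thesis): input-output
# linearisation of x1' = x2, x2' = -x1^3 + u, y = x1 via u = v + x1^3.

def simulate(v, x0, dt=1e-3, steps=2000):
    """Euler-integrate the closed loop under the linearising law."""
    x1, x2 = x0
    for _ in range(steps):
        u = v + x1 ** 3              # cancels the -x1^3 nonlinearity
        dx1, dx2 = x2, -x1 ** 3 + u  # plant dynamics
        x1 += dt * dx1
        x2 += dt * dx2
    return x1

# Starting from rest with v = 1, the output should match the double
# integrator y(T) = v*T^2/2 over T = 2 s, i.e. approach 2.
print(simulate(1.0, (0.0, 0.0)))
```

The cancellation is exact in the control law; the small residual error in the printed value comes only from the Euler discretization.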
Proposition 1 Suppose a system of the form of Equation A.1 has relative degree r at a point x°. Define

φ_1(x) = h(x), ..., φ_r(x) = L_f^{r-1} h(x).   (A.17)

If r is strictly less than n, it is always possible to find n − r additional functions φ_{r+1}(x), ..., φ_n(x) such that the mapping

Φ(x) = [φ_1(x), ..., φ_r(x), ..., φ_n(x)]^T   (A.18)

has a Jacobian matrix which is nonsingular at x° and therefore represents a local coordinate transformation in a neighborhood of x°. Moreover, it is always possible to choose φ_{r+1}(x), ..., φ_n(x) in such a way that

L_g φ_i(x) = 0,   r + 1 ≤ i ≤ n.   (A.19)
Differentiating Equation A.17 with respect to time and using Equation A.9, we obtain:

dφ_1(x)/dt = dh(x)/dt = L_f h(x) = φ_2(x)
...
dφ_{r-1}(x)/dt = L_f^{r-1} h(x) = φ_r(x)
dφ_r(x)/dt = L_f^r h(x) + L_g L_f^{r-1} h(x) u   (A.20)
By introducing the variables

ξ = [ξ_1, ..., ξ_r]^T = [φ_1(x), ..., φ_r(x)]^T   (A.21)

η = [η_1, ..., η_{n-r}]^T = [φ_{r+1}(x), ..., φ_n(x)]^T   (A.22)

Equation A.20 can be written as follows:

dξ_1/dt = ξ_2(t)
...
dξ_{r-1}/dt = ξ_r(t)
dξ_r/dt = b(ξ, η) + a(ξ, η)u   (A.23)

where y = ξ_1 and

a(ξ, η) = L_g L_f^{r-1} h(Φ^{-1}(ξ, η))
b(ξ, η) = L_f^r h(Φ^{-1}(ξ, η)).   (A.24)

Notice that

x = Φ^{-1}(ξ, η).   (A.25)
As far as the other new coordinates η = [η_1, ..., η_{n-r}]^T are concerned, we cannot expect any special structure in the corresponding equations. However, if φ_{r+1}, ..., φ_n have been chosen such that L_g φ_{r+i} = 0, i = 1, ..., (n − r), then

dη_i/dt = L_f φ_{r+i}(x(t)),   i = 1, ..., (n − r)   (A.26)

Defining q_i(ξ, η) = L_f φ_{r+i}(Φ^{-1}(ξ, η)), the derivatives of η_i can be written as follows:

dη_i/dt = q_i(ξ, η),   i = 1, ..., (n − r)   (A.27)

or, in vector form:

dη/dt = q(ξ, η)   (A.28)
In summary, the normal form of a control-affine SISO nonlinear system of Equation A.1, with relative degree r around a point x°, is given by:

dξ_1/dt = ξ_2
dξ_2/dt = ξ_3
...
dξ_{r-1}/dt = ξ_r
dξ_r/dt = b(ξ, η) + a(ξ, η)u
dη/dt = q(ξ, η)
y = ξ_1   (A.29)
Notice that if we choose the feedback law:

u = (1/a(ξ, η)) (v − b(ξ, η))
  = (1 / L_g L_f^{r-1} h(Φ^{-1}(ξ, η))) (v − L_f^r h(Φ^{-1}(ξ, η)))
  = (1 / L_g L_f^{r-1} h(x)) (v − L_f^r h(x))   (A.30)
then the resulting system from the reference input v to the output y = ξ_1 is linear. The normal form is given by:

dξ_1/dt = ξ_2
dξ_2/dt = ξ_3
...
dξ_{r-1}/dt = ξ_r
dξ_r/dt = v
dη/dt = q(ξ, η)
y = ξ_1   (A.31)

Notice, however, that the internal dynamics dη/dt = q(ξ, η) are possibly nonlinear, so the system has not been fully linearised by the feedback law.
If the feedback law is changed to:

u = (1/a(ξ, η)) (v − b(ξ, η) − λ_0 ξ_1 − ... − λ_{r-1} ξ_r)   (A.32)

then the normal form is given by:

dξ_1/dt = ξ_2
dξ_2/dt = ξ_3
...
dξ_{r-1}/dt = ξ_r
dξ_r/dt = −λ_0 ξ_1 − ... − λ_{r-1} ξ_r + v
dη/dt = q(ξ, η)
y = ξ_1   (A.33)

and the transfer function from input to output is:

y(s)/v(s) = 1 / (s^r + λ_{r-1} s^{r-1} + ... + λ_0)   (A.34)
If the unforced internal dynamics (or zero dynamics), defined by setting ξ = 0 in dη/dt = q(ξ, η), are asymptotically stable around η = 0, then the control signal remains bounded for bounded v and ξ. Notice that ξ is bounded if v is bounded, since the transfer function y(s)/v(s) in A.34 is stable for a suitable choice of the coefficients λ_i.
A.2 Multi-input multi-output systems

The concepts used in the previous section for SISO systems, such as input-state linearization, input-output linearization, normal forms, zero dynamics, and so on, can be extended to MIMO systems. For the MIMO case, we consider square systems (i.e., systems with the same number of inputs and outputs) of the following form:

ẋ = f(x) + g_1(x)u_1 + ... + g_m(x)u_m
y_1 = h_1(x)
...
y_m = h_m(x)   (A.35)

where x is the state vector, u_i (i = 1, ..., m) are the control inputs, y_j (j = 1, ..., m) are the outputs, f and the g_i are smooth vector fields, and the h_j are smooth scalar functions. If we collect the control inputs u_i into a vector u, the corresponding vector fields g_i into a matrix G, and the outputs into a vector y, the system's equations can be written compactly as

ẋ = f(x) + G(x)u
y = h(x)   (A.36)
A.2.1 Feedback Linearization of MIMO Systems
The approach to obtain the input-output linearization of MIMO systems is again to differentiate the outputs y_j of the system until the inputs appear, as in the SISO case. To start with,

ẏ_j = L_f h_j + Σ_{i=1}^{m} (L_{g_i} h_j) u_i.   (A.37)

If L_{g_i} h_j(x) = 0 ∀i, then the inputs do not appear and we have to differentiate again. Assume that r_j is the smallest integer such that at least one of the inputs appears in y_j^{(r_j)}; then

y_j^{(r_j)} = L_f^{r_j} h_j + Σ_{i=1}^{m} L_{g_i} L_f^{r_j - 1} h_j u_i,   (A.38)

with L_{g_i} L_f^{r_j - 1} h_j(x) ≠ 0 for at least one i, ∀x ∈ Ω. If we perform the above procedure for each output y_j, we obtain a total of m equations in the above form, which can be written compactly as

[y_1^{(r_1)}, ..., y_m^{(r_m)}]^T = [L_f^{r_1} h_1, ..., L_f^{r_m} h_m]^T + E(x) [u_1, ..., u_m]^T,   (A.39)

where the m × m matrix E is defined as

E(x) = [ L_{g_1} L_f^{r_1 - 1} h_1   ...   L_{g_m} L_f^{r_1 - 1} h_1 ]
       [ ...                                ...                      ]
       [ L_{g_1} L_f^{r_m - 1} h_m   ...   L_{g_m} L_f^{r_m - 1} h_m ]   (A.40)
The matrix E(x) is called the decoupling matrix for the MIMO system. If the decoupling matrix is non-singular in a region Ω around a point x°, then the input transformation

u = −E^{-1} [L_f^{r_1} h_1, ..., L_f^{r_m} h_m]^T + E^{-1} [v_1, ..., v_m]^T   (A.41)

yields a linear differential relation between the output y and the new input v:

[y_1^{(r_1)}, ..., y_m^{(r_m)}]^T = [v_1, ..., v_m]^T   (A.42)
Note that the above input-output relation is decoupled, in addition to being linear. Since each v_j affects only the corresponding output y_j, but not the others, a control law of the form A.41 is called a decoupling control law, or non-interacting control law. As a result of the decoupling, one can use SISO design on each v_j − y_j channel in the above decoupled dynamics to construct tracking or stabilization controllers. It is useful to formalize the concept of relative degree for MIMO systems at this point. Since there is a relative degree associated with each output, the relative degree of the MIMO system is defined by m integers.
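The decoupling law A.41 can be illustrated with plain numbers. In the sketch below (illustrative Python, not thesis code; the 2×2 matrix E and the drift terms a are made-up values standing in for E(x₀) and L_f^{r_j} h_j(x₀)), applying u = E⁻¹(v − a) to the output-derivative relation recovers v exactly, channel by channel:

```python
# Hedged numeric illustration of the decoupling law (A.41); E and a
# below are arbitrary made-up values, not data from the thesis.

def inv2(E):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (e11, e12), (e21, e22) = E
    det = e11 * e22 - e12 * e21
    assert det != 0, "decoupling matrix must be non-singular"
    return [[e22 / det, -e12 / det], [-e21 / det, e11 / det]]

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

E = [[2.0, 1.0], [0.5, 3.0]]   # decoupling matrix at some state
a = [0.7, -1.2]                # drift terms L_f^{r_j} h_j
v = [1.0, 0.0]                 # command only channel 1

u = matvec(inv2(E), [vi - ai for vi, ai in zip(v, a)])
ydot = [ai + ei for ai, ei in zip(a, matvec(E, u))]
print(ydot)  # recovers v: channel 2 stays at (numerically) zero
```

Because ydot = a + E·E⁻¹(v − a) = v, channel 2 is unaffected by the command on channel 1, which is exactly the non-interacting property described above.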
Definition 4 The system A.35 is said to have relative degree (r_1, ..., r_m) at x° if there exists a neighborhood Ω of x° such that ∀x ∈ Ω,

• L_{g_i} L_f^k h_j(x) = 0 for 0 ≤ k < r_j − 1 and 1 ≤ i, j ≤ m;

• E(x) is non-singular.
The total relative degree of the system is defined by

r = r_1 + ... + r_m.

Let us consider the case r < n first. A normal form can be obtained for the system in a manner similar to the SISO case, as we now show. First, choose as coordinates

ζ_1^1 = h_1(x),  ζ_2^1 = L_f h_1(x),  ...,  ζ_{r_1}^1 = L_f^{r_1 - 1} h_1(x)
...
ζ_1^m = h_m(x),  ζ_2^m = L_f h_m(x),  ...,  ζ_{r_m}^m = L_f^{r_m - 1} h_m(x)   (A.43)

These are simply the m outputs y_j and their derivatives up to order r_j − 1.
Similarly to the SISO case, the r coordinates ζ_i^j (j = 1, ..., m; i = 1, ..., r_j) are independent and can be used as a partial set of a new state vector. This is because the gradient vectors

∇(L_f^i h_j)(x),   0 ≤ i ≤ r_j − 1,   1 ≤ j ≤ m

are linearly independent, as can be shown in a manner analogous to the SISO case, using the non-singularity of the decoupling matrix E. Now, let us complete the choice of the new state vector by choosing n − r more functions η_1(x), ..., η_{n-r}(x) which are independent with respect to each other and to the r coordinates chosen earlier. This can always be done, based on the Frobenius theorem. However, unlike the SISO case, it is no longer possible to guarantee that

L_{g_i} η_k(x) = 0   ∀x ∈ Ω,   1 ≤ i ≤ m,   1 ≤ k ≤ n − r

unless the vector fields g_1, ..., g_m are involutive on Ω. As a result, the state equations for these n − r coordinates will in general have the input vector u appearing.
With (ζ, η) as coordinates, the system equations can also be put into a "normal form". Specifically, the external dynamics is

dζ_1^j/dt = ζ_2^j
...
dζ_{r_j}^j/dt = a_j(ζ, η) + Σ_{i=1}^{m} b_{ij}(ζ, η) u_i

where j = 1, 2, ..., m, and

a_j(ζ, η) = L_f^{r_j} h_j(x)
b_{ij}(ζ, η) = L_{g_i} L_f^{r_j - 1} h_j(x).

The internal dynamics is

dη/dt = w(ζ, η) + P(ζ, η) u

with, for k = 1, ..., n − r and i = 1, ..., m,

w_k(ζ, η) = L_f η_k(x)
P_{ki}(ζ, η) = L_{g_i} η_k(x).

Note that P ∈ R^{(n-r)×m} and w ∈ R^{n-r}. As in the SISO case, the feedback law A.41 renders the (n − r) states η unobservable.
An interesting case of the above input-output linearization corresponds to the total relative degree being n, i.e.,

Σ_{j=1}^{m} r_j = n.

In this case, there is no internal dynamics. With the control law in the form of A.41, we obtain an input-state linearization of the original nonlinear system. With the equivalent inputs v_i designed as in the SISO case, both stabilization and tracking can then be achieved for the system without any worry about the stability of the internal dynamics. We remark that the necessary and sufficient conditions for input-state linearization of multi-input nonlinear systems are similar to, but more complex than, those for single-input systems.
A.2.2 Zero-dynamics and control design
When designing controllers based on the linear input-output relation in A.39 [63], one has to be concerned with the stability of the internal dynamics. It is therefore of interest to study the stability of the zero dynamics, a limiting case of the internal dynamics in which the output is exactly zero. Similarly to the SISO case, the zero dynamics of a MIMO system is obtained by constraining the output to zero [70], [19].

Definition 5 The zero dynamics of the MIMO nonlinear system is the dynamics of the system when the outputs are constrained to be identically zero.

Since the constraint that the output is identically zero implies that all the derivatives of the output are zero, we have

ζ(t) ≡ 0.

In order to keep the outputs identically zero, the control inputs must be chosen as

u(t) = −E^{-1}(0, η) a(0, η),

where η(t) is the solution of the differential equation

dη/dt = w(0, η) − P(0, η) E^{-1}(0, η) a(0, η)

with η(0) arbitrary. In the original x coordinates, when the system operates in its zero dynamics, the states x evolve on the surface
M* = { x ∈ Ω | h_j(x) = L_f h_j(x) = ... = L_f^{r_j - 1} h_j(x) = 0, 1 ≤ j ≤ m }.

Of course, the initial state x(0) must be chosen to be on this surface. In terms of x, the constraining control input u is

u*(x) = −E^{-1}(x) [L_f^{r_1} h_1(x), ..., L_f^{r_m} h_m(x)]^T   (A.44)

The zero dynamics is given by the equation

ẋ = f(x) + G(x)u*(x)   (A.45)

with the states constrained on the surface M*.

Similarly to the SISO case, we can define the notion of minimum phase systems.

Definition 6 The MIMO nonlinear system A.35 is said to be asymptotically minimum phase if the zero dynamics is locally asymptotically stable.

The definition of exponentially minimum phase is similar. For minimum phase systems, the control design results of Section 2.2 can easily be extended to the MIMO case.
Appendix B
LMI Approach to H∞ Control
B.1 The Theory of H∞ Control Based on the LMI Approach

A brief introduction to robust control was given in Chapter 3. Here we give a more detailed analysis of the robust control formulation.
B.1.1 Singular value decomposition

In classical control theory, the Bode magnitude plot gives the gain of a SISO system at different frequencies. If the system is MIMO, however, the Bode plots of the individual elements of the transfer matrix give little idea about the gain of the system, since they ignore the interactions among the elements. The eigenvalues of the transfer matrix could be an answer, but they are not a good measure of the gain or size of a matrix. Moreover, eigenvalues are defined only for square transfer matrices (equal numbers of inputs and outputs). Therefore, a more general quantity is required, and singular values have been found suitable.
The singular values of a complex-valued matrix A are defined as the positive square roots of the eigenvalues of A*A, where A* denotes the conjugate transpose. Singular values are always real and non-negative, and are generally denoted by σ. Just as a square matrix can be diagonalized by a similarity transformation with its modal matrix, a possibly non-square matrix can be diagonalized by a method called the singular value decomposition. Let A be an m × n matrix with rank r. Then it can be decomposed as

A = U Σ V*,

where U is an m × m unitary matrix, V is an n × n unitary matrix, and Σ is an m × n matrix whose leading diagonal entries are the singular values σ_1, ..., σ_r, with all other entries zero.
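For a real 2 × 2 matrix, the definition above can be carried out by hand: form AᵀA and take square roots of its eigenvalues, which follow from the quadratic formula on the characteristic polynomial. The sketch below (illustrative Python, not thesis code; the matrix A is a made-up example) does exactly that:

```python
# Self-contained check (illustration only): singular values of a real
# 2x2 matrix as the square roots of the eigenvalues of A^T A.
import math

def singular_values_2x2(A):
    (a, b), (c, d) = A
    # Entries of the symmetric matrix A^T A = [[p, q], [q, r]]
    p, q, r = a*a + c*c, a*b + c*d, b*b + d*d
    tr, det = p + r, p * r - q * q
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    return math.sqrt(lam1), math.sqrt(max(lam2, 0.0))

s1, s2 = singular_values_2x2([[2.0, 0.0], [1.0, 1.0]])
# Sanity checks: s1*s2 = |det A| and s1^2 + s2^2 = trace(A^T A).
print(s1, s2)
```

The two printed values are the gains of the matrix along its principal input directions; their product equals |det A| and the larger one is the operator 2-norm discussed in the next subsection.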
B.1.2 Norms

Norms can be viewed as a measure of the size of a matrix or a vector. The generalized p-norm of a vector x ∈ C^n is defined as

‖x‖_p := ( Σ_{i=1}^{n} |x_i|^p )^{1/p},   for 1 ≤ p < ∞.

In particular, the 1-norm, 2-norm and ∞-norm are most commonly used. They are denoted and defined in the following way:

‖x‖_1 := Σ_{i=1}^{n} |x_i|

‖x‖_2 := ( Σ_{i=1}^{n} |x_i|^2 )^{1/2}

‖x‖_∞ := max_i |x_i|
Any vector norm satisfies the following properties:

• ‖x‖ ≥ 0

• ‖x‖ = 0 if and only if x = 0

• ‖αx‖ = |α| ‖x‖ for any scalar α

• ‖x + y‖ ≤ ‖x‖ + ‖y‖

Matrix norms extend the concept of length from three-dimensional space to higher-dimensional spaces. On the basis of the vector norm, matrix (operator) norms are defined as

‖A‖_p := sup_{x ≠ 0} ‖Ax‖_p / ‖x‖_p,   where A ∈ C^{m×n}.

In particular, the operator 2-norm can be computed as ‖A‖_2 = σ̄(A), the largest singular value of A. Apart from inheriting the properties of the vector norms, matrix norms obey the following relations:

• ρ(A) ≤ ‖A‖, where ρ denotes the spectral radius

• ‖AB‖ ≤ ‖A‖ · ‖B‖

• ‖UAV‖ = ‖A‖ (for the 2-norm) for any appropriately dimensioned unitary matrices U and V
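The vector norms and the triangle inequality above are easy to verify on concrete numbers. The short sketch below (illustrative Python with made-up vectors, not thesis code) computes the 1-, 2- and ∞-norms and checks the triangle inequality for the 2-norm:

```python
# Illustrative sketch of the vector p-norms (toy numbers, not thesis data).

def p_norm(x, p):
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

def inf_norm(x):
    return max(abs(xi) for xi in x)

x, y = [3.0, -4.0], [1.0, 2.0]
n1, n2, ninf = p_norm(x, 1), p_norm(x, 2), inf_norm(x)
print(n1, n2, ninf)   # 7.0, 5.0 and 4.0 for x = (3, -4)

# Triangle inequality ||x + y||_2 <= ||x||_2 + ||y||_2:
s = [xi + yi for xi, yi in zip(x, y)]
print(p_norm(s, 2) <= p_norm(x, 2) + p_norm(y, 2))
```

Note how the three norms order as ‖x‖_∞ ≤ ‖x‖_2 ≤ ‖x‖_1, which holds for any vector.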
B.1.3 Vector Spaces
A vector space is a set whose elements are real or complex valued vectors and on which vector addition and multiplication of vectors by a scalar are defined, i.e., if v_1, v_2 ∈ C^n, where C^n denotes an n-dimensional vector space, then

(α_1 v_1 + α_2 v_2) ∈ C^n for any scalars α_1 and α_2, and β v_1 ∈ C^n for any scalar β.
B.1.4 Basis Vector
A basis is a set of linearly independent vectors that span the vector space (i.e., linear combinations of the basis vectors can produce any vector belonging to that vector space). A set of basis vectors u_1, u_2, ..., u_n is said to be orthogonal if the inner product of any two distinct vectors is zero, i.e., ⟨u_i, u_j⟩ = 0 for all i ≠ j. An orthogonal set of unit vectors is called an orthonormal set (i.e., in addition to ⟨u_i, u_j⟩ = 0 for i ≠ j, ⟨u_i, u_i⟩ = 1 for all i).
B.1.5 L2 space
Let f(t) and g(t) be two matrix-valued time functions. The L2 [or L2(−∞, ∞)] space is defined as the space of real matrix-valued time functions with finite inner products given by

⟨f(t), g(t)⟩ = ∫_{−∞}^{∞} tr[f^T(t) g(t)] dt   (B.1)

The norm induced by the inner product is given by

‖f(t)‖_2^2 = ∫_{−∞}^{∞} tr[f^T(t) f(t)] dt   (B.2)

This is how the 2-norm of signals is defined. If this norm exists, then f(t) ∈ L2(−∞, ∞). For causal signals (i.e., f(t) = 0 for t < 0), if the 2-norm exists, then f(t) ∈ L2[0, ∞).
B.1.6 L∞ space

A matrix-valued time function f(t) is said to belong to the L∞ space if it has a finite ∞-norm, defined as ‖f(t)‖_∞ = sup_t σ̄[f(t)]. If f(t) is a scalar function, then the ∞-norm is defined as ‖f(t)‖_∞ = sup_t |f(t)|.
B.1.7 H2 space

The H2 space is the frequency-domain counterpart of the L2 space. In other words, it is the space of complex matrix functions having bounded 2-norms (H2-norms), i.e.,

‖F(jω)‖_2^2 = (1/2π) ∫_{−∞}^{∞} tr[F*(jω) F(jω)] dω < ∞   (B.3)

It may be noted that the above 2-norm exists for strictly proper transfer functions with no pole on the imaginary axis. The real rational subspace of H2, which consists of all strictly proper real rational stable transfer matrices, is denoted by RH2.
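Equation B.3 can be checked numerically on a scalar example. For F(s) = 1/(s + a) the H2 norm has the closed form ‖F‖₂² = 1/(2a), so with a = 1 the frequency integral should approach 0.5. The sketch below (illustrative Python, not thesis code; the midpoint rule and truncation range are arbitrary choices) verifies this:

```python
# Numeric sanity check (illustration only) of equation B.3 for the
# scalar transfer function F(s) = 1/(s + a), where ||F||_2^2 = 1/(2a).
import math

a = 1.0
dw, W = 0.01, 1000.0            # grid step and truncation of the w-axis
total = 0.0
w = -W
while w < W:
    wm = w + dw / 2                    # midpoint rule
    total += dw / (wm * wm + a * a)    # |F(jw)|^2 = 1/(w^2 + a^2)
    w += dw
h2_sq = total / (2 * math.pi)
print(h2_sq)   # close to the closed-form value 0.5
```

The small residual error comes from truncating the infinite integration range at ±W and from the finite grid step.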
B.1.8 H∞ space

Simply stated, the H∞ space is the space of frequency-dependent matrix functions having bounded H∞-norm, which is defined as

‖F‖_∞ = sup_ω σ̄[F(jω)]   (B.4)

For scalar functions, the H∞-norm is simply the peak value of the Bode magnitude plot. The H∞-norm exists for proper transfer functions with no pole on the imaginary axis. RH∞ is the subspace of H∞ consisting of real rational proper and stable transfer matrices.
B.1.9 Packed Matrix Notation

A transfer matrix can be found from the A, B, C, D matrices as G(s) = C(sI − A)^{-1}B + D. In packed matrix notation, G(s) is represented in terms of the A, B, C, D matrices as

G = [ A  B ]
    [ C  D ]

It is to be kept in mind that G written in the above form is not a matrix in the original sense but only a notation, which gives some computational advantages. The following three formulae regarding the packed matrix notation of different interconnections of systems are very useful and will be used frequently in later sections.
Figure B.1: System interconnections: (a) Series connection, (b) Inversion, (c) Parallelconnection
Series Connection Let two systems be connected in series as shown in Fig. B.1(a). If

G1 = [ A1  B1 ]   and   G2 = [ A2  B2 ]
     [ C1  D1 ]              [ C2  D2 ]

then the cascade in which the output of G1 drives G2 (i.e., the product G2G1) is

G2G1 = [ A1     0    B1   ]
       [ B2C1   A2   B2D1 ]
       [ D2C1   C2   D2D1 ]

Inversion If

G = [ A  B ]
    [ C  D ]

then

G^{-1} = [ A − BD^{-1}C   BD^{-1} ]
         [ −D^{-1}C       D^{-1}  ]

provided D is non-singular.

Parallel Connection If

G1 = [ A1  B1 ]   and   G2 = [ A2  B2 ]
     [ C1  D1 ]              [ C2  D2 ]

then it can be shown that

G1 + G2 = [ A1  0   B1      ]
          [ 0   A2  B2      ]
          [ C1  C2  D1 + D2 ]
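The parallel-connection formula is easy to verify numerically: the packed realization with A = diag(A₁, A₂), B stacked, C concatenated and D = D₁ + D₂, evaluated as C(sI − A)⁻¹B + D, must equal G₁(s) + G₂(s) at every s. The sketch below (illustrative Python, not thesis code; the two first-order systems are arbitrary made-up examples) exploits the block-diagonal A, for which (sI − A)⁻¹ is elementwise:

```python
# Quick numeric check (illustration only) of the parallel-connection
# packed realization against G1(s) + G2(s) for two made-up systems.

def tf_1state(A, B, C, D, s):
    """Transfer function of a one-state realization at complex s."""
    return C * B / (s - A) + D

def tf_parallel(s):
    # Packed parallel realization: A = diag(A1, A2), B = [B1; B2],
    # C = [C1 C2], D = D1 + D2; (sI - A) is diagonal here.
    A = [-1.0, -2.0]
    B = [1.0, 1.0]
    C = [2.0, 1.0]
    D = 0.5 + 1.0
    return sum(c * b / (s - a) for a, b, c in zip(A, B, C)) + D

s = 0.3 + 1.0j
g1 = tf_1state(-1.0, 1.0, 2.0, 0.5, s)
g2 = tf_1state(-2.0, 1.0, 1.0, 1.0, s)
print(abs(tf_parallel(s) - (g1 + g2)))   # essentially zero
```

The same style of spot check works for the series and inversion formulae, since the packed notation is just a bookkeeping device for state-space realizations.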
B.1.10 Robust Stability
Robustness of stability in the face of model errors will be treated briefly [41], [20]. The whole concept is based on the so-called small gain theorem, which applies directly to the situation sketched in Fig. B.2.
Figure B.2: Closed loop with loop transfer H
The stable transfer H represents the total loop transfer in a closed loop. If we require that the modulus (amplitude) of H is less than 1 for all frequencies, it is clear from Fig. B.3 that the polar curve cannot encircle the point −1, and thus we know
Figure B.3: Small gain stability in Nyquist space
from the Nyquist criterion that the loop will always constitute a stable system. So stability is guaranteed as long as

‖H‖_∞ := sup_ω |H(jω)| < 1   (B.5)

Here sup stands for supremum, which effectively indicates the maximum. (Only when the supremum is approached within any small distance but never actually attained is it not strictly correct to speak of a maximum.) Notice that no information concerning the phase angle has been used, which is typical of the H∞ approach. In the above formula we also get our first taste of H∞ through the definition of the infinity norm, which has been discussed already. For a MIMO system the small gain condition is given by

‖H‖_∞ := sup_ω σ̄(H(jω)) < 1   (B.6)

where σ̄ denotes the maximum singular value (always real) of the transfer H at the ω under consideration. Altogether, these conditions may seem somewhat exaggerated, because loop transfers with magnitude less than one are not so common. The actual application is therefore
somewhat "nested" and is aptly described in the literature as "the baby small gain theorem", illustrated in Fig. B.4.

Figure B.4: Baby small gain theorem for additive model error

In the upper block scheme, all relevant elements of Fig. B.2 have been displayed for the case in which we have to deal with an additive model error ∆P. We now consider the "baby" loop as indicated, containing ∆P explicitly. The lower transfer between the
Figure B.5: Control sensitivity guards stability robustness for additive model error

output and the input of ∆P, as once again illustrated in Fig. B.5, can be evaluated and happens to be equal to the control sensitivity R, as shown in the lower block scheme. (Actually we get a minus sign, which can be absorbed into ∆P; because we only consider absolute values in the small gain theorem, this minus sign is irrelevant: it just causes a phase shift of 180°, which leaves the conditions unaltered.) Now it is easy to apply the small gain theorem to the total loop transfer H = R∆P. The infinity norm is an induced operator norm on the mapping between identical signal spaces L2, and as such it is submultiplicative, so that we may write:
‖R∆P‖_∞ ≤ ‖R‖_∞ ‖∆P‖_∞   (B.7)

Ergo, if we can guarantee that

‖∆P‖_∞ ≤ 1/α   (B.8)

then a sufficient condition for stability is

‖R‖_∞ < α   (B.9)

If a bound of the form B.8 is all we require from ∆P, then it is easy to prove that the condition on R is also a necessary condition. Still, this is a rather crude condition, but it can be refined by weighting over the frequency axis. Once again, from Fig. B.5 we recognize that the robust stability constraint effectively limits the feedback between the point where both the disturbance and the output of the model error block ∆P enter and the input of the plant, such that the loop transfer is less than one. The smaller the error bound 1/α, the greater the feedback α can be, and vice versa! We have thus analysed the effect of an additive model error ∆P. Similarly, we can study the effect of a multiplicative error, which is very easy if we take:
P_true = P + ∆P = (I + ∆)P   (B.10)

where obviously ∆ is the bounded multiplicative model error. (Together with P it evidently constitutes the additive model error ∆P.) In similar block schemes we now obtain the figures below.

Figure B.6: Baby small gain theorem for multiplicative model error

The "baby" loop now contains ∆ explicitly, and we notice that the transfer P is somewhat "displaced" out of the additive perturbation block. The result is that ∆ sees
Figure B.7: Complementary sensitivity guards stability robustness for multiplicativemodel error
itself fed back by (minus) the complementary sensitivity T. (The P has, so to speak, been taken out of ∆P and adjoined to R, yielding T.) If we require that

‖∆‖_∞ ≤ 1/β   (B.11)

then robust stability follows from

‖T∆‖_∞ ≤ ‖T‖_∞ ‖∆‖_∞ < 1   (B.12)

yielding the final condition

‖T‖_∞ < β   (B.13)

Again, proper weighting may refine the condition.
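The condition B.13 can be evaluated numerically by gridding the frequency axis. In the sketch below (illustrative Python, not thesis code; the loop transfer L(s) = 2/(s(s+1)) is an arbitrary made-up example), ‖T‖∞ is estimated as the peak of |T(jω)| over a logarithmic grid, and its reciprocal gives the size of multiplicative model error that the loop can tolerate:

```python
# Illustrative estimate (not thesis data) of ||T||_inf by frequency
# gridding, for the made-up loop L(s) = 2/(s(s+1)), T = L/(1+L).

def T(w):
    s = 1j * w
    L = 2.0 / (s * (s + 1.0))
    return L / (1.0 + L)

# Logarithmic grid over w in [0.01, 100], 100 points per decade.
t_inf = max(abs(T(10 ** e)) for e in [i / 100.0 - 2.0 for i in range(400)])
print(t_inf, 1.0 / t_inf)   # peak of |T| and the tolerated error bound
```

For this loop T = 2/(s² + s + 2), a lightly damped second-order system, so the peak occurs near its resonance; multiplicative errors with ‖∆‖∞ below the printed reciprocal keep the loop robustly stable by B.12 and B.13.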
B.2 A Linear Matrix Inequality Approach to H∞ Control

The continuous- and discrete-time H∞ control problems can be solved via elementary manipulations of linear matrix inequalities (LMIs) [26]. Two interesting features emerge through this approach: solvability conditions valid for both regular and singular problems, and an LMI-based parametrization of all H∞-suboptimal controllers, including reduced-order controllers. The solvability conditions involve Riccati inequalities rather than the usual indefinite Riccati equations. Alternatively, these conditions can be expressed as a system of three LMIs. Efficient convex optimization techniques are available to solve this system. Moreover, its solutions parameterize the set of H∞ controllers and bear important connections with the controller order and the closed-loop Lyapunov functions. In this thesis, after a brief introduction to LMIs, the H∞ synthesis is described from the LMI viewpoint.
B.2.1 Brief theory
The history of LMIs in the analysis of dynamical systems goes back more than 100 years. The story begins in about 1890, when Lyapunov published his seminal work introducing what we now call Lyapunov theory. He showed that the differential equation

(d/dt) x(t) = A x(t)   (B.14)

is stable (i.e., all trajectories converge to zero) if and only if there exists a positive definite matrix P such that

A^T P + PA < 0   (B.15)

The requirement P > 0, A^T P + PA < 0 is what we now call a Lyapunov inequality on P, which is a special form of an LMI. Lyapunov also showed that this first LMI can be solved explicitly. Indeed, we can pick any Q = Q^T > 0 and then solve the linear equation A^T P + PA = −Q for the matrix P, which is guaranteed to be positive definite if the system B.14 is stable. In summary, the first LMI used to analyze the stability of a dynamical system was the Lyapunov inequality B.15, which can be solved analytically (by solving a set of linear equations). The important role of LMIs in control theory was already recognized in the early 1960s, especially by Yakubovich. The Positive-Real lemma and its extensions were intensively studied in the latter half of the 1960s. By 1970, it was known that the LMI appearing in the Positive-Real lemma could be solved not only by graphical means, but also by solving a certain Algebraic Riccati Equation (ARE). In 1971, a paper on quadratic optimal control by J. C. Willems led to the LMI

[ A^T P + PA + Q   PB + C^T ]
[ B^T P + C        R        ]  ≥ 0   (B.16)

and pointed out that it can be solved by studying the symmetric solutions of the ARE

A^T P + PA − (PB + C^T) R^{-1} (B^T P + C) + Q = 0   (B.17)
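Lyapunov's explicit construction is easy to reproduce for a 2 × 2 system. The sketch below (illustrative Python, not thesis code; the stable matrix A is a made-up example) picks Q = I and solves A^T P + PA = −I as three linear equations in the entries of the symmetric P, then checks that P is positive definite, certifying the LMI B.15:

```python
# Illustrative sketch (toy data): solve the Lyapunov equation
# A^T P + P A = -I for symmetric P = [[p1, p2], [p2, p3]].

def solve_lyap_2x2(A):
    (a, b), (c, d) = A
    # Independent entries of A^T P + P A = -I give three equations:
    #   (1,1): 2a p1 + 2c p2            = -1
    #   (1,2):  b p1 + (a+d) p2 + c p3  =  0
    #   (2,2):          2b p2 + 2d p3   = -1
    M = [[2*a, 2*c, 0.0], [b, a + d, c], [0.0, 2*b, 2*d]]
    rhs = [-1.0, 0.0, -1.0]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        rhs[i], rhs[piv] = rhs[piv], rhs[i]
        for j in range(i + 1, 3):
            ratio = M[j][i] / M[i][i]
            M[j] = [mj - ratio * mi for mj, mi in zip(M[j], M[i])]
            rhs[j] -= ratio * rhs[i]
    p = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        p[i] = (rhs[i] - sum(M[i][k] * p[k] for k in range(i + 1, 3))) / M[i][i]
    return p

p1, p2, p3 = solve_lyap_2x2([[0.0, 1.0], [-2.0, -3.0]])  # eigenvalues -1, -2
# Positive definiteness: p1 > 0 and det P = p1*p3 - p2^2 > 0.
print(p1, p2, p3)
```

Because A here is stable, the linear solve is guaranteed to produce P > 0; if A were unstable, the resulting P would fail the definiteness check, in line with the feasibility statement above.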
B.2.2 Advantages of LMIs

Linear matrix inequalities (LMIs) and LMI techniques have emerged as powerful tools in areas ranging from control engineering to system identification and structural design. The following factors make LMI techniques appealing:

• A variety of design specifications and constraints can be expressed as LMIs.

• Once formulated in terms of LMIs, a problem can be solved exactly by efficient convex optimization algorithms.

• While most problems with multiple constraints or objectives lack analytical solutions in terms of matrix equations, they often remain tractable in the LMI framework. This makes LMI-based design a valuable alternative to classical analytical methods.

• If the problem is singular (e.g., the plant is strictly proper), the H∞ control problem cannot be solved using the DGKF method, but the LMI technique makes it possible to solve the sub-optimal H∞ control problem using the Bounded Real Lemma.
B.2.3 Basic Idea about LMI
A linear matrix inequality (LMI) has the form

F(η) = F_0 + Σ_{i=1}^{m} η_i F_i > 0   (B.18)

where η ∈ R^m is the variable and the symmetric matrices F_i = F_i^T ∈ R^{n×n}, i = 0, 1, ..., m, are given. The inequality symbol in B.18 means that F(η) is positive definite, i.e., u^T F(η) u > 0 for all nonzero u ∈ R^n. Of course, the LMI B.18 is equivalent to a set of n polynomial inequalities in η, i.e., the leading principal minors of F(η) must be positive. We will also encounter non-strict LMIs, which have the form

F(η) ≥ 0   (B.19)

The LMI B.18 is a convex constraint on η, i.e., the set {η | F(η) > 0} is convex. Although the LMI B.18 may seem to have a specialized form, it can represent a wide variety of convex constraints on η. In particular, linear inequalities, quadratic inequalities, matrix norm inequalities, and constraints that arise in control theory, such as Lyapunov and convex quadratic matrix inequalities, can all be cast in the form of an LMI. Multiple LMIs F^(1)(η) > 0, ..., F^(p)(η) > 0 can be expressed as the single LMI diag(F^(1)(η), ..., F^(p)(η)) > 0. Therefore we will make no distinction between a set of LMIs and a single LMI, i.e., "the LMIs F^(1)(η) > 0, ..., F^(p)(η) > 0" will mean "the LMI diag(F^(1)(η), ..., F^(p)(η)) > 0". When the matrices F_i are diagonal, the LMI F(η) > 0 is just a set of linear inequalities. Nonlinear (convex) inequalities are converted to LMI form using Schur complements. The basic idea is as follows: the LMI

[ Q(η)     S(η) ]
[ S(η)^T   R(η) ]  > 0   (B.20)

where Q(η) = Q(η)^T, R(η) = R(η)^T and S(η) depend affinely on η, is equivalent to

R(η) > 0,   Q(η) − S(η) R(η)^{-1} S(η)^T > 0.
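The Schur-complement equivalence B.20 can be checked on scalar blocks, where positive definiteness of the full 2 × 2 matrix is decided by the leading-principal-minor test mentioned above. The sketch below (illustrative Python with made-up numbers, not thesis code) confirms that the direct test and the Schur-complement test agree:

```python
# Scalar-block illustration (made-up numbers) of the Schur complement:
# [[q, s], [s, r]] > 0  iff  r > 0 and q - s * r^{-1} * s > 0.

def is_pd_2x2(q, s, r):
    """Sylvester's criterion (leading principal minors) for [[q, s], [s, r]]."""
    return q > 0 and q * r - s * s > 0

def schur_condition(q, s, r):
    return r > 0 and q - s * (1.0 / r) * s > 0

for (q, s, r) in [(2.0, 1.0, 1.0), (1.0, 2.0, 1.0), (4.0, -1.0, 0.5)]:
    assert is_pd_2x2(q, s, r) == schur_condition(q, s, r)
print("Schur complement test and direct PD test agree")
```

This is exactly the mechanism used two paragraphs below to turn the quadratic matrix inequality B.21 into a linear one.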
B.2.4 Matrices as variables

We will often encounter problems in which the variables are matrices, e.g., the Lyapunov inequality A^T P + PA < 0, where A ∈ R^{n×n} is given and P = P^T is the variable. In this case we will not write out the LMI explicitly in the form F(η) > 0, but instead make clear which matrices are the variables. The phrase "the LMI A^T P + PA < 0 in P" means that the matrix P is a variable. As another related example, consider the quadratic matrix inequality

A^T P + PA + PBR^{-1}B^T P + Q < 0   (B.21)

where A, B, Q = Q^T, R = R^T > 0 are given matrices of appropriate sizes, and P = P^T is the variable. Note that this is a quadratic matrix inequality in the variable P. Using the Schur complement, it can be expressed as the linear matrix inequality

[ −A^T P − PA − Q   PB ]
[ B^T P             R  ]  > 0.
Chapter B. LMI Approach to H∞ Control 161
B.2.5 Lyapunov’s Inequality
We have already mentioned the Linear Matrix Inequality Problem (LMIP) associated
with Lyapunov’s inequality, i.e.
P > 0,   A^T P + PA < 0

where P is the variable and A ∈ R^{n×n} is given. It can be shown that this LMI is feasible
if and only if the matrix A is stable, i.e., all trajectories of ẋ = Ax converge to zero as
t → ∞, or equivalently, all eigenvalues of A have negative real part. To solve this
LMIP, we pick any Q > 0 and solve the Lyapunov equation A^T P + PA = −Q, which is
nothing but a set of n(n+1)/2 linear equations in the n(n+1)/2 scalar variables of P. This set
of linear equations will be solvable and result in P > 0 if and only if the LMI is feasible.
In fact this procedure not only finds a solution when the LMI is feasible, it parameterizes
all solutions as Q varies over the positive-definite cone.
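This procedure is straightforward to carry out numerically. A sketch using scipy (the matrices here are made up; any stable A and any Q > 0 will do):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A hypothetical stable A (eigenvalues -2 and -3) and Q = I > 0.
A = np.array([[-2.0, 1.0],
              [ 0.0, -3.0]])
Q = np.eye(2)

# scipy solves a @ X + X @ a.T = q, so pass a = A.T and q = -Q
# to obtain the Lyapunov equation A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

print(np.linalg.eigvalsh(P))  # all positive: P > 0, so the LMI is feasible
```

Since A is Hurwitz, the returned P is positive definite and hence solves the LMIP, exactly as claimed above.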
B.3 Stabilizing Controllers

A necessary feature of any feedback system is that it be stable in some appropriate sense.
In this section we introduce the feedback arrangement that will be studied for the rest of
the chapter. Once introduced, our main objective is to precisely define feedback stability
and then to parametrize all controllers that stabilize the feedback system. The general
feedback setup we are concerned with is shown in Figure B.8. As depicted, the so-called
closed-loop system has one external input and one output, given by w and z respectively.
The signal or function w captures the effects of the environment on the feedback system,
for instance noise, disturbances and commands. The signal z contains all characteristics
of the feedback system that are to be controlled. The maps G and K represent linear
subsystems, where G is a given "plant" which is fixed, and K is the controller or control
law whose aim is to ensure that the mapping from w to z has the desired characteristics.
To accomplish this task the control law utilizes the signal y, and chooses an action u which
Figure B.8: General feedback arrangement
directly affects the behavior of G.
Here G and K are state space systems, with G evolving according to

ẋ(t) = A x(t) + [ B1  B2 ] [ w(t)
                            u(t) ],

[ z(t) ]   [ C1 ]          [ D11  D12 ] [ w(t) ]
[ y(t) ] = [ C2 ] x(t)  +  [ D21  D22 ] [ u(t) ],                (B.22)
and K being described by

ẋK(t) = AK xK(t) + BK y(t)
u(t) = CK xK(t) + DK y(t)                (B.23)

Throughout this section we have the standing assumption that the matrix triples
(A, B2, C2) and (AK, BK, CK) are both stabilizable and detectable.

As shown in the figure, G is naturally partitioned with respect to its two inputs and two
outputs. We therefore partition the transfer function of G as

       [ A   B1   B2  ]
G(s) = [ C1  D11  D12 ]  =  [ G11(s)  G12(s) ]                (B.24)
       [ C2  D21  D22 ]     [ G21(s)  G22(s) ]
so that we can later refer to these constituent transfer functions.
First we must determine under what conditions this interconnection of components
makes sense. That is, we need to know when these equations have a solution for an
arbitrary input w.

The system of Figure B.8 is well-posed if unique solutions x(t), xK(t), y(t) and u(t) exist
for all input functions w(t) and all initial conditions x(0), xK(0).
Proposition 2 The connection of G and K in Figure B.8 is well-posed, if and only if,
I −D22DK is nonsingular.
Proof: The proof of this result amounts to simply writing out the system state
equations. So we have

ẋ(t) = A x(t) + B1 w(t) + B2 u(t)
ẋK(t) = AK xK(t) + BK y(t),                (B.25)
and

[ I     −DK ] [ u(t) ]   [ 0   CK ] [ x(t)  ]   [ 0   ]
[ −D22   I  ] [ y(t) ] = [ C2  0  ] [ xK(t) ] + [ D21 ] w(t)                (B.26)
Now it is easily seen that the left hand side matrix is invertible if and only if I −D22DK
is nonsingular. If this holds, clearly one can substitute u, y into B.25 and find a unique
solution to the state equations. Conversely, if this does not hold, then from B.26 we can
find a linear combination of x(t), xK(t) and w(t) which must be zero, which means that
x(0), xK(0), w(0) cannot be chosen arbitrarily.
We have the following result which is frequently used.
Corollary 1 If either D22 = 0 or DK = 0, then the interconnection in Figure B.8 is
well-posed.
We are now ready to talk about stability. From now on we tacitly assume that our
feedback system is well-posed.
B.3.1 System Stability
In this section we introduce the notion of internal stability, and discuss its relation to the
boundedness of input-output maps [91].
Definition 7 The system in Figure B.8 is internally stable if, when w = 0, for every
initial condition x(0) of G and xK(0) of K,

x(t), xK(t) → 0   as t → ∞.
The following is an immediate test for internal stability.
Proposition 3 Suppose that the system of Figure B.8 is well-posed. Then the system is
internally stable if and only if
Ā = [ A  0  ]   [ B2  0  ] [ I     −DK ]^{-1} [ 0   CK ]
    [ 0  AK ] + [ 0   BK ] [ −D22   I  ]      [ C2  0  ]                (B.27)
is Hurwitz.

Proof: This is easily seen by noting that Ā is the A-matrix of the closed loop; this follows
from B.25 and B.26.
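The closed-loop matrix B.27 can be assembled directly. The following sketch (hypothetical scalar plant and controller, not from the thesis) builds Ā and tests internal stability by inspecting its eigenvalues:

```python
import numpy as np

def closed_loop_A(A, B2, C2, D22, AK, BK, CK, DK):
    """Assemble the closed-loop state matrix of (B.27)."""
    n, nK = A.shape[0], AK.shape[0]
    nu, ny = B2.shape[1], C2.shape[0]
    # Well-posedness requires this block (hence I - D22 DK) to be nonsingular.
    M = np.block([[np.eye(nu), -DK], [-D22, np.eye(ny)]])
    blkA = np.block([[A, np.zeros((n, nK))], [np.zeros((nK, n)), AK]])
    blkB = np.block([[B2, np.zeros((n, ny))], [np.zeros((nK, nu)), BK]])
    blkC = np.block([[np.zeros((nu, n)), CK], [C2, np.zeros((ny, nK))]])
    return blkA + blkB @ np.linalg.solve(M, blkC)

# Unstable plant x' = x + u, y = x; controller xK' = -5 xK + y, u = -8 xK.
Abar = closed_loop_A(A=np.array([[1.0]]), B2=np.array([[1.0]]),
                     C2=np.array([[1.0]]), D22=np.array([[0.0]]),
                     AK=np.array([[-5.0]]), BK=np.array([[1.0]]),
                     CK=np.array([[-8.0]]), DK=np.array([[0.0]]))
print(np.linalg.eigvals(Abar).real)  # all negative: internally stable
```

For this data Ā = [[1, −8], [1, −5]], whose eigenvalues are −1 and −3, so the loop is internally stable even though the plant alone is not.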
As defined, internal stability refers to the autonomous system dynamics in the absence of
an input w; in this regard it coincides with the standard notion of asymptotic stability of
dynamical systems. However it has immediate implications on the input-output properties
of the system.
In particular, the transfer function from w to z, denoted T(s), will have as poles a subset
of the eigenvalues of Ā; for example, when DK = 0 we have

T(s) = [ C1  D12 CK ] (sI − Ā)^{-1} [ B1     ] + D11
                                    [ BK D21 ]
If Ā is Hurwitz, this function has all its poles in the open left half plane of C. An
important consequence is that w ↦ z defines a bounded operator on L2[0, ∞); this is
termed input-output stability.
The question immediately arises as to whether the two notions are interchangeable, i.e.,
whether the boundedness of w ↦ z implies internal stability. Clearly, the answer is
negative: an extreme example would be to have C1, D11, D12 all zero, which gives T(s) = 0
but clearly says nothing about Ā. In other words, the internal dynamics need not be
reflected in the external map.
There is, however, a way to characterize internal stability in terms of the boundedness of
an input-output operator, by considering the map from injected interconnection noise
in the feedback loop to the interconnection variables. The relevant diagram is given in
Figure B.9, where the controller K has the same description as in Figure B.8. The system
G22 is the lower block of G, described by the state space equations

ẋ22(t) = A x22(t) + B2 v1(t)
v2(t) = C2 x22(t) + D22 v1(t)
Figure B.9: Input-output stability
where (C2, A, B2, D22) are the same matrices as in the state space description of G. We
have also included the external inputs d1 and d2 at the interconnection between G22 and
K.
As with our more general system, we say that this new system is well-posed if there
exist unique solutions x22(t), xK(t), v1(t) and v2(t) for all inputs d1(t), d2(t) and all initial
conditions x22(0), xK(0). We say it is internally stable if it is well-posed and, for
d1 = d2 = 0, x22(t), xK(t) → 0 as t → ∞ for every initial condition.
It is an easy exercise to see that the system is well-posed, if and only if, I − D22DK
is nonsingular; this is the same well-posedness condition we have for Figure B.8. Also
noticing that all the states in the description of G are included in the equations for G22,
it follows immediately that internal stability of one is equivalent to internal stability of
the other.
Lemma 1 Given a controller K, Figure B.8 is internally stable, if and only if, Figure
B.9 is internally stable.
The next result shows that with this new set of inputs, internal stability can be charac-
terized by the boundedness of an input-output map.
Lemma 2 Suppose that (C2, A, B2) is stabilizable and detectable. Then Figure B.9 is
internally stable if and only if the transfer function of the map

[ d1 ]     [ v1 ]
[ d2 ]  ↦  [ v2 ]

has no poles in the closed right half plane of C.
Proof: We begin by finding an expression for the transfer function. For convenience
denote

D = [ I     −DK ]
    [ −D22   I  ],

then routine calculations lead to the following relationship:

[ v1(s) ]   (        [ 0   CK ]                [ B2  0  ]                     [ 0   0  ] ) [ d1(s) ]
[ v2(s) ] = ( D^{-1} [ C2  0  ] (sI − Ā)^{-1}  [ 0   BK ] D^{-1} + D^{-1}  +  [ 0  −I  ] ) [ d2(s) ]

where Ā is the closed-loop matrix from B.27. Therefore the "only if" direction follows
immediately, since the poles of this transfer function are a subset of the eigenvalues of Ā,
which is by assumption Hurwitz; see Proposition 3 and Lemma 1.
To prove "if": assume that the transfer function has no poles in C+; therefore the same
is true of

C̄ (sI − Ā)^{-1} B̄,   where   C̄ = [ 0   CK ]   and   B̄ = [ B2  0  ]
                                  [ C2  0  ]              [ 0   BK ].

We need to show that Ā is Hurwitz; it is therefore sufficient to show that (C̄, Ā, B̄) is a
stabilizable and detectable realization. Let

F̄ = [ F   0  ]           [ 0   CK ]
    [ 0   FK ]  − D^{-1} [ C2  0  ]

where F and FK are chosen so that A + B2F and AK + BKFK are both Hurwitz. It is
routine to show that

Ā + B̄F̄ = [ A + B2F       0          ]
          [ 0             AK + BKFK ]

and thus (Ā, B̄) is stabilizable.

A formally similar argument shows that (C̄, Ā) is detectable.
B.3.2 Stabilization
In the previous section we have discussed the analysis of stability of a given feedback
configuration; we now turn to the question of design of a stabilizing controller. The
following result explains when this can be achieved.
Proposition 4 A necessary and sufficient condition for the existence of an internally
stabilizing K for Figure B.8, is that (C2, A, B2) is stabilizable and detectable. In that
case, one such controller is given by

K(s) = [ A + B2F + LC2 + LD22F   −L ]
       [ F                        0 ]

where F and L are matrices such that A + B2F and A + LC2 are Hurwitz.
Proof : If the stabilizability or detectability of (C2, A, B2) is violated, we can choose an
initial condition which excites the unstable hidden mode. It is not difficult to show that
the state will diverge to infinity regardless of the controller. Details are left as an exercise.
Consequently no internally stabilizing K exists, which proves necessity. For the sufficiency
side, it is enough to verify that the given controller is indeed internally stabilizing. Start
by noting that DK = 0, and so the configuration is well-posed. Now substitute the state
space description of the controller into the expression for Ā given in Proposition 3:

Ā = [ A       B2F           ]
    [ −LC2    A + B2F + LC2 ]

Let

T = [ I  0 ]
    [ I  I ]

and notice

T^{-1} Ā T = [ A + B2F    B2F     ]
             [ 0          A + LC2 ]

Since the eigenvalues of Ā are therefore given by those of A + B2F and A + LC2, we see
that Ā is Hurwitz.
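Proposition 4's construction can be tried numerically. In this sketch the plant data are hypothetical (with D22 = 0), and F and L are obtained by pole placement, which is one of several valid ways to make A + B2F and A + LC2 Hurwitz:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical unstable plant (eigenvalues of A are ±sqrt(2)).
A  = np.array([[0.0, 1.0], [2.0, 0.0]])
B2 = np.array([[0.0], [1.0]])
C2 = np.array([[1.0, 0.0]])

# F such that A + B2 F is Hurwitz; L such that A + L C2 is Hurwitz.
F = -place_poles(A, B2, [-1.0, -2.0]).gain_matrix
L = -place_poles(A.T, C2.T, [-3.0, -4.0]).gain_matrix.T

# Controller of Proposition 4 with D22 = 0: AK = A + B2 F + L C2,
# BK = -L, CK = F, DK = 0.
AK = A + B2 @ F + L @ C2

# Closed-loop matrix of Proposition 3 with DK = 0, D22 = 0.
Abar = np.block([[A, B2 @ F], [-L @ C2, AK]])
print(np.sort(np.linalg.eigvals(Abar).real))  # all negative
```

By the similarity transformation above, the closed-loop eigenvalues are exactly those placed for A + B2F and A + LC2, i.e. {−1, −2, −3, −4}.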
B.4 H∞ Synthesis

Now we consider optimal synthesis with respect to the H∞ norm introduced already [21].
Again we are concerned with the feedback arrangement of Figure B.8, where we have two
state space systems G and K, each having their familiar role. We will pursue the answer
to the following question: does there exist a state space controller K such that

• the closed-loop system is internally stable;

• the closed-loop performance satisfies ‖S(Ĝ, K̂)‖∞ < 1?
Thus we only plan to consider the problem of making the closed loop contractive in the
sense of H∞. It is clear, however, that determining whether there exists a stabilizing
controller so that ‖S(Ĝ, K̂)‖∞ < γ, for some constant γ, can be achieved by rescaling
the γ-dependent problem to arrive at the contractive version given above. Furthermore,
by searching over γ, our approach will allow us to get as close to the minimal H∞ norm as
we desire; but in contrast to our work on H2 optimal control, we will not seek a controller
that exactly optimizes the H∞ norm. There are many approaches for solving the H∞
control problem. Probably the most celebrated solution is in terms of Riccati equations.
Here we will present a solution based entirely on linear matrix inequalities, which has the
main advantage that it can be obtained with relatively straightforward matrix tools, and
without any restrictions on the problem data. In fact Riccati equations and LMIs are
intimately related, an issue we will explain when proving the Kalman-Yakubovich-Popov
lemma concerning the analysis of the H∞ norm of a system, which will be key to the
subsequent synthesis solution.
Before getting into the details of the problem, we make a few comments about the
motivation for this optimization.
As we already know, the H∞ norm is the L2-induced norm of a causal, stable,
linear time-invariant system. More precisely, given a causal linear time-invariant operator
G : L2(−∞, ∞) → L2(−∞, ∞), the corresponding operator in the isomorphic space
L̂2(jR) is a multiplication operator M_Ĝ for a certain Ĝ(s) ∈ H∞, and

‖G‖_{L2→L2} = ‖M_Ĝ‖_{L2→L2} = ‖Ĝ‖∞

The motivation for minimizing such an induced norm lies in the philosophy of making the
error signal z small: we are minimizing the maximum "gain" of the system in the energy
or L2 sense. Equivalently, the excitation w is considered to be an arbitrary L2 signal and
we wish to minimize its worst-case effect on the energy of z.
B.4.1 Two important matrix inequalities
The entire synthesis approach of the chapter revolves around the two technical results
presented here. The first of these is a result purely about matrices; the second is an
important systems theory result, frequently called the Kalman-Yakubovich-Popov
lemma, or KYP lemma for short. We begin by stating the following, which the reader can
prove as an exercise.
Lemma 3 Suppose P and Q are matrices satisfying ker P = 0 and ker Q = 0. Then for
every matrix Y there exists a solution J to

P∗JQ = Y
The above lemma is used to prove the next one, which is one of the two major technical
results of this section.
Lemma 4 Suppose

1. P, Q and H are matrices and H is symmetric;

2. the matrices WP and WQ are full rank matrices satisfying Im WP = ker P and
Im WQ = ker Q.

Then there exists a matrix J such that

H + P∗J∗Q + Q∗JP < 0                (B.28)

if and only if the inequalities

WP∗ H WP < 0   and   WQ∗ H WQ < 0

both hold.
Observe that when the kernels of P and Q are not both nonzero the result does not apply
as stated. However it is readily seen from Lemma 3 that if both of the kernels are zero
then there is always a solution J. If, for example, only ker P = 0, then WQ∗ H WQ < 0
is a necessary and sufficient condition for a solution of B.28 to exist, as follows by a
simplified version of the following proof.
Proof: We will show the equivalence of the conditions directly by construction. To
begin, define V1 to be a matrix such that

Im V1 = ker P ∩ ker Q,
and V2 and V3 such that

Im [V1 V2] = ker P   and   Im [V1 V3] = ker Q.

Without loss of generality we assume that V1, V2 and V3 have full column rank, and define
V4 so that

V = [ V1  V2  V3  V4 ]
is square and nonsingular. Therefore the inequality B.28 above holds, if and only if

V∗HV + V∗P∗J∗QV + V∗Q∗JPV < 0                (B.29)

does. Now PV and QV are simply the matrices P and Q on the domain basis defined by
V; therefore they have the form

PV = [ 0  0  P1  P2 ]   and   QV = [ 0  Q1  0  Q2 ];
we also define the block components

        [ H11   H12   H13   H14 ]
V∗HV =: [ H12∗  H22   H23   H24 ]
        [ H13∗  H23∗  H33   H34 ]
        [ H14∗  H24∗  H34∗  H44 ]

Further define the variable Y by

Y = [ Y11  Y12 ]    [ P1∗ ]
    [ Y21  Y22 ] := [ P2∗ ] J∗ [ Q1  Q2 ]

From their definitions, ker [P1 P2] = 0 and ker [Q1 Q2] = 0, and so by Lemma 3 we see
that Y is freely assignable by choosing an appropriate matrix J. Writing out
inequality B.29 using the above definitions we get
[ H11   H12         H13           H14              ]
[ H12∗  H22         H23 + Y11∗    H24 + Y21∗       ]
[ H13∗  H23 + Y11   H33           H34 + Y12        ]  < 0
[ H14∗  H24 + Y21   H34∗ + Y12∗   H44 + Y22 + Y22∗ ]
Apply the Schur complement formula to the upper 3 × 3 block, and we see the above
holds, if and only if, the two following inequalities are met:

     [ H11   H12         H13        ]
H̄ := [ H12∗  H22         H23 + Y11∗ ]  < 0
     [ H13∗  H23 + Y11   H33        ]
and

                    [ H14        ]∗         [ H14        ]
H44 + Y22 + Y22∗ −  [ H24 + Y21∗ ]  H̄^{-1}  [ H24 + Y21∗ ]  < 0
                    [ H34 + Y12  ]          [ H34 + Y12  ]
As already noted above, Y is freely assignable, and so we see that, provided the first
inequality can be achieved by choosing Y11, the second can always be met by appropriate
choice of Y12, Y21 and Y22. That is, the above two inequalities can be achieved, if and only
if, H̄ < 0 holds for some Y11. Now applying a Schur complement on H̄ with respect to
H11, we obtain

[ H11   0                          0                        ]
[ 0     H22 − H12∗ H11^{-1} H12    Y11∗ + X∗                ]  < 0,
[ 0     Y11 + X                    H33 − H13∗ H11^{-1} H13  ]

where X = H23∗ − H13∗ H11^{-1} H12. Now since Y11 is freely assignable we see readily
that the last condition can be satisfied, if and only if, the diagonal entries of the left-hand
matrix are all negative definite. Using the Schur complement result twice, these three
conditions can be converted to the equivalent conditions
[ H11   H12 ]            [ H11   H13 ]
[ H12∗  H22 ]  < 0  and  [ H13∗  H33 ]  < 0

By the choice of our basis we see that these hold, if and only if, WP∗ H WP < 0 and
WQ∗ H WQ < 0 are both met. Having proved this matrix result, we move on to our second
result, the KYP lemma.
B.4.2 The KYP Lemma
There are many versions of this result, which establishes the equivalence between a
frequency-domain inequality and a state-space condition in terms of either a Riccati
equation or an LMI. The version given below turns an H∞ norm condition into an LMI.
Being able to do this is very helpful for attaining our goal of controller synthesis; however,
it is equally important simply as a finite-dimensional analysis test for transfer functions.
Lemma 5 Suppose M̂(s) = C(sI − A)^{-1}B + D. Then the following are equivalent
conditions:

1. The matrix A is Hurwitz and ‖M̂‖∞ < 1;

2. There exists a symmetric positive definite matrix X such that

[ C∗ ]              [ A∗X + XA   XB ]
[ D∗ ] [ C  D ]  +  [ B∗X        −I ]  < 0                (B.30)
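For the case D = 0, condition (i) can also be checked through the Riccati/Hamiltonian route alluded to above: for Hurwitz A, ‖M̂‖∞ < 1 exactly when the associated Hamiltonian matrix has no purely imaginary eigenvalues. A small numpy sketch with made-up scalar data (this is a standard equivalent test, not the thesis's own construction):

```python
import numpy as np

def hinf_less_than_one(A, B, C):
    """Check ||C (sI - A)^{-1} B||_inf < 1 for Hurwitz A and D = 0,
    via the Hamiltonian-matrix test equivalent to the KYP condition."""
    H = np.block([[A, B @ B.T],
                  [-C.T @ C, -A.T]])
    eigs = np.linalg.eigvals(H)
    # Norm < 1 iff no eigenvalue of H lies on the imaginary axis.
    return not np.any(np.isclose(eigs.real, 0.0, atol=1e-9))

A = np.array([[-1.0]])
B = np.array([[1.0]])
print(hinf_less_than_one(A, B, np.array([[0.5]])))  # 0.5/(s+1), norm 0.5: True
print(hinf_less_than_one(A, B, np.array([[2.0]])))  # 2/(s+1), norm 2: False
```

This is the finite-dimensional analysis test mentioned above: no frequency sweep is needed, only an eigenvalue computation.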
The condition in (ii) is clearly an LMI and gives us a very convenient way to evaluate the
H∞ norm of a transfer function. In the proof below we see that proving that condition (ii)
implies (i) is reasonably straightforward, and involves showing the direct connection
between the above LMI and the state space equations that describe M. Proving the
converse is considerably harder; fortunately we will be able to exploit Riccati equation
techniques. An alternative proof, which employs only matrix arguments, is beyond our
scope.

Proof: We begin by showing (ii) implies (i). The top left block in B.30 states that
A∗X + XA + C∗C < 0. Since X > 0, we see that A must be Hurwitz. It remains to
show contractiveness, which we do by employing a system-theoretic argument based on
the state equations for M. Using the strict inequality B.30, choose 0 < ε < 1 such that
[ C∗ ]              [ A∗X + XA   XB          ]
[ D∗ ] [ C  D ]  +  [ B∗X        −(1 − ε) I  ]  < 0                (B.31)
holds. Let ω ∈ L2[0, ∞) and note that in order to show that M is contractive, it is
sufficient to show that ‖z‖2 ≤ (1 − ε)‖ω‖2, where z := Mω. The state space equations
relating ω and z are

ẋ(t) = A x(t) + B ω(t),   x(0) = 0,
z(t) = C x(t) + D ω(t).

Now multiplying inequality B.31 on the left by [x∗(t) ω∗(t)] and on the right by the adjoint