Interkoneksi Sistem (System Interconnection), uploaded Oct 12, 2015
  • Lecture Note on Robust Control

    Jyh-Ching Juang
    Department of Electrical Engineering
    National Cheng Kung University
    Tainan, Taiwan
    [email protected]

  • Motivation of Robust Control

    For a physical process, uncertainties may appear as:

    Parameter variation

    Unmodeled or neglected dynamics

    Change of operating condition

    Environmental factors

    External disturbances and noises

    Others

    Robust control, in short, aims to ensure the performance and stability of feedback systems in the presence of uncertainties.

    Objectives of the course:

    Recognize and respect the uncertainty effects in dynamic systems

    Understand how to analyze dynamic systems in the presence of uncertainties

    Use computer-aided tools for control system analysis and design

    Design robust controllers to accommodate uncertainties

  • Course Requirements

    Prerequisites: Control engineering

    Linear system theory

    Multivariable systems

    Working knowledge of Matlab (or other equivalent tools)

    References:

    Kemin Zhou with John Doyle, "Essentials of Robust Control," Prentice Hall, 1998.

    Doyle, Francis, and Tannenbaum, "Feedback Control Theory," Maxwell Macmillan, 1992.

    Boyd, El Ghaoui, Feron, and Balakrishnan, "Linear Matrix Inequalities in System and Control Theory," SIAM, 1994.

    Zhou, Doyle, and Glover, "Robust and Optimal Control," Prentice Hall, 1996.

    Maciejowski, "Multivariable Feedback Design," Addison-Wesley, 1989.

    Skogestad and Postlethwaite, "Multivariable Feedback Control," John Wiley & Sons, 1996.

    Dahleh and Diaz-Bobillo, "Control of Uncertain Systems," Prentice Hall, 1995.

    Green and Limebeer, "Linear Robust Control," Prentice Hall, 1995.

    Grading:
    30% : Homework: approximately five assignments
    40% : Examination: one final examination around mid-December
    30% : Term project: paper study and/or design example

  • Course Outline

    1. Introduction

    2. Preliminaries

    (a) Linear algebra and matrix theory

    (b) Function space and signal

    (c) Linear system theory

    (d) Measure of systems

    (e) Design equations

    3. Internal Stability

    (a) Feedback structure and well-posedness

    (b) Coprime factorization

    (c) Linear fractional map

    (d) Stability and stabilizing controllers

    4. Performance and Robustness

    (a) Feedback design tradeoffs

    (b) Bode's relation

    (c) Uncertainty model

    (d) Small gain theorem

    (e) Robust stability

    (f) Robust performance

    (g) Structured singular values

    5. State Feedback Control

    (a) H2 state feedback

    (b) H∞ state feedback

    6. Output Feedback Control

    (a) Observer-based controller

    (b) Output injection and separation principle

    (c) Loop transfer recovery

    (d) H∞ controller design

    7. Miscellaneous Topics

    (a) Modeling issues

    (b) Gain scheduling

    (c) Multi-objective control

    (d) Controller reduction


  • Review of Linear Algebra and Matrix Theory


  • Linear Subspaces

    Let R and C be the real and complex scalar fields, respectively.

    A linear space V over a field F consists of a set on which two operations are defined, so that for each pair of elements x and y in V there is a unique element x + y in V, and for each element α in F and each element x in V there is a unique element αx in V, such that the following conditions hold:

    1. For all x, y in V, x + y = y + x.
    2. For all x, y, z in V, (x + y) + z = x + (y + z).
    3. There exists an element in V, denoted by 0, such that x + 0 = x for each x in V.
    4. For each element x in V there exists an element y in V such that x + y = 0.
    5. For each element x in V, 1x = x.
    6. For each α, β in F and each element x in V, (αβ)x = α(βx).
    7. For each element α in F and each pair of elements x and y in V, α(x + y) = αx + αy.
    8. For each pair α, β of elements in F and each element x in V, (α + β)x = αx + βx.

    The elements x + y and αx are called the sum of x and y and the product of α and x, respectively.

    A subset W of a vector space V over a field F is called a subspace of V if W is a vector space over F under the operations of addition and scalar multiplication defined on V.

    Let x1, x2, …, xk be vectors in W; an element of the form α1x1 + α2x2 + … + αkxk with αi ∈ F is a linear combination over F of x1, x2, …, xk.

    The set of all linear combinations of x1, x2, …, xk ∈ V is a subspace called the span of x1, x2, …, xk, denoted by

    span{x1, x2, …, xk} = {x | x = α1x1 + α2x2 + … + αkxk; αi ∈ F}

    A set of vectors x1, x2, …, xk ∈ W is said to be linearly dependent over F if there exist α1, α2, …, αk ∈ F, not all zero, such that α1x1 + α2x2 + … + αkxk = 0; otherwise the vectors are linearly independent.

    Let W be a subspace of a vector space V. A set of vectors {x1, x2, …, xk} ⊂ W is said to be a basis for W if x1, x2, …, xk are linearly independent and W = span{x1, x2, …, xk}.

    The dimension of a vector subspace W equals the number of basis vectors.

    A set of vectors {x1, x2, …, xk} in W is mutually orthogonal if xi*xj = 0 for all i ≠ j, where xi* is the complex conjugate transpose of xi.

    A collection of subspaces W1, W2, …, Wk of V is mutually orthogonal if x*y = 0 whenever x ∈ Wi and y ∈ Wj for i ≠ j.

  • The vectors are orthonormal if xi*xj = δij, the Kronecker delta: δij = 1 when i = j and 0 otherwise.

    Let W be a subspace of V. The set of all vectors in V that are orthogonal to every vector in W is the orthogonal complement of W and is denoted by W⊥:

    W⊥ = {y ∈ V : y*x = 0 for all x ∈ W}

    Each vector x in V can be expressed uniquely in the form x = xW + xW⊥ with xW ∈ W and xW⊥ ∈ W⊥.

    A set of vectors {u1, u2, …, uk} is said to be an orthonormal basis for a k-dimensional subspace W if the vectors form a basis and are orthonormal. Suppose that the dimension of V is n; it is then possible to find a set of orthonormal vectors {uk+1, …, un} such that

    W⊥ = span{uk+1, …, un}

    Let M ∈ R^{m×n} be a linear transformation from R^n to R^m. M can be viewed as an m × n matrix.

    The kernel or null space of a linear transformation M is defined as

    ker M = N(M) = {x ∈ R^n : Mx = 0}

    The image or range of M is

    img M = {y ∈ R^m : y = Mx for some x ∈ R^n}

    The rank of a matrix M is defined as the dimension of img M. An m × n matrix M is said to have full row rank if m ≤ n and rank(M) = m. It has full column rank if n ≤ m and rank(M) = n.

    Let M be an m × n real, full-rank matrix with m > n. The orthogonal complement of M is a matrix M⊥ of dimension m × (m − n) such that [ M  M⊥ ] is a square, nonsingular matrix with the property M⊥^T M = 0.

    The following properties hold: (ker M)⊥ = img M^T and (img M)⊥ = ker M^T.

  • Eigenvalues and Eigenvectors

    The scalar λ is an eigenvalue of the square matrix M ∈ C^{n×n} if

    det(λI − M) = 0

    There are n eigenvalues, denoted by λi(M), i = 1, 2, …, n.

    The spectrum of M, denoted by spec(M), is the collection of all eigenvalues of M, i.e., spec(M) = {λ1, λ2, …, λn}.

    The spectral radius is defined as

    ρ(M) = max_i |λi(M)|

    where λi is an eigenvalue of M.

    If M is a Hermitian matrix, i.e., M = M*, then all eigenvalues of M are real. In this case, λmax(M) is used to represent the maximal eigenvalue of M and λmin(M) the minimal eigenvalue of M.

    If λi ∈ spec(M), then any nonzero vector xi ∈ C^n that satisfies

    M xi = λi xi

    is a (right) eigenvector of M. Likewise, any nonzero vector wi ∈ C^n that satisfies

    wi* M = λi wi*

    is a left eigenvector of M.

    A cyclic matrix M admits the spectral decomposition or modal decomposition

    M = Σ_{i=1}^{n} λi xi wi* = X Λ W

    where

    W = [ w1* ; w2* ; … ; wn* ],  X = [ x1  x2  …  xn ],  and  Λ = diag(λ1, λ2, …, λn)

    Here, λi is an eigenvalue of M with xi and wi being the right and left eigenvectors, respectively, such that wi* xj = δij.

    If M is Hermitian, then there exists a unitary matrix U, i.e., U U* = I, and a real diagonal Λ such that

    M = U Λ U*

    In this case, the columns of U are the (right) eigenvectors of M.
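The spectral decomposition of a Hermitian matrix can be checked numerically; the following is a minimal Python/NumPy sketch (the matrix is assumed example data, not from the notes):

```python
import numpy as np

# A Hermitian example matrix (assumed data): its eigenvalues must be real
M = np.array([[2.0, 1.0j], [-1.0j, 3.0]])

lam, U = np.linalg.eigh(M)                  # real eigenvalues, unitary eigenvectors
rho = max(abs(np.linalg.eigvals(M)))        # spectral radius ρ(M)

assert np.allclose(U @ np.diag(lam) @ U.conj().T, M)   # M = U Λ U*
assert np.allclose(U.conj().T @ U, np.eye(2))          # U is unitary
```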

  • Matrix Inversion and Pseudo-Inverse

    For a square matrix M, its inverse, denoted by M^{-1}, is the matrix such that M M^{-1} = M^{-1} M = I.

    Let M be a square matrix partitioned as follows:

    M = [ M11  M12 ; M21  M22 ]

    Suppose that M11 is nonsingular; then M can be decomposed as

    M = [ I  0 ; M21 M11^{-1}  I ] [ M11  0 ; 0  S11 ] [ I  M11^{-1} M12 ; 0  I ]

    where S11 = M22 − M21 M11^{-1} M12 is the Schur complement of M11 in M.

    If M (as partitioned above) is nonsingular, then

    M^{-1} = [ M11^{-1} + M11^{-1} M12 S11^{-1} M21 M11^{-1}   −M11^{-1} M12 S11^{-1} ;  −S11^{-1} M21 M11^{-1}   S11^{-1} ]

    If both M11 and M22 are invertible (the matrix inversion lemma),

    (M11 − M12 M22^{-1} M21)^{-1} = M11^{-1} + M11^{-1} M12 (M22 − M21 M11^{-1} M12)^{-1} M21 M11^{-1}

    The pseudo-inverse (also called Moore-Penrose inverse) of a matrix M is the matrix M+ that satisfies the following conditions:

    1. M M+ M = M
    2. M+ M M+ = M+
    3. (M M+)* = M M+
    4. (M+ M)* = M+ M

    Significance of the pseudo-inverse: Consider the linear matrix equation with unknown X,

    A X B = C

    The equation is solvable if and only if

    A A+ C B+ B = C

    All solutions can be characterized as

    X = A+ C B+ − A+ A Y B B+ + Y

    for some Y.

    In the case of no solutions, the best approximation is

    X_appr = A+ C B+
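The four Penrose conditions and the solvability test can be verified with NumPy's pseudo-inverse; a minimal sketch with assumed example data:

```python
import numpy as np

# Moore-Penrose pseudo-inverse of a rank-deficient matrix (assumed example data)
M = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])  # rank 1
Mp = np.linalg.pinv(M)

# The four Penrose conditions from the slide:
assert np.allclose(M @ Mp @ M, M)
assert np.allclose(Mp @ M @ Mp, Mp)
assert np.allclose((M @ Mp).T, M @ Mp)
assert np.allclose((Mp @ M).T, Mp @ M)

# Solvability test for A X B = C: with A = M, B = I, C = M it holds trivially
A, B, C = M, np.eye(2), M
assert np.allclose(A @ np.linalg.pinv(A) @ C @ np.linalg.pinv(B) @ B, C)
```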

  • Invariant Subspaces

    Let M be a square matrix. A subspace S ⊆ C^n is invariant for the transformation M, or M-invariant, if Mx ∈ S for every x ∈ S.

    That S is invariant for M means that the image of S under M is contained in S: MS ⊆ S.

    Examples of M-invariant subspaces: any eigenspace of M, ker M, and img M.

    If S is a nontrivial subspace and is M-invariant, then there exist x ∈ S and λ such that Mx = λx.

    Let λ1, …, λk be eigenvalues of M (not necessarily distinct), and let xi be the corresponding (generalized) eigenvectors. Then S = span{x1, …, xk} is an M-invariant subspace provided that all the lower-rank generalized eigenvectors are included.

    More specifically, let λ1 = λ2 = … = λl be eigenvalues of M, and let x1, x2, …, xl be the corresponding eigenvector and generalized eigenvectors obtained through the following equations:

    (M − λ1 I) x1 = 0
    (M − λ1 I) x2 = x1
    ⋮
    (M − λ1 I) xl = x_{l−1}

    Then a subspace S with xj ∈ S for some j ≤ l is an M-invariant subspace if and only if all lower-rank eigenvectors and generalized eigenvectors of xj are in S, i.e., xi ∈ S for all i ≤ j.

    An M-invariant subspace S ⊆ C^n is called a stable invariant subspace if all the eigenvalues of M constrained to S have negative real parts.

    Example:

    M [ x1  x2  x3  x4 ] = [ x1  x2  x3  x4 ] [ λ1  1  0  0 ;  0  λ1  0  0 ;  0  0  λ3  0 ;  0  0  0  λ4 ]

    with x1, x2 a Jordan chain for the repeated eigenvalue λ1.

  • Vector Norm and Matrix Norm

    Consider a linear space V over the field F. A real-valued function ∥·∥ defined on all elements x of V is called a norm if it satisfies the following axioms:

    1. (Nonnegativity) ∥x∥ ≥ 0 for all x ∈ V, and ∥x∥ = 0 if and only if x = 0.
    2. (Homogeneity) ∥αx∥ = |α| ∥x∥ for all x ∈ V and α ∈ F.
    3. (Triangle inequality) ∥x + y∥ ≤ ∥x∥ + ∥y∥ for all x and y in V.

    A linear space together with a norm defined on it becomes a normed linear space.

    Vector norm: Let x = [ x1  x2  …  xn ]^T be a vector in C^n. The following are norms on C^n:

    1. Vector p-norm (for 1 ≤ p ≤ ∞): ∥x∥p = ( Σ_{i=1}^{n} |xi|^p )^{1/p}

  • Singular Value Decomposition

    The singular values of a matrix M are defined as

    σi(M) = ( λi(M* M) )^{1/2}

    Here, the σi(M) are real and nonnegative.

    The maximal singular value, σmax, can be shown to be

    σmax(M) = ∥M∥ = max over x ≠ 0 of ∥Mx∥₂ / ∥x∥₂

    When M is invertible, the maximal singular value of M^{-1} is related to the minimal singular value of M, σmin(M), by

    σmax(M^{-1}) = 1 / σmin(M)

    The rank of M is the same as the number of nonzero singular values of M.

    The matrix M and its complex conjugate transpose M* have the same singular values, i.e., σi(M) = σi(M*).

    Let M ∈ C^{m×n}. There exist unitary matrices U = [ u1  u2  …  um ] ∈ C^{m×m} and V = [ v1  v2  …  vn ] ∈ C^{n×n} such that

    M = U Σ V*

    where

    Σ = [ Σ1  0 ; 0  0 ]  with  Σ1 = diag(σ1, σ2, …, σr)

    and σ1 ≥ σ2 ≥ … ≥ σr ≥ 0, where r = min{m, n}.

    The matrix admits the decomposition

    M = Σ_{i=1}^{r} σi ui vi* = [ u1  u2  …  ur ] diag(σ1, σ2, …, σr) [ v1  v2  …  vr ]*

    ker M = span{v_{r+1}, …, vn} and (ker M)⊥ = span{v1, …, vr}.

    img M = span{u1, …, ur} and (img M)⊥ = span{u_{r+1}, …, um}.

    The Frobenius norm of M equals ∥M∥F = ( σ1² + σ2² + … + σr² )^{1/2}.

    The condition number of a matrix M is defined as κ(M) = σmax(M) σmax(M^{-1}).
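The SVD relations above can all be checked with `numpy.linalg.svd`; a sketch with randomly generated (assumed) data:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 3))

U, s, Vh = np.linalg.svd(M)                        # singular values in descending order
assert np.allclose(U[:, :3] @ np.diag(s) @ Vh, M)  # M = U Σ V*

# σi(M) = sqrt(λi(M* M))
lam = np.sort(np.linalg.eigvalsh(M.T @ M))[::-1]
assert np.allclose(s, np.sqrt(lam))

# Frobenius norm and rank from the singular values
assert np.isclose(np.linalg.norm(M, 'fro'), np.sqrt(np.sum(s**2)))
assert np.linalg.matrix_rank(M) == np.sum(s > 1e-12)

# For an invertible square matrix: σmax(A^{-1}) = 1 / σmin(A)
A = rng.standard_normal((3, 3))
sA = np.linalg.svd(A, compute_uv=False)
sAinv = np.linalg.svd(np.linalg.inv(A), compute_uv=False)
assert np.isclose(sAinv[0], 1.0 / sA[-1])
```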

  • Semidefinite Matrices

    A square Hermitian matrix M = M* is positive definite (semidefinite), denoted by M > 0 (M ≥ 0), if x* M x > 0 (≥ 0) for all x ≠ 0.

    All eigenvalues of a positive definite matrix are positive.

    Let M = [ M11  M12 ; M21  M22 ] be a Hermitian matrix. Then M > 0 if and only if M11 > 0 and S11 = M22 − M21 M11^{-1} M12 > 0.

    Let M ≥ 0. There exists a unitary matrix U and a nonnegative diagonal Λ such that M = U Λ U*.

    The square root of a positive semidefinite matrix M, denoted by M^{1/2}, satisfies M^{1/2} M^{1/2} = M and M^{1/2} ≥ 0. It can be shown that M^{1/2} = U Λ^{1/2} U*.

    Let M1 and M2 be two matrices such that M1 = M1* > 0 and M2 = M2* ≥ 0. Then M1 > M2 if and only if ρ(M2 M1^{-1}) < 1.

    For a positive definite matrix M, its inverse M^{-1} exists and is positive definite.

    For two positive definite matrices M1 and M2, we have αM1 + βM2 > 0 when α > 0 and β ≥ 0.

  • Matrix Calculus

    Let x = [xi] be an n-vector and f(x) a scalar function of x. The first-derivative gradient vector is defined as

    ∂f(x)/∂x = [gi] = [ ∂f(x)/∂x1 ; ∂f(x)/∂x2 ; … ; ∂f(x)/∂xn ]

    and the second-derivative Hessian matrix is

    ∂²f(x)/∂x² = [Hij],  Hij = ∂²f(x)/∂xi∂xj

    Let X = [Xij] be an m × n matrix and f(X) a scalar function of X. The derivative of f(X) with respect to X is defined as

    ∂f(X)/∂X = [ ∂f(X)/∂Xij ]

    Formulas for derivatives (assuming that x and y are vectors and A, B, and X are real matrices):

    ∂(y^T A x)/∂x = A^T y
    ∂(x^T A x)/∂x = (A + A^T) x
    ∂(trace X)/∂X = I
    ∂(trace AXB)/∂X = A^T B^T
    ∂(trace AX^T B)/∂X = BA
    ∂(trace X^T AX)/∂X = (A + A^T) X
    ∂(trace AXBX)/∂X = A^T X^T B^T + B^T X^T A^T
    ∂(log det X)/∂X = (X^T)^{-1}
    ∂(det X)/∂X = (det X)(X^T)^{-1}
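The trace-derivative formulas are easy to spot-check against central finite differences; a Python sketch with random (assumed) matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, X = (rng.standard_normal((3, 3)) for _ in range(3))

def num_grad(f, X, h=1e-6):
    """Central finite-difference gradient of a scalar function f w.r.t. matrix X."""
    G = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            E = np.zeros_like(X)
            E[i, j] = h
            G[i, j] = (f(X + E) - f(X - E)) / (2 * h)
    return G

# d(trace AXB)/dX = A^T B^T
assert np.allclose(num_grad(lambda Z: np.trace(A @ Z @ B), X), A.T @ B.T, atol=1e-5)
# d(trace X^T A X)/dX = (A + A^T) X
assert np.allclose(num_grad(lambda Z: np.trace(Z.T @ A @ Z), X), (A + A.T) @ X, atol=1e-5)
```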

  • Functional Space and Signal


  • Function Spaces

    Let V be a vector space over C and ∥·∥ a norm defined on V. Then V is a normed space.

    The space Lp(−∞, ∞) (for 1 ≤ p < ∞) consists of all measurable functions with finite p-norm.

    Let V be a Hilbert space and U ⊆ V a subset. Then the orthogonal complement of U, denoted by U⊥, is defined as

    U⊥ = {u ∈ V : ⟨u, v⟩ = 0 for all v ∈ U}

    The orthogonal complement is a Hilbert space.

    Let W = L2+ ⊂ L2; then W⊥ = L2−.

    Let U and W be subspaces of a vector space V. V is said to be the direct sum of U and W, written V = U ⊕ W, if U ∩ W = {0} and every element v ∈ V can be expressed as v = u + w with u ∈ U and w ∈ W. If V is an inner product space and U and W are orthogonal, then V is said to be the orthogonal direct sum of U and W.

    The space L2 is the orthogonal direct sum of L2+ and L2−.

  • Power and Spectral Signals

    Let w(t) be a function of time. The autocorrelation matrix is

    Rww(τ) := lim_{T→∞} (1/2T) ∫_{−T}^{T} w(t + τ) w*(t) dt

    if the limit exists and is finite for all τ.

    The Fourier transform of the autocorrelation matrix is the spectral density, denoted Sww(jω):

    Sww(jω) := ∫_{−∞}^{∞} Rww(τ) e^{−jωτ} dτ

    The autocorrelation can be obtained from the spectral density by performing an inverse Fourier transform:

    Rww(τ) = (1/2π) ∫_{−∞}^{∞} Sww(jω) e^{jωτ} dω

    A signal is a power signal if the autocorrelation matrix Rww(τ) exists for all τ and the power spectral density function Sww(jω) exists.

    The power of w(t) is defined as

    ∥w∥rms := ( lim_{T→∞} (1/2T) ∫_{−T}^{T} ∥w(t)∥² dt )^{1/2} = ( trace[Rww(0)] )^{1/2}

    In terms of the spectral density function,

    ∥w∥²rms = (1/2π) ∫_{−∞}^{∞} trace[Sww(jω)] dω

    Note that ∥·∥rms is not a norm, since a finite-duration signal has a zero rms value.

  • Signal Quantization

    For a signal w(t) mapping from [0, ∞) to R (or R^m), its measures can be defined as:

    1. ∞-norm (peak value)

    ∥w∥∞ = sup_{t≥0} |w(t)|

    or

    ∥w∥∞ = sup_{t≥0} max_{1≤i≤m} |wi(t)| = max_{1≤i≤m} ∥wi∥∞

    2. 2-norm (total energy)

    ∥w∥₂ = ( ∫₀^∞ w²(t) dt )^{1/2}

    or

    ∥w∥₂ = ( ∫₀^∞ Σ_{i=1}^{m} wi²(t) dt )^{1/2} = ( Σ_{i=1}^{m} ∥wi∥₂² )^{1/2} = ( ∫₀^∞ w^T(t) w(t) dt )^{1/2}

    3. 1-norm (resource consumption)

    ∥w∥₁ = ∫₀^∞ |w(t)| dt

    or

    ∥w∥₁ = ∫₀^∞ Σ_{i=1}^{m} |wi(t)| dt = Σ_{i=1}^{m} ∥wi∥₁

    4. p-norm

    ∥w∥p = ( ∫₀^∞ |w(t)|^p dt )^{1/p}

    5. rms (root mean square) value (average power)

    ∥w∥rms = ( lim_{T→∞} (1/T) ∫₀^T w²(t) dt )^{1/2}

    or

    ∥w∥rms = ( lim_{T→∞} (1/T) ∫₀^T w^T(t) w(t) dt )^{1/2}

    If ∥w∥₂ < ∞, then ∥w∥rms = 0.

  • Check the following signals for membership in L1, L2, L∞, and the class of power signals:

    w(t) = sin t

    w(t) = e^{−2t} for t ≥ 0, and 0 for t < 0

    w(t) = 1/(1 + t)

    w(t) = 1/√t for 1 ≥ t > 0, and 0 for t > 1 or t < 0

    Relationship: (the slide shows a Venn diagram relating L∞, L2, L1, and the power signals)
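Two of the signals above can be classified numerically; a Python/SciPy sketch (the integrals shown are the ones defined on the previous slide):

```python
import numpy as np
from scipy.integrate import quad

# w(t) = e^{-2t}, t >= 0: finite 1-norm and 2-norm
one_norm, _ = quad(lambda t: np.exp(-2 * t), 0, np.inf)
two_sq, _ = quad(lambda t: np.exp(-4 * t), 0, np.inf)
assert np.isclose(one_norm, 0.5)          # ∫ e^{-2t} dt = 1/2
assert np.isclose(np.sqrt(two_sq), 0.5)   # (∫ e^{-4t} dt)^{1/2} = 1/2

# w(t) = 1/(1+t): in L2 (2-norm equals 1) but not in L1 (integral grows like log T)
l2_sq, _ = quad(lambda t: 1 / (1 + t) ** 2, 0, np.inf)
assert np.isclose(l2_sq, 1.0)
partial_l1 = [np.log(1 + T) for T in (1e2, 1e4, 1e6)]   # closed form of the truncated 1-norm
assert partial_l1[2] > partial_l1[1] > partial_l1[0]    # unbounded as T grows
```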

  • Review of Linear System Theory


  • Linear Systems

    A finite-dimensional linear time-invariant dynamical system can be described by the equations

    ẋ = A x + B w,  x(0) = xo
    z = C x + D w

    where x(t) ∈ R^n is the state, w(t) ∈ R^m is the input, and z(t) ∈ R^p is the output.

    The transfer function matrix from w to z is defined through

    Z(s) = M(s) W(s)

    where Z(s) and W(s) are the Laplace transforms of z(t) and w(t), respectively. It is known that

    M(s) = D + C (sI − A)^{-1} B

    For simplicity, the shorthand notation is used:

    M(s) = ( A  B ; C  D )

    The state response for the given initial condition xo and input w(t) is

    x(t) = e^{At} xo + ∫₀^t e^{A(t−τ)} B w(τ) dτ

    and the output response is

    z(t) = C e^{At} xo + ∫₀^t C e^{A(t−τ)} B w(τ) dτ + D w(t)

    Assume that the initial condition is zero, D = 0, and the input is an impulse; then the output is

    m(t) = impulse response of M(s) = C e^{At} B

    With respect to a power signal input, the power spectral density of the output is related to the power spectral density of the input by

    Szz(jω) = M(jω) Sww(jω) M*(jω)

    The system is stable (A is Hurwitz) if all the eigenvalues of the matrix A are in the open left half plane.
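The impulse-response formula m(t) = C e^{At} B can be checked against SciPy's simulator; a sketch with an assumed stable example system:

```python
import numpy as np
from scipy.linalg import expm
from scipy.signal import StateSpace, impulse

# A stable SISO example (assumed data): eigenvalues of A are -1 and -2
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

t = np.linspace(0, 10, 1000)
t, m = impulse(StateSpace(A, B, C, D), T=t)

# Direct evaluation of m(t) = C e^{At} B
m_check = np.array([(C @ expm(A * ti) @ B).item() for ti in t])
assert np.allclose(m, m_check, atol=1e-6)
```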

  • Controllability and Observability

    The pair (A, B) is controllable if and only if:

    1. For any initial state xo, any t > 0, and any final state xf, there exists a piecewise continuous input w(·) such that x(t) = xf.

    2. The controllability matrix [ B  AB  A²B  …  A^{n−1}B ] has full row rank, i.e., ⟨A | img B⟩ := Σ_{i=1}^{n} img(A^{i−1}B) = R^n.

    3. The matrix

    Wc(t) = ∫₀^t e^{Aτ} B B^T e^{A^T τ} dτ

    is positive definite for any t > 0.

    4. PBH test: The matrix [ A − λI  B ] has full row rank for all λ in C.

    5. Let λ and x be any eigenvalue and any corresponding left eigenvector of A, i.e., x*A = λx*; then x*B ≠ 0.

    6. The eigenvalues of A + BF can be freely assigned by a suitable choice of F.

    The pair (A, B) is stabilizable if and only if the matrix [ A − λI  B ] has full row rank for all λ in C with Re λ ≥ 0.

    The pair (C, A) is observable if and only if:

    1. The matrix [ A − λI ; C ] has full column rank for all λ in C.

    2. The eigenvalues of A + HC can be freely assigned by a suitable choice of H.

    3. For all λ and x such that Ax = λx, we have Cx ≠ 0.

    4. The pair (A^T, C^T) is controllable.
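The rank and PBH tests are straightforward to evaluate numerically; a Python sketch for a double integrator (assumed example data):

```python
import numpy as np

# Rank and PBH tests for controllability (a double integrator; assumed data)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
n = A.shape[0]

ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
assert np.linalg.matrix_rank(ctrb) == n   # full row rank: (A, B) is controllable

# The PBH test only needs checking at the eigenvalues of A
for lam in np.linalg.eigvals(A):
    pbh = np.hstack([A - lam * np.eye(n), B])
    assert np.linalg.matrix_rank(pbh) == n
```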

  • The pair (C, A) is detectable if and only if the matrix [ A − λI ; C ] has full column rank for all λ in C with Re λ ≥ 0.

  • State Space Algebra

    Let

    M1(s): ẋ1 = A1 x1 + B1 w1,  z1 = C1 x1 + D1 w1
    M2(s): ẋ2 = A2 x2 + B2 w2,  z2 = C2 x2 + D2 w2

    In terms of the compact representation, M1(s) = ( A1  B1 ; C1  D1 ) and M2(s) = ( A2  B2 ; C2  D2 ).

    Parallel connection of M1(s) and M2(s): the common input is w = w1 = w2 and the outputs add, z = z1 + z2. The new state is x = [ x1 ; x2 ]. Thus

    ẋ = [ A1  0 ; 0  A2 ] x + [ B1 ; B2 ] w  and  z = [ C1  C2 ] x + (D1 + D2) w

    or

    M1(s) + M2(s) = ( A1  0  B1 ;  0  A2  B2 ;  C1  C2  D1 + D2 )

    Series connection of the two systems (z2 = w1):

    M1(s) M2(s) = ( A1  B1 ; C1  D1 )( A2  B2 ; C2  D2 )
                = ( A1  B1C2  B1D2 ;  0  A2  B2 ;  C1  D1C2  D1D2 )
                = ( A2  0  B2 ;  B1C2  A1  B1D2 ;  D1C2  C1  D1D2 )

    Inverse system:

    M1^{-1}(s) = ( A1 − B1 D1^{-1} C1   B1 D1^{-1} ;  −D1^{-1} C1   D1^{-1} )

    provided that D1 is invertible.
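The series-connection realization can be verified by evaluating transfer functions at a test frequency; a Python sketch with assumed scalar example systems:

```python
import numpy as np

# Series connection M1(s) M2(s) realized as on the slide (assumed example data)
A1, B1, C1, D1 = (np.array([[-1.0]]), np.array([[1.0]]), np.array([[2.0]]), np.array([[0.5]]))
A2, B2, C2, D2 = (np.array([[-3.0]]), np.array([[1.0]]), np.array([[1.0]]), np.array([[0.0]]))

# Block realization ( A1  B1C2  B1D2 ; 0  A2  B2 ; C1  D1C2  D1D2 )
A = np.block([[A1, B1 @ C2], [np.zeros((1, 1)), A2]])
B = np.vstack([B1 @ D2, B2])
C = np.hstack([C1, D1 @ C2])
D = D1 @ D2

def tf_at(A, B, C, D, s):
    """Evaluate D + C (sI - A)^{-1} B at a complex frequency s."""
    return D + C @ np.linalg.inv(s * np.eye(A.shape[0]) - A) @ B

s = 1.0 + 2.0j
assert np.allclose(tf_at(A, B, C, D, s),
                   tf_at(A1, B1, C1, D1, s) @ tf_at(A2, B2, C2, D2, s))
```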

  • It can be verified that M1(s) M1^{-1}(s) = I:

    w1 = D1^{-1} z1 − D1^{-1} C1 x1
    ẋ1 = A1 x1 + B1 (D1^{-1} z1 − D1^{-1} C1 x1) = (A1 − B1 D1^{-1} C1) x1 + B1 D1^{-1} z1

    Transpose or dual system:

    M1^T(s) = ( A1^T  C1^T ;  B1^T  D1^T )

    Conjugate system:

    M1~(s) = M1^T(−s) = ( −A1^T  −C1^T ;  B1^T  D1^T )

    Thus, M1~(jω) = M1*(jω).

    Feedback connection of M1(s) and M2(s): the connection gives z = z1, w2 = z1, and w1 = w + z2. Thus

    z = C1 x1 + D1 (w + z2)
    z2 = C2 x2 + D2 z
    z = (I − D1 D2)^{-1} (C1 x1 + D1 C2 x2 + D1 w)
    z2 = (I − D2 D1)^{-1} (D2 C1 x1 + C2 x2 + D2 D1 w)
    ẋ1 = A1 x1 + B1 w + B1 (I − D2 D1)^{-1} (D2 C1 x1 + C2 x2 + D2 D1 w)
    ẋ2 = A2 x2 + B2 (I − D1 D2)^{-1} (C1 x1 + D1 C2 x2 + D1 w)

    (I − M1(s) M2(s))^{-1} M1(s) =
    ( A1 + B1(I − D2D1)^{-1} D2 C1   B1(I − D2D1)^{-1} C2   B1 + B1(I − D2D1)^{-1} D2 D1 ;
      B2(I − D1D2)^{-1} C1   A2 + B2(I − D1D2)^{-1} D1 C2   B2(I − D1D2)^{-1} D1 ;
      (I − D1D2)^{-1} C1   (I − D1D2)^{-1} C2   (I − D1D2)^{-1} D1 )

  • System Poles and Zeros

    Let M(s) = D + C (sI − A)^{-1} B = ( A  B ; C  D ). The eigenvalues of A are the poles of M(s).

    The system matrix of M(s) is defined as

    Q(s) = [ A − sI  B ; C  D ]

    which is a polynomial matrix.

    The normal rank of Q(s) is the maximal possible rank of Q(s) over s ∈ C.

    A complex number λo ∈ C is called an invariant zero of the system realization if it satisfies

    rank [ A − λoI  B ; C  D ] < normal rank [ A − sI  B ; C  D ]

    The invariant zeros are not changed by constant state feedback, constant output injection, or similarity transformation.

    Suppose that [ A − sI  B ; C  D ] has full column normal rank. Then λo ∈ C is an invariant zero if and only if there exist 0 ≠ x ∈ C^n and w ∈ C^m such that

    [ A − λoI  B ; C  D ] [ x ; w ] = 0

    Moreover, if w = 0, then λo is also a nonobservable mode.

    When the system is square, i.e., it has an equal number of inputs and outputs, the invariant zeros can be computed by solving a generalized eigenvalue problem:

    [ A  B ; C  D ] [ x ; w ] = λ [ I  0 ; 0  0 ] [ x ; w ]

    for some generalized eigenvalue λ and generalized eigenvector [ x ; w ].

    Suppose that [ A − sI  B ; C  D ] has full row normal rank. Then λo ∈ C is an invariant zero if and only if there exist 0 ≠ y ∈ C^n and v ∈ C^p such that

    [ y*  v* ] [ A − λoI  B ; C  D ] = 0

    Moreover, if v = 0, then λo is also a noncontrollable mode.
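The generalized eigenvalue computation of invariant zeros can be sketched in Python with SciPy; the SISO system below is assumed example data chosen so that the zero is known in closed form:

```python
import numpy as np
from scipy.linalg import eig

# SISO example with a known zero: M(s) = (s+1)/(s+2), realized as (A, B, C, D)
A, B, C, D = (np.array([[-2.0]]), np.array([[1.0]]),
              np.array([[-1.0]]), np.array([[1.0]]))

# [ A  B ; C  D ] v = lambda [ I  0 ; 0  0 ] v
P = np.block([[A, B], [C, D]])
N = np.zeros((2, 2))
N[0, 0] = 1.0
lam = eig(P, N, right=False)

zeros = lam[np.isfinite(lam)]   # infinite generalized eigenvalues are discarded
assert np.allclose(zeros, [-1.0])   # the invariant zero of (s+1)/(s+2)
```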

  • The system M(s) has full column normal rank if and only if [ A − sI  B ; C  D ] has full column normal rank.

    Note that

    [ A − sI  B ; C  D ] = [ I  0 ; C(A − sI)^{-1}  I ] [ A − sI  B ; 0  M(s) ]

    and

    normal rank [ A − sI  B ; C  D ] = n + normal rank M(s)

    Let M(s) be a p × m transfer matrix and let (A, B, C, D) be a minimal realization. If λo is a zero of M(s) that is distinct from the poles, then there exist an input and an initial state such that the output of the system z(t) is zero for all t.

    Let xo and wo be such that

    [ A − λoI  B ; C  D ] [ xo ; wo ] = 0

    i.e.,

    A xo + B wo = λo xo  and  C xo + D wo = 0

    Consider the input w(t) = wo e^{λo t} and initial state x(0) = xo. The output is

    z(t) = C e^{At} xo + ∫₀^t C e^{A(t−τ)} B w(τ) dτ + D w(t)
         = C e^{At} xo + C e^{At} ∫₀^t e^{(λoI − A)τ} (λoI − A) xo dτ + D wo e^{λo t}
         = C e^{At} xo + C e^{At} [ e^{(λoI − A)τ} |₀^t ] xo + D wo e^{λo t}
         = 0

  • Measure of System and Fundamental Equations


  • H2 Space

    Let S be an open set in C and let f(s) be a complex-valued function defined on S. Then f(s) is said to be analytic at a point zo in S if it is differentiable at zo and also at each point in some neighborhood of zo.

    If f(s) is analytic at zo, then f has continuous derivatives of all orders at zo.

    A function f(s) is said to be analytic in S if it is analytic at each point of S.

    A matrix-valued function is analytic in S if every element of the matrix is analytic in S.

    All real rational stable transfer function matrices are analytic in the right-half plane.

    Maximum modulus theorem: If f(s) is defined and continuous on a closed, bounded set S and analytic in the interior of S, then |f(s)| cannot attain its maximum in the interior of S unless f(s) is constant.

    |f(s)| can only achieve its maximum on the boundary of S, i.e.,

    max over s ∈ S of |f(s)| = max over s ∈ ∂S of |f(s)|

    where ∂S denotes the boundary of S.

    The space L2 = L2(jR) is a Hilbert space of matrix-valued functions on jR and consists of all complex matrix functions M such that

    ∫_{−∞}^{∞} trace[M*(jω) M(jω)] dω < ∞

    The space H2 is a (closed) subspace of L2 with matrix functions M(s) analytic in Re(s) > 0, with norm

    ∥M∥₂² := sup over σ > 0 of { (1/2π) ∫_{−∞}^{∞} trace[M*(σ + jω) M(σ + jω)] dω }

    It can be shown that

    ∥M∥₂² = (1/2π) ∫_{−∞}^{∞} trace[M*(jω) M(jω)] dω

  • The real rational subspace of H2, which consists of all strictly proper and real rational stable transfer function matrices, is denoted by RH2.

    The space H2⊥ is the orthogonal complement of H2 in L2; it consists of matrix functions analytic in Re(s) < 0.

    If M is a strictly proper, stable, real rational transfer function matrix, then M ∈ H2 and M~ ∈ H2⊥.

    Parseval's relation: there is an isometric isomorphism between the L2 spaces in the time domain and the L2 spaces in the frequency domain:

    L2(−∞, ∞) ≅ L2(jR)
    L2[0, ∞) ≅ H2
    L2(−∞, 0] ≅ H2⊥

    If m(t) ∈ L2(−∞, ∞) and its bilateral Laplace transform is M(s) ∈ L2(jR), then

    ∥M∥₂ = ∥m∥₂

    Define an orthogonal projection P+ : L2(−∞, ∞) → L2[0, ∞) such that for any function m(t) ∈ L2(−∞, ∞)

    P+ m(t) = m(t) for t ≥ 0, and 0 otherwise

    On the other hand, the operator P− from L2(−∞, ∞) to L2(−∞, 0] is defined as

    P− m(t) = 0 for t > 0, and m(t) for t ≤ 0

    Relationships among function spaces: (the slide shows a diagram in which the Laplace transform and its inverse map L2(−∞, 0], L2(−∞, ∞), and L2[0, ∞) to H2⊥, L2(jR), and H2, respectively, with P+ and P− projecting between them)

  • H∞ Space

    The space L∞(jR) is a Banach space of matrix-valued functions that are essentially bounded on jR, with norm

    ∥M∥∞ := ess sup over ω ∈ R of σmax[M(jω)]

    The rational subspace of L∞, denoted by RL∞, consists of all proper and real rational transfer function matrices with no poles on the imaginary axis.

    The space H∞ is a subspace of L∞ with functions that are analytic and bounded in the open right-half plane. The H∞ norm is defined as

    ∥M∥∞ := sup over Re(s) > 0 of σmax[M(s)] = sup over ω ∈ R of σmax[M(jω)]

    The real rational subspace of H∞ is denoted by RH∞, which consists of all proper and real rational stable transfer function matrices.

    The space H∞⁻ is a subspace of L∞ with functions that are analytic and bounded in the open left-half plane. Its norm is defined as

    ∥M∥∞ := sup over Re(s) < 0 of σmax[M(s)]

  • Lyapunov Equation

    Given A ∈ R^{n×n} and Q = Q^T ∈ R^{n×n}, the equation in X ∈ R^{n×n}

    A^T X + X A + Q = 0

    is called a Lyapunov equation.

    Define the map L : R^{n×n} → R^{n×n}, L(X) = A^T X + X A. Then the Lyapunov equation has a solution X if and only if −Q ∈ img L. The solution is unique if and only if L is injective.

    The Lyapunov equation has a unique solution if and only if no two eigenvalues of A sum to zero.

    Assume that A is stable; then:

    1. X = ∫₀^∞ e^{A^T t} Q e^{At} dt.
    2. X > 0 if Q > 0, and X ≥ 0 if Q ≥ 0.
    3. If Q ≥ 0, then (Q, A) is observable if and only if X > 0.

    Suppose A, Q, and X satisfy the Lyapunov equation; then:

    1. Re λi(A) ≤ 0 if X > 0 and Q ≥ 0.
    2. A is stable if X > 0 and Q > 0.
    3. A is stable if (Q, A) is detectable, Q ≥ 0, and X ≥ 0. Proof:

    (a) Suppose not; let x ≠ 0 be such that Ax = λx with Re λ ≥ 0.
    (b) Form x*(A^T X + X A + Q)x = 0, which can be reduced to 2 Re(λ) x*Xx + x*Qx = 0.
    (c) Thus x*Qx = 0.
    (d) This leads to Qx = 0.
    (e) It implies that [ A − λI ; Q ] x = 0, which contradicts the detectability assumption.
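SciPy solves this equation directly; note its sign and argument convention (it solves a x + x a^H = q), so for the slide's A^T X + X A + Q = 0 one passes A^T and −Q. A sketch with an assumed stable example:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

# Solve A^T X + X A + Q = 0 for a stable A (matrices are assumed example data)
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
Q = np.eye(2)

X = solve_continuous_lyapunov(A.T, -Q)   # scipy convention: a x + x a^H = q
assert np.allclose(A.T @ X + X @ A + Q, 0)
assert np.all(np.linalg.eigvalsh(X) > 0)  # X > 0 since A is stable and Q > 0

# Compare with the integral formula X = ∫ e^{A^T t} Q e^{At} dt (trapezoidal rule)
ts = np.linspace(0, 30, 3001)
dt = ts[1] - ts[0]
F = np.array([expm(A.T * t) @ Q @ expm(A * t) for t in ts])
X_int = (F[0] + F[-1]) / 2 * dt + F[1:-1].sum(axis=0) * dt
assert np.allclose(X, X_int, atol=1e-3)
```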

  • Controllability and Observability Gramians

    Consider the stable transfer function M(s) with z = M(s) w. Assume the minimal realization

    M(s) = C (sI − A)^{-1} B

    that is,

    ẋ = A x + B w
    z = C x

    Suppose that there is no input from 0 to ∞; then the output energy generated by the initial state xo can be computed as follows:

    [ ∫₀^∞ z^T(t) z(t) dt ]^{1/2} = { xo^T [ ∫₀^∞ e^{A^T t} C^T C e^{At} dt ] xo }^{1/2} = [ xo^T Wo xo ]^{1/2}

    where Wo = ∫₀^∞ e^{A^T t} C^T C e^{At} dt is the observability gramian of the system M(s).

    Indeed, Wo is the positive semidefinite solution of the Lyapunov equation

    A^T Wo + Wo A + C^T C = 0

    When A is stable, the system is observable if and only if Wo > 0.

    On the other hand, an input from −∞ to 0 that drives the state to xo satisfies

    xo = ∫_{−∞}^{0} e^{−At} B w(t) dt

    If we minimize the input energy subject to this reachability condition, the optimal input can be found to be

    w(t) = B^T e^{−A^T t} Wc^{-1} xo

    where

    Wc = ∫_{−∞}^{0} e^{−At} B B^T e^{−A^T t} dt = ∫₀^∞ e^{At} B B^T e^{A^T t} dt

    The matrix Wc is the controllability gramian of the system M(s). It satisfies the Lyapunov equation

    A Wc + Wc A^T + B B^T = 0

    When A is stable, the system is controllable if and only if Wc is positive definite.

    Note that the minimal input energy needed is

    [ ∫_{−∞}^{0} w^T(t) w(t) dt ]^{1/2} = { xo^T [ ∫₀^∞ e^{At} B B^T e^{A^T t} dt ]^{-1} xo }^{1/2} = [ xo^T Wc^{-1} xo ]^{1/2}

    In summary, the observability gramian Wo determines the total energy in the system output starting from a given initial state (with no input), and the controllability gramian Wc determines which points in the state space can be reached using an input with total energy one.

    Both Wo and Wc depend on the realization. Under a similarity transformation, the realization (TAT^{-1}, TB, CT^{-1}) has gramians T Wc T^T and T^{-T} Wo T^{-1}, respectively.

    The eigenvalues of Wc Wo are invariant under similarity transformation, and are thus a system property.
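Both gramians come from Lyapunov equations, so they are easy to compute and check; a sketch with an assumed stable, minimal example system:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Gramians of a stable system xdot = Ax + Bw, z = Cx (assumed example data)
A = np.array([[-1.0, 0.0], [1.0, -2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])

Wc = solve_continuous_lyapunov(A, -B @ B.T)     # A Wc + Wc A^T + B B^T = 0
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)   # A^T Wo + Wo A + C^T C = 0
assert np.allclose(A @ Wc + Wc @ A.T + B @ B.T, 0)
assert np.allclose(A.T @ Wo + Wo @ A + C.T @ C, 0)

# Controllable and observable here, so both gramians are positive definite
assert np.all(np.linalg.eigvalsh(Wc) > 0) and np.all(np.linalg.eigvalsh(Wo) > 0)

# The eigenvalues of Wc Wo are invariant under similarity transformation
T = np.array([[1.0, 2.0], [0.0, 1.0]])
Ti = np.linalg.inv(T)
Wc2 = solve_continuous_lyapunov(T @ A @ Ti, -(T @ B) @ (T @ B).T)
Wo2 = solve_continuous_lyapunov((T @ A @ Ti).T, -(C @ Ti).T @ (C @ Ti))
assert np.allclose(np.sort(np.linalg.eigvals(Wc @ Wo).real),
                   np.sort(np.linalg.eigvals(Wc2 @ Wo2).real))
```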

  • Balanced Realization and Hankel Singular Values

    Let Wc and Wo be the controllability gramian and observability gramian of the system (A, B, C), respectively, i.e.,

    A Wc + Wc A^T + B B^T = 0

    and

    A^T Wo + Wo A + C^T C = 0

    The Hankel singular values are defined as the square roots of the eigenvalues of Wc Wo, which are independent of the particular realization.

    Let T be the matrix such that

    T Wc Wo T^{-1} = Σ² = diag(σ1², σ2², …, σn²)

    The Hankel singular values of the system are σ1, σ2, …, σn (in descending order).

    The matrix T diagonalizes the controllability and observability gramians. Indeed, the new realization (TAT^{-1}, TB, CT^{-1}) admits

    T Wc T^T = T^{-T} Wo T^{-1} = Σ = diag(σ1, σ2, …, σn)

    A realization (A, B, C) is balanced if its controllability and observability gramians are the same.

    The maximal gain from the past input to the future output is defined as the Hankel norm:

    ∥M(s)∥H = sup over w(·) ∈ L2(−∞, 0) of ∥z∥_{L2(0,∞)} / ∥w∥_{L2(−∞,0)}
            = sup over xo of (xo^T Wo xo)^{1/2} / (xo^T Wc^{-1} xo)^{1/2}
            = λmax^{1/2}(Wc Wo)
            = σ1

    The Hankel norm is thus the maximal Hankel singular value.
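A direct computation of the Hankel singular values from the two gramians, as a Python sketch with assumed example data:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hankel singular values: sqrt of the eigenvalues of Wc Wo (assumed example data)
A = np.array([[-1.0, 0.0], [0.0, -5.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])

Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
hsv = np.sort(np.sqrt(np.linalg.eigvals(Wc @ Wo).real))[::-1]

hankel_norm = hsv[0]   # the Hankel norm is the maximal Hankel singular value
assert np.all(hsv > 0)
```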

  • Quantification of Systems

    The norm of a system is typically defined as the induced norm between its output and input.

    Two classes of approaches:

    1. Size of the output due to a particular signal or a class of signals

    2. Relative size of the output and input

    System 2-norm (SISO case). Let m(t) be the impulse response of the system M(s). Its 2-norm is defined as

    ∥M∥₂ = ( ∫_{−∞}^{∞} |m(t)|² dt )^{1/2}

    The system 2-norm can be interpreted as the 2-norm of the output due to an impulse input.

    By Parseval's theorem,

    ∥M∥₂ = ( (1/2π) ∫_{−∞}^{∞} |M(jω)|² dω )^{1/2}

    Suppose the input signal w and output signal z are stochastic in nature. Let Szz(ω) and Sww(ω) be the spectral densities of the output and input, respectively. Then

    Szz(ω) = |M(jω)|² Sww(ω)

    Note that

    ∥z∥rms = ( (1/2π) ∫_{−∞}^{∞} Szz(ω) dω )^{1/2} = ( (1/2π) ∫_{−∞}^{∞} |M(jω)|² Sww(ω) dω )^{1/2}

    The system 2-norm can then be interpreted as the rms value of the output subject to unit-spectral-density white noise.

  • System 2-norm (MIMO case). The system 2-norm is defined as

        ||M||_2 = ( (1/2π) ∫_{-∞}^{∞} trace[M^*(jω) M(jω)] dω )^{1/2}
                = ( trace ∫_0^∞ m(t) m^T(t) dt )^{1/2}
                = ( (1/2π) ∫_{-∞}^{∞} Σ_i σ_i²(M(jω)) dω )^{1/2}

    where m(t) is the impulse response.

    Let e_i be the i-th standard basis vector of R^m. Apply the impulsive input δ(t) e_i to the system to obtain the impulse response z_i(t) = m(t) e_i. Then

        ||M||_2² = Σ_{i=1}^m ||z_i||_2²

    The system 2-norm can be interpreted as:
    1. The rms response due to white noise: ||M||_2 = ||z||_rms when w is white noise
    2. The 2-norm of the response due to an impulse input: ||M||_2 = ||z||_2 when w is an impulse
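    The identity ||M||_2² = Σ_i ||z_i||_2² can be checked numerically (a Python/SciPy sketch; the diagonal two-input/two-output example is hypothetical) by comparing the gramian value trace(B^T W_o B) with a trapezoidal approximation of ∫ trace(m^T(t) m(t)) dt:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# Hypothetical 2-input/2-output example: A diagonal, B = C = I,
# so m(t) = e^{At} and the exact value is sqrt(1/2 + 1/4) = sqrt(0.75).
A = np.diag([-1.0, -2.0])
B = np.eye(2)
C = np.eye(2)

# Gramian route: ||M||_2^2 = trace(B^T Wo B).
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
h2_gramian = np.sqrt(np.trace(B.T @ Wo @ B))

# Impulse route: sum_i ||z_i||_2^2 with z_i(t) = m(t) e_i, i.e. the
# integral of trace(m^T(t) m(t)), approximated by the trapezoidal rule.
dt = 0.01
t = np.arange(0.0, 20.0 + dt, dt)
msq = np.array([np.sum((C @ expm(A * tk) @ B) ** 2) for tk in t])
h2_impulse = np.sqrt(np.sum((msq[1:] + msq[:-1]) / 2.0) * dt)

print(h2_gramian, h2_impulse)  # both close to sqrt(0.75) ≈ 0.8660
```

    The agreement of the two routes illustrates that the frequency-domain, gramian, and impulse-response characterizations measure the same quantity.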

    The system ∞-norm can be regarded as the peak value in the Bode magnitude (singular value) plot:

        ||M||_∞ = sup_{Re s > 0} σ_max(M(s))
                = sup_ω σ_max(M(jω))    (when M(s) is stable)
                = sup_{||w||_rms ≠ 0} ||Mw||_rms / ||w||_rms
                = sup_{||w||_2 ≠ 0} ||Mw||_2 / ||w||_2
                = sup_{||w||_2 = 1} ||Mw||_2
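    A direct, if crude, way to evaluate the peak is a frequency sweep (a Python/NumPy sketch; the first-order example is hypothetical, and a coarse grid can miss sharp resonant peaks, so a Hamiltonian-based test is preferred in practice):

```python
import numpy as np

def hinf_grid(A, B, C, D, omegas):
    """Estimate ||M||_inf by sweeping sigma_max(C (jwI - A)^-1 B + D)
    over a frequency grid (a coarse grid can miss sharp peaks)."""
    n = A.shape[0]
    peak = 0.0
    for w in omegas:
        Mjw = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        peak = max(peak, np.linalg.svd(Mjw, compute_uv=False)[0])
    return peak

# Hypothetical example M(s) = 1/(s + 1): the peak is 1, attained at w = 0.
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[0.0]])
peak = hinf_grid(A, B, C, D, np.linspace(0.0, 10.0, 1001))
print(peak)  # ≈ 1.0
```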

    System 1-norm (peak-to-peak ratio):

        ||M||_1 = sup_{||w||_∞ ≠ 0} ||Mw||_∞ / ||w||_∞

    37

  • State-Space Computation of 2-NormJ.C. Juang

    Consider the state-space realization of a stable transfer function M(s):

        ẋ = A x + B w
        z = C x + D w

    In the 2-norm computation, A is assumed stable and D is zero. Recall that the impulse response of M(s) is

        m(t) = C e^{At} B

    The 2-norm, according to the definition, satisfies

        ||M||_2² = trace ∫_0^∞ m^T(t) m(t) dt
                 = trace [ B^T ( ∫_0^∞ e^{A^T t} C^T C e^{At} dt ) B ]
                 = trace B^T W_o B

    Thus, the 2-norm of M(s) can be computed by solving a Lyapunov equation for W_o,

        A^T W_o + W_o A + C^T C = 0

    and taking the square root of the trace of B^T W_o B:

        ||M||_2 = ( trace B^T W_o B )^{1/2}

    Similarly, let W_c be the controllability gramian,

        A W_c + W_c A^T + B B^T = 0

    Then

        ||M||_2 = ( trace C W_c C^T )^{1/2}
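    A minimal sketch of this recipe (Python/SciPy; the example M(s) = 1/(s+1) has exact 2-norm 1/√2; the course's Matlab tools provide the analogous lyap/norm computations):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm_obsv(A, B, C):
    """||M||_2 via the observability gramian: A^T Wo + Wo A + C^T C = 0."""
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    return np.sqrt(np.trace(B.T @ Wo @ B))

def h2_norm_ctrb(A, B, C):
    """Equivalent value via the controllability gramian: A Wc + Wc A^T + B B^T = 0."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    return np.sqrt(np.trace(C @ Wc @ C.T))

# M(s) = 1/(s + 1): the exact 2-norm is 1/sqrt(2).
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
n_obsv = h2_norm_obsv(A, B, C)
n_ctrb = h2_norm_ctrb(A, B, C)
print(n_obsv, n_ctrb)  # both ≈ 0.7071
```

    Either gramian gives the same value; in practice one solves whichever Lyapunov equation is smaller or better conditioned.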

    38

  • H Norm ComputationJ.C. Juang

    The H∞ norm of M(s) can be computed through a search for the maximal singular value of M(jω) over all ω:

        ||M(s)||_∞ = ess sup_ω σ_max(M(jω))

    Let M(s) ≐ (A, B, C, D). Then ||M(s)||_∞ < γ if and only if σ_max(D) < γ and the Hamiltonian matrix H has no eigenvalues on the imaginary axis, where (block rows separated by semicolons)

        H = [A, 0; -C^T C, -A^T] + [B; -C^T D] (γ² I - D^T D)^{-1} [D^T C, B^T]

    Computation of the H∞ norm thus requires iteration (e.g., bisection) on γ. Indeed, ||M(s)||_∞ < γ if and only if

        Φ(s) = γ² I - M~(s) M(s) > 0 on the imaginary axis, where M~(s) = M^T(-s),

    if and only if Φ(jω) is nonsingular for all ω, if and only if Φ^{-1}(s) has no imaginary-axis pole. But Φ^{-1}(s) admits the realization (with R = γ² I - D^T D)

        Φ^{-1}(s) ≐ ( H, [B R^{-1}; -C^T D R^{-1}], [R^{-1} D^T C, R^{-1} B^T], R^{-1} )

    so the poles of Φ^{-1}(s) are among the eigenvalues of H.
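    The γ-iteration can be sketched as a bisection on the imaginary-axis eigenvalue test of H (Python/NumPy; this minimal sketch assumes D = 0 and a stable A, and the first-order example with ||M||_∞ = 1 is hypothetical):

```python
import numpy as np

def hinf_bisection(A, B, C, tol=1e-8):
    """H-infinity norm by bisection on gamma (sketch: assumes D = 0, A stable).
    ||M||_inf < gamma iff H(gamma) has no imaginary-axis eigenvalues."""
    def has_imag_eig(g):
        # For D = 0 the Hamiltonian reduces to [A, B B^T / g^2; -C^T C, -A^T].
        H = np.block([[A, (B @ B.T) / g**2],
                      [-C.T @ C, -A.T]])
        return bool(np.any(np.abs(np.linalg.eigvals(H).real) < 1e-9))
    lo, hi = tol, 1.0
    while has_imag_eig(hi):          # grow until gamma is an upper bound
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if has_imag_eig(mid):
            lo = mid                 # ||M||_inf >= mid
        else:
            hi = mid                 # ||M||_inf < mid
    return hi

# Hypothetical example M(s) = 1/(s + 1), whose H-infinity norm is 1.
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
gamma_hat = hinf_bisection(A, B, C)
print(gamma_hat)  # ≈ 1.0
```

    Each bisection step costs one eigenvalue computation of the 2n × 2n matrix H, which is why this test is preferred over a frequency sweep.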

    39

  • Linear Matrix InequalityJ.C. Juang

    Linear matrix inequality (LMI):

        F(x) = F_0 + Σ_{i=1}^m x_i F_i < 0

    where x ∈ R^m is the variable and the symmetric matrices F_i = F_i^T ∈ R^{n×n}, i = 0, 1, ..., m, are given.

    The inequality "< 0" must be interpreted as negative definiteness. The LMI is a convex constraint on x, i.e., the set {x | F(x) < 0} is convex. That is, if x_1 and x_2 satisfy F(x_1) < 0 and F(x_2) < 0, then F(α x_1 + β x_2) < 0 for nonnegative scalars α and β such that α + β = 1.

    Multiple LMIs can be rewritten as a single LMI via concatenation:

        F_1(x) < 0 and F_2(x) < 0   ⟺   [F_1(x), 0; 0, F_2(x)] < 0

    Schur complement technique: Assume that Q(x) = Q^T(x), R(x) = R^T(x), and S(x) depend affinely on x. Then

        [Q(x), S(x); S^T(x), R(x)] < 0   ⟺   R(x) < 0 and Q(x) - S(x) R^{-1}(x) S^T(x) < 0

    That is, the nonlinear inequality Q(x) - S(x) R^{-1}(x) S^T(x) < 0 can be represented as an LMI.
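    The equivalence can be spot-checked numerically (a Python/NumPy sketch with hypothetical constant blocks Q, R and random couplings S):

```python
import numpy as np

def is_neg_def(M):
    """Negative definiteness via eigenvalues of a symmetric matrix."""
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

# Hypothetical fixed blocks Q, R and several random couplings S;
# the block test and the Schur-complement test always agree.
rng = np.random.default_rng(0)
Q = -5.0 * np.eye(2)
R = -3.0 * np.eye(2)
agree = True
for _ in range(10):
    S = 2.0 * rng.standard_normal((2, 2))
    block = is_neg_def(np.block([[Q, S], [S.T, R]]))
    schur = is_neg_def(R) and is_neg_def(Q - S @ np.linalg.inv(R) @ S.T)
    agree = agree and (block == schur)
print(agree)  # True
```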

    Let Z(x) be a matrix depending affinely on x. The constraint ||Z(x)|| < 1 is equivalent to I - Z(x) Z^T(x) > 0 and can be represented as the LMI

        [I, Z(x); Z^T(x), I] > 0

    Let c(x) be a vector and P(x) a symmetric matrix, both affine in x. The constraints P(x) > 0 and c^T(x) P^{-1}(x) c(x) < 1 can be represented as the LMI

        [P(x), c(x); c^T(x), 1] > 0

    The constraint

        trace S^T(x) P^{-1}(x) S(x) < 1,   P(x) > 0

    where P(x) = P^T(x) and S(x) depend affinely on x, can be restated as

        trace Q < 1,   S^T(x) P^{-1}(x) S(x) < Q,   P(x) > 0

    and hence as

        trace Q < 1,   [Q, S^T(x); S(x), P(x)] > 0

    40

  • Orthogonal complement of a matrix. Let P ∈ R^{n×m} be of rank m < n. The orthogonal complement of P is a matrix P⊥ ∈ R^{n×(n-m)} such that

        P⊥^T P = 0

    and [P, P⊥] is invertible.

    (Finsler's lemma). Let P ∈ R^{n×m} and R = R^T ∈ R^{n×n}, where rank(P) = m < n. Suppose P⊥ is the orthogonal complement of P. Then

        σ P P^T + R < 0

    for some real σ if and only if

        P⊥^T R P⊥ < 0

    To see the above, note that [P, P⊥] is invertible, so the congruence transformation preserves definiteness:

        σ P P^T + R < 0
        ⟺ [P, P⊥]^T (σ P P^T + R) [P, P⊥] < 0
        ⟺ [σ P^T P P^T P + P^T R P, P^T R P⊥; P⊥^T R P, P⊥^T R P⊥] < 0
        ⟺ P⊥^T R P⊥ < 0 and σ (P^T P P^T P) + P^T R P - P^T R P⊥ (P⊥^T R P⊥)^{-1} P⊥^T R P < 0
        ⟺ P⊥^T R P⊥ < 0

    where the last step uses the Schur complement and the fact that P^T P P^T P > 0, so the remaining inequality can always be met by taking σ sufficiently negative.
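    A numerical sanity check of Finsler's lemma (a Python/SciPy sketch; P and R are hypothetical random data, with R constructed so that P⊥^T R P⊥ < 0 holds):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
n, m = 4, 2
P = rng.standard_normal((n, m))   # generically of rank m < n
Pperp = null_space(P.T)           # orthonormal columns with Pperp^T P = 0

# Build a hypothetical symmetric R satisfying Pperp^T R Pperp < 0.
M = rng.standard_normal((n, n)); M = (M + M.T) / 2.0
R = M - (np.linalg.norm(M, 2) + 1.0) * (Pperp @ Pperp.T)
assert np.all(np.linalg.eigvalsh(Pperp.T @ R @ Pperp) < 0)

# Finsler's lemma then guarantees some real sigma with sigma P P^T + R < 0;
# a sufficiently negative sigma is found by doubling.
sigma = -1.0
while not np.all(np.linalg.eigvalsh(sigma * (P @ P.T) + R) < 0):
    sigma *= 2.0
print("sigma =", sigma)
```

    The doubling loop terminates precisely because the projected inequality holds, illustrating the "sufficiently negative σ" step of the proof.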

    Given P ∈ R^{n×m} (rank m < n), Q ∈ R^{n×l} (rank l < n), and R = R^T ∈ R^{n×n}, there exists K ∈ R^{m×l} such that

        R + P K Q^T + Q K^T P^T < 0

    if and only if

        P⊥^T R P⊥ < 0 and Q⊥^T R Q⊥ < 0

    where P⊥ and Q⊥ are the orthogonal complements of P and Q, respectively.

    41

  • Stability and Norm Computation: LMI ApproachJ.C. Juang

    Lyapunov stability. The system

        ẋ = A x

    is stable if and only if all the eigenvalues of A are in the open left half plane, if and only if there exists an X > 0 such that

        A X + X A^T < 0
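    A minimal certificate check (a Python/SciPy sketch; rather than calling an LMI solver, it solves the Lyapunov equation A X + X A^T = -I and tests X > 0, which suffices for this particular stability test; the two example matrices are hypothetical):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyapunov_certificate(A):
    """Seek X > 0 with A X + X A^T < 0 by solving A X + X A^T = -I.
    If A is stable the solution is positive definite; otherwise it is not."""
    X = solve_continuous_lyapunov(A, -np.eye(A.shape[0]))
    return X, bool(np.all(np.linalg.eigvalsh((X + X.T) / 2.0) > 0))

A_stable = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2
A_unstable = np.array([[0.0, 1.0], [2.0, 1.0]])   # eigenvalues 2, -1
ok_stable = lyapunov_certificate(A_stable)[1]
ok_unstable = lyapunov_certificate(A_unstable)[1]
print(ok_stable, ok_unstable)  # True False
```

    General LMI problems, including the norm conditions below, need a dedicated solver (e.g., the LMI toolbox mentioned in the exercises); the equation form is enough here because the right-hand side -I can be fixed in advance.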

    H∞ Norm Computation. The H∞ norm of the system (A, B, C, D) is less than γ

    if and only if γ² I > D^T D and there exists an X > 0 such that

        A^T X + X A + C^T C + (X B + C^T D)(γ² I - D^T D)^{-1}(B^T X + D^T C) = 0

    if and only if γ² I > D^T D and there exists an X > 0 such that

        A^T X + X A + C^T C + (X B + C^T D)(γ² I - D^T D)^{-1}(B^T X + D^T C) < 0

    if and only if there exists an X > 0 such that

        [A^T X + X A + C^T C, X B + C^T D; B^T X + D^T C, D^T D - γ² I] < 0

    if and only if there exists an X > 0 such that

        [A^T X + X A, X B, C^T; B^T X, -γ I, D^T; C, D, -γ I] < 0

    if and only if there exists a Y > 0 such that

        [Y A^T + A Y, B, Y C^T; B^T, -γ I, D^T; C Y, D, -γ I] < 0

    42

  • Recommended Matlab ExercisesJ.C. Juang

    Understand the operations of Matlab, the control toolbox, the µ-synthesis toolbox, and the LMI toolbox

    How to represent system matrices?
    How to compute norms and singular values of matrices?
    How to compute the norm of a signal?
    How to determine the poles and zeros?
    How to perform system algebra?
    How to solve a Lyapunov equation?
    How to compute the norm of a system?
    How to solve an LMI?

    43