
Solution Manual for

Adaptive Control, Second Edition

Karl Johan Åström
Björn Wittenmark


Preface

This Solution Manual contains solutions to selected problems in the second edition of Adaptive Control, published by Addison-Wesley 1995, ISBN 0-201-55866-1.


PROBLEM SOLUTIONS

SOLUTIONS TO CHAPTER 1

1.5 Linearization of the valve shows that

$$\Delta v = 4v_0^3\,\Delta u$$

The loop transfer function is then

$$G_0(s)\,G_{PI}(s)\,4v_0^3$$

where $G_{PI}$ is the transfer function of a PI controller, i.e.

$$G_{PI}(s) = K\left(1 + \frac{1}{sT_i}\right)$$

The characteristic equation for the closed-loop system is

$$sT_i(s+1)^3 + K\cdot 4v_0^3\,(sT_i + 1) = 0$$

With $K = 0.15$ and $T_i = 1$ we get

$$(s+1)\big(s(s+1)^2 + 0.6v_0^3\big) = 0$$

$$(s+1)\big(s^3 + 2s^2 + s + 0.6v_0^3\big) = 0$$

The root locus of this equation with respect to $v_0$ is sketched in Fig. 1. According to the Routh-Hurwitz criterion the critical case is

$$0.6v_0^3 = 2 \quad\Rightarrow\quad v_0 = \sqrt[3]{10/3} = 1.49$$

Since the plant $G_0$ has unit static gain and the controller has integral action, the steady-state value of $v$ is equal to the set point, $v_0 = y_r$. The closed-loop system is therefore stable for $y_r = u_c = 0.3$ and $1.1$ but unstable for $y_r = u_c = 5.1$. Compare with Fig. 1.9.


Figure 1. Root locus in Problem 1.5 (real axis vs. imaginary axis).
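A quick numerical check of the critical operating point (my own sketch, assuming numpy is available; it is not part of the original solution). The factor $0.6 = 4K$ with $K = 0.15$.

import numpy as np

# Characteristic polynomial factor s^3 + 2 s^2 + s + 0.6 v0^3 from Problem 1.5.
# Sweep the operating point v0 and check when a root crosses into the right half plane.
for v0 in (0.3, 1.1, 1.49, 5.1):
    roots = np.roots([1.0, 2.0, 1.0, 0.6 * v0**3])
    print(f"v0 = {v0:4.2f}  max Re(root) = {roots.real.max():+.3f}  stable = {bool(np.all(roots.real < 0))}")

# Routh-Hurwitz for s^3 + 2 s^2 + s + 0.6 v0^3: stability requires 2*1 > 0.6 v0^3,
# i.e. v0 < (10/3)^(1/3)
print("critical v0 =", (10.0 / 3.0) ** (1.0 / 3.0))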

1.6 Tune the controller using the Ziegler-Nichols closed-loop method. The frequency $\omega_u$, where the process has 180° phase lag, is first determined. The controller parameters are then given by Table 8.2 on page 382, where

$$K_u = \frac{1}{|G_0(i\omega_u)|}$$

We have

$$G_0(s) = \frac{e^{-s/q}}{1 + s/q} \qquad \arg G_0(i\omega) = -\frac{\omega}{q} - \arctan\frac{\omega}{q} = -\pi$$

q     ω_u    |G_0(iω_u)|    K    T_i
0.5   1.0    0.45           1    5.2
1     2.0    0.45           1    2.6
2     4.1    0.45           1    1.3

A simulation of the system obtained when the controller is tuned for the smallest flow q = 0.5 is shown in Fig. 2. The Ziegler-Nichols method is not the best tuning method in this case.


Figure 2. Simulation in Problem 1.6. Process output and control signal are shown for q = 0.5 (full), q = 1 (dashed), and q = 2 (dotted). The controller is designed for q = 0.5.

In Fig. 3 we show results for a controller designed for q = 1, and in Fig. 4 for a controller designed for q = 2.
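A small numerical sketch of mine (numpy and scipy assumed; not part of the original solution) that reproduces the table above: the ultimate frequency solves $\omega/q + \arctan(\omega/q) = \pi$, and the classical Ziegler-Nichols PI rules $K = 0.45K_u$, $T_i = T_u/1.2$ are then applied.

import numpy as np
from scipy.optimize import brentq

for q in (0.5, 1.0, 2.0):
    # Phase condition: -w/q - arctan(w/q) = -pi, i.e. x + arctan(x) = pi with x = w/q
    x = brentq(lambda x: x + np.arctan(x) - np.pi, 1e-6, 10.0)
    wu = x * q
    gain = 1.0 / np.sqrt(1.0 + x**2)        # |G0(i wu)|; the delay term has unit magnitude
    Ku = 1.0 / gain
    Tu = 2.0 * np.pi / wu
    K, Ti = 0.45 * Ku, Tu / 1.2             # Ziegler-Nichols PI rules
    print(f"q = {q:3.1f}  wu = {wu:4.2f}  |G0| = {gain:4.2f}  K = {K:3.1f}  Ti = {Ti:3.1f}")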

1.7 Introducing the feedback

$$u = -k_2y_2$$

the system becomes

$$\frac{dx}{dt} = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & -1 \end{pmatrix}x - k_2\begin{pmatrix} 0 \\ 2 \\ 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 1 \end{pmatrix}x + \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}u_1$$

$$y_1 = \begin{pmatrix} 1 & 1 & 0 \end{pmatrix}x$$

The transfer function from $u_1$ to $y_1$ is

$$G(s) = \begin{pmatrix} 1 & 1 & 0 \end{pmatrix}\begin{pmatrix} s+1 & 0 & 0 \\ 2k_2 & s+3 & 2k_2 \\ k_2 & 0 & s+1+k_2 \end{pmatrix}^{-1}\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \frac{s^2 + (4-k_2)s + 3 + k_2}{(s+1)(s+3)(s+1+k_2)}$$


Figure 3. Simulation in Problem 1.6. Process output and control signal are shown for q = 0.5 (full), q = 1 (dashed), and q = 2 (dotted). The controller is designed for q = 1.

The static gain is

$$G(0) = \frac{3 + k_2}{3(1 + k_2)}$$
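As a sketch of mine (sympy assumed; not part of the original solution), the transfer function and static gain above can be checked symbolically:

import sympy as sp

s, k2 = sp.symbols('s k2')
A = sp.diag(-1, -3, -1) - k2 * sp.Matrix([0, 2, 1]) * sp.Matrix([[1, 0, 1]])
B = sp.Matrix([1, 0, 0])
C = sp.Matrix([[1, 1, 0]])
G = sp.simplify((C * (s * sp.eye(3) - A).inv() * B)[0, 0])
print(sp.factor(G))               # should match (s^2 + (4-k2)s + 3 + k2)/((s+1)(s+3)(s+1+k2))
print(sp.simplify(G.subs(s, 0)))  # static gain, should match (3 + k2)/(3(1 + k2))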


Figure 4. Simulation in Problem 1.6. Process output and control signal are shown for q = 0.5 (full), q = 1 (dashed), and q = 2 (dotted). The controller is designed for q = 2.


SOLUTIONS TO CHAPTER 2

2.1 The function V can be written as

$$V(x_1,\ldots,x_n) = \sum_{i,j=1}^{n} x_ix_j(a_{ij} + a_{ji})/2 + \sum_{i=1}^{n} b_ix_i + c$$

Taking the derivative with respect to $x_i$ we get

$$\frac{\partial V}{\partial x_i} = \sum_{j=1}^{n}(a_{ij} + a_{ji})x_j + b_i$$

In vector notation this can be written as

$$\operatorname{grad}_x V(x) = (A + A^T)x + b$$

2.2 The model is

$$y_t = \varphi_t^T\theta + e_t = \begin{pmatrix} u_t & u_{t-1} \end{pmatrix}\begin{pmatrix} b_0 \\ b_1 \end{pmatrix} + e_t$$

The least squares estimate is given as the solution of the normal equation

$$\hat\theta = (\Phi^T\Phi)^{-1}\Phi^TY = \begin{pmatrix} \sum u_t^2 & \sum u_tu_{t-1} \\ \sum u_tu_{t-1} & \sum u_{t-1}^2 \end{pmatrix}^{-1}\begin{pmatrix} \sum u_ty_t \\ \sum u_{t-1}y_t \end{pmatrix}$$

(a) Input is a unit step

$$u_t = \begin{cases} 1 & t \ge 0 \\ 0 & \text{otherwise} \end{cases}$$

Evaluating the sums we get

$$\hat\theta = \begin{pmatrix} N & N-1 \\ N-1 & N-1 \end{pmatrix}^{-1}\begin{pmatrix} \sum_1^N y_t \\ \sum_2^N y_t \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ -1 & \dfrac{N}{N-1} \end{pmatrix}\begin{pmatrix} \sum_1^N y_t \\ \sum_2^N y_t \end{pmatrix}$$

$$\hat\theta = \begin{pmatrix} y_1 \\ \dfrac{1}{N-1}\sum_2^N y_t - y_1 \end{pmatrix}$$

The estimation error is

$$\hat\theta - \theta = \begin{pmatrix} e_1 \\ \dfrac{1}{N-1}\sum_2^N e_t - e_1 \end{pmatrix}$$


Hence

$$E(\hat\theta - \theta)(\hat\theta - \theta)^T = (\Phi^T\Phi)^{-1}\cdot 1 = \begin{pmatrix} 1 & -1 \\ -1 & \dfrac{N}{N-1} \end{pmatrix} \to \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$$

when $N\to\infty$. Notice that the variances of the estimates do not go to zero as $N\to\infty$. Consider, however, the estimate of $b_0 + b_1$.

$$E(\hat b_0 + \hat b_1 - b_0 - b_1)^2 = \begin{pmatrix} 1 & 1 \end{pmatrix}\begin{pmatrix} 1 & -1 \\ -1 & \dfrac{N}{N-1} \end{pmatrix}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \frac{1}{N-1}$$

With a step input it is thus possible to determine the combination $b_0 + b_1$ consistently. The individual values of $b_0$ and $b_1$ can, however, not be determined consistently.

(b) Input u is white noise with $Eu^2 = 1$ and u is independent of e.

$$Eu_t^2 = 1 \qquad Eu_tu_{t-1} = 0$$

$$\operatorname{cov}(\hat\theta - \theta) = 1\cdot E(\Phi^T\Phi)^{-1} = \begin{pmatrix} N & 0 \\ 0 & N-1 \end{pmatrix}^{-1} = \begin{pmatrix} \dfrac{1}{N} & 0 \\ 0 & \dfrac{1}{N-1} \end{pmatrix}$$

In this case it is thus possible to determine both parameters consistently.
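A Monte Carlo illustration of my own (numpy assumed; not part of the original solution): with a step input only the sum $b_0 + b_1$ is estimated consistently, whereas a white-noise input gives consistent estimates of both parameters.

import numpy as np

rng = np.random.default_rng(0)
b0, b1, N, runs = 1.0, 0.5, 500, 200

def ls_estimate(u, e):
    # y_t = b0*u_t + b1*u_{t-1} + e_t, regressors (u_t, u_{t-1})
    y = b0 * u[1:] + b1 * u[:-1] + e[1:]
    Phi = np.column_stack([u[1:], u[:-1]])
    return np.linalg.lstsq(Phi, y, rcond=None)[0]

step = lambda: np.r_[0.0, np.ones(N - 1)]          # unit step applied at t = 1
noise = lambda: rng.standard_normal(N)
for name, make_u in [("step input", step), ("white-noise input", noise)]:
    est = np.array([ls_estimate(make_u(), rng.standard_normal(N)) for _ in range(runs)])
    print(f"{name:18s} std(b0,b1) = {est.std(axis=0).round(3)}  std(b0+b1) = {est.sum(axis=1).std():.3f}")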

2.3 Data generating process:

$$y(t) = b_0u(t) + b_1u(t-1) + e(t) = \varphi^T(t)\theta_0 + \bar e(t)$$

where $\varphi^T(t) = u(t)$, $\theta_0 = b_0$, and

$$\bar e(t) = b_1u(t-1) + e(t)$$

Model:

$$\hat y(t) = \hat bu(t)$$

or

$$y(t) = \hat bu(t) + \varepsilon(t) = \varphi^T(t)\theta + \varepsilon(t)$$

where $\varepsilon(t) = y(t) - \hat y(t)$.

The least squares estimate is given by

$$\Phi^T\Phi(\hat\theta - \theta_0) = \Phi^TE_d \qquad E_d = \begin{pmatrix} \bar e(1) \\ \vdots \\ \bar e(N) \end{pmatrix}$$


$$\frac{1}{N}\Phi^T\Phi = \frac{1}{N}\sum_1^N u^2(t) \to Eu^2, \quad N\to\infty$$

$$\frac{1}{N}\Phi^TE_d = \frac{1}{N}\sum_1^N u(t)\bar e(t) = \frac{1}{N}\sum_1^N u(t)\big(b_1u(t-1) + e(t)\big) \to b_1E\big(u(t)u(t-1)\big) + E\big(u(t)e(t)\big), \quad N\to\infty$$

(a)

$$u(t) = \begin{cases} 1 & t \ge 1 \\ 0 & t < 1 \end{cases}$$

$$E(u^2) = 1 \qquad E\big(u(t)u(t-1)\big) = 1 \qquad Eu(t)e(t) = 0$$

Hence

$$\hat b = \hat\theta \to \theta_0 + b_1 = b_0 + b_1, \quad N\to\infty$$

i.e. $\hat b$ converges to the stationary gain.

(b)

$$u(t) \in N(0,\sigma) \;\Rightarrow\; Eu^2 = \sigma^2 \qquad Eu(t)u(t-1) = 0 \qquad Eu(t)e(t) = 0$$

Hence

$$\hat b \to b_0, \quad N\to\infty$$

2.6 The model is

$$y_t = \varphi_t^T\theta + \varepsilon_t = \underbrace{\begin{pmatrix} -y_{t-1} & u_{t-1} \end{pmatrix}}_{\varphi_t^T}\underbrace{\begin{pmatrix} a \\ b \end{pmatrix}}_{\theta} + \underbrace{e_t + ce_{t-1}}_{\varepsilon_t}$$

The least squares estimate is given by the solution to the normal equation (2.5). The estimation error is

$$\hat\theta - \theta = (\Phi^T\Phi)^{-1}\Phi^T\varepsilon = \begin{pmatrix} \sum y_{t-1}^2 & -\sum y_{t-1}u_{t-1} \\ -\sum y_{t-1}u_{t-1} & \sum u_{t-1}^2 \end{pmatrix}^{-1}\begin{pmatrix} -\sum y_{t-1}e_t - c\sum y_{t-1}e_{t-1} \\ \sum u_{t-1}e_t + c\sum u_{t-1}e_{t-1} \end{pmatrix}$$

Notice that $\Phi^T$ and $\varepsilon$ are not independent. $u_t$ and $e_t$ are independent, $y_t$ depends on $e_t, e_{t-1}, e_{t-2},\ldots$ and $y_t$ depends on $u_{t-1}, u_{t-2},\ldots$. Taking mean values we get

$$E(\hat\theta - \theta) = E(\Phi^T\Phi)^{-1}E(\Phi^T\varepsilon)$$

To evaluate this expression we calculate

$$E\begin{pmatrix} \sum y_{t-1}^2 & -\sum y_{t-1}u_{t-1} \\ -\sum y_{t-1}u_{t-1} & \sum u_{t-1}^2 \end{pmatrix} = N\begin{pmatrix} Ey_t^2 & 0 \\ 0 & Eu_t^2 \end{pmatrix}$$


and

$$E\begin{pmatrix} -\sum y_{t-1}e_t - c\sum y_{t-1}e_{t-1} \\ \sum u_{t-1}e_t + c\sum u_{t-1}e_{t-1} \end{pmatrix} = \begin{pmatrix} -cN\,Ey_{t-1}e_{t-1} \\ 0 \end{pmatrix}$$

$$Ey_{t-1}e_{t-1} = E\big(-ay_{t-2} + bu_{t-2} + e_{t-1} + ce_{t-2}\big)e_{t-1} = \sigma^2$$

Since

$$y_t = \frac{b}{q+a}u_t + \frac{q+c}{q+a}e_t$$

$$Ey_t^2 = \frac{b^2}{1-a^2} + \frac{1-2ac+c^2}{1-a^2}\sigma^2 \qquad Eu_t^2 = 1 \qquad Ee_t^2 = \sigma^2$$

we get

$$E(\hat a - a) = \frac{-\sigma^2c(1-a^2)}{b^2 + (1-2ac+c^2)\sigma^2} \qquad E(\hat b - b) = 0$$
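A Monte Carlo check of the bias formula above (my own sketch, numpy assumed; not part of the original solution), with illustrative values a = 0.5, b = 1, c = 0.8, σ = 1. The empirical and predicted biases should agree approximately.

import numpy as np

rng = np.random.default_rng(1)
a, b, c, sigma, N, runs = 0.5, 1.0, 0.8, 1.0, 2000, 200

biases = []
for _ in range(runs):
    u = rng.standard_normal(N)
    e = sigma * rng.standard_normal(N)
    y = np.zeros(N)
    for t in range(1, N):
        y[t] = -a * y[t-1] + b * u[t-1] + e[t] + c * e[t-1]   # y_t = phi_t^T theta + e_t + c e_{t-1}
    Phi = np.column_stack([-y[:-1], u[:-1]])
    ahat, bhat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
    biases.append(ahat - a)

pred = -sigma**2 * c * (1 - a**2) / (b**2 + (1 - 2*a*c + c**2) * sigma**2)
print("empirical bias of a-hat:", round(float(np.mean(biases)), 3), " predicted:", round(pred, 3))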

2.8 The model is

$$y(t) = a + bt + e(t) = \varphi^T\theta + e(t)$$

$$\varphi = \begin{pmatrix} 1 \\ t \end{pmatrix} \qquad \theta = \begin{pmatrix} a \\ b \end{pmatrix}$$

According to Theorem 2.1 the solution is given by equation (2.6), i.e.

$$\hat\theta = (\Phi^T\Phi)^{-1}\Phi^TY$$

where

$$\Phi^T = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & 2 & 3 & \cdots & N \end{pmatrix} \qquad Y = \begin{pmatrix} y(1) \\ y(2) \\ \vdots \\ y(N) \end{pmatrix}$$

Hence

$$\hat\theta = \begin{pmatrix} \sum_{t=1}^N 1 & \sum_{t=1}^N t \\ \sum_{t=1}^N t & \sum_{t=1}^N t^2 \end{pmatrix}^{-1}\begin{pmatrix} \sum_{t=1}^N y(t) \\ \sum_{t=1}^N ty(t) \end{pmatrix} = \begin{pmatrix} \dfrac{2}{N(N-1)}\big((2N+1)s_0 - 3s_1\big) \\ \dfrac{6}{N(N+1)(N-1)}\big(-(N+1)s_0 + 2s_1\big) \end{pmatrix}$$


where we have made use of

$$\sum_{t=1}^N t = \frac{N(N+1)}{2} \qquad \sum_{t=1}^N t^2 = \frac{N(N+1)(2N+1)}{6}$$

and introduced

$$s_0 = \sum_{t=1}^N y(t) \qquad s_1 = \sum_{t=1}^N ty(t)$$

The covariance of the estimate is given by

$$\operatorname{cov}(\hat\theta) = \sigma^2(\Phi^T\Phi)^{-1} = \frac{12\sigma^2}{N(N+1)(N-1)}\begin{pmatrix} \dfrac{(N+1)(2N+1)}{6} & -\dfrac{N+1}{2} \\ -\dfrac{N+1}{2} & 1 \end{pmatrix}$$

Notice that the variance of $\hat b$ decreases as $N^{-3}$ for large N but the variance of $\hat a$ decreases as $N^{-1}$. The reason for this is that the regressor associated with a is 1 but the regressor associated with b is t. Notice that there are better numerical methods to solve for $\hat\theta$!
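An empirical check of the covariance formula and its scaling with N (my own sketch, numpy assumed; not part of the original solution):

import numpy as np

rng = np.random.default_rng(2)
a, b, sigma = 1.0, 0.2, 1.0
for N in (50, 100, 200):
    t = np.arange(1, N + 1)
    Phi = np.column_stack([np.ones(N), t])
    est = []
    for _ in range(2000):
        y = a + b * t + sigma * rng.standard_normal(N)
        est.append(np.linalg.lstsq(Phi, y, rcond=None)[0])
    est = np.array(est)
    theory = sigma**2 * np.linalg.inv(Phi.T @ Phi)      # sigma^2 (Phi^T Phi)^{-1}
    print(f"N={N}: var(a-hat) = {est[:,0].var():.4f} (theory {theory[0,0]:.4f}), "
          f"var(b-hat) = {est[:,1].var():.2e} (theory {theory[1,1]:.2e})")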

2.17

(a) The following derivation gives a formula for the asymptotic LS estimate

$$\hat b = (\Phi^T\Phi)^{-1}\Phi^TY = \Big(\sum_{k=1}^N\varphi(k-1)^2\Big)^{-1}\Big(\sum_{k=1}^N\varphi(k-1)y(k)\Big) = \Big(\frac{1}{N}\sum_{k=1}^Nu(k-1)^2\Big)^{-1}\Big(\frac{1}{N}\sum_{k=1}^Nu(k-1)y(k)\Big) \to \big(E(u(k-1)^2)\big)^{-1}E\big(u(k-1)y(k)\big), \quad\text{as } N\to\infty$$

The equations for the closed loop system are

$$u(k) = K\big(u_c(k) - y(k)\big) \qquad y(k) + ay(k-1) = bu(k-1)$$

The signals u(k) and y(k) are stationary signals. This follows since the controller gain is chosen so that the closed loop system is stable. It then follows that $E(u(k-1)^2) = E(u(k)^2)$ and $E(u(k-1)y(k)) = E(bu(k-1)^2) = bE(u(k)^2)$ exist, and the asymptotic LS estimate becomes $\hat b = b$, i.e. we have an unbiased estimate.


Figure 5. The system redrawn.

(b) Similarly to (a), we get

$$\Big(\frac{1}{N}\sum_{k=1}^Nu(k-1)^2\Big)^{-1}\Big(\frac{1}{N}\sum_{k=1}^Nu(k-1)y(k)\Big) \to \big((u^2(k-1))^0\big)^{-1}\big(u(k-1)y(k)\big)^0, \quad\text{as } N\to\infty$$

where $(\cdot)^0$ denotes the stationary value of the argument. We have

$$(u^2(k-1))^0 = \big((u(k))^0\big)^2 \qquad (u(k-1)y(k))^0 = (u(k))^0\,b\,\big((u(k))^0 + d^0\big)$$

$$(u(k))^0 = H_{ud}(1)d^0 = \frac{-Kb}{1+a+Kb}d^0$$

and the asymptotic LS estimate becomes

$$\hat b = \big((u^2(k-1))^0\big)^{-1}\big(u(k-1)y(k)\big)^0 = b\left(1 + \frac{d^0}{(u(k))^0}\right) = b\left(1 - \frac{1+a+Kb}{Kb}\right) = -\frac{1+a}{K}$$

How do we interpret this result? The system may be redrawn as in Figure 5. Since $U_c = 0$, we have that $u = \frac{K}{q+a}y$, and we can regard $\frac{K}{q+a}$ as the controller for the system in Figure 5. It is then obvious that we have estimated the negative inverse of the static controller gain.

(c) Introduction of high pass regressor filters as in Figure 6 eliminates or at least reduces the influence from the disturbance d on the estimate of b. One choice of regressor filter could be $H_f(q^{-1}) = 1 - q^{-1}$, i.e. a differentiator. Another possibility would be to introduce a constant in


Figure 6. Introduction of regressor filters.

the regressor and then estimate both b and bd. The regression model is in this case

$$y(t) = \begin{pmatrix} u(t-1) & 1 \end{pmatrix}\begin{pmatrix} b \\ bd \end{pmatrix} = \varphi(t)^T\theta$$

2.18 The equations for recursive least squares are

$$y(t) = \varphi^T(t-1)\theta_0$$

$$\hat\theta(t) = \hat\theta(t-1) + K(t)\varepsilon(t) \qquad \varepsilon(t) = y(t) - \varphi^T(t-1)\hat\theta(t-1)$$

$$K(t) = P(t)\varphi(t-1) = \frac{P(t-1)\varphi(t-1)}{\lambda + \varphi^T(t-1)P(t-1)\varphi(t-1)} \qquad P(t) = \big(I - K(t)\varphi^T(t-1)\big)P(t-1)/\lambda$$

Since the quantity $P(t)\varphi(t-1)$ appears in many places it is convenient to introduce it as an auxiliary variable $w = P\varphi$. The following computer code is then obtained:

Input u, y : real
Parameter lambda : real
State phi, theta : vector
      P : symmetric matrix
Local variables w : vector, den : real

"Compute residual"
e = y - phi^T*theta
"Update estimate"
w = P*phi
den = w^T*phi + lambda
theta = theta + w*e/den
"Update covariance matrix"
P = (P - w*w^T/den)/lambda
"Update regression vectors"
phi = shift(phi)
phi(1) = -y
phi(n+1) = u
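As a concrete counterpart, here is my own Python/numpy sketch of the same update (not the book's code); the usage example below estimates a first-order ARX model as an assumption purely for illustration.

import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive least squares update with forgetting factor lam."""
    e = y - phi @ theta                    # residual
    w = P @ phi                            # auxiliary variable w = P*phi
    den = phi @ w + lam
    theta = theta + w * e / den            # update estimate
    P = (P - np.outer(w, w) / den) / lam   # update covariance matrix
    return theta, P

# Usage sketch: estimate theta0 = (a, b) in y(t) = -a y(t-1) + b u(t-1) + e(t)
rng = np.random.default_rng(0)
theta0 = np.array([0.7, 1.5])
theta, P = np.zeros(2), 100.0 * np.eye(2)
yprev, uprev = 0.0, 0.0
for t in range(500):
    u = rng.standard_normal()
    y = -theta0[0] * yprev + theta0[1] * uprev + 0.1 * rng.standard_normal()
    theta, P = rls_step(theta, P, np.array([-yprev, uprev]), y)
    yprev, uprev = y, u
print(theta)   # should be close to theta0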


SOLUTIONS TO CHAPTER 3

3.1 Given the process

$$H(z) = \frac{B(z)}{A(z)} = \frac{z + 1.2}{z^2 - z + 0.25}$$

Design specification: The closed-loop system should have poles that correspond to the following characteristic polynomial in continuous time

$$s^2 + 2s + 1 = (s+1)^2$$

This corresponds to

$$A_m(z) = z^2 + a_{m1}z + a_{m2}$$

with

$$a_{m1} = -(e^{-1} + e^{-1}) = -2e^{-1} \qquad a_{m2} = e^{-2}$$

(a) Determine an indirect STR of minimal order. The controller should have an integrator and the stationary gain should be 1.

Solution: Choose $B_m$ such that

$$\frac{B_m(1)}{A_m(1)} = 1$$

The integrator condition gives

$$R = R'(z-1)$$

We get the following conditions

$$BT = B_mA_o \quad (1) \qquad AR + BS = A_mA_o \quad (2)$$

As B is unstable we must have $B_m = BB_m'$. This makes (1) $\Leftrightarrow BT = BB_m'A_o \Leftrightarrow T = B_m'A_o$. Choose $B_m'$ such that

$$\frac{B(1)B_m'(1)}{A(1)} = 1 \;\Rightarrow\; B_m'(1) = \frac{A(1)}{B(1)}$$

The simplest way is to choose

$$B_m' = b_m' = \frac{A(1)}{B(1)} = \frac{0.25}{2.2}$$

Further we have

$$(z^2 + a_1z + a_2)(z-1)(z+r) + (b_0z + b_1)(s_0z^2 + s_1z + s_2) = (z^2 + a_{m1}z + a_{m2})(z^2 + a_{o1}z + a_{o2})$$


with $a_1 = -1$, $a_2 = 0.25$ and $a_{o1}$ and $a_{o2}$ chosen so that $A_o$ is stable. Equating coefficients gives

$$r - 1 + a_1 + b_0s_0 = a_{o1} + a_{m1}$$
$$-r + a_1(r-1) + a_2 + b_0s_1 + b_1s_0 = a_{o2} + a_{m1}a_{o1} + a_{m2}$$
$$-a_1r + a_2(r-1) + b_0s_2 + b_1s_1 = a_{m1}a_{o2} + a_{m2}a_{o1}$$
$$-a_2r + b_1s_2 = a_{m2}a_{o2}$$

or

$$\begin{pmatrix} 1 & b_0 & 0 & 0 \\ a_1-1 & b_1 & b_0 & 0 \\ a_2-a_1 & 0 & b_1 & b_0 \\ -a_2 & 0 & 0 & b_1 \end{pmatrix}\begin{pmatrix} r \\ s_0 \\ s_1 \\ s_2 \end{pmatrix} = \begin{pmatrix} a_{o1} + a_{m1} + 1 - a_1 \\ a_{o2} + a_{m1}a_{o1} + a_{m2} + a_1 - a_2 \\ a_{m1}a_{o2} + a_{m2}a_{o1} + a_2 \\ a_{m2}a_{o2} \end{pmatrix}$$

Now choose to estimate

$$\theta = \begin{pmatrix} b_0 & b_1 & a_1 & a_2 \end{pmatrix}^T$$

by equation 3.22 in the textbook.

(b) As H(z) is not minimum phase we cancel $B^- = B$ between R and S. This is difficult, see page 118 in the textbook. An indirect STR is given by Eq. 3.24.

$$A_oA_my = \bar Ru + \bar Sy$$

with $\bar R = B^-R$, $\bar S = B^-S$, $T = B_m'A_o$. Furthermore we have

$$A_oA_my_m = A_oB_mu_c = A_oBB_m'u_c = BTu_c$$

$$y = \bar R\underbrace{\frac{1}{A_oA_m}u}_{=u_f} + \bar S\underbrace{\frac{1}{A_oA_m}y}_{=y_f} \qquad y_m = T\underbrace{\frac{1}{A_oA_m}u_c}_{=u_{cf}}$$

$$\varepsilon = y - y_m = \bar Ru_f + \bar Sy_f - Tu_{cf}$$

Now estimate $\bar R$, $\bar S$ and T with a recursive method in the above equation. Then cancel B and calculate the control signal.

(c) Take a = 0 in Example 5.7, page 206 in the textbook. This gives

$$\frac{dt_0}{dt} = -\gamma u_ce \qquad \frac{ds_0}{dt} = -\gamma ye$$

with $e = y - y_m = y - G_mu_c$.
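Returning to the pole-placement design in (a), a numerical sketch of mine (numpy assumed; not part of the original solution) that solves the linear system above; the deadbeat observer $A_o(z) = z^2$ ($a_{o1} = a_{o2} = 0$) is my own illustrative choice.

import numpy as np

# Plant and design polynomials from Problem 3.1 (sampling period h = 1)
a1, a2 = -1.0, 0.25
b0, b1 = 1.0, 1.2
am1, am2 = -2.0 * np.exp(-1.0), np.exp(-2.0)
ao1, ao2 = 0.0, 0.0                      # observer Ao(z) = z^2 (an assumption for illustration)

M = np.array([[1.0,      b0,  0.0, 0.0],
              [a1 - 1.0, b1,  b0,  0.0],
              [a2 - a1,  0.0, b1,  b0 ],
              [-a2,      0.0, 0.0, b1 ]])
rhs = np.array([ao1 + am1 + 1.0 - a1,
                ao2 + am1 * ao1 + am2 + a1 - a2,
                am1 * ao2 + am2 * ao1 + a2,
                am2 * ao2])
r, s0, s1, s2 = np.linalg.solve(M, rhs)
print(f"R(z) = (z - 1)(z + {r:.3f}),  S(z) = {s0:.3f} z^2 + {s1:.3f} z + {s2:.3f}")

# Check the Diophantine identity A R + B S = Am Ao
A = np.array([1.0, a1, a2]); B = np.array([b0, b1])
R = np.convolve([1.0, -1.0], [1.0, r]); S = np.array([s0, s1, s2])
lhs = np.polyadd(np.convolve(A, R), np.convolve(B, S))
print("A R + B S =", np.round(lhs, 6))
print("Am Ao     =", np.round(np.convolve([1.0, am1, am2], [1.0, ao1, ao2]), 6))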


3.3 The process has the transfer function

$$G(s) = \frac{b}{s+a}\cdot\frac{q}{s+p}$$

where p and q are known. The desired closed loop system has the transfer function

$$G_m(s) = \frac{\omega^2}{s^2 + 2\zeta\omega s + \omega^2}$$

Since a discrete time controller is used the transfer functions are sampled. We get

$$H(z) = \frac{b_0z + b_1}{z^2 + a_1z + a_2} \qquad H_m(z) = \frac{b_{m0}z + b_{m1}}{z^2 + a_{m1}z + a_{m2}}$$

The fact that p and q are known implies that one of the poles of H is known. This information will be disregarded. In the following we will assume

$$G(s) = \frac{1}{s(s+1)}$$

With h = 0.2 this gives

$$H(z) = \frac{0.0187(z + 0.936)}{(z-1)(z-0.819)}$$

Furthermore we will assume that $\omega = 2$ and $\zeta = 0.7$.

Indirect STR: The parameters of a general pulse transfer function of second order are estimated by recursive least squares (see page 51). We have

$$\theta = \begin{pmatrix} b_0 & b_1 & a_1 & a_2 \end{pmatrix}^T \qquad \varphi(t) = \begin{pmatrix} u(t-1) & u(t-2) & -y(t-1) & -y(t-2) \end{pmatrix}$$

The controller is calculated by solving the Diophantine equation. We look at two cases.

1. B canceled:

$$(z^2 + a_1z + a_2)\cdot 1 + b_0(s_0z + s_1) = z^2 + a_{m1}z + a_{m2}$$

$$z^1:\; a_1 + b_0s_0 = a_{m1} \;\Rightarrow\; s_0 = \frac{a_{m1} - a_1}{b_0} \qquad z^0:\; a_2 + b_0s_1 = a_{m2} \;\Rightarrow\; s_1 = \frac{a_{m2} - a_2}{b_0}$$


The controller is thus given by

$$R(z) = z + b_1/b_0 \qquad S(z) = s_0z + s_1 \qquad T(z) = t_0z \quad\text{where}\quad t_0 = \frac{1 + a_{m1} + a_{m2}}{b_0}$$

2. B not canceled: The design equation becomes

$$(z^2 + a_1z + a_2)(z + r_1) + (b_0z + b_1)(s_0z + s_1) = (z^2 + a_{m1}z + a_{m2})(z + a_{o1})$$

Identification of coefficients of equal powers of z gives

$$z^2:\; a_1 + r_1 + b_0s_0 = a_{m1} + a_{o1}$$
$$z^1:\; a_2 + a_1r_1 + b_1s_0 + b_0s_1 = a_{m1}a_{o1} + a_{m2}$$
$$z^0:\; a_2r_1 + b_1s_1 = a_{m2}a_{o1}$$

The solution to these linear equations is

$$r_1 = \frac{b_1^2n_1 - b_0b_1n_2 + a_{o1}a_{m2}b_0^2}{b_0^2a_2 - a_1b_0b_1 + b_1^2} \qquad s_0 = \frac{n_1 - r_1}{b_0} \qquad s_1 = \frac{b_0n_2 - b_1n_1 - r_1(a_1b_0 - b_1)}{b_0^2}$$

where

$$n_1 = a_{m1} + a_{o1} - a_1 \qquad n_2 = a_{m1}a_{o1} + a_{m2} - a_2$$

The solution exists if the denominator of $r_1$ is different from zero, which means that there is no pole-zero cancellation. It is helpful to have access to computer algebra for these problems, e.g. Macsyma, Maple or Mathematica! Figure 7 shows a simulation of the controller obtained when the polynomial B is canceled. Notice the "ringing" of the control signal, which is typical for cancellation of a poorly damped zero. In this case we have z = -0.936. In Figure 8 we show a simulation of the controller with no cancellation. This is clearly the way to solve the problem.

Direct STR: To obtain a direct self-tuning regulator we start with the design equation

$$AR + BS = A_mA_oB^+$$

Hence

$$B^+A_mA_oy = ARy + BSy = BRu + BSy$$

$$y = R\underbrace{\left(\frac{B^-}{A_oA_m}u\right)}_{u_f} + S\underbrace{\left(\frac{B^-}{A_oA_m}y\right)}_{y_f}$$


Figure 7. Simulation in Problem 3.3. Process output and control signal are shown for the indirect self-tuning regulator when the process zero is canceled.

From this model R and S can be estimated. The polynomial T is then given by

$$T = \frac{t_oA_oB_m}{B^-}$$

where $t_o$ is chosen to give the correct steady state gain. Again we separate two cases.

1. Cancel B: If the polynomial B is canceled we have $B^+ = z + b_1/b_0$, $B^- = b_0$. From the analysis of the indirect STR we know that no observer is needed in this case and that the controller has the structure $\deg R = \deg S = 1$. Hence

$$y(t) = R\left(\frac{b_o}{A_m}u(t)\right) + S\left(\frac{b_o}{A_m}y(t)\right)$$

Since $b_o$ is not known we include it in the polynomials R and S and estimate it. The polynomial R is then not monic. We have

$$y(t) = (r_0q + r_1)\left(\frac{1}{A_m}u(t)\right) + (s_0q + s_1)\left(\frac{1}{A_m}y(t)\right)$$

To obtain a direct STR we thus estimate

$$\theta = \begin{pmatrix} r_0 & r_1 & s_0 & s_1 \end{pmatrix}^T$$


Figure 8. Simulation in Problem 3.3. Process output and control signal are shown for the indirect self-tuning regulator when the process zero is not canceled.

by RLS. The case $r_0 = 0$ must be taken care of separately. Furthermore T has the form $T(q) = t_0q$ where

$$\frac{BT}{AR + BS} = \frac{Bt_0q}{\underbrace{B^+b_o}_{B}A_m} = \frac{t_0q}{A_m}$$

To get unit steady state gain choose

$$t_0 = A_m(1)$$

A simulation of the system is shown in Fig. 9. We see the typical ringing phenomena obtained with a controller that cancels a poorly damped zero. To avoid this we will develop an algorithm where the process zero is not canceled.

2. No cancellation of process zero: We then have $B^+ = 1$ and $B^- = b_0q + b_1$. From the analysis of the indirect STR we know that a first order observer is required, i.e. $A_0 = q + a_{o1}$. We have as before

$$y = R\underbrace{\left(\frac{B^-}{A_oA_m}u\right)}_{u_f} + S\underbrace{\left(\frac{B^-}{A_oA_m}y\right)}_{y_f} \qquad (*)$$


Figure 9. Simulation in Problem 3.3. Process output and control signal are shown for the direct self-tuning regulator when the process zero is canceled.

Since $B^-$ is not known we can, however, not calculate $u_f$ and $y_f$. One possibility is to rewrite (*) as

$$y = \underbrace{RB^-}_{R'}\left(\frac{1}{A_oA_m}u\right) + \underbrace{SB^-}_{S'}\left(\frac{1}{A_oA_m}y\right)$$

and to estimate R' and S' as second order polynomials and to cancel the common factor $B^-$ from the estimated polynomials. This is difficult because there will not be an exact cancellation. Another possibility is to use some estimate of $B^-$. A third possibility is to try to estimate $B^-R$ and $B^-S$ as a bilinear problem. In Figs. 10-13 we show simulations when the model (*) is used with

$$B^- = 1 \qquad B^- = q \qquad B^- = \frac{q + 0.4}{1.4} \qquad B^- = \frac{q - 0.4}{0.6}$$


Figure 10. Simulation in Problem 3.3. Process output and control signal are shown for the direct self-tuning regulator when the process zero is not canceled and when $B^- = 1$.

3.4 The process has the transfer function

$$G(s) = \frac{b}{s(s+1)}$$

With proportional feedback

$$u = k(u_c - y)$$

we get the closed loop transfer function

$$G_{cl}(s) = \frac{kb}{s^2 + s + kb}$$

The gain $k = 1/b$ gives the desired result. Idea for STR: Estimate b and use $k = 1/\hat b$. To estimate b introduce

$$s(s+1)y = bu$$

$$\underbrace{\frac{s(s+1)}{(s+a)^2}y}_{y_f} = b\underbrace{\frac{1}{(s+a)^2}u}_{\varphi}$$


Figure 11. Simulation in Problem 3.3. Process output and control signal are shown for the direct self-tuning regulator when the process zero is not canceled and when $B^- = q$.

The equations on page 56 give

$$\frac{d\hat b}{dt} = P\varphi e = P\varphi(y_f - \hat b\varphi) \qquad \frac{dP}{dt} = \alpha P - P\varphi\varphi^TP = \alpha P - P^2\varphi^2$$

With $\hat b(0) = 1$, $P(0) = 100$, $\alpha = -0.1$, and $a = 1$ we get the result shown in Fig. 14.
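A rough Euler-discretized sketch of mine (numpy assumed; not part of the original solution) of this continuous-time estimator; the true gain b = 0.5 and the square-wave command are my own illustrative choices. The filtered signals are generated by companion-form state equations for $1/(s+a)^2$.

import numpy as np

b_true, a, alpha = 0.5, 1.0, -0.1
dt, T = 1e-3, 20.0
bhat, P = 1.0, 100.0
p = np.zeros(2)        # process states: y = p[0], dy/dt = p[1]
w = np.zeros(2)        # filter states for phi = u/(s+a)^2, phi = w[0]
z = np.zeros(2)        # filter states z = y/(s+a)^2, used to form yf

for k in range(int(T / dt)):
    t = k * dt
    uc = 1.0 if (t % 10.0) < 5.0 else -1.0     # square-wave command
    y = p[0]
    u = (1.0 / bhat) * (uc - y)                # proportional gain k = 1/b-hat
    phi = w[0]
    yf = y - 2*a*z[1] - a*a*z[0] + z[1]        # yf = (d^2 z/dt^2 + dz/dt) = s(s+1)/(s+a)^2 * y
    e = yf - bhat * phi
    bhat += dt * P * phi * e                   # db-hat/dt = P*phi*e
    P += dt * (alpha * P - (P * phi)**2)       # dP/dt = alpha*P - P^2*phi^2
    p += dt * np.array([p[1], b_true * u - p[1]])
    w += dt * np.array([w[1], u - 2*a*w[1] - a*a*w[0]])
    z += dt * np.array([z[1], y - 2*a*z[1] - a*a*z[0]])

print("final estimate b-hat =", round(bhat, 3), " (true b =", b_true, ")")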


Figure 12. Simulation in Problem 3.3. Process output and control signal are shown for the direct self-tuning regulator when the process zero is not canceled and when $B^- = (q + 0.4)/1.4$.


Figure 13. Simulation in Problem 3.3. Process output and control signal are shown for the direct self-tuning regulator when the process zero is not canceled and when $B^- = (q - 0.4)/0.6$.


Figure 14. Simulation in Problem 3.4. Process output, control signal and estimated parameter b are shown for the indirect continuous-time self-tuning regulator.


SOLUTIONS TO CHAPTER 4

4.1 The estimate $\hat b$ may be small because of a poor estimate. One possibility is to use a projection algorithm where the estimate is restricted to be in a given range, $b_o \le \hat b \le b_1$. This requires prior information about the values of b. Another possibility is to replace

$$\frac{1}{\hat b} \quad\text{by}\quad \frac{\hat b}{\hat b^2 + P}$$

where P is the variance of the estimate. Compare with the discussion of cautious control on pages 356-358.

4.10 Using (4.21) the output can be written as

$$y_{t+d} = \frac{R^*}{C^*}u_t + \frac{S^*}{C^*}y_t + \frac{R_1^*}{C^*}e_{t+d} = r_ou_t + f_t \qquad (*)$$

Consider minimization of

$$J = y_{t+d}^2 + \rho u_t^2 \qquad (+)$$

Introduce the expression (*)

$$J = (r_ou_t + f_t)^2 + \rho u_t^2 = (r_o^2 + \rho)u_t^2 + 2r_ou_tf_t + f_t^2 = (r_o^2 + \rho)\left(u_t^2 + \frac{2r_ou_tf_t}{r_o^2 + \rho}\right) + f_t^2 = (r_o^2 + \rho)\left(u_t + \frac{r_of_t}{r_o^2 + \rho}\right)^2 - \frac{r_o^2f_t^2}{r_o^2 + \rho} + f_t^2$$

Hence

$$J = \frac{r_o^2}{r_o^2 + \rho}\left(f_t + \frac{r_o^2 + \rho}{r_o}u_t\right)^2 + \frac{\rho}{r_o^2 + \rho}f_t^2 = \frac{r_o^2}{r_o^2 + \rho}\left(f_t + \Big(r_o + \frac{\rho}{r_o}\Big)u_t\right)^2 + \frac{\rho}{r_o^2 + \rho}f_t^2 = \frac{r_o^2}{r_o^2 + \rho}\left(y_{t+d} + \frac{\rho}{r_o}u_t\right)^2 + \frac{\rho}{r_o^2 + \rho}f_t^2$$

Since $r_o^2 + \rho$ is a constant we find that minimizing (+) is the same as minimizing

$$J_1 = y_{t+d} + \frac{\rho}{r_o}u_t = f_t + \left(r_o + \frac{\rho}{r_o}\right)u_t$$


SOLUTIONS TO CHAPTER 5

5.1 The plant is

$$G(s) = \frac{1}{s(s+a)} = \frac{B}{A}$$

The desired response is

$$G_m(s) = \frac{\omega^2}{s^2 + 2\zeta\omega s + \omega^2} = \frac{B_m}{A_m}$$

(a) Gradient method or MIT rule. Use formulas similar to (5.9). In this case $B^+ = 1$ and $A_o$ is of first order. The regulator structure is

$$R = s + r_1 \qquad S = s_0s + s_1 \qquad T = t_0A_o$$

This gives the updating rules

$$\frac{dr_1}{dt} = \gamma e\left(\frac{1}{A_oA_m}u\right) \qquad \frac{ds_0}{dt} = \gamma e\left(\frac{p}{A_oA_m}y\right) \qquad \frac{ds_1}{dt} = \gamma e\left(\frac{1}{A_oA_m}y\right) \qquad \frac{dt_0}{dt} = -\gamma e\left(\frac{1}{A_oA_m}u_c\right)$$

(b) Stability theory approach 1. First derive the error equation. If all process zeros are cancelled we have

$$A_oA_my = AR_1y + b_oSy = BR_1u + b_oSy = b_o(Ru + Sy)$$

Further

$$A_oA_my_m = A_oB_mu_c = b_oTu_c$$

Hence

$$A_oA_me = A_oA_m(y - y_m) = b_o(Ru + Sy - Tu_c)$$

$$e = \frac{b_o}{A_oA_m}(Ru + Sy - Tu_c)$$

Since $1/(A_oA_m)$ is not SPR we introduce a polynomial D such that $D/(A_oA_m)$ is SPR. We then get

$$e = \frac{b_oD}{A_oA_m}\big(Ru_f + Sy_f - Tu_{cf}\big)$$

where

$$u_f = \frac{1}{D}u \qquad y_f = \frac{1}{D}y \qquad u_{cf} = \frac{1}{D}u_c$$


(c) Stability theory approach 2. In this case we assume that all states are measurable. Process model:

$$\dot x = \underbrace{\begin{pmatrix} -a & 0 \\ 1 & 0 \end{pmatrix}}_{A_p}x + \underbrace{\begin{pmatrix} 1 \\ 0 \end{pmatrix}}_{B_p}u \qquad y = \underbrace{\begin{pmatrix} 0 & 1 \end{pmatrix}}_{C}x$$

Control law

$$u = L_ru_c - Lx = \theta_3u_c - \theta_1x_1 - \theta_2x_2$$

The closed loop system is

$$\dot x = (A_p - B_pL)x + B_pL_ru_c = Ax + Bu_c \qquad y = Cx$$

where

$$A(\theta) = A_p - B_pL = \begin{pmatrix} -a-\theta_1 & -\theta_2 \\ 1 & 0 \end{pmatrix} \qquad B(\theta) = B_pL_r = \begin{pmatrix} \theta_3 \\ 0 \end{pmatrix}$$

The desired response is given by

$$\dot x_m = A_mx_m + B_mu_c$$

where

$$A_m = \begin{pmatrix} -2\zeta\omega & -\omega^2 \\ 1 & 0 \end{pmatrix} \qquad B_m = \begin{pmatrix} \omega^2 \\ 0 \end{pmatrix}$$

We have

$$(A - A_m)^T = \begin{pmatrix} 2\zeta\omega - a - \theta_1 & 0 \\ \omega^2 - \theta_2 & 0 \end{pmatrix} \qquad (B - B_m)^T = \begin{pmatrix} \theta_3 - \omega^2 & 0 \end{pmatrix}$$

Introduce the state error

$$e = x - x_m$$

We get

$$\dot e = \dot x - \dot x_m = Ax + Bu_c - A_mx_m - B_mu_c = A_me + (A - A_m)x + (B - B_m)u_c \qquad (*)$$

The error goes to zero if $A_m$ is stable and

$$A(\theta) - A_m = 0$$


$$B(\theta) - B_m = 0$$

It is thus necessary that $A(\theta)$ and $B(\theta)$ are such that there is a $\theta$ for which these conditions hold. Introduce the Lyapunov function

$$V = e^TPe + \operatorname{tr}(A - A_m)^TQ_a(A - A_m) + \operatorname{tr}(B - B_m)^TQ_b(B - B_m)$$

Using

$$\operatorname{tr}(A + B) = \operatorname{tr}A + \operatorname{tr}B \qquad x^TAx = \operatorname{tr}(xx^TA) = \operatorname{tr}(Axx^T) \qquad \operatorname{tr}(AB) = \operatorname{tr}(BA)$$

we get

$$\frac{dV}{dt} = \operatorname{tr}\Big(P\dot ee^T + Pe\dot e^T + \dot A^TQ_a(A - A_m) + (A - A_m)^TQ_a\dot A + \dot B^TQ_b(B - B_m) + (B - B_m)^TQ_b\dot B\Big)$$

But from (*)

$$P\dot ee^T = P\big(A_me + (A - A_m)x + (B - B_m)u_c\big)e^T \qquad Pe\dot e^T = Pe\big(A_me + (A - A_m)x + (B - B_m)u_c\big)^T$$

Introducing this into the expression for $dV/dt$ and collecting terms proportional to $(A - A_m)^T$ we find that they are

$$2\operatorname{tr}(A - A_m)^T\big(Q_a\dot A + Pex^T\big)$$

Similarly we find that the terms proportional to $(B - B_m)^T$ are

$$2\operatorname{tr}(B - B_m)^T\big(Q_b\dot B + Peu_c^T\big)$$

Hence

$$\frac{dV}{dt} = e^TPA_me + e^TA_m^TPe + 2\operatorname{tr}(A - A_m)^T\big(Q_a\dot A + Pex^T\big) + 2\operatorname{tr}(B - B_m)^T\big(Q_b\dot B + Peu_c^T\big)$$

Hence if the symmetric matrix P is chosen so that

$$A_m^TP + PA_m = -Q$$

where Q is positive definite (which can always be done if $A_m$ is stable!) and the parameters are updated so that

$$(A - A_m)^T\big(Q_a\dot A + Pex^T\big) = 0 \qquad (B - B_m)^T\big(Q_b\dot B + Peu_c^T\big) = 0$$


we get

$$\frac{dV}{dt} = -e^TQe$$

The equations for updating the parameters derived above can now be used. This gives

$$\begin{pmatrix} 2\zeta\omega - a - \theta_1 & 0 \\ \omega^2 - \theta_2 & 0 \end{pmatrix}\left(Q_a\begin{pmatrix} -\dot\theta_1 & -\dot\theta_2 \\ 0 & 0 \end{pmatrix} + Pex^T\right) = 0$$

$$\begin{pmatrix} \theta_3 - \omega^2 & 0 \end{pmatrix}\left(Q_b\begin{pmatrix} \dot\theta_3 \\ 0 \end{pmatrix} + Peu_c\right) = 0$$

Hence with $Q_a = I$ and $Q_b = 1$

$$\frac{d\theta_1}{dt} = p_{11}e_1x_1 + p_{12}e_2x_1 = (p_{11}e_1 + p_{12}e_2)x_1$$
$$\frac{d\theta_2}{dt} = p_{11}e_1x_2 + p_{12}e_2x_2 = (p_{11}e_1 + p_{12}e_2)x_2$$
$$\frac{d\theta_3}{dt} = -(p_{11}e_1 + p_{12}e_2)u_c$$

where $e_1 = x_1 - x_{m1}$ and $e_2 = x_2 - x_{m2}$. It now remains to determine P such that

$$A_m^TP + PA_m = -Q$$

Choosing $\zeta = 0.707$, $\omega = 2$ and

$$Q = \begin{pmatrix} 41.2548 & 11.3137 \\ 11.3137 & 16.0000 \end{pmatrix}$$

we get

$$P = \begin{pmatrix} 4 & 2 \\ 2 & 16 \end{pmatrix}$$

The parameter update laws become

$$\frac{d\theta_1}{dt} = (4e_1 + 2e_2)x_1 \qquad \frac{d\theta_2}{dt} = (4e_1 + 2e_2)x_2 \qquad \frac{d\theta_3}{dt} = -(4e_1 + 2e_2)u_c$$

A simulation of the system is given in Fig. 15.
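A compact Euler-integration sketch of mine (numpy assumed; not part of the original solution) of the Lyapunov-rule adaptive law in (c), with a = 1, ζ = 0.707, ω = 2, the P matrix quoted above, and a square-wave command of my own choosing. As the error goes to zero the parameters should drift toward the matching values.

import numpy as np

a, zeta, w = 1.0, 0.707, 2.0
Am = np.array([[-2*zeta*w, -w**2], [1.0, 0.0]])
Bm = np.array([w**2, 0.0])
P = np.array([[4.0, 2.0], [2.0, 16.0]])
dt, T = 1e-3, 40.0

x = np.zeros(2); xm = np.zeros(2)
th = np.zeros(3)                       # theta1, theta2, theta3

for k in range(int(T / dt)):
    t = k * dt
    uc = 1.0 if (t % 20.0) < 10.0 else -1.0        # square-wave command
    u = th[2] * uc - th[0] * x[0] - th[1] * x[1]   # u = theta3*uc - theta1*x1 - theta2*x2
    dx = np.array([-a * x[0] + u, x[0]])           # plant: dx1 = -a x1 + u, dx2 = x1
    dxm = Am @ xm + Bm * uc                        # reference model
    e = x - xm
    g = P[0, 0] * e[0] + P[0, 1] * e[1]            # (p11 e1 + p12 e2)
    th += dt * np.array([g * x[0], g * x[1], -g * uc])
    x += dt * dx
    xm += dt * dxm

print("theta =", th.round(3), " matching values =", [2*zeta*w - a, w**2, w**2])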

5.2 The block diagram of the system is shown in Fig. 16. The PI version of the SPR rule is

$$\frac{d\theta}{dt} = -\gamma_1\frac{d}{dt}(u_ce) - \gamma_2u_ce \qquad (*)$$


Figure 15. Simulation in Problem 5.1. Top: Process and model states, x1 (full), xm1 (dashed), x2 (dotted), and xm2 (dash-dotted). Bottom: Controller parameters θ3 (full), θ1 (dashed), and θ2 (dotted).

Figure 16. Block diagram in Problem 5.2.

To derive the error equation we notice that

$$\frac{dy_m}{dt} = \theta^0u_c \qquad \frac{dy}{dt} = \theta u_c$$


Hence

$$\frac{de}{dt} = (\theta - \theta^0)u_c$$

We get

$$\frac{d^2e}{dt^2} = \frac{d\theta}{dt}u_c + (\theta - \theta^0)\frac{du_c}{dt}$$

Inserting the parameter update law (*) into this we get

$$\frac{d^2e}{dt^2} = -\gamma_1\left(\frac{du_c}{dt}e + u_c\frac{de}{dt}\right)u_c - \gamma_2u_c^2e + (\theta - \theta^0)\frac{du_c}{dt}$$

Hence

$$\frac{d^2e}{dt^2} + \gamma_1u_c^2\frac{de}{dt} + \left(\gamma_1u_c\frac{du_c}{dt} + \gamma_2u_c^2\right)e = (\theta - \theta^0)\frac{du_c}{dt}$$

Assuming that $u_c$ is constant we get the following error equation

$$\frac{d^2e}{dt^2} + \gamma_1u_c^2\frac{de}{dt} + \gamma_2u_c^2e = 0$$

Assuming that we want this to be a second order system with ω and ζ we get

$$\gamma_1u_c^2 = 2\zeta\omega \;\Rightarrow\; \gamma_1 = 2\zeta\omega/u_c^2 \qquad \gamma_2u_c^2 = \omega^2 \;\Rightarrow\; \gamma_2 = \omega^2/u_c^2$$

This gives an indication of how the parameters γ1 and γ2 should be selected. The analysis was based on the assumption that uc was constant. To get some insight into what happens when uc changes we will give a simulation where uc is a triangular wave with varying period. The adaptation gains are chosen for different ω and ζ. Figure 17 shows what happens when the period of the triangular wave is 20 and ω = 0.5, 1 and 2, corresponding to the periods 12, 6 and 3. Figure 18 shows what happens when uc is changed more rapidly.

5.6 The transfer function is

$$G(s) = \frac{b_0s^2 + b_1s + b_2}{s^2 + a_1s + a_2}$$

The transfer function has no poles and zeros in the right half plane if $a_1 \ge 0$, $a_2 \ge 0$, $b_0 \ge 0$, $b_1 \ge 0$, and $b_2 \ge 0$. Consider

$$G(i\omega) = \frac{B(i\omega)}{A(i\omega)}\cdot\frac{A(-i\omega)}{A(-i\omega)}$$

The condition $\operatorname{Re}G(i\omega) \ge 0$ is equivalent to $\operatorname{Re}\big(B(i\omega)A(-i\omega)\big) \ge 0$. But

$$g(\omega) = \operatorname{Re}\big((-b_0\omega^2 + i\omega b_1 + b_2)(-\omega^2 - i\omega a_1 + a_2)\big) = b_0\omega^4 + (a_1b_1 - b_0a_2 - b_2)\omega^2 + a_2b_2$$


Figure 17. Simulation in Problem 5.2 for a triangular wave of period 20. Left top: Process and model outputs. Left bottom: Estimated parameter θ when ω = 0.5 (full), 1 (dashed), and 2 (dotted) for ζ = 0.7. Right top: Process and model outputs. Right bottom: Estimated parameter θ when ζ = 0.4 (full), 0.7 (dashed), and 1.0 (dotted) for ω = 1.

Completing the squares the function can be written as

$$g(\omega) = b_0\left(\omega^2 + \frac{a_1b_1 - b_0a_2 - b_2}{2b_0}\right)^2 + a_2b_2 - \frac{(a_1b_1 - b_0a_2 - b_2)^2}{4b_0}$$

When $b_0 = 0$ the condition for g to be positive is that

$$a_1b_1 - b_0a_2 - b_2 \ge 0 \qquad (i)$$

If $b_0 > 0$ the function $g(\omega)$ is nonnegative for all ω if either (i) holds or if

$$a_1b_1 - b_0a_2 - b_2 < 0 \quad\text{and}\quad a_2b_2 > \frac{(a_1b_1 - b_0a_2 - b_2)^2}{4b_0}$$

Example 1. Consider

$$G(s) = \frac{s^2 + 6s + 8}{s^2 + 4s + 3}$$

We have $a_1b_1 - b_0a_2 - b_2 = 24 - 3 - 8 = 13 > 0$. Hence the transfer function G(s) is SPR.

Example 2.

$$G(s) = \frac{3s^2 + s + 1}{s^2 + 3s + 4}$$


Figure 18. Simulation in Problem 5.2 for a triangular wave of period 5. Left top: Process and model outputs. Left bottom: Estimated parameter θ when ω = 0.5 (full), 1 (dashed), and 2 (dotted) for ζ = 0.7. Right top: Process and model outputs. Right bottom: Estimated parameter θ when ζ = 0.4 (full), 0.7 (dashed), and 1.0 (dotted) for ω = 1.

We have $a_1b_1 - a_2b_0 - b_2 = 3 - 12 - 1 = -10$. Furthermore

$$a_2b_2 = 4 \qquad \frac{(a_1b_1 - a_2b_0 - b_2)^2}{4b_0} = \frac{100}{12}$$

Hence the transfer function G(s) is neither PR nor SPR.
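A small sketch of mine (numpy assumed; not part of the original solution) applying the test above to the two examples: it evaluates $g(\omega) = \operatorname{Re}(B(i\omega)A(-i\omega))$ on a frequency grid and checks its sign.

import numpy as np

def g(w, b, a):
    # Re(B(iw) A(-iw)) = b0 w^4 + (a1 b1 - b0 a2 - b2) w^2 + a2 b2
    b0, b1, b2 = b; a1, a2 = a
    return b0 * w**4 + (a1*b1 - b0*a2 - b2) * w**2 + a2*b2

for name, b, a in [("Example 1", (1.0, 6.0, 8.0), (4.0, 3.0)),
                   ("Example 2", (3.0, 1.0, 1.0), (3.0, 4.0))]:
    w = np.linspace(0.0, 50.0, 5001)
    gmin = g(w, b, a).min()
    print(f"{name}: min g(w) on grid = {gmin:.2f}  -> Re G(iw) >= 0: {bool(gmin >= 0)}")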

5.7 Consider the system

$$\frac{dx}{dt} = Ax + B_1u \qquad y = C_1x$$

where

$$B_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$$

Let Q be positive and let P be the solution of the Lyapunov equation

$$A^TP + PA = -Q \qquad (*)$$


Define $C_1$ as

$$C_1 = B_1^TP = \begin{pmatrix} p_{11} & p_{12} & \ldots & p_{1n} \end{pmatrix}$$

According to the Kalman-Yakubovich lemma the transfer function

$$G_1(s) = C_1(sI - A)^{-1}B_1$$

is then positive real. This transfer function can, however, be written as

$$G_1(s) = \frac{p_{11}s^{n-1} + p_{12}s^{n-2} + \ldots + p_{1n}}{s^n + a_1s^{n-1} + \ldots + a_n}$$

Since there are good numerical algorithms for solving the Lyapunov equation we can use this result to construct transfer functions that are SPR. The method is straightforward.

1. Choose A stable.
2. Solve (*) for given Q positive.
3. Choose B as

$$B(s) = p_{11}s^{n-1} + p_{12}s^{n-2} + \ldots + p_{1n}$$
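A sketch of mine of this recipe (scipy and numpy assumed; the third-order example denominator is my own choice, not from the original): pick a stable A in companion form, solve the Lyapunov equation, and read the numerator from the first row of P.

import numpy as np
from scipy.linalg import solve_lyapunov

# Stable denominator s^3 + 6 s^2 + 11 s + 6 = (s+1)(s+2)(s+3), companion form
a = [6.0, 11.0, 6.0]
A = np.array([[-a[0], -a[1], -a[2]],
              [ 1.0,   0.0,   0.0],
              [ 0.0,   1.0,   0.0]])
B1 = np.array([1.0, 0.0, 0.0])
Q = np.eye(3)                               # any positive definite Q
P = solve_lyapunov(A.T, -Q)                 # solves A^T P + P A = -Q
print("numerator coefficients p11, p12, p13:", P[0, :].round(4))

# Numerical check: Re G(iw) should be nonnegative
w = np.linspace(0.0, 100.0, 10001)
G = np.array([P[0, :] @ np.linalg.solve(1j*wi*np.eye(3) - A, B1) for wi in w])
print("min Re G(iw) on grid =", round(float(G.real.min()), 6))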

5.11 Let us first solve the underlying design problem for systems with known parameters. This can be done using pole placement. Let the plant dynamics be

$$y = \frac{B}{A}u$$

and let the controller be

$$Ru = Tu_c - Sy$$

The basic design equation is then

$$AR + BS = A_mA_o \qquad (*)$$

In this case we have

$$A = (s+a)(s+p) \qquad B = bq \qquad A_m = s^2 + 2\zeta\omega s + \omega^2$$

We need an observer of at least first order. The design equation (*) then becomes

$$(s+a)(s+p)(s+r_1) + bq(s_0s + s_1) = (s^2 + 2\zeta\omega s + \omega^2)(s + a_0) \qquad (+)$$

where $A_o = s + a_o$ is the observer polynomial. The controller is thus of the form

$$\frac{du}{dt} + r_1u = t_0u_c - s_0\frac{dy}{dt} - s_1y$$


It has four parameters that have to be updated: $r_1$, $t_0$, $s_0$, and $s_1$. If no prior knowledge is available we thus have to update these four parameters. When parameter p is known it follows from the design equation (+) that there is a relation between the parameters and it then suffices to estimate three parameters. This is particularly easy to see when the observer polynomial is chosen as $A_o(s) = s + p$. Putting $s = -p$ in (+) gives

$$-s_0p + s_1 = 0$$

Hence

$$s_1 = ps_0 \qquad (**)$$

In this particular case we can thus update $t_0$, $s_0$, and $r_1$ and compute $s_1$ from (**). Notice, however, that the knowledge of q is of no value since q always appears in combination with the unknown parameter b. The equations for updating the parameters are derived in the usual way. With $a_0 = p$ equation (+) simplifies to

$$(s+a)(s+r_1) + bqs_0 = s^2 + 2\zeta\omega s + \omega^2$$

Introducing

$$A'(s) = s + a \qquad S'(s) = s_0 \qquad T' = t_0$$

we get

$$y = \frac{BT'}{A'R + BS'}u_c$$

$$\frac{\partial e}{\partial t_0} = \frac{B}{A'R + BS'}u_c \approx \frac{B}{A_m}u_c$$

$$\frac{\partial e}{\partial s_0} = -\frac{BT'B}{(A'R + BS')^2}u_c = -\frac{B}{A'R + BS'}y \approx -\frac{B}{A_m}y$$

$$\frac{\partial e}{\partial r_1} = -\frac{A'BT'}{(A'R + BS')^2}u_c = -\frac{A'}{A'R + BS'}y \approx -\frac{A'}{A_m}y = -\frac{A'}{A_m}\frac{B}{A}u = -\frac{B}{A_m(s+p)}u$$

The MIT rule then gives

$$\frac{dr_1}{dt} = \gamma\left(\frac{1}{(s+p)A_m}u\right)e \qquad \frac{ds_0}{dt} = \gamma\left(\frac{1}{A_m}y\right)e \qquad \frac{dt_0}{dt} = -\gamma\left(\frac{1}{A_m}u_c\right)e$$

A simulation of the controller is given in Fig. 19.


Figure 19. Simulation in Problem 5.11. Top: Process (full) and model (dashed) output. Bottom: Estimated parameters r1 (full), s0 (dashed), and t0 (dotted).

5.12 The closed loop transfer function is

$$G_{cl}(s) = \frac{kb}{s^2 + s + kb}$$

This is compatible with the desired dynamics. The error is

$$e = y - y_m = \frac{kb}{p^2 + p + kb}u_c - y_m$$

Hence

$$\frac{\partial e}{\partial k} = \frac{b}{p^2 + p + kb}u_c - \frac{b^2k}{(p^2 + p + kb)^2}u_c = \frac{b}{p^2 + p + kb}(u_c - y) \approx b\,\frac{1}{p^2 + p + 1}(u_c - y), \qquad p = \frac{d}{dt}$$

The following adjustment rule is obtained from the MIT rule

$$\frac{dk}{dt} = -\gamma'\frac{\partial e}{\partial k}e = -\underbrace{\gamma'b}_{\gamma}\left(\frac{1}{p^2 + p + 1}(u_c - y)\right)e$$


Figure 20. Simulation in Problem 5.12. Left: Process (full) and model (dashed) output. Right: Estimated parameter k for different values of γ.

A simulation of the system is given in Fig. 20. This shows the behavior of the system when $u_c$ is a square wave.
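A brief Euler-integration sketch of mine (numpy assumed; not part of the original solution) of this MIT-rule loop with b = 1, γ = 0.3 and a square-wave command; k should approach 1/b = 1, as in Fig. 20.

import numpy as np

b, gamma, dt, T = 1.0, 0.3, 1e-3, 60.0
k = 0.0
y = np.zeros(2)    # plant states: y, dy/dt
m = np.zeros(2)    # model states: ym, dym/dt
f = np.zeros(2)    # filter states: v, dv/dt with v = (uc - y)/(p^2 + p + 1)

for i in range(int(T / dt)):
    t = i * dt
    uc = 1.0 if (t % 20.0) < 10.0 else -1.0
    u = k * (uc - y[0])
    e = y[0] - m[0]
    dk = -gamma * f[0] * e                       # MIT rule: dk/dt = -gamma * v * e
    dy = np.array([y[1], b * u - y[1]])          # plant: y'' + y' = b u
    dm = np.array([m[1], uc - m[1] - m[0]])      # model: ym'' + ym' + ym = uc
    df = np.array([f[1], (uc - y[0]) - f[1] - f[0]])
    k += dt * dk; y += dt * dy; m += dt * dm; f += dt * df

print("k after", T, "time units:", round(k, 3))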


SOLUTIONS TO CHAPTER 6

6.1 The process is given by

$$G(s) = \frac{b}{s(s+a)}$$

and the regressor filter should be

$$G_f(s) = \frac{1}{A_f(s)} = \frac{1}{A_m(s)} = \frac{1}{s^2 + 2\zeta\omega s + \omega^2}$$

The controller is given by

$$U(s) = -\frac{s_0s + s_1}{s + r_1}Y(s) + \frac{t_0(s + a_o)}{s + r_1}U_c(s)$$

For the estimation of process parameters we use a continuous-time RLS algorithm. The process is of second order and the controller is of first order. The regressor filter is of second order and both inputs and outputs should be filtered. Hence we need seven states in ξ. The process parameters are contained in θ, and the controller parameters in ϑ. The relation between these is given by

$$\vartheta = \begin{pmatrix} r_1 \\ s_0 \\ s_1 \\ t_0 \end{pmatrix} = \begin{pmatrix} 2\zeta\omega + a_o - a \\ (2\zeta\omega a_o + \omega^2 - ar_1)/b \\ \omega^2a_o/b \\ \omega^2/b \end{pmatrix} = \chi(\theta)$$

To find $A(\vartheta)$, $B(\vartheta)$, $C(\vartheta)$ and $D(\vartheta)$ we start by finding realizations for y, $y_f$, $u_f$ and the controller. We get

$$\frac{d}{dt}\begin{pmatrix} \dot y \\ y \end{pmatrix} = \begin{pmatrix} -a & 0 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} \dot y \\ y \end{pmatrix} + \begin{pmatrix} b \\ 0 \end{pmatrix}u$$

$$\frac{d}{dt}\begin{pmatrix} \dot y_f \\ y_f \end{pmatrix} = \begin{pmatrix} -2\zeta\omega & -\omega^2 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} \dot y_f \\ y_f \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \end{pmatrix}y$$

and

$$\frac{d}{dt}\begin{pmatrix} \dot u_f \\ u_f \end{pmatrix} = \begin{pmatrix} -2\zeta\omega & -\omega^2 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} \dot u_f \\ u_f \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \end{pmatrix}u$$

and the control law can be rewritten as

$$u = -s_0y + t_0u_c + \frac{-s_1 + r_1s_0}{p + r_1}y + \frac{a_ot_0 - r_1t_0}{p + r_1}u_c$$

We need one state for the controller and it can be realized as

$$\dot x = -r_1x + (-s_1 + r_1s_0)y + (a_ot_0 - r_1t_0)u_c \qquad u = x - s_0y + t_0u_c$$


Combining the states results in

$$\frac{d}{dt}\begin{pmatrix} \dot y \\ y \\ \dot y_f \\ y_f \\ \dot u_f \\ u_f \\ x \end{pmatrix} = \begin{pmatrix} -a & -bs_0 & 0 & 0 & 0 & 0 & b \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & -2\zeta\omega & -\omega^2 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & -s_0 & 0 & 0 & -2\zeta\omega & -\omega^2 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & -s_1 + r_1s_0 & 0 & 0 & 0 & 0 & -r_1 \end{pmatrix}\begin{pmatrix} \dot y \\ y \\ \dot y_f \\ y_f \\ \dot u_f \\ u_f \\ x \end{pmatrix} + \begin{pmatrix} bt_0 \\ 0 \\ 0 \\ 0 \\ t_0 \\ 0 \\ a_ot_0 - r_1t_0 \end{pmatrix}u_c$$

which defines the relation

$$\frac{d\xi}{dt} = A(\vartheta)\xi + B(\vartheta)u_c$$

Now we need to express e and φ in the states so that we find the C and D matrices. The estimator tries to find the parameters in

$$y_f = \frac{b}{p(p + a)}u_f$$

which is rewritten as

$$p^2y_f = \begin{pmatrix} -p\,y_f & u_f \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \varphi^T\theta_0$$

Clearly

$$\varphi = \begin{pmatrix} 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} \dot y & y & \dot y_f & y_f & \dot u_f & u_f & x \end{pmatrix}^T$$

and

$$e = p^2y_f - \varphi^T\theta = -2\zeta\omega\dot y_f - \omega^2y_f + y + a\dot y_f - bu_f$$


If we use the relations $a = -r_1 + 2\zeta\omega + a_o$ and $b = \omega^2/t_0$ then e can be written as

$$e = -2\zeta\omega\dot y_f - \omega^2y_f + y + (-r_1 + 2\zeta\omega + a_o)\dot y_f - \frac{\omega^2}{t_0}u_f = \begin{pmatrix} 0 & 1 & -r_1 + a_o & -\omega^2 & 0 & -\dfrac{\omega^2}{t_0} & 0 \end{pmatrix}\begin{pmatrix} \dot y & y & \dot y_f & y_f & \dot u_f & u_f & x \end{pmatrix}^T$$

Combining the expressions for φ and e gives

$$\begin{pmatrix} e \\ \varphi \end{pmatrix} = \begin{pmatrix} 0 & 1 & -r_1 + a_o & -\omega^2 & 0 & -\omega^2/t_0 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix}\xi = C(\vartheta)\xi$$

i.e. $D(\vartheta) = 0$. As given in the problem description, the estimator is defined by

$$\frac{d\theta}{dt} = P\varphi e \qquad \frac{dP}{dt} = \alpha P - P\varphi\varphi^TP$$

where P is a 2 × 2 matrix and e and φ are given above.

6.3 The averaged equations for the parameter estimates are given by (6.54) on page 303. In this particular case we have

$$G(s) = \frac{ab^2}{(s+a)(s+b)^2} \qquad G_m(s) = \frac{a}{s+a}$$


To use the averaged equations we need

$$\operatorname{avg}\big((G_mu_c)(Gu_c)\big) = \operatorname{avg}\left(v\left(\frac{b^2}{(p+b)^2}v\right)\right) = \frac{1}{2}v^2\cdot\frac{b^2}{b^2+\omega^2}\cos 2\varphi = \frac{a^2u_0^2b^2}{2(a^2+\omega^2)(b^2+\omega^2)}\big(2\cos^2\varphi - 1\big) = \frac{a^2b^2u_0^2}{2(a^2+\omega^2)(b^2+\omega^2)}\left(\frac{2b^2}{\omega^2+b^2} - 1\right) = \frac{a^2b^2u_0^2(b^2-\omega^2)}{2(a^2+\omega^2)(b^2+\omega^2)^2}$$

where we have introduced

$$u_c = u_0\sin\omega t \qquad v = \frac{a}{p+a}u_c \qquad \varphi = \arctan\frac{\omega}{b}$$

Similarly we have

$$\operatorname{avg}(u_cGu_c) = \frac{u_0^2}{2}|G|\cos(2\varphi + \varphi_1) = \frac{u_0^2}{2}\cdot\frac{ab^2}{\sqrt{a^2+\omega^2}\,(b^2+\omega^2)}\big(\cos 2\varphi\cos\varphi_1 - \sin 2\varphi\sin\varphi_1\big) = \frac{u_0^2ab^2\big(ab^2 - \omega^2(a+2b)\big)}{2(a^2+\omega^2)(b^2+\omega^2)^2}$$

where

$$\varphi = \arctan\frac{\omega}{b} \qquad \varphi_1 = \arctan\frac{\omega}{a}$$

It follows from the analysis on pages 302-304 that the MIT rule gives a stable system as long as $\omega < b$, while the stability condition for the SPR rule is

$$\omega < \sqrt{\frac{a}{a+2b}}\,b$$

With $b = 10a$ we get

$$\omega_{MIT} = 10a \qquad \omega_{SPR} = 2.18a$$

6.10 The adaptive system was designed for a process with the transfer function

$$G(s) = \frac{b}{s+a} \qquad (1)$$

The controller has the structure

$$u = \theta_1u_c - \theta_2y \qquad (2)$$


The desired response is

$$G_m(s) = \frac{b_m}{s+a_m} \qquad (3)$$

Combining (1) and (2) gives the closed loop transfer function

$$G_{cl} = \frac{b\theta_1}{s + a + b\theta_2}$$

Equating this with $G_m(s)$ given by (3) gives

$$b\theta_1 = b_m \qquad a + b\theta_2 = a_m$$

If these equations are solved for θ1 and θ2 we obtain the controller parameters that give the desired closed loop system. Conversely if the equations are solved for a and b we obtain the parameters of the process model that corresponds to given controller parameters. This gives

$$a = a_m - b_m\theta_2/\theta_1 \qquad b = b_m/\theta_1$$

The parameters a and b can thus be interpreted as the parameters of the model the controller believes in. Inserting the expressions for θ1 and θ2 from page 318 we get

$$a(\omega) = \frac{229 - 31\omega^2}{259 - \omega^2} \qquad b(\omega) = \frac{458}{259 - \omega^2} \qquad (+)$$

When

$$\omega = \sqrt{\frac{229}{31}} = 2.7179$$

we get $a(\omega) = 0$. The reason for this is that the value of the plant transfer function

$$G(s) = \frac{458}{(s+1)(s^2 + 30s + 229)} \qquad (*)$$

is $G(2.7179i) = -0.6697i$. The transfer function of the plant is thus purely imaginary. The only way to obtain a purely imaginary value of the transfer function

$$G = \frac{b}{s+a} \qquad (**)$$

is to make a = 0. Also notice that $b(2.7179) = 1.8203$, which gives $G(2.7179i) = -0.6697i$. When $\omega = \sqrt{259} = 16.09$ we get infinite values of a and b. Notice that $G(i\sqrt{259}) = -0.0587$, which is real and negative. The only way to make $G(i\omega)$ negative and real is to have infinitely large values


of a and b. It is thus easy to explain the behavior of the algorithm from the system identification point of view. The controller can be interpreted as if it is fitting a model (**) to the process dynamics (*). With a sinusoidal input it is possible to get a perfect fit and the parameters are given by (+).
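A sketch of mine (numpy assumed; not part of the original solution) that reproduces this interpretation: at each excitation frequency the first-order model with a(ω), b(ω) from (+) matches the frequency response of the third-order plant (*).

import numpy as np

def G_plant(w):
    s = 1j * w
    return 458.0 / ((s + 1.0) * (s**2 + 30.0*s + 229.0))

def ab_fitted(w):
    a = (229.0 - 31.0*w**2) / (259.0 - w**2)
    b = 458.0 / (259.0 - w**2)
    return a, b

for w in (1.0, 2.7179, 5.0):
    a, b = ab_fitted(w)
    G1 = b / (1j*w + a)          # first-order model the controller "believes in"
    print(f"w = {w:6.4f}  a = {a:7.4f}  b = {b:7.4f}",
          " G_plant(iw) =", np.round(G_plant(w), 4), " b/(iw+a) =", np.round(G1, 4))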