Lecture 2: Real-Time Parameter Estimation

• Least Squares and Regression Models
• Estimating Parameters in Dynamical Systems
• Examples

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 1/25
Preliminary Comments

Real-time parameter estimation is one of the methods of system identification.

The key elements of system identification are:
• Selection of model structure (linear, nonlinear, linear in parameters, ...)
• Design of experiments (input signal, sampling period, ...)
• Parameter estimation (off-line, on-line, ...)
• Validation mechanism (depends on application)

All these steps should be present in real-time parameter estimation algorithms!
Parameter Estimation: Problem Formulation

Suppose that a system (a model of the experiment) is

$y(i) = \varphi_1(i)\,\theta_1^0 + \varphi_2(i)\,\theta_2^0 + \cdots + \varphi_n(i)\,\theta_n^0 = \varphi(i)^T \theta^0$

where $\varphi(i)^T = [\varphi_1(i), \varphi_2(i), \ldots, \varphi_n(i)]$ is the $1 \times n$ regressor and $\theta^0 = [\theta_1^0, \theta_2^0, \ldots, \theta_n^0]^T$ is the $n \times 1$ parameter vector.

Given the data $\{[y(i), \varphi(i)]\}_{i=1}^N = \{[y(1), \varphi(1)], \ldots, [y(N), \varphi(N)]\}$, the task is to find (estimate) the $n$ unknown values $\theta^0 = [\theta_1^0, \theta_2^0, \ldots, \theta_n^0]^T$.
What kind of solution is expected?

An algorithm solving the estimation problem should have the following properties:
• Estimates should be computed in a numerically efficient way (an analytical formula is preferable!).
• Computed estimates should on average be close to the true values (be unbiased), provided that the model is correct.
• Estimates should have small variances, preferably decreasing with $N$ (more data → a better estimate).
• The algorithm, if possible, should have an intuitively clear motivation.
Least Squares Algorithm:

Consider the function (loss function) to be minimized:

$V_N(\theta) = \frac{1}{2}\left\{ \big(y(1) - \varphi(1)^T\theta\big)^2 + \cdots + \big(y(N) - \varphi(N)^T\theta\big)^2 \right\} = \frac{1}{2}\left\{ \varepsilon(1)^2 + \varepsilon(2)^2 + \cdots + \varepsilon(N)^2 \right\} = \frac{1}{2}\, E^T E$

Here

$E = [\varepsilon(1), \varepsilon(2), \ldots, \varepsilon(N)]^T = Y - \Phi\,\theta$

and

$Y = [y(1), y(2), \ldots, y(N)]^T, \qquad \Phi = \begin{bmatrix} \varphi(1)^T \\ \varphi(2)^T \\ \vdots \\ \varphi(N)^T \end{bmatrix} \in \mathbb{R}^{N \times n}$
Theorem (Least Squares Formula): The function $V_N(\theta)$ is minimal at $\theta = \hat\theta$, which satisfies the equation

$\Phi^T \Phi\, \hat\theta = \Phi^T Y$

If $\det(\Phi^T\Phi) \ne 0$, then
• the minimum of the function $V_N(\theta)$ is unique,
• the minimum value is $\frac{1}{2}\, Y^T Y - \frac{1}{2}\, Y^T \Phi \big(\Phi^T\Phi\big)^{-1} \Phi^T Y$, and
• it is attained at

$\hat\theta = \big(\Phi^T\Phi\big)^{-1} \Phi^T Y$
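As a quick numerical illustration (a sketch with synthetic data, not part of the original slides; the names `Phi`, `theta0` are made up here), the theorem can be checked with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: Phi is N x n, Y = Phi theta0 (noise-free).
N, n = 50, 3
Phi = rng.normal(size=(N, n))
theta0 = np.array([1.0, -2.0, 0.5])
Y = Phi @ theta0

# Least-squares estimate from the normal equations Phi^T Phi theta = Phi^T Y
theta_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)

# Minimum value of V_N from the theorem's closed-form expression
V_min = 0.5 * Y @ Y - 0.5 * Y @ Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)
print(theta_hat, V_min)  # theta_hat recovers theta0; V_min ~ 0 for noise-free data
```

With noise-free data the residual is zero, so the minimum value of the loss is (numerically) zero and the estimate matches the true parameters.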
Proof:

The loss function can be written as

$V_N(\theta) = \frac{1}{2}\, (Y - \Phi\theta)^T (Y - \Phi\theta)$

It reaches its minimal value at $\hat\theta$ if

$\nabla V_N(\theta) = \left[ \frac{\partial V_N(\theta)}{\partial \theta_1}, \ldots, \frac{\partial V_N(\theta)}{\partial \theta_n} \right] = 0 \quad \text{with } \theta = \hat\theta$

This equation is

$\nabla V_N(\theta)^T = \frac{1}{2}\cdot 2\cdot (-\Phi)^T (Y - \Phi\theta) = -\Phi^T (Y - \Phi\theta) = 0$

(using $\nabla_\theta(\theta^T a) = \nabla_\theta(a^T\theta) = a^T$ and $\nabla_\theta(\theta^T A\,\theta) = \theta^T(A + A^T)$). Hence

$\Phi^T \Phi\, \hat\theta = \Phi^T Y$
Proof (cont'd): If the matrix $\Phi^T\Phi$ is invertible, then we can find the solution of $\Phi^T\Phi\,\hat\theta = \Phi^T Y$:

$\hat\theta = \big(\Phi^T\Phi\big)^{-1} \Phi^T Y$

Furthermore,

$V_N(\theta) = \frac{1}{2}(Y - \Phi\theta)^T(Y - \Phi\theta) = \frac{1}{2}\, Y^T Y - Y^T\Phi\,\theta + \frac{1}{2}\,\theta^T\Phi^T\Phi\,\theta$

and since

$Y^T\Phi\,\theta = Y^T\Phi\big(\Phi^T\Phi\big)^{-1}\big(\Phi^T\Phi\big)\,\theta = \hat\theta^T\big(\Phi^T\Phi\big)\,\theta$

by completing the square

$V_N(\theta) = \underbrace{\frac{1}{2}\, Y^T Y - \frac{1}{2}\,\hat\theta^T\big(\Phi^T\Phi\big)\hat\theta}_{\text{independent of }\theta} + \underbrace{\frac{1}{2}\,(\theta - \hat\theta)^T\big(\Phi^T\Phi\big)(\theta - \hat\theta)}_{\ge 0,\ >0 \text{ for } \theta \ne \hat\theta}$
Comments: The formula can be re-written in terms of $y(i)$, $\varphi(i)$ as

$\hat\theta = \big(\Phi^T\Phi\big)^{-1}\Phi^T Y = P(N) \left( \sum_{i=1}^N \varphi(i)\, y(i) \right)$

where

$P(N) = \big(\Phi^T\Phi\big)^{-1} = \left( \sum_{i=1}^N \varphi(i)\,\varphi(i)^T \right)^{-1}$

The condition $\det(\Phi^T\Phi) \ne 0$ is called an excitation condition.

To assign different weights to different time instants, consider

$V_N(\theta) = \frac{1}{2}\left\{ w_1\big(y(1) - \varphi(1)^T\theta\big)^2 + \cdots + w_N\big(y(N) - \varphi(N)^T\theta\big)^2 \right\} = \frac{1}{2}\left\{ w_1\varepsilon(1)^2 + w_2\varepsilon(2)^2 + \cdots + w_N\varepsilon(N)^2 \right\} = \frac{1}{2}\, E^T W E$

with $W = \mathrm{diag}(w_1, \ldots, w_N)$; the minimizer is

$\hat\theta = \big(\Phi^T W \Phi\big)^{-1} \Phi^T W Y$
Exponential forgetting

The common way to assign the weights is Least Squares with Exponential Forgetting:

$V_N(\theta) = \frac{1}{2} \sum_{i=1}^N \lambda^{N-i} \big( y(i) - \varphi(i)^T\theta \big)^2$

with $0 < \lambda < 1$. What is the solution formula?
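One way to see the answer (a sketch, not spelled out on the slide): this is the weighted criterion with $w_i = \lambda^{N-i}$, so the weighted formula $\hat\theta = (\Phi^T W \Phi)^{-1}\Phi^T W Y$ applies with $W = \mathrm{diag}(\lambda^{N-1}, \ldots, \lambda, 1)$. A numerical sketch with synthetic data (all names and values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

N, lam = 200, 0.95
Phi = rng.normal(size=(N, 2))
theta0 = np.array([2.0, -1.0])
Y = Phi @ theta0 + 0.1 * rng.normal(size=N)

# Exponential forgetting = weighted LS with w_i = lam**(N - i), i = 1..N;
# w[:, None] * Phi is W Phi and w * Y is W Y without forming the full diag(W).
w = lam ** (N - np.arange(1, N + 1))
theta_hat = np.linalg.solve(Phi.T @ (w[:, None] * Phi), Phi.T @ (w * Y))
print(theta_hat)  # close to theta0
```

Recent samples (weight near 1) dominate the criterion, while old samples are discounted geometrically — which is why this form is used to track slowly varying parameters.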
Stochastic Notation

It is often convenient to represent uncertainty stochastically.
• $e$ is random if it is not known in advance and could take values from the set $\{e_1, \ldots, e_i, \ldots\}$ with probabilities $p_i$. The mean value or expected value is $\mathrm{E}\, e = \sum_i p_i\, e_i$, where $\mathrm{E}$ is the expectation operator. For a realization $\{e_i\}_{i=1}^N$: $\mathrm{E}\, e \approx \frac{1}{N}\sum_{i=1}^N e_i$.
• The variance or dispersion characterizes possible deviations from the mean value: $\sigma^2 = \mathrm{var}(e) = \mathrm{E}\,(e - \mathrm{E}\, e)^2$.
• Two random variables $e$ and $f$ are independent if $\mathrm{E}\,(e f) = (\mathrm{E}\, e)(\mathrm{E}\, f)$, i.e. $\mathrm{cov}(e, f) = 0$, where $\mathrm{cov}(e, f) = \mathrm{E}\big((e - \mathrm{E}\, e)(f - \mathrm{E}\, f)\big)$ is the covariance.
Statistical Properties of the LS Estimate:

Assume that the model of the system is stochastic, i.e.

$y(i) = \varphi(i)^T\theta^0 + e(i) \qquad \big\{\, Y(N) = \Phi(N)\,\theta^0 + E(N) \,\big\}.$

Here $\theta^0$ is the vector of true parameters, $0 \le i \le N$, $\{e(i)\} = \{e(0), e(1), \ldots\}$ is a sequence of independent identically distributed random variables with $\mathrm{E}\, e(i) = 0$, and $\{\varphi(i)\}$ is either a sequence of random variables independent of $e$ or deterministic.

Pre-multiplying the model by $\big(\Phi(N)^T\Phi(N)\big)^{-1}\Phi(N)^T$ gives

$\big(\Phi(N)^T\Phi(N)\big)^{-1}\Phi(N)^T Y(N) = \big(\Phi(N)^T\Phi(N)\big)^{-1}\Phi(N)^T \big( \Phi(N)\,\theta^0 + E(N) \big)$

i.e.

$\hat\theta = \theta^0 + \big(\Phi(N)^T\Phi(N)\big)^{-1}\Phi(N)^T E(N)$
Theorem: Suppose the data are generated by

$y(i) = \varphi(i)^T\theta^0 + e(i) \qquad \big\{\, Y(N) = \Phi(N)\,\theta^0 + E(N) \,\big\}$

where $\{e(i)\} = \{e(0), e(1), \ldots\}$ is a sequence of independent random variables with zero mean and variance $\sigma^2$. Consider the least squares estimate

$\hat\theta = \big(\Phi(N)^T\Phi(N)\big)^{-1}\Phi(N)^T Y(N).$

If $\det \Phi(N)^T\Phi(N) \ne 0$, then
• $\mathrm{E}\,\hat\theta = \theta^0$, i.e. the estimate is unbiased;
• the covariance of the estimate is

$\mathrm{cov}\,\hat\theta = \mathrm{E}\,\big(\hat\theta - \theta^0\big)\big(\hat\theta - \theta^0\big)^T = \sigma^2 \big(\Phi(N)^T\Phi(N)\big)^{-1}$
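Both claims of the theorem can be checked by a Monte Carlo sketch (synthetic data; the regressor choice and all constants below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Deterministic regressors: constant + linear trend, with i.i.d. noise.
# The theorem predicts E theta_hat = theta0 and cov = sigma^2 (Phi^T Phi)^{-1}.
N, sigma = 30, 0.5
Phi = np.column_stack([np.ones(N), np.arange(1.0, N + 1)])
theta0 = np.array([1.0, 0.3])
P = np.linalg.inv(Phi.T @ Phi)

trials = 20000
noise = sigma * rng.normal(size=(trials, N))
Y = Phi @ theta0 + noise          # each row is one realization of Y(N)
ests = Y @ Phi @ P                # row-wise theta_hat = P Phi^T Y (P is symmetric)

emp_mean = ests.mean(axis=0)
emp_cov = np.cov(ests.T)
print(emp_mean)                   # ~ theta0 (unbiased)
print(emp_cov)                    # ~ sigma^2 * P
```

The empirical mean and covariance over the trials match the theoretical values up to sampling error.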
Regression Models for FIR

FIR (Finite Impulse Response) or MA (Moving Average) model:

$y(t) = b_1\, u(t-1) + b_2\, u(t-2) + \cdots + b_n\, u(t-n)$

$y(t) = \underbrace{[u(t-1),\, u(t-2),\, \ldots,\, u(t-n)]}_{=\,\varphi(t-1)^T} \underbrace{\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}}_{=\,\theta}$

Note the change of notation for the typical case when the right-hand side does not have the term $b_0\, u(t)$: $\varphi(i) \to \varphi(t-1)$, to indicate the dependence of the data on input signals up to time $t-1$. However, we keep: $Y(N) = \Phi(N)\,\theta$.
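Stacking the FIR regressors into $\Phi$ can be sketched in NumPy (illustrative code, not from the slides; the indexing convention for the stored past inputs is an assumption made here):

```python
import numpy as np

def fir_regressor(u, n):
    """Build Phi for y(t) = b1 u(t-1) + ... + bn u(t-n), t = 1..N.

    `u` is assumed to hold u(1-n), ..., u(N) (length N + n), so that
    row t of Phi is phi(t-1)^T = [u(t-1), u(t-2), ..., u(t-n)].
    """
    N = len(u) - n
    return np.array([u[t - 1:t + n - 1][::-1] for t in range(1, N + 1)])

rng = np.random.default_rng(3)
n, N = 3, 400
b = np.array([0.5, -0.2, 0.1])
u = rng.normal(size=N + n)                 # u(1-n), ..., u(N)
Phi = fir_regressor(u, n)
Y = Phi @ b                                # noise-free FIR output
b_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)
print(b_hat)  # recovers b exactly (noise-free data)
```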
Regression Models for ARMA

IIR (Infinite Impulse Response) or ARMA (Autoregressive Moving Average) model:

$\underbrace{\big( q^n + a_1 q^{n-1} + \cdots + a_n \big)}_{A(q)}\, y(t) = \underbrace{\big( b_1 q^{m-1} + b_2 q^{m-2} + \cdots + b_m \big)}_{B(q)}\, u(t)$

$q$ is the forward shift operator:

$q\, u(t) = u(t+1), \qquad q^{-1} u(t) = u(t-1)$

$y(t) = \underbrace{[-y(t-1),\, \ldots,\, -y(t-n),\; u(t-n+m-1),\, \ldots,\, u(t-n)]}_{=\,\varphi(t-1)^T} \underbrace{\begin{bmatrix} a_1 \\ \vdots \\ a_n \\ b_1 \\ \vdots \\ b_m \end{bmatrix}}_{=\,\theta}$
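For a concrete instance with $n = m = 2$ the difference-equation form is $y(t) = -a_1 y(t-1) - a_2 y(t-2) + b_1 u(t-1) + b_2 u(t-2)$. The sketch below (synthetic data, hypothetical coefficients, not from the slides) builds $\Phi$ from these regressors and recovers $\theta$ by least squares:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical true parameters of a stable second-order system
a1, a2, b1, b2 = -1.5, 0.7, 1.0, 0.5
N = 500
u = rng.normal(size=N)
y = np.zeros(N)
for t in range(2, N):
    y[t] = -a1 * y[t - 1] - a2 * y[t - 2] + b1 * u[t - 1] + b2 * u[t - 2]

# Regressor rows phi(t-1)^T = [-y(t-1), -y(t-2), u(t-1), u(t-2)], t = 2..N-1
Phi = np.column_stack([-y[1:N - 1], -y[0:N - 2], u[1:N - 1], u[0:N - 2]])
Y = y[2:N]
theta_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)
print(theta_hat)  # recovers [a1, a2, b1, b2] (noise-free data)
```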
Problem 2.2

Consider the FIR model

$y(t) = b_0 u(t) + b_1 u(t-1) + e(t), \qquad t = 1, 2, 3, \ldots, N$

where $\{e(t)\}$ is a sequence of independent normal $\mathcal{N}(0, \sigma)$ random variables.
• Determine the LS estimates for $b_0$, $b_1$ when $u(t)$ is a step. Analyze the covariance of the estimate as $N \to \infty$.
• Make the same investigation when $u(t)$ is white noise with unit variance.
Solution (Problem 2.2): For the model

$y(t) = b_0 u(t) + b_1 u(t-1) + e(t)$

the regression form is readily seen:

$y(t) = \big[ u(t),\; u(t-1) \big] \begin{bmatrix} b_0 \\ b_1 \end{bmatrix} + e(t) = \varphi(t)^T \theta + e(t)$

Then the LS estimate is

$\hat\theta = \big(\Phi(N)^T\Phi(N)\big)^{-1}\Phi(N)^T Y(N), \qquad \Phi(N) = \begin{bmatrix} u(1) & u(0) \\ u(2) & u(1) \\ \vdots & \vdots \\ u(N) & u(N-1) \end{bmatrix}, \qquad Y(N) = \begin{bmatrix} y(1) \\ y(2) \\ \vdots \\ y(N) \end{bmatrix}$
Solution (cont'd): For an arbitrary input signal $u(t)$, the formula $\hat\theta = \big(\Phi(N)^T\Phi(N)\big)^{-1}\Phi(N)^T Y(N)$ becomes

$\hat\theta = \begin{bmatrix} \sum_{t=1}^N u(t)^2 & \sum_{t=1}^N u(t)u(t-1) \\ \sum_{t=1}^N u(t)u(t-1) & \sum_{t=1}^N u(t-1)^2 \end{bmatrix}^{-1} \begin{bmatrix} \sum_{t=1}^N u(t)y(t) \\ \sum_{t=1}^N u(t-1)y(t) \end{bmatrix}$
Solution (cont'd): Suppose that $u(t)$ is a unit step applied at $t = 1$:

$u(t) = \begin{cases} 1, & t = 1, 2, \ldots \\ 0, & t = 0, -1, -2, -3, \ldots \end{cases}$

Substituting this signal into the general formula (so that $\sum_{t=1}^N u(t)^2 = N$ and $\sum_{t=1}^N u(t)u(t-1) = \sum_{t=1}^N u(t-1)^2 = N-1$), we obtain

$\hat\theta = \begin{bmatrix} N & N-1 \\ N-1 & N-1 \end{bmatrix}^{-1} \begin{bmatrix} \sum_{t=1}^N y(t) \\ \sum_{t=2}^N y(t) \end{bmatrix} = \frac{1}{N(N-1) - (N-1)^2} \begin{bmatrix} N-1 & -(N-1) \\ -(N-1) & N \end{bmatrix} \begin{bmatrix} \sum_{t=1}^N y(t) \\ \sum_{t=2}^N y(t) \end{bmatrix}$

$= \begin{bmatrix} 1 & -1 \\ -1 & \frac{N}{N-1} \end{bmatrix} \begin{bmatrix} \sum_{t=1}^N y(t) \\ \sum_{t=2}^N y(t) \end{bmatrix} = \begin{bmatrix} \sum_{t=1}^N y(t) - \sum_{t=2}^N y(t) \\ -\sum_{t=1}^N y(t) + \frac{N}{N-1}\sum_{t=2}^N y(t) \end{bmatrix} = \begin{bmatrix} y(1) \\ \dfrac{1}{N-1}\sum_{t=2}^N y(t) - y(1) \end{bmatrix}$
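The closed-form step-input estimate can be cross-checked against the generic LS formula numerically (an illustrative sketch; the values of $b_0$, $b_1$, $\sigma$ below are made up):

```python
import numpy as np

rng = np.random.default_rng(5)

N, b0, b1, sigma = 40, 2.0, -1.0, 0.3
u = np.concatenate([[0.0], np.ones(N)])      # u(0) = 0, u(1..N) = 1
e = sigma * rng.normal(size=N)
y = b0 * u[1:] + b1 * u[:-1] + e             # y(1), ..., y(N)

# Generic LS estimate with Phi rows [u(t), u(t-1)]
Phi = np.column_stack([u[1:], u[:-1]])
theta_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)

# Closed-form result derived above for the unit-step input:
# b0_hat = y(1), b1_hat = (1/(N-1)) sum_{t=2}^N y(t) - y(1)
closed = np.array([y[0], y[1:].sum() / (N - 1) - y[0]])
print(theta_hat, closed)  # the two coincide
```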
Solution (cont'd): How to compute the covariance of $\hat\theta$? Consider the estimation error

$\hat\theta - \theta^0 = \begin{bmatrix} y(1) \\ \dfrac{1}{N-1}\sum_{t=2}^N y(t) - y(1) \end{bmatrix} - \begin{bmatrix} b_0 \\ b_1 \end{bmatrix} = \begin{bmatrix} \{b_0 \cdot 1 + b_1 \cdot 0 + e(1)\} - b_0 \\ \dfrac{1}{N-1}\sum_{t=2}^N \{b_0 \cdot 1 + b_1 \cdot 1 + e(t)\} - y(1) - b_1 \end{bmatrix} = \begin{bmatrix} e(1) \\ \dfrac{1}{N-1}\sum_{t=2}^N e(t) - e(1) \end{bmatrix}$

The covariance $\mathrm{E}\,(\hat\theta - \theta^0)(\hat\theta - \theta^0)^T$ of the estimate is then

$\mathrm{cov}(\hat\theta) = \begin{bmatrix} \mathrm{E}\, e(1)^2 & \mathrm{E}\left\{\frac{\sum_{t=2}^N e(t)}{N-1} - e(1)\right\} e(1) \\ \mathrm{E}\left\{\frac{\sum_{t=2}^N e(t)}{N-1} - e(1)\right\} e(1) & \mathrm{E}\left\{\frac{\sum_{t=2}^N e(t)}{N-1} - e(1)\right\}^2 \end{bmatrix} = \begin{bmatrix} \sigma^2 & -\sigma^2 \\ -\sigma^2 & \sigma^2\left(\frac{N-1}{(N-1)^2} + 1\right) \end{bmatrix} = \sigma^2 \begin{bmatrix} 1 & -1 \\ -1 & \frac{N}{N-1} \end{bmatrix}$
Solution (cont'd): To conclude, with the unit step input signal the LS estimate is

$\hat\theta = \begin{bmatrix} \hat b_0 \\ \hat b_1 \end{bmatrix} = \begin{bmatrix} y(1) \\ \dfrac{1}{N-1}\sum_{t=2}^N y(t) - y(1) \end{bmatrix}$

The covariance of this estimate is

$\mathrm{E}\,(\hat\theta - \theta^0)(\hat\theta - \theta^0)^T = \sigma^2 \begin{bmatrix} 1 & -1 \\ -1 & \frac{N}{N-1} \end{bmatrix}$

As the number of measured data increases, $N \to \infty$:

$\mathrm{E}\,(\hat\theta - \theta^0)(\hat\theta - \theta^0)^T \to \sigma^2 \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}$

and does NOT IMPROVE!
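A short simulation (an illustrative sketch, not part of the slides) makes this concrete: with a step input, the variance of $\hat b_1$ stays near $\sigma^2$ no matter how large $N$ gets:

```python
import numpy as np

rng = np.random.default_rng(6)

def ls_step_estimate(N, b0, b1, sigma, rng):
    """Closed-form LS estimate for a unit-step input (formula derived above)."""
    y1 = b0 + sigma * rng.normal()                   # y(1): u(1)=1, u(0)=0
    rest = b0 + b1 + sigma * rng.normal(size=N - 1)  # y(2), ..., y(N)
    return np.array([y1, rest.sum() / (N - 1) - y1])

sigma, trials = 1.0, 5000
variances = {}
for N in (10, 1000):
    est = np.array([ls_step_estimate(N, 1.0, 0.5, sigma, rng)
                    for _ in range(trials)])
    variances[N] = est[:, 1].var()
print(variances)  # both near sigma^2 * N/(N-1): more data does not help
```

The reason is visible in the estimate itself: $\hat b_1$ always contains the single noisy sample $y(1)$, whose error $e(1)$ never averages out.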
Solution (Problem 2.2): Consider now the model

$y(t) = b_0 u(t) + b_1 u(t-1) + e(t)$

when the input signal is white noise with unit variance,

$\mathrm{E}\, u(t)^2 = 1, \qquad \mathrm{E}\, u(t)u(s) = 0 \text{ if } t \ne s,$

and when $u$ and $e$ are independent:

$\mathrm{E}\, u(t)e(s) = 0 \quad \forall\, t, s$

Such assumptions imply that

$\mathrm{E}\, y(t)u(t) = b_0, \qquad \mathrm{E}\, y(t)u(t-1) = b_1$
Solution (Problem 2.2): The LS estimate now depends on the realization of the stochastic variable $u(t)$; we need to consider its mean value:

$\mathrm{E}\,\hat\theta = \mathrm{E} \begin{bmatrix} \sum_{t=1}^N u(t)^2 & \sum_{t=2}^N u(t)u(t-1) \\ \sum_{t=2}^N u(t)u(t-1) & \sum_{t=2}^N u(t-1)^2 \end{bmatrix}^{-1} \begin{bmatrix} \sum_{t=1}^N u(t)y(t) \\ \sum_{t=2}^N u(t-1)y(t) \end{bmatrix}$

Writing each sum as the number of terms times a sample average, e.g. $\sum_{t=1}^N u(t)^2 = N \big( \frac{1}{N}\sum_{t=1}^N u(t)^2 \big)$, and replacing the sample averages by expectations for large $N$:

$\mathrm{E}\,\hat\theta \approx \begin{bmatrix} N\, \mathrm{E}\, u(t)^2 & (N-1)\, \mathrm{E}\, u(t)u(t-1) \\ (N-1)\, \mathrm{E}\, u(t)u(t-1) & (N-1)\, \mathrm{E}\, u(t-1)^2 \end{bmatrix}^{-1} \begin{bmatrix} N\, \mathrm{E}\, u(t)y(t) \\ (N-1)\, \mathrm{E}\, u(t-1)y(t) \end{bmatrix} = \begin{bmatrix} N & 0 \\ 0 & N-1 \end{bmatrix}^{-1} \begin{bmatrix} N\, b_0 \\ (N-1)\, b_1 \end{bmatrix} = \begin{bmatrix} b_0 \\ b_1 \end{bmatrix}$
Solution (Problem 2.2): To compute the covariance, use the formula from the Theorem:

$\mathrm{cov}(\hat\theta - \theta^0) = \sigma^2\, \mathrm{E}\, \big(\Phi(N)^T\Phi(N)\big)^{-1} \approx \sigma^2 \begin{bmatrix} N & 0 \\ 0 & N-1 \end{bmatrix}^{-1} = \sigma^2 \begin{bmatrix} \frac{1}{N} & 0 \\ 0 & \frac{1}{N-1} \end{bmatrix}$

It converges to zero!
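This can likewise be checked by simulation (an illustrative sketch, not from the slides): with a white-noise input, the variance of both components shrinks roughly like $\sigma^2/N$:

```python
import numpy as np

rng = np.random.default_rng(7)

def ls_whitenoise_estimate(N, b0, b1, sigma, rng):
    """Generic LS estimate of [b0, b1] for a unit-variance white-noise input."""
    u = rng.normal(size=N + 1)                 # u(0), u(1), ..., u(N)
    y = b0 * u[1:] + b1 * u[:-1] + sigma * rng.normal(size=N)
    Phi = np.column_stack([u[1:], u[:-1]])
    return np.linalg.solve(Phi.T @ Phi, Phi.T @ y)

sigma, trials = 1.0, 2000
variances = {}
for N in (20, 1000):
    est = np.array([ls_whitenoise_estimate(N, 1.0, 0.5, sigma, rng)
                    for _ in range(trials)])
    variances[N] = est.var(axis=0)
print(variances)  # roughly sigma^2 / N per component: improves with N
```

Contrasting this with the step-input case illustrates why the excitation condition matters: a persistently exciting input makes $\Phi(N)^T\Phi(N)$ grow in every direction, so the estimation error vanishes as data accumulate.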
Next Lecture / Assignments:

Next lecture (April 13, 10:00–12:00, in A206Tekn): Recursive Least Squares, modifications, continuous-time systems.

Solve problems 2.5 and 2.6.