System Identification
Satish Nagarajaiah, Prof., CEVE & MEMS, Rice
July 16, 2010
Outline I

4 Recursive Least Squares Estimation
   Derivation
   Statistical Analysis of the RLS Estimator
5 Weighted Least Squares
6 Discrete-Time Kalman Filter
   Features
Outline II

   Derivations
7 State Space Identification
   Weighting Sequence Model
   State-space Observer Model
   Linear Difference Model
   ARX Model
   Pulse Response Model
   Pseudo-Inverse
   Physical Interpretation of SVD
   Approximation Problem
   Basic Equations
   Condition Number
   Eigen Realization Algorithm
Definition
System identification is the process of developing or improving the mathematical representation of a physical system using experimental data. There are three types of identification techniques: modal parameter identification, structural-model parameter identification (both primarily used in structural engineering), and control-model identification (primarily used in mechanical and aerospace systems). The primary objective of system identification is to determine the system matrices A, B, C, D from measured data, which is often contaminated by noise. The modal parameters are then computed from the system matrices.
Objective
The main aim of system identification is to determine a mathematical model of a physical/dynamic system from observed data. Six key steps are involved in system identification: (1) develop an approximate analytical model of the structure; (2) establish the levels of structural dynamic response likely to occur, using the analytical model and the characteristics of anticipated excitation sources; (3) determine the instrumentation needed to sense the motion with prescribed accuracy and spatial resolution; (4) perform experiments and record data; (5) apply system identification techniques to identify the dynamic characteristics, such as system matrices, modal parameters, and excitation and input/output noise characteristics; and (6) refine/update the analytical model based on the identified results.
Parametric and Non-parametric Models
Parametric Models: Choose the model structure and estimate the model parameters for the best fit.

Non-parametric Models: The model structure is not specified a priori but is instead determined from data. Non-parametric techniques rely on the cross-correlation function (CCF) Ryu, the auto-correlation function (ACF) Ruu, and the spectral density functions Syu / Suu (Fourier transforms of the CCF and ACF) to estimate the transfer function/frequency response function of the model.
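The spectral-density route to the FRF can be sketched numerically. The following is a minimal numpy example, assuming a hypothetical first-order system y(k) = 0.9 y(k−1) + u(k) (so the true FRF is known in closed form); Syu and Suu are estimated by averaging periodograms over segments, and H = Syu / Suu.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-order system: y(k) = 0.9 y(k-1) + u(k),
# whose true FRF is H(e^{jw}) = 1 / (1 - 0.9 e^{-jw}).
nseg, nfft = 400, 256
u = rng.standard_normal(nseg * nfft)
y = np.empty_like(u)
acc = 0.0
for k in range(u.size):
    acc = 0.9 * acc + u[k]
    y[k] = acc

# Segment the records and average the spectral densities:
# Suu ~ E[|U|^2], Syu ~ E[Y U*], then H = Syu / Suu.
U = np.fft.rfft(u.reshape(nseg, nfft), axis=1)
Y = np.fft.rfft(y.reshape(nseg, nfft), axis=1)
Suu = np.mean(np.abs(U) ** 2, axis=0)
Syu = np.mean(Y * np.conj(U), axis=0)
H_est = Syu / Suu

w = 2 * np.pi * np.arange(nfft // 2 + 1) / nfft
H_true = 1.0 / (1.0 - 0.9 * np.exp(-1j * w))
```

The segment averaging trades frequency resolution for variance reduction; a small bias remains because the response to one segment's input leaks into the next segment.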
Non-parametric Models

Frequency Response Function (FRF):

Y(jω) = H(jω) U(jω)

FRF (non-parametric estimate):

H(jω) = Syu(jω) / Suu(jω)

Impulse Response Function (IRF):

y(t) = ∫₀ᵗ h(t − τ) u(τ) dτ

[Note: the IRF and FRF form a Fourier transform pair.]
Parametric Models
TF Models (SISO):

Y(s) = [b_m s^m + b_{m-1} s^{m-1} + · · · + b_1 s + b_0] / [s^n + a_{n-1} s^{n-1} + · · · + a_1 s + a_0] · U(s)

In this model structure, we choose m and n and estimate the parameters b_0, · · · , b_m, a_0, · · · , a_{n-1}.

Time-domain Models (SISO):

d^n y/dt^n + a_{n-1} d^{n-1} y/dt^{n-1} + · · · + a_1 dy/dt + a_0 y(t) = b_m d^m u/dt^m + b_{m-1} d^{m-1} u/dt^{m-1} + · · · + b_1 du/dt + b_0 u(t)
The parameters n, r, m are given and the model parameters A, B, C, D are to be estimated.
Parametric Models
Transfer Function Matrix Models (MIMO):

        [ H_11(s)  · · ·  H_1m(s) ]
Y(s) =  [    ⋮       ⋱       ⋮    ] U(s)
        [ H_r1(s)  · · ·  H_rm(s) ]

which can be written as:

Y(s) = H(s) U(s) = [C (sI − A)^{-1} B + D] U(s)
Parametric Models
System identification can be grouped into frequency-domain identification methods and time-domain identification methods. We will focus mainly on discrete-time-domain model identification and state-space identification:

1 Discrete Time-domain Models (SISO)
2 State Space Models (MIMO)
Least Squares Estimation
Consider a second-order discrete model of the form,

y(k) + a_1 y(k − 1) + a_2 y(k − 2) = a_3 u(k) + a_4 u(k − 1)

The objective is to estimate the parameter vector p^T = [a_1 a_2 a_3 a_4] using the vector of input and output measurements. Making the substitution,

h^T = [−y(k − 1)  −y(k − 2)  u(k)  u(k − 1)]

we can write y(k) = h^T p
Least Squares Estimation
Let us say we have k sets of measurements. Then we can write the above equation in matrix form as,

[ y_1 ]   [ h_11  h_12  · · ·  h_1n ] [ p_1 ]
[ y_2 ] = [ h_21   ⋱                ] [ p_2 ]
[  ⋮  ]   [  ⋮           ⋱          ] [  ⋮  ]
[ y_k ]   [ h_k1  · · ·        h_kn ] [ p_n ]

y_i = h_i^T p,  i = 1, 2, · · · , k    (1)

In matrix form, we can write,

y = H^T p
Least Squares Estimation
In least-squares estimation, we minimize the following performance index:

J = [y − H^T p]^T [y − H^T p] = y^T y − y^T H^T p − p^T H y + p^T H H^T p    (2)

Minimizing the performance index in eq. 2 with respect to p,

∂J/∂p = ∂/∂p [y^T y − y^T H^T p − p^T H y + p^T H H^T p] = −H y − H y + 2 H H^T p = 0

which results in the expression for the parameter estimate:

p = (H H^T)^{-1} H y    (3)
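The batch estimate of eq. 3 is easy to verify numerically. Below is a minimal numpy sketch, assuming hypothetical true parameters for the second-order model introduced earlier; the regressors h_k are stacked as the columns of H and p is recovered from noise-free data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical true parameters of
# y(k) + a1 y(k-1) + a2 y(k-2) = a3 u(k) + a4 u(k-1)
a1, a2, a3, a4 = -1.2, 0.32, 1.0, 0.5
p_true = np.array([a1, a2, a3, a4])

N = 200
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = -a1 * y[k - 1] - a2 * y[k - 2] + a3 * u[k] + a4 * u[k - 1]

# Stack the regressors h_k^T = [-y(k-1), -y(k-2), u(k), u(k-1)]
# as the columns of H, so that y = H^T p (noise-free here).
ks = np.arange(2, N)
H = np.vstack([-y[ks - 1], -y[ks - 2], u[ks], u[ks - 1]])   # 4 x (N-2)
yk = y[ks]

# Least-squares estimate, eq. (3): p = (H H^T)^{-1} H y
p_hat = np.linalg.solve(H @ H.T, H @ yk)
```

With noise-free data the estimate matches the true parameters to machine precision; with noisy data it is only unbiased, as discussed in the statistical analysis below.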
Derivation
Limitations of Least Squares Estimation

The parameter update law in eq. 3 operates in batch mode: for every (k+1)-th measurement, the matrix inverse (H H^T)^{-1} needs to be re-calculated. This is a cumbersome operation, and it is best avoided.

In a recursive estimator, there is no need to store all the previous data to compute the present estimate. Let us use the following simplified notation:

P_k = (H H^T)^{-1}  and  B_k = H y
Derivation
Hence, the parameter update law in eq. 3 can be written as:

p_k = P_k B_k

In the recursive estimator, the matrices P_k, B_k are updated as follows:

B_{k+1} = B_k + h_{k+1} y_{k+1}    (4)

In order to update P_k, the following update law is used:

P_{k+1} = P_k − (P_k h_{k+1} h_{k+1}^T P_k) / (1 + h_{k+1}^T P_k h_{k+1})    (5)
Derivation
Note that the update for matrix P_{k+1} does not involve matrix inversion. The updates for P_k, B_k can then be used to update the parameter vector as follows:

p_{k+1} = P_{k+1} B_{k+1},  p_k = P_k B_k    (6)

Combining these equations,

p_{k+1} − p_k = P_{k+1} B_{k+1} − P_k B_k

Substituting eqs. 4 and 5 in the above equation, we get:

p_{k+1} = p_k + P_k h_{k+1} (1 + h_{k+1}^T P_k h_{k+1})^{-1} (y_{k+1} − h_{k+1}^T p_k)    (7)
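The recursions (5) and (7) can be checked against the batch solution. The following numpy sketch, under the assumption of a hypothetical 3-parameter noise-free regression, initializes the recursion from the first n measurements in batch form and then applies the updates; by the matrix inversion lemma the result should coincide with the batch estimate of eq. 3.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical regression y_i = h_i^T p with n = 3 parameters.
n, N = 3, 100
p_true = np.array([0.7, -0.3, 1.5])
h = rng.standard_normal((N, n))
y = h @ p_true

# Initialize from the first n measurements in batch form,
# then apply the recursions (5) and (7).
H0 = h[:n].T                        # n x n
P = np.linalg.inv(H0 @ H0.T)
p = P @ H0 @ y[:n]
for k in range(n, N):
    hk = h[k]
    denom = 1.0 + hk @ P @ hk
    p = p + P @ hk * (y[k] - hk @ p) / denom      # eq. (7)
    P = P - np.outer(P @ hk, hk @ P) / denom      # eq. (5)

# Batch solution, eq. (3), for comparison.
H = h.T
p_batch = np.linalg.solve(H @ H.T, H @ y)
```

Note that eq. (7) uses the old P_k, so the parameter update is applied before the covariance update in the loop.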
Statistical Analysis
Consider the scalar form of the equation again:

y_i = h_i^T p,  i = 1, 2, · · · , k

In the presence of measurement noise, it becomes,

y_i = h_i^T p + n_i,  i = 1, 2, · · · , k

with the following assumptions:
1 The average value of the noise is zero, that is, E(n_i) = 0, where E is the expectation operator.
2 Noise samples are uncorrelated, that is, E(n_i n_j) = E(n_i) E(n_j) = 0, i ≠ j.
3 E(n_i^2) = r, the variance of the noise.
Statistical Analysis
Recalling eq. 6:

p_k = P_k B_k    (8)

This can be expanded as:

p_k = [ Σ_{i=1}^{k} h_i h_i^T ]^{-1} Σ_{i=1}^{k} h_i y_i    (9)

Taking E() on both sides, we get,

E(p_k) = E(p) = p    (10)

This makes it an unbiased estimator; that is, the expected value of the estimate is equal to that of the quantity being estimated.
Statistical Analysis
Now, let us look at the covariance of the error,

Cov = E[(p − p_k)(p − p_k)^T]    (11)

which upon simplification gives

Cov = P_k r    (12)

It can be shown that P_k decreases as k increases. Hence, as more measurements become available, the error reduces and the estimate converges to the true value of p. This is therefore known as a consistent estimator.
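Both properties — zero bias (eq. 10) and error covariance P_k r (eq. 12) — can be checked by Monte Carlo. This is a minimal numpy sketch with a hypothetical 2-parameter problem and fixed regressors; only the noise is resampled across trials.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fixed regressors, hypothetical 2-parameter problem.
n, k, r = 2, 50, 0.5 ** 2            # r = noise variance
p_true = np.array([1.0, -2.0])
H = rng.standard_normal((n, k))      # columns are the h_i
Pk = np.linalg.inv(H @ H.T)

# Monte Carlo over noise realizations:
# p_hat = Pk H (H^T p + noise) = p + Pk H noise
trials = 4000
noise = np.sqrt(r) * rng.standard_normal((trials, k))
p_hats = (Pk @ H @ ((H.T @ p_true)[:, None] + noise.T)).T   # trials x n

bias = p_hats.mean(axis=0) - p_true          # should be ~0 (eq. 10)
emp_cov = np.cov(p_hats.T)                   # should be ~ r * Pk (eq. 12)
```

The empirical covariance approaches r P_k only statistically, so the comparison uses loose tolerances.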
Weighted Least Squares
Extension of the RLS Method

The scalar formulation can be extended to a MIMO (multi-input, multi-output) system.

A weighting matrix is introduced to emphasize the relative importance of one measurement over another.

Consider eq. 1. Extending this to the MIMO case and including measurement noise,

y_i = H_i^T p + n_i,  i = 1, 2, · · · , k

where y_i is l × 1, H_i is n × l, p is n × 1, and n_i is l × 1.
Weighted Least Squares
The performance index J is defined by,

J = Σ_{i=1}^{k} (y_i − H_i^T p)^T (y_i − H_i^T p)

Minimizing J with respect to p, we get,

p = ( Σ_{i=1}^{k} H_i H_i^T )^{-1} Σ_{i=1}^{k} H_i y_i

The above equation is a batch estimator. The recursive LS estimator can be derived by proceeding the same way as for the scalar case. Defining,

P_k = ( Σ_{i=1}^{k} H_i H_i^T )^{-1},  B_k = Σ_{i=1}^{k} H_i y_i    (13)
Weighted Least Squares
The parameter update rule is given by,

p_{k+1} = p_k + P_{k+1} H_{k+1} (y_{k+1} − H_{k+1}^T p_k)

Now, if we introduce a weighting matrix W into the performance index, we get

J = Σ_{i=1}^{k} (y_i − H_i^T p)^T W (y_i − H_i^T p)    (14)

The minimization of eq. 14 leads to

p = ( Σ_{i=1}^{k} H_i W H_i^T )^{-1} Σ_{i=1}^{k} H_i W y_i
Weighted Least Squares
Once again, defining

P_k = ( Σ_{i=1}^{k} H_i W H_i^T )^{-1},  B_k = Σ_{i=1}^{k} H_i W y_i    (15)

the recursive relationships are given by,

p_{k+1} = p_k + P_{k+1} H_{k+1} W (y_{k+1} − H_{k+1}^T p_k)    (16)

and

P_{k+1} = P_k − P_k H_{k+1} [W^{-1} + H_{k+1}^T P_k H_{k+1}]^{-1} H_{k+1}^T P_k    (17)

Assuming that the noise samples are uncorrelated, i.e.,

E[n_i n_j^T] = 0 for i ≠ j,  and  E[n_i n_i^T] = R

it can be shown that choosing W = R^{-1} produces the minimum-covariance estimator; in other words, the estimation error is minimized.
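The weighted recursions (16) and (17) can be verified against the batch weighted solution. The following numpy sketch assumes hypothetical MIMO sizes (n = 2 parameters, l = 2 outputs per step) and an arbitrary SPD weighting matrix; since eq. 16 uses P_{k+1}, the covariance update must be applied before the parameter update.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical MIMO sizes: n = 2 parameters, l = 2 outputs per step.
n, l, N = 2, 2, 30
p_true = np.array([0.4, -1.1])
W = np.array([[2.0, 0.3], [0.3, 1.0]])     # weighting matrix (SPD)
Hs = rng.standard_normal((N, n, l))
ys = np.array([Hs[i].T @ p_true for i in range(N)])   # noise-free

# Batch WLS: p = (sum H_i W H_i^T)^{-1} sum H_i W y_i
S = sum(Hs[i] @ W @ Hs[i].T for i in range(N))
b = sum(Hs[i] @ W @ ys[i] for i in range(N))
p_batch = np.linalg.solve(S, b)

# Recursive WLS, eqs. (16)-(17), initialized from the first step.
P = np.linalg.inv(Hs[0] @ W @ Hs[0].T)
p = P @ Hs[0] @ W @ ys[0]
Winv = np.linalg.inv(W)
for i in range(1, N):
    Hk = Hs[i]
    P = P - P @ Hk @ np.linalg.inv(Winv + Hk.T @ P @ Hk) @ Hk.T @ P   # (17)
    p = p + P @ Hk @ W @ (ys[i] - Hk.T @ p)                            # (16)
```

With noise-free data both the batch and recursive estimates recover p exactly (up to round-off).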
Discrete-Time Kalman Filter
The Kalman filter is the most widely used state estimation tool for control and identification.

LS, RLS, and WLS deal with the estimation of system parameters.

The Kalman filter deals with the estimation of the states of a dynamical system.
Discrete-Time Kalman Filter
Consider the linear discrete-time system given by,

x_k = A x_{k−1} + G w_{k−1}
y_k = H^T x_k + n_k

Note: the parameter vector p is replaced by x, consistent with the terminology we have adopted for representing states.
w_k is n × 1 process noise with E() = 0 and Cov() = Q.
x_k is the n × 1 state vector.
A is the state matrix, assumed to be known.
n_k is an l × 1 vector of output noise with E() = 0 and Cov() = R.
y_k is the l × 1 vector of measurements.
G is n × n and H is n × l; both are assumed to be known.

The objective is to estimate the states x_k based on k observations of y. A recursive filter is used for this purpose; this recursive filter is called the Kalman filter.
Fundamental difference between WLS for the dynamic and non-dynamic cases
In the non-dynamic case, at time t_{k−1}, an estimate x_{k−1} is produced and its covariance estimate is updated. These quantities do not change between t_{k−1} and t_k, because x_{k−1} = x_k.

In the dynamic case, x_{k−1} ≠ x_k, since the state evolves between time-steps k − 1 and k. That means a prediction is needed of what happens to the state estimates and the covariance estimates between measurements.

Recall the WLS estimator in eqs. 16 and 17:

x_k = x_{k−1} + P_k H_k W (y_k − H_k^T x_{k−1})

P_k = P_{k−1} − P_{k−1} H_k [W^{-1} + H_k^T P_{k−1} H_k]^{-1} H_k^T P_{k−1}

In this estimator, we cannot replace x_{k−1} with x_{k−1|k−1}, as x_k is changing between t_{k−1} and t_k; the same applies to P_{k−1}.
Discrete-Time Kalman Filter
Consider the state estimate equation. If we know the state estimate based on k − 1 measurements, x_{k−1|k−1}, and the state matrix A, then we can predict the quantity x_{k|k−1} using the relationship,

x_{k|k−1} = A x_{k−1|k−1}

We can write the state estimate equation as,

x_{k|k} = x_{k|k−1} + P_{k|k} H R^{-1} (y_k − H^T x_{k|k−1})    (18)

The above equation assumes that the weighting matrix W = R^{-1}. Similarly, it can be shown that the covariance estimate is,

P_{k|k} = P_{k|k−1} − P_{k|k−1} H [R + H^T P_{k|k−1} H]^{-1} H^T P_{k|k−1}
Discrete-Time Kalman Filter
Note that the matrix H is constant. The quantity P_{k|k−1} can be calculated as,

P_{k|k−1} = E([x_k − x_{k|k−1}][x_k − x_{k|k−1}]^T) = A P_{k−1|k−1} A^T + G Q G^T
In summary, the discrete-time Kalman filter alternates between the prediction step and the measurement-update step above.
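The predict/update recursion can be sketched in a few lines of numpy. The example below assumes a hypothetical constant-velocity model with a position measurement (A, G, Q, H, R are illustrative choices, not from the notes); the update is written in the WLS form used above, with gain P_{k|k} H R^{-1}, which is algebraically the same as the more common gain P_{k|k−1} H (R + H^T P_{k|k−1} H)^{-1}.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical constant-velocity model, position measured:
A = np.array([[1.0, 1.0], [0.0, 1.0]])
G = np.eye(2)
Q = 0.01 * np.eye(2)
H = np.array([[1.0], [0.0]])       # n x l, so y_k = H^T x_k + n_k
R = np.array([[0.25]])

x_true = np.array([0.0, 0.1])
x_est = np.zeros(2)
P = np.eye(2)

for _ in range(50):
    # Simulate the system and a noisy measurement.
    x_true = A @ x_true + G @ (np.sqrt(0.01) * rng.standard_normal(2))
    y = H.T @ x_true + 0.5 * rng.standard_normal(1)

    # Predict between measurements.
    x_pred = A @ x_est
    P_pred = A @ P @ A.T + G @ Q @ G.T

    # Measurement update, in the WLS form used above (W = R^{-1}).
    P = P_pred - P_pred @ H @ np.linalg.inv(R + H.T @ P_pred @ H) @ H.T @ P_pred
    x_est = x_pred + P @ H @ np.linalg.inv(R) @ (y - H.T @ x_pred)
```

The equivalence of the two gain expressions follows from the matrix inversion lemma applied to the covariance update.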
Eq. 22 is known as the weighting-sequence model; it does not involve any state measurements, and depends only on inputs.

The output y(k) is a weighted sum of the input values u(0), u(1), · · · , u(k).

The weights CB, CAB, CA^2B, · · · are called Markov parameters.

Markov parameters are invariant under state transformations.

Since the Markov parameters are the pulse responses of the system, they must be unique for a given system.

Note that the input-output description in eq. 22 is valid only under zero initial conditions. It is not applicable if transient effects are present in the system.

In this model, there is no need to consider the exact nature of the state equations.
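The claim that the Markov parameters are the pulse responses is easy to check. Below is a minimal numpy sketch for a hypothetical 2-state SISO model (A, B, C, D chosen for illustration, including a direct-feedthrough term D): the sequence D, CB, CAB, CA²B, … is compared against the simulated response to a unit pulse from zero initial conditions.

```python
import numpy as np

# Hypothetical discrete state-space model (n = 2, SISO).
A = np.array([[0.9, 0.2], [0.0, 0.5]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, -1.0]])
D = np.array([[0.3]])

# Markov parameters: D, CB, CAB, CA^2 B, ...
nmp = 8
markov = [D]
Ak = np.eye(2)
for _ in range(nmp - 1):
    markov.append(C @ Ak @ B)
    Ak = A @ Ak

# Pulse response from zero initial conditions: u = [1, 0, 0, ...]
x = np.zeros((2, 1))
pulse = []
for k in range(nmp):
    u = 1.0 if k == 0 else 0.0
    pulse.append(C @ x + D * u)
    x = A @ x + B * u
```

Because the Markov parameters depend on A, B, C, D only through products such as C A^{k−1} B, they are unchanged by any similarity transformation of the state.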
The models discussed so far (weighting sequence, ARX, etc.) are related through the system matrices A, B, C, and D. If these matrices are known, all of the models describing the input-output relationships can be derived.

The system Markov parameters and the observer Markov parameters play an important role in system identification using input-output descriptions.

Starting from the initial conditions x(0), we get:

where V^+ is called the pseudo-inverse of the matrix. The matrix V becomes square in the case of a single-input system. ARX models can be expressed in this form.
which means that the output at any step k, y(k), can be expressed in terms of the p previous output and input measurements, i.e., y(k − 1), · · · , y(k − p) and u(k − 1), · · · , u(k − p).
Say A_{m×n} X_{n×1} = b_{m×1} ⇒ X = A^+ b: m equations in n unknowns.

It has a unique (consistent) solution if: Rank[A, b] = Rank(A) = n.
It has an infinite number of solutions (fewer linearly independent equations than unknowns) if: Rank[A, b] = Rank(A) < n.
It has no solution (inconsistent) if: Rank[A, b] > Rank(A).

Note that [A, b] is an augmented matrix. Rank is the number of linearly independent columns or rows.

Due to the presence of noise, system identification mostly produces a set of inconsistent equations. These can be dealt with using what is known as the Singular Value Decomposition (SVD).
The nonzero singular values are unique, but U and V are not. U and V are square matrices. The columns of U are called the left singular vectors and the columns of V the right singular vectors of A. Since U and V are orthonormal matrices, they obey the relationships,

U^T U = I_{m×m} = U^{-1} U
V^T V = I_{n×n} = V^{-1} V    (36)

From eq. 35, if A = U Σ V^T, then

Σ = U^T A V

Σ_{m×n} = [ Σ_{k×k}        0_{k×(n−k)}
            0_{(m−k)×k}    0_{(m−k)×(n−k)} ]
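These orthonormality and diagonalization properties can be checked directly with a numerical SVD. A minimal numpy sketch, assuming an arbitrary random 4 × 3 matrix:

```python
import numpy as np

rng = np.random.default_rng(6)

# An arbitrary 4 x 3 matrix; request the full (square) factors.
m, n = 4, 3
Amat = rng.standard_normal((m, n))
U, s, Vt = np.linalg.svd(Amat)          # U is m x m, Vt is n x n

# Orthonormality: U^T U = I_m, V^T V = I_n
orth_u = U.T @ U
orth_v = Vt @ Vt.T

# Sigma = U^T A V recovers the rectangular diagonal matrix of
# singular values (top-left block diagonal, remainder zero).
Sigma = U.T @ Amat @ Vt.T
```

Note that `np.linalg.svd` returns V^T rather than V, so V is recovered as `Vt.T`.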
The SVD is closely related to the eigen-solutions of the symmetric positive semi-definite matrices A A^T and A^T A.

A = U Σ V^T ⇒ A^T = V Σ^T U^T

Hence, the non-zero singular values of A are the positive square roots of the non-zero eigenvalues of A^T A or A A^T. The columns of U are the eigenvectors corresponding to the eigenvalues of A A^T, and the columns of V are the eigenvectors corresponding to the eigenvalues of A^T A. If A consists of complex elements, then the transpose is replaced by the complex-conjugate transpose. The definitions of condition number and rank are closely related to the singular values.
Rank: The rank of a matrix is equal to the number of non-zero singular values. This is the most reliable method of rank determination. Typically, a rank tolerance on the order of the square root of the machine precision is chosen, and the singular values above it are counted to determine the rank.

To calculate the pseudo-inverse of matrix A, denoted by A^+, using the SVD,

A^+ = V_1 Σ_1^{-1} U_1^T = V_1 diag[σ_1^{-1}, σ_2^{-1}, · · · , σ_k^{-1}] U_1^T    (37)

where,

A = U Σ V^T = [U_1  U_2] [ Σ_1  0 ] [ V_1^T ]
                         [  0   0 ] [ V_2^T ]

and

A = U_1 Σ_1 V_1^T
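Equation 37 can be implemented directly. The numpy sketch below assumes an arbitrary rank-deficient matrix (rank 2 by construction), determines the numerical rank from the singular values, and forms A^+ = V_1 Σ_1^{-1} U_1^T; the result should agree with `np.linalg.pinv`, which uses the same construction.

```python
import numpy as np

rng = np.random.default_rng(7)

# A rank-deficient 5 x 4 matrix (rank 2 by construction).
Amat = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))

U, s, Vt = np.linalg.svd(Amat)
tol = max(Amat.shape) * np.finfo(float).eps * s[0]
k = int(np.sum(s > tol))                 # numerical rank

# A+ = V1 diag(1/sigma_1, ..., 1/sigma_k) U1^T, eq. (37)
U1 = U[:, :k]
V1 = Vt[:k, :].T
A_plus = V1 @ np.diag(1.0 / s[:k]) @ U1.T
```

The tolerance used here scales the machine epsilon by the largest singular value and matrix dimension, which is a common practical choice for rank determination.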
If α ≥ n and β ≥ n, the matrix H(k − 1) is of rank n. Substituting the Markov parameters from eq. 38 into H(k − 1), we can factorize the Hankel matrix as:

H(k − 1) = P_α A^{k−1} Q_β    (39)

ERA starts with the SVD of the Hankel matrix

H(0) = R Σ S^T    (40)

where the columns of R and S are orthonormal and Σ is

Σ = [ Σ_n  0 ]
    [  0   0 ]

in which the 0's are zero matrices of appropriate dimensions, Σ_n = diag[σ_1, σ_2, · · · , σ_n], and σ_1 ≥ σ_2 ≥ · · · ≥ σ_n ≥ 0.
The material presented in this short course is a condensed version of lecture notes from a course taught at Rice University.
References
1. Jer-Nan Juang, Applied System Identification, Prentice Hall.
2. Jer-Nan Juang and M. Q. Phan, Identification and Control of Mechanical Systems, Cambridge University Press.
3. DeRusso et al., State Variables for Engineers, Wiley-Interscience.