Least-Squares Estimation Robert Stengel
Optimal Control and Estimation, MAE 546, Princeton University, 2018
• Estimating unknown constants from redundant measurements
  – Least-squares
  – Weighted least-squares
• Recursive weighted least-squares estimator
Copyright 2018 by Robert Stengel. All rights reserved. For educational use only.
http://www.princeton.edu/~stengel/MAE3546.html
http://www.princeton.edu/~stengel/OptConEst.html
Perfect Measurement of a Constant Vector
• Given
  – Measurements, y, of a constant vector, x
• Estimate x
• Assume that output, y, is a perfect measurement and H is invertible

  y = H x

  y: (n x 1) output vector
  H: (n x n) output matrix
  x: (n x 1) vector to be estimated

• Estimate is based on inverse transformation

  x̂ = H^-1 y
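As a quick numerical check of the inverse transformation, a minimal sketch in Python (the values of H and x here are illustrative assumptions, not from the slides):

```python
import numpy as np

# Perfect measurement: y = H x with H square (n x n) and invertible,
# so the estimate x_hat = H^-1 y recovers x exactly.
H = np.array([[2.0, 1.0],
              [1.0, 3.0]])        # illustrative invertible output matrix
x = np.array([1.0, 2.0])          # constant vector to be estimated
y = H @ x                         # noise-free output
x_hat = np.linalg.solve(H, y)     # x_hat = H^-1 y (solve, not explicit inverse)
```

Using `solve` rather than forming H^-1 explicitly is the standard numerically preferable way to apply an inverse.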
Imperfect Measurement of a Constant Vector
• Given
  – “Noisy” measurements, z, of a constant vector, x
• Effects of error can be reduced if measurement is redundant
• Noise-free output, y

  y = H x

• Measurement of output with error, z

  z = y + n = H x + n

  z: (r x 1) measurement vector
  n: (r x 1) error vector
  y: (r x 1) output vector
  H: (r x n) output matrix, r > n
  x: (n x 1) vector to be estimated
Weighted Least-Squares Estimate of a Constant Vector
Necessary condition for a minimum:

  x̂^T H^T S^-1 H − z^T S^-1 H = 0
  x̂^T H^T S^-1 H = z^T S^-1 H

The weighted left pseudo-inverse provides the solution:

  x̂ = (H^T S^-1 H)^-1 H^T S^-1 z
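A minimal numerical sketch of the weighted pseudo-inverse (the r = 3, n = 1 setup and all values are assumptions for illustration):

```python
import numpy as np

# Weighted least-squares: x_hat = (H^T S^-1 H)^-1 H^T S^-1 z,
# weighting each redundant measurement by its inverse error covariance.
H = np.array([[1.0], [1.0], [1.0]])   # r = 3 measurements of a scalar (n = 1)
S = np.diag([0.1, 0.4, 0.2])          # assumed measurement error covariance
z = np.array([5.1, 4.6, 5.3])         # noisy measurements of the constant
S_inv = np.linalg.inv(S)
x_hat = np.linalg.solve(H.T @ S_inv @ H, H.T @ S_inv @ z)
```

The low-variance measurements (0.1 and 0.2) dominate the estimate, as the weighting intends.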
Return of the Jelly Beans
Optimal estimate of average jelly bean weight

Error-weighting matrix:

  S^-1 ≜ A = diag(a11, a22, ..., arr)

With H = [1 1 ... 1]^T, the weighted least-squares estimate x̂ = (H^T S^-1 H)^-1 H^T S^-1 z becomes

  x̂ = ( [1 1 ... 1] A [1 1 ... 1]^T )^-1 [1 1 ... 1] A [z1 z2 ... zr]^T

Weighted Estimate of x (scalar):

  x̂ = ( Σ(i=1 to r) aii zi ) / ( Σ(i=1 to r) aii )
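For the jelly-bean case the matrix formula collapses to a weighted average, which a short sketch can confirm (weights and measurements are made-up values):

```python
import numpy as np

a = np.array([4.0, 1.0, 2.0])        # diagonal weights a_ii (inverse variances)
z = np.array([10.1, 9.4, 10.3])      # r = 3 scalar weight measurements
# Scalar weighted least-squares estimate: sum(a_ii z_i) / sum(a_ii)
x_hat = np.sum(a * z) / np.sum(a)

# Same result from the matrix form, with H a column of ones and S^-1 = A:
H = np.ones((3, 1))
A = np.diag(a)
x_hat_matrix = np.linalg.solve(H.T @ A @ H, H.T @ A @ z)[0]
```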
Weighted Least Squares (“Kriging”) Estimates
(Wiener–Kolmogorov interpolation between measurement points)

[Figure: interpolated curve y(x) through measurement points]

• Curve, y(x), between measurement points, xi, is the mean of a stationary process with covariance derived from the measurements, y(xi), or other known source
• Curve, y(x), is a distance-weighted linear combination of all of the points
a) Normalize the cost function according to expected measurement error, SA:

  J = (1/2) ε^T SA^-1 ε = (1/2) (z − H x̂)^T SA^-1 (z − H x̂)

b) Normalize the cost function according to expected measurement residual, SB:

  J = (1/2) ε^T SB^-1 ε = (1/2) (z − H x̂)^T SB^-1 (z − H x̂)

  dim(SA) = dim(SB) = (r x r)
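Evaluating the normalized cost for a trial estimate can be sketched as follows (toy numbers; S stands in for either SA or SB):

```python
import numpy as np

# Normalized least-squares cost: J = 1/2 (z - H x_hat)^T S^-1 (z - H x_hat)
H = np.array([[1.0], [1.0], [1.0]])
z = np.array([9.8, 10.4, 10.1])          # noisy measurements
S = np.diag([0.25, 1.0, 0.5])            # assumed (r x r) weighting covariance
x_hat = np.array([10.0])                 # trial estimate
eps = z - H @ x_hat                      # measurement residual
J = 0.5 * eps @ np.linalg.inv(S) @ eps   # residuals with small variance cost more
```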
Measurement Error Covariance, SA

Expected value of the outer product of the measurement error vector:

  SA = E[(z − y)(z − y)^T] = E[(z − H x)(z − H x)^T] = E[n n^T] ≜ R
Measurement Residual Covariance, SB

Expected value of the outer product of the measurement residual vector:

  ε = z − H x̂ = H(x − x̂) + n

  SB = E[ε ε^T] = E[(z − H x̂)(z − H x̂)^T]
     = E[(H(x − x̂) + n)(H(x − x̂) + n)^T]
     = H E[(x − x̂)(x − x̂)^T] H^T + H E[(x − x̂) n^T] + E[n (x − x̂)^T] H^T + E[n n^T]
     ≜ H P H^T + H M + M^T H^T + R

where

  P = E[(x − x̂)(x − x̂)^T],  M = E[(x − x̂) n^T],  R = E[n n^T]

Requires iteration (“adaptation”) of the estimate to find SB
Recursive Least-Squares Estimation
• Prior unweighted and weighted least-squares estimators use a “batch-processing” approach
  – All information is gathered prior to processing
  – All information is processed at once
• Recursive approach
  – Optimal estimate has been made from prior measurement set
  – New measurement set is obtained
  – Optimal estimate is improved by incremental change (or correction) to the prior optimal estimate
Prior Optimal Estimate
Initial measurement set and state
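The incremental-correction idea above can be sketched as a standard recursive weighted least-squares update (a minimal sketch under the usual assumptions; the gain-form equations and variable names are illustrative, not taken from these slides):

```python
import numpy as np

def rwls_update(x_hat, P, z, H, R):
    """One recursive weighted least-squares step: correct the prior
    estimate x_hat (with error covariance P) using a new measurement
    z = H x + n, where n has covariance R."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # estimator gain
    x_new = x_hat + K @ (z - H @ x_hat)           # incremental correction
    P_new = (np.eye(len(x_hat)) - K @ H) @ P      # reduced uncertainty
    return x_new, P_new

# Scalar example: two equally weighted measurements of a constant.
x_hat, P = np.array([10.0]), np.array([[1.0]])    # prior from first measurement
z2, H, R = np.array([12.0]), np.array([[1.0]]), np.array([[1.0]])
x_hat, P = rwls_update(x_hat, P, z2, H, R)        # x_hat -> 11.0, P -> 0.5
```

With equal weights the update reproduces the simple average of the two measurements, and the covariance halves, which matches the batch result.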