Page 1: Jay H Lee_MPC Lecture Notes

A Lecture on Model Predictive Control

Jay H. Lee

School of Chemical and Biomolecular Engineering, Center for Process Systems Engineering

Georgia Inst. of Technology

Prepared for Pan American Advanced Studies Institute Program on Process Systems Engineering

Page 2: Jay H Lee_MPC Lecture Notes

Schedule

• Lecture 1: Introduction to MPC
• Lecture 2: Details of MPC Algorithm and Theory
• Lecture 3: Linear Model Identification

Page 3: Jay H Lee_MPC Lecture Notes

Lecture 1

Introduction to MPC

- Motivation
- History and status of industrial use of MPC
- Overview of commercial packages

Page 4: Jay H Lee_MPC Lecture Notes

Key Elements of MPC

• Formulation of the control problem as a (deterministic) optimization problem

• On-line optimization

• Receding horizon implementation (with feedback update)

$$\min_{u_0,\dots,u_{p-1}} \sum_{i=0}^{p-1} \phi(x_i, u_i) \quad \text{s.t.}\quad g(x_i, u_i) \ge 0,\;\; x_{i+1} = F(x_i, u_i)$$

At t = k, set $x_0 = \hat{x}_k$ (estimated current state). Solve the optimization problem numerically. Implement the solution as the current move. Repeat!

$u_0 = \mu(x_0)$? ← a closed-form feedback law would require solving the HJB equation.

Page 5: Jay H Lee_MPC Lecture Notes

Popularity of Quadratic Objective in Control

• Quadratic objective

• Fairly general
  – State regulation
  – Output regulation
  – Setpoint tracking

• Unconstrained linear least squares problem has an analytical solution. (Kalman’s LQR)

• Solution is smooth with respect to the parameters
• Presence of inequality constraints → no analytical solution

$$\sum_{i=0}^{p} x_i^T Q x_i + \sum_{i=0}^{m-1} u_i^T R u_i$$

Linear state-space system model: $x_{k+1} = A x_k + B u_k, \qquad y_k = C x_k$
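For the unconstrained case, Kalman's LQR gives the analytical solution by iterating the discrete Riccati recursion backward in time. Below is a minimal Python sketch; the matrices A, B, Q, R are hypothetical, chosen only for illustration.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])   # hypothetical state-space matrices
B = np.array([[0.0], [0.1]])
Q = np.eye(2)                            # state weight
R = np.array([[0.1]])                    # input weight

P = Q.copy()                             # terminal cost P_p = Q
for _ in range(50):                      # backward Riccati recursion
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u_k = -K x_k
    P = Q + A.T @ P @ (A - B @ K)

print("LQR gain (approaches the infinite-horizon gain):", K)
```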

Page 6: Jay H Lee_MPC Lecture Notes

Classical Process Control

PID Controllers:

$$u = k_c \left( e + \frac{1}{\tau_I} \int_0^t e(t')\, dt' + \tau_D \frac{de}{dt} \right)$$

Other Elements:
Lead / Lag Filters
Switches
Min, Max Selectors
If / Then Logics
Sequence Logics

• Regulation
• Constraint handling
• Local optimization

Ad Hoc Strategies, Heuristics

• Inconsistent performance
• Complex control structure
• Not robust to changes and failures
• Focus on the performance of a local unit

• Model is not explicitly used inside the control algorithm

• No clearly stated objective and constraints

Page 7: Jay H Lee_MPC Lecture Notes

Example 1: Blending System

• Control r_A and r_B
• Control q if possible
• Flowrates of additives are limited

Classical Solution

Page 8: Jay H Lee_MPC Lecture Notes

Model-Based Optimal Control

Set-point r(t) → Controller → Input u(t) → Plant → Output y(t)

Plant model: $\dot{x} = f(x, u), \qquad y = g(x, u)$

$$\min_{u_0, \ldots, u_{p-1}} \left\{ \sum_{i=0}^{p-1} \phi(x_i, u_i) + \bar{\phi}(x_p) \right\}$$

subject to
$$g(x_i, u_i) \ge 0 \quad \text{(path constraints)}$$
$$\bar{g}(x_p) \ge 0 \quad \text{(terminal constraints)}$$
$$\dot{x} = f(x, u) \quad \text{(model constraints)}$$

with $\phi$ the stage-wise cost and $\bar{\phi}$ the terminal cost.

Open-Loop Optimal Control Problem

Page 9: Jay H Lee_MPC Lecture Notes

Model-Based Optimal Control

Set-point r(t) → Controller → Input u(t) → Plant → Output y(t), with measurements fed back to the controller

Plant model: $\dot{x} = f(x, u), \qquad y = g(x, u)$

(The controller solves the same open-loop optimal control problem as on the previous page.)

Open-Loop Optimal Control Problem

• Open-loop optimal solution is not robust

• Must be coupled with on-line state / model parameter update

• Requires on-line solution for each updated problem

• Analytical solution possible only in a few cases (LQ control)

• Computational limitation for numerical solution, esp. back in the ’50s and ’60s

Page 10: Jay H Lee_MPC Lecture Notes

– At time k, solve the open-loop optimal control problem on-line with x0 = x(k)

– Apply the optimal input moves u(k) = u0

– Obtain new measurements, update the state and solve the OLOCP at time k+1 with x0 = x(k+1)

– Continue this at each sample time

Model Predictive Control (Receding Horizon Control)

Implicitly defines the feedback law u(k) = h(x(k))
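As a concrete illustration of this recipe, here is a minimal Python sketch of the receding-horizon loop. The double-integrator model, stage cost, horizon, and bounds are all hypothetical, and scipy's general-purpose minimizer stands in for a dedicated OLOCP solver.

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])     # hypothetical discrete-time model
B = np.array([[0.005], [0.1]])
p = 10                                      # prediction horizon

def olocp_cost(u_seq, x0):
    """Sum of stage costs phi(x_i, u_i) along the predicted trajectory."""
    x, J = x0, 0.0
    for u in u_seq:
        J += x @ x + 0.1 * u**2             # phi(x,u) = x'Qx + u'Ru
        x = A @ x + B.flatten() * u         # x_{i+1} = F(x_i, u_i)
    return J

x = np.array([1.0, 0.0])                    # plant state
for k in range(30):                         # receding-horizon loop
    res = minimize(olocp_cost, np.zeros(p), args=(x,),
                   bounds=[(-1.0, 1.0)] * p)    # input constraints
    u0 = res.x[0]                           # implement only the first move
    x = A @ x + B.flatten() * u0            # plant evolves; re-solve at k+1
```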

Page 11: Jay H Lee_MPC Lecture Notes

Analogy to Chess Playing

My Move

The Opponent’s Move

New State

my move

his move

my move

Opponent(The Plant)

I(The Controller)

Page 12: Jay H Lee_MPC Lecture Notes

Operational Hierarchy Before and After MPC

Unit 1 — Conventional Structure:
• Global Steady-State Optimization (every day)
• Local Steady-State Optimization (every hour)
• Dynamic Constraint Control (every minute)
• Supervisory Dynamic Control (every minute)
• Basic Dynamic Control (every second)
(implemented with high/low select logic, PIDs, lead/lags, and summers)

Unit 2 — MPC Structure:
• Plant-Wide Optimization
• Unit 1 / Unit 2 Local Optimization
• Model Predictive Control (MPC)
• Unit 1 / Unit 2 Distributed Control System (PID) with FC, PC, TC, LC loops

Page 13: Jay H Lee_MPC Lecture Notes

Example: Blending System

• Control r_A and r_B
• Control q if possible
• Flowrates of additives are limited

Classical Solution

MPC: Solve at each time k

$$\min_{u_1(j),\, u_2(j),\, u_3(j),\; j = k, \ldots, k+p-1} \sum_{i=1}^{p} \left[ \big( r_A(k+i|k) - r_A^* \big)^2 + \big( r_B(k+i|k) - r_B^* \big)^2 + \gamma \big( q(k+i|k) - q^* \big)^2 \right]$$

$$\text{s.t.}\quad u_{\min} \le u_i(j) \le u_{\max}, \quad i = 1, 2, 3; \qquad 0 < \gamma < 1$$

p = Size of prediction window

Page 14: Jay H Lee_MPC Lecture Notes

Optimization and Control

Page 15: Jay H Lee_MPC Lecture Notes

An Exemplary Application (1)

Page 16: Jay H Lee_MPC Lecture Notes

An Exemplary Application (2)

Page 17: Jay H Lee_MPC Lecture Notes

Industrial Use of MPC

• Initiated at Shell Oil and other refineries during the late ’70s.
• The most applied advanced control technique in the process industries.
• >4600 worldwide installations + an unknown # of “in-house” installations (result of a survey in 1999).
• Majority of applications (67%) are in refining and petrochemicals. Chemicals and pulp and paper are the next areas.
• Many vendors specializing in the technology
  – Early players: DMCC, Setpoint, Profimatics
  – Today’s players: Aspen Technology, Honeywell, Invensys, ABB
• Models used are predominantly empirical models developed through plant testing.
• Technology is used not only for multivariable control but for most economic operation within constraint boundaries.

Page 18: Jay H Lee_MPC Lecture Notes

MPC Industry Consolidation

(Diagram: MPC industry consolidation. Vendors involved: Aspentech, Adersa, CCI, DMCC, DOT Products, Honeywell, Litwin, Neuralware, Pavilion, Predictive Controls, Profimatics, Setpoint, Treiber Control, MDC Technology.)

• Honeywell ← Profimatics — CPC-V (late 1995), updated (late 2000)
• Aspentech ← Adersa, Neuralware, Setpoint, DMCC
• Invensys ← Litwin / SimSci / Foxboro lineage, plus Predictive Controls (PCL)
• Emerson Electric (FRSI) ← MDC
• ABB ← Treiber
• GE ← CCI; Pavilion and DOT

Page 19: Jay H Lee_MPC Lecture Notes

Linear MPC Vendors and Packages

• Aspentech
  – DMCplus
  – DMCplus-Model
• Honeywell
  – Robust MPC Technology (RMPCT)
• Adersa
  – Predictive Functional Control (PFC)
  – Hierarchical Constraint Control (HIECON)
  – GLIDE (identification package)
• MDC Technology (Emerson)
  – SMOC (licensed from Shell)
  – Delta V Predict
• Predictive Control Limited (Invensys)
  – Connoisseur
• ABB
  – 3d MPC

Page 20: Jay H Lee_MPC Lecture Notes

Result of a Survey in 1999 (Qin and Badgwell)

Page 21: Jay H Lee_MPC Lecture Notes

Nonlinear MPC Vendors and Packages

• Adersa
  – Predictive Functional Control (PFC)
• Aspen Technology
  – Aspen Target
• Continental Controls
  – Multivariable Control (MVC): linear dynamics + static nonlinearity
• DOT Products
  – NOVA Nonlinear Controller (NLC): first-principles model
• Pavilion Technologies
  – Process Perfecter: linear dynamics + static nonlinearity

$$x_{k+1} = A x_k + B_u u_k + B_v v_k, \qquad y_k = g(x_k) = C x_k + NN(x_k)$$

Page 22: Jay H Lee_MPC Lecture Notes

Results of a Survey in 1999 for Nonlinear MPC

Page 23: Jay H Lee_MPC Lecture Notes

Controller Design and Tuning Procedure

1. Determine the relevant CVs, MVs, and DVs
2. Conduct a plant test: vary the MVs and DVs & record the response of the CVs
3. Derive a dynamic model from the plant test data
4. Configure the MPC controller and enter initial tuning parameters
5. Test the controller off-line using closed-loop simulation
6. Download the configured controller to the destination machine and test the model predictions in open-loop mode
7. Commission the controller and refine the tuning as needed

Page 24: Jay H Lee_MPC Lecture Notes

Role of MPC in the Operational Hierarchy

Plant-Wide Optimization — determine the plant-wide optimal operating condition for the day

Local Optimization — make fine adjustments for local units

Multivariable Control (MPC) — take each local unit to the optimal condition fast but smoothly, without violating constraints

Distributed Control System (PID) — FC, PC, TC, LC regulatory loops

Page 25: Jay H Lee_MPC Lecture Notes

Local Optimization

• A separate steady-state optimization determines steady-state targets for the inputs and outputs; RMPCT recently introduced a dynamic optimizer
• A Linear Program (LP) is used for the SS optimization; the LP enforces input and output constraints and determines optimal input and output targets for the thin and fat plant cases
• The RMPCT and PFC controllers allow for both linear and quadratic terms in the SS optimization
• The DMCplus controller solves a sequence of separate QPs to determine optimal input and output targets; CVs are ranked in priority so that the SS control performance of a given CV is never sacrificed to improve the performance of lower-priority CVs; MVs are also ranked in priority order to determine how extra degrees of freedom are used

Page 26: Jay H Lee_MPC Lecture Notes

Dynamic Optimization

At the dynamic optimization stage, all of the controllers can be described (approximately) as minimizing a performance index with up to three terms: an output penalty, an input penalty, and an input rate penalty:

$$J = \sum_{j=1}^{P} \left\| e^y_{k+j} \right\|^2_{Q_j} + \sum_{j=0}^{M-1} \left\| \Delta u_{k+j} \right\|^2_{S_j} + \sum_{j=0}^{M-1} \left\| e^u_{k+j} \right\|^2_{R_j}$$

A vector of inputs $u_M = \left( u_0^T, u_1^T, \ldots, u_{M-1}^T \right)^T$ is found which minimizes J subject to the model
$$x_{k+1} = f(x_k, u_k), \qquad y_{k+1} = g(x_{k+1}) + b_{k+1}$$
and constraints on the inputs and outputs:
$$\underline{u} \le u_k \le \overline{u}, \qquad \underline{\Delta u} \le \Delta u_k \le \overline{\Delta u}, \qquad \underline{y} \le y_k \le \overline{y}$$
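A minimal Python sketch of evaluating this three-term index for given trajectories; the weights and sequences below are purely illustrative.

```python
import numpy as np

def mpc_cost(e_y, du, e_u, Q, S, R):
    """J = sum ||e_y||^2_Q (output) + sum ||du||^2_S (rate) + sum ||e_u||^2_R (input)."""
    J  = sum(v @ Q @ v for v in e_y)   # output penalty, j = 1..P
    J += sum(v @ S @ v for v in du)    # input-rate penalty, j = 0..M-1
    J += sum(v @ R @ v for v in e_u)   # input penalty, j = 0..M-1
    return J

# illustrative scalar example with P = 3, M = 2
Q = np.array([[1.0]]); S = np.array([[0.1]]); R = np.array([[0.01]])
J = mpc_cost([np.array([0.5]), np.array([0.2]), np.array([0.1])],
             [np.array([0.3]), np.array([-0.1])],
             [np.array([1.0]), np.array([0.9])], Q, S, R)
print("J =", J)
```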

Page 27: Jay H Lee_MPC Lecture Notes

Dynamic Optimization

• Most control algorithms use a single quadratic objective

• The HIECON algorithm uses a sequence of separate dynamic optimizations to resolve conflicting control objectives; CV errors are minimized first, followed by MV errors

• Connoisseur allows for a multi-model approach and an adaptive approach

• The RMPCT algorithm defines a funnel and finds the optimal trajectory yr and input uM which minimize the following objective:

subject to a funnel constraint

$$\min_{y^r,\, u_M}\; J = \sum_{j=1}^{P} \left\| y_{k+j} - y^r_{k+j} \right\|^2_{Q} + \left\| u_M - u_{ss} \right\|^2_{S}$$

Page 28: Jay H Lee_MPC Lecture Notes

Output Trajectories

• Move suppression is necessary when reference trajectory is not used

Four ways to shape the quadratic penalty on future outputs (past | future in each sketch):

• Setpoint — quadratic penalty around a fixed target (Aspen Tech’s DMC)
• Zone — soft constraint, “zone control”
• Reference trajectory — quadratic penalty around a trajectory toward the setpoint (IDCOM, Adersa’s)
• Funnel — quadratic penalty outside a narrowing band (Honeywell’s RMPCT)

Page 29: Jay H Lee_MPC Lecture Notes

Output Horizon

(Figure: finite output horizon — the prediction horizon P extends into the future, with coincidence points at which the predicted output is penalized.)

Page 30: Jay H Lee_MPC Lecture Notes

Input Parameterization

• Multiple moves (with blocking) over a control horizon
• Single move (extreme blocking)
• Basis functions (parameterized input profile)

Page 31: Jay H Lee_MPC Lecture Notes

Process Model Types

Model Type                                     Origin         Linear/Nonlinear   Stable/Unstable
Differential Equations                         physics        L, NL              S, U
State-Space                                    physics, data  L, NL              S, U
Laplace Transfer Function                      physics, data  L                  S, U
ARMAX/NARMAX                                   data           L, NL              S, U
Convolution (Finite Impulse or Step Response)  data           L                  S
Other (Polynomial, Neural Net)                 data           L, NL              S, U

Page 32: Jay H Lee_MPC Lecture Notes

Identification Technology

• Most products use PRBS-like or multiple-step test signals. Glide uses non-PRBS signals
• Most products use FIR, ARX or step response models
  – Glide uses a transfer function G(s)
  – RMPCT uses Box-Jenkins
  – SMOC uses state-space models
• Most products use least-squares-type parameter estimation:
  – prediction error or output error methods
  – RMPCT uses the prediction error method
  – Glide uses a global method to estimate uncertainty
• Connoisseur has adaptive capability using RLS
• A few products (DMCplus, SMOC) have subspace identification methods available for MIMO identification
• Most products have an uncertainty estimate, but most do not make use of the uncertainty bound in control design

Page 33: Jay H Lee_MPC Lecture Notes

Summary

• MPC is a mature technology!
  – Many commercial vendors with packages differing in model form, objective function form, etc.
  – Sound theory and experience
• Challenges are
  – Simplifying the model development process
    • plant testing & system identification
    • nonlinear model development
  – State estimation
    • Lack of sensors for key variables
  – Reducing computational complexity
    • approximate solutions, preferably with some guaranteed properties
  – Better management of “uncertainty”
    • creating models with uncertainty information (e.g., stochastic models)
    • on-line estimation of parameters / states
    • “robust” solution of the optimization

Page 34: Jay H Lee_MPC Lecture Notes

FCCU Debutanizer

~20% under capacity

Page 35: Jay H Lee_MPC Lecture Notes

Debutanizer Diagram

(Diagram: debutanizer column — feed pre-heater (160 °F → 400 °F), reflux, overhead fan, slurry pump-around; measurements/targets: PCT, RVP, pressure (190 lb), flooding, tray 20 temperature; streams: from stripper, to deethanizer, gasoline to blending; TC and PC regulatory loops.)

Page 36: Jay H Lee_MPC Lecture Notes

Process Limitation

Operation Problems:
• Overloading — over design capacity
• Flooding — usually jet flooding, causing very poor separation
• Lack of overhead fan cooling — especially in summer

Consequences:
• High RVP, giving away octane number
• High OVHD C5, causing problems at the Alky unit

Page 37: Jay H Lee_MPC Lecture Notes

Control Objectives

Constrained Control:
• Prevent the safety valve from relieving
• Keep the tower from flooding
• Keep RVP lower than its target

Regulatory Control:
• Regulate OVHD PCT or C5 at spec
• Reject disturbances, not through slurry if possible

Page 38: Jay H Lee_MPC Lecture Notes

Real-Time Optimization

Optimization Objectives:
• Minimize energy consumption
• Minimize overhead reflux
• Minimize overhead cooling required
• Minimize overhead pressure
• Maximize separation efficiency

while maintaining PCT and RVP on their specifications.

Page 39: Jay H Lee_MPC Lecture Notes

MPC Configuration

(MPC configuration diagram)

Manipulated: Fan Output, Reflux, Pre-heater By-Pass (via R-PID), Reboiler By-Pass (via R-PID)
Controlled: PCT, Fan, RVP, Diff. Pressure, Tray 20 Temp., OVHD C5 %, OVHD Pressure, Internal Reflux
Disturbances: Feed, Feed Temp., Stripper Bottom Temp.

Page 40: Jay H Lee_MPC Lecture Notes

MPC’s MV Moves

(Plot: MPC’s MV moves over 0–3000 minutes — pressure (lb), feed-heater by-pass, reboiler by-pass, and reflux; the optimal point is reached.)

Page 41: Jay H Lee_MPC Lecture Notes

Reflux vs. Feed

(Plot: reflux and feed vs. time in minutes over 0–3000 min — an indication that separation efficiency increased.)

Page 42: Jay H Lee_MPC Lecture Notes

Reflux-to-Feed Ratio

(Plot: reflux-to-feed ratio, ≈0.42–0.54, vs. time over 0–3000 minutes.)

Page 43: Jay H Lee_MPC Lecture Notes

Handling Flood

(Plot: differential pressure across the tower, ≈1.2–3.2, vs. time over 0–3000 minutes; potential flooding region marked.)

Page 44: Jay H Lee_MPC Lecture Notes

Product Spec's

(Plot: top pressure-compensated temperature (PCT) and bottom Reid vapor pressure (RVP) vs. time over 0–3000 minutes.)

Page 45: Jay H Lee_MPC Lecture Notes

MPC Operation Overview

(Diagram: debutanizer operating overview — reflux, fan, PCT, RVP, pressure (190 lb), flooding, tray 20 temperature, feed pre-heater (160 °F → 400 °F), gasoline to blending — plus the column temperature profile vs. tray number (top tray, tray 20, feed tray) after MPC is on.)

Page 46: Jay H Lee_MPC Lecture Notes

MPC Control Results (1)

(Plot: debutanizer bottom C4− total (I-C4, N-C4 & C4=), %, daily over April 1–17, 1991; the point when RMPCT was turned on is marked.)

Page 47: Jay H Lee_MPC Lecture Notes

MPC Control Results (2)

(Plot: debutanizer bottom RVP daily average, ≈5–7.5, April 1–17, 1991; the point when RMPCT was turned on is marked.)

Page 48: Jay H Lee_MPC Lecture Notes

MPC Control Results (3)

(Plot: debutanizer top C5+ total (I-C5, N-C5 & C5=), %, April 1–17, 1991; the point when RMPCT was turned on is marked.)

Page 49: Jay H Lee_MPC Lecture Notes

Other Benefits

• Reflux is 15% lower than before

• Separation efficiency is increased

• Now have room for lower RVP or PCT, if needed

• Variance on PCT and RVP is reduced. Note: variance on the tray-20 temperature is increased — and it should be!!

• Energy saving (~10%)

• OVHD fan maximum bound may be avoided

• Flooding is eliminated.

Page 50: Jay H Lee_MPC Lecture Notes

Acknowledgment

• Professor S. Joe Qin, UT Austin
• Dr. Joseph Lu, Honeywell, Phoenix, AZ
• Center for Process Systems Engineering Industrial Members

Page 51: Jay H Lee_MPC Lecture Notes

Lecture 2

Details of MPC Algorithms and Theory

- Impulse and step response models and the prediction equation

- Use of state estimation

- Optimization

- Infinite-horizon MPC and stability

- Use of nonlinear models

Page 52: Jay H Lee_MPC Lecture Notes

Important Ingredients of an MPC Algorithm

• Dynamic Model → Prediction Model
  – Predicted future outputs = function of current “state” (stored in memory) + feedforward measurement + feedback measurement correction + future input adjustments
• Objective and Constraints
• Optimization Algorithm
• Receding Horizon Implementation

Page 53: Jay H Lee_MPC Lecture Notes

General Setup

Previous State(in Memory)

Current StateNew Input Move(Just Implemented)

To Optimization

State Update

Prediction Modelfor Future Outputs

Future Input Moves(To Be Determined)

Feedback / FeedforwardMeasurements

Prediction Measurement

Correction

Dynamic Model State:Compact representationOf the past input record


Page 54: Jay H Lee_MPC Lecture Notes

Options

• Model Types
  – Finite Impulse Response Model or Step Response Model
  – State-Space Model
  – Linear or Nonlinear
• Measurement Correction
  – To the prediction (based on open-loop state calculation)
  – To the state (through state estimation)
• Objective Function
  – Linear or Quadratic
  – Constrained or Unconstrained

Page 55: Jay H Lee_MPC Lecture Notes

Prediction Model for Different Model Types

• Finite Impulse Response Model• Step Response Model• State-Space Model

Page 56: Jay H Lee_MPC Lecture Notes

Sample-Data (Computer) Control

Model relates input samples to output samples

$$v_0, v_1, v_2, \ldots \;\longrightarrow\; y_0, y_1, y_2, \ldots$$

v can be an MV (u) or a measured DV (d)

Page 57: Jay H Lee_MPC Lecture Notes

Finite Impulse Response Model (1)

Assumptions:
• $H_0 = 0$: no immediate effect
• The response settles back in n steps, s.t. $H_{n+1} = H_{n+2} = \cdots = 0$: “Finite Impulse Response” (reasonable for stable processes)

Page 58: Jay H Lee_MPC Lecture Notes

Finite Impulse Response Model (2)

Linear Model → “Superposition Principle”

$$y(k) = H_1 v(k-1) + \cdots + H_n v(k-n)$$

Page 59: Jay H Lee_MPC Lecture Notes

Finite Impulse Response Model (3)

• State: n past input samples (includes both MVs and measured DVs)

$$x(k) = \left[ v^T(k-1), \ldots, v^T(k-n) \right]^T$$

• State Update: Easy!

$$x(k+1) = M_0\, x(k) + \Upsilon\, v(k) \qquad \text{(shift and insert)}$$

$$\begin{bmatrix} v(k-1) \\ v(k-2) \\ \vdots \\ v(k-n) \end{bmatrix} \;\longrightarrow\; \begin{bmatrix} v(k) \\ v(k-1) \\ \vdots \\ v(k-n+1) \end{bmatrix} \qquad \text{insert } v(k),\; \text{drop } v(k-n)$$
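A minimal Python sketch of this shift-and-insert state update, with a hypothetical impulse response used to form the output prediction:

```python
import numpy as np

H = np.array([0.2, 0.5, 0.3, 0.1, 0.0])   # hypothetical impulse response H1..Hn (n = 5)

def fir_state_update(x, v_new):
    """Shift-and-insert: drop v(k-n), insert v(k); x stacks the last n inputs, newest first."""
    return np.concatenate(([v_new], x[:-1]))

x = np.zeros(5)                  # state: [v(k-1), ..., v(k-n)]
x = fir_state_update(x, 1.0)     # a unit input v(k) = 1 is applied
y_next = H @ x                   # y(k+1) = H1 v(k) + ... + Hn v(k-n+1)
```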

Page 60: Jay H Lee_MPC Lecture Notes

Finite Impulse Response Model (4)

• Prediction

$$\begin{bmatrix} \hat{y}(k+1|k) \\ \vdots \\ \hat{y}(k+p|k) \end{bmatrix} = \Psi_1^u \begin{bmatrix} u(k-1) \\ \vdots \\ u(k-n) \end{bmatrix} + \Psi_2^u \begin{bmatrix} u(k) \\ \vdots \\ u(k+p-1) \end{bmatrix} + \Psi_1^d \begin{bmatrix} d(k-1) \\ \vdots \\ d(k-n) \end{bmatrix} + \Psi_2^d \begin{bmatrix} d(k) \\ \vdots \\ d(k+p-1) \end{bmatrix} + \begin{bmatrix} e(k) \\ \vdots \\ e(k) \end{bmatrix}$$

where
$$y(k) = H_1 u(k-1) + \cdots + H_n u(k-n) = C\, x(k) \quad \text{(model prediction of y)}$$
$$e(k) = y_m(k) - y(k) \quad \text{(model prediction error = model error + unmeasured disturbance)}$$

– Predicted future output samples on the left; the Ψ’s are dynamic matrices made of impulse response coefficients
– Past input and disturbance samples are stored in memory (feedforward terms); future input samples are to be decided
– Feedforward term with the new measurement: assume d(k) = d(k+1) = … = d(k+p−1)
– Feedback error correction: e(k) held constant over the horizon

Page 61: Jay H Lee_MPC Lecture Notes

Step Response Model (1)

Assumptions:
• $S_0 = 0$: no immediate effect
• The response settles in n steps, s.t. $S_n = S_{n+1} = \cdots = S_\infty$: the same as the finite impulse response assumption
• Relationship with the impulse response coefficients: $S_i = H_1 + \cdots + H_i$, i.e., $H_i = S_i - S_{i-1}$

Page 62: Jay H Lee_MPC Lecture Notes

Step Response Model (2)

• State: n future outputs assuming the input remains constant at the most recent value

$$x(k) = \left[ y^0(k)^T, \ldots, y^{n-1}(k)^T \right]^T$$
$$y^i(k) = y(k+i) \;\text{ w/ }\; \Delta u(k) = \Delta u(k+1) = \cdots = 0$$

Note: $y^0(k) = y(k)$. Also $y^{n-1}(k) = y^n(k) = \cdots = y^\infty(k)$.

Page 63: Jay H Lee_MPC Lecture Notes

Pictorial Representation of the State Update: x(k+1) obtained from M_1 x(k)

Page 64: Jay H Lee_MPC Lecture Notes

Step Response Model (3)

• State Update

$$x(k+1) = M_1\, x(k) + S\, \Delta v(k) \qquad \text{(shift + step response)}$$

$$\begin{bmatrix} y^0(k) \\ y^1(k) \\ \vdots \\ y^{n-1}(k) \end{bmatrix} \;\longrightarrow\; \begin{bmatrix} y^0(k+1) \\ y^1(k+1) \\ \vdots \\ y^{n-1}(k+1) \end{bmatrix}$$

– $M_1$ shifts the stored predictions forward, dropping $y^0(k)$ and repeating the settled last entry
– $S = [S_1, S_2, \ldots, S_{n-1}, S_n]^T$ adds the step response scaled by the new move $\Delta v(k)$
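A minimal Python sketch of this update; the step response coefficients are illustrative, and the shifted last entry repeats because the response is assumed settled after n steps.

```python
import numpy as np

S = np.array([0.2, 0.7, 0.9, 1.0, 1.0])   # hypothetical step response S1..Sn

def step_state_update(x, S, dv):
    """x(k+1) = M1 x(k) + S dv(k): shift forward, repeat settled tail, add scaled step."""
    shifted = np.append(x[1:], x[-1])      # M1: shift; last entry repeats (settled)
    return shifted + S * dv                # add step response times the move dv(k)

x = np.zeros(5)                            # state: predicted outputs under constant input
x = step_state_update(x, S, dv=1.0)        # after a unit input move, x equals S
```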

Page 65: Jay H Lee_MPC Lecture Notes

Step Response Model (4)

• Prediction: the stored state x(k) plus the effect of future input moves (to be determined)

Page 66: Jay H Lee_MPC Lecture Notes

Step Response Model (4)

• Prediction

$$\begin{bmatrix} \hat{y}(k+1|k) \\ \vdots \\ \hat{y}(k+p|k) \end{bmatrix} = M\, x(k) + \Omega^u \begin{bmatrix} \Delta u(k) \\ \vdots \\ \Delta u(k+p-1) \end{bmatrix} + \Omega^d \begin{bmatrix} \Delta d(k) \\ \vdots \\ \Delta d(k+p-1) \end{bmatrix} + \begin{bmatrix} e(k) \\ \vdots \\ e(k) \end{bmatrix}$$

where
$$y(k) = [1, 0, \ldots, 0]\; x(k) = y^0(k) \quad \text{(model prediction of y(k))}$$
$$e(k) = y_m(k) - y(k) \quad \text{(model prediction error)}$$

– The “state” x(k) is stored in memory; future input moves are to be decided
– Feedforward term with the new measurement: assume ∆d(k+1) = … = ∆d(k+p−1) = 0
– Feedback error correction: e(k) held constant over the horizon
– The Ω’s are dynamic matrices made of step response coefficients

Page 67: Jay H Lee_MPC Lecture Notes

State-Space Model (1)

Fundamental model:
$$\dot{z} = f(z, u, d), \qquad y = g(z)$$

↓ Linearization

$$\dot{z} = \tilde{A} z + \tilde{B}_u u + \tilde{B}_d d, \qquad y = \tilde{C} z$$

↓ Discretization

$$z(k+1) = A z(k) + B_u u(k) + B_d d(k), \qquad y(k) = C z(k)$$

Alternatively, from test data $\{ y(i), u(i) \},\; i = 1, \ldots, N$:
– I/O model identification, $y(z) = G_u(z) u(z) + G_d(z) d(z)$ (or $y(s) = G(s) u(s) + G_d(s) d(s)$), followed by state-space realization, or
– direct state-space identification

Page 68: Jay H Lee_MPC Lecture Notes

State-Space Model (2)

From
$$z(k+1) = A z(k) + B_u u(k) + B_d d(k), \qquad y(k+1) = C z(k+1)$$
and the same relations one step earlier, subtracting gives the velocity form:
$$\Delta z(k+1) = A \Delta z(k) + B_u \Delta u(k) + B_d \Delta d(k)$$
$$y(k+1) = y(k) + C \Delta z(k+1) = y(k) + C\big( A \Delta z(k) + B_u \Delta u(k) + B_d \Delta d(k) \big)$$

In augmented form:
$$\begin{bmatrix} \Delta z(k+1) \\ y(k+1) \end{bmatrix} = \begin{bmatrix} A & 0 \\ CA & I \end{bmatrix} \begin{bmatrix} \Delta z(k) \\ y(k) \end{bmatrix} + \begin{bmatrix} B_u \\ C B_u \end{bmatrix} \Delta u(k) + \begin{bmatrix} B_d \\ C B_d \end{bmatrix} \Delta d(k), \qquad y(k) = \begin{bmatrix} 0 & I \end{bmatrix} \begin{bmatrix} \Delta z(k) \\ y(k) \end{bmatrix}$$

i.e., the State Update
$$x(k+1) = \Phi x(k) + \Gamma_u \Delta u(k) + \Gamma_d \Delta d(k), \qquad y(k) = \Xi x(k)$$

Page 69: Jay H Lee_MPC Lecture Notes

State-Space Model (3)

• Prediction

$$\begin{bmatrix} \hat{y}(k+1|k) \\ \vdots \\ \hat{y}(k+p|k) \end{bmatrix} = \begin{bmatrix} \Xi \Phi \\ \vdots \\ \Xi \Phi^p \end{bmatrix} x(k) + \Omega^u \begin{bmatrix} \Delta u(k) \\ \vdots \\ \Delta u(k+p-1) \end{bmatrix} + \Omega^d \begin{bmatrix} \Delta d(k) \\ \vdots \\ \Delta d(k+p-1) \end{bmatrix} + \begin{bmatrix} e(k) \\ \vdots \\ e(k) \end{bmatrix}$$

where $y(k) = \Xi x(k)$ (model prediction of y(k)) and $e(k) = y_m(k) - y(k)$ (model prediction error).

– The “state” x(k) is stored in “memory”; future input moves are to be decided
– Feedforward term with the new measurement: assume ∆d(k+1) = … = ∆d(k+p−1) = 0
– Feedback error correction: e(k) held constant over the horizon
– The Ω’s are dynamic matrices

Page 70: Jay H Lee_MPC Lecture Notes

Summary

• Regardless of model form, one gets the prediction equation in the form of

$$Y(k) \equiv \begin{bmatrix} \hat{y}(k+1|k) \\ \vdots \\ \hat{y}(k+p|k) \end{bmatrix} = \underbrace{L_x\, x(k) + L_d\, \Delta d(k) + L_e\, e(k)}_{b(k):\;\text{known}} + L_u \underbrace{\begin{bmatrix} \Delta u(k) \\ \vdots \\ \Delta u(k+p-1) \end{bmatrix}}_{\Delta U(k)}$$

• Assumptions
  – Measured DV (d) remains constant at the current value of d(k)
  – Model prediction error (e) remains constant at the current value of e(k)

Page 71: Jay H Lee_MPC Lecture Notes

Ramp Type Extrapolation

$$e(k+i) = e(k) + i\,\big( e(k) - e(k-1) \big), \qquad i = 1, \ldots, p$$
$$d(k) = d(k+1) = \cdots = d(k+p-1)$$

• For Integrating Processes, Slow Dynamics

Page 72: Jay H Lee_MPC Lecture Notes

Use of State Estimation

Page 73: Jay H Lee_MPC Lecture Notes

Measurement Correction of State

Previous State Estimate(in “Memory”)

Current State Estimate

Prediction Modelfor Future Outputs

New Input Move(Just Implemented)

Future Input Moves(To Be Determined)

To Optimization

Feedback / FeedforwardMeasurements

State Update

Prediction Measurement

Correction

State Estimate:Compact representation

Of the past input and measurement record

Page 74: Jay H Lee_MPC Lecture Notes

State Update Equation

$$x(k+1) = \Phi x(k) + \Gamma_u \Delta u(k) + \Gamma_d \Delta d(k) + K\big( y_m(k) - \Xi x(k) \big)$$

• K is the update gain matrix that can be found in various ways
  – Pole placement: not so effective for systems with many states (most chemical processes)
  – Kalman filtering: requires a stochastic model of the form
$$x(k+1) = \Phi x(k) + \Gamma_u \Delta u(k) + \Gamma_d \Delta d(k) + w(k), \qquad y(k) = \Xi x(k) + \nu(k)$$
    where w and ν are white noises of known covariances (the effect of unmeasured disturbances and noise); such a model can be obtained using, e.g., subspace ID
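A minimal Python sketch of this update law; all matrices are illustrative, and K might come from, e.g., a Kalman filter design.

```python
import numpy as np

def state_update(xhat, du, dd, ym, Phi, Gu, Gd, Xi, K):
    """x(k+1) = Phi x(k) + Gu du(k) + Gd dd(k) + K (ym(k) - Xi x(k))."""
    innovation = ym - Xi @ xhat              # feedback measurement correction
    return Phi @ xhat + Gu @ du + Gd @ dd + K @ innovation

# illustrative 2-state, 1-output example
Phi = np.array([[0.9, 0.1], [0.0, 0.8]]); Xi = np.array([[1.0, 0.0]])
Gu  = np.array([[0.1], [0.05]]);          Gd = np.array([[0.02], [0.0]])
K   = np.array([[0.5], [0.2]])
xhat = state_update(np.zeros(2), np.array([1.0]), np.array([0.0]),
                    np.array([0.3]), Phi, Gu, Gd, Xi, K)
```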

Page 75: Jay H Lee_MPC Lecture Notes

Prediction Equation

$$\begin{bmatrix} \hat{y}(k+1|k) \\ \vdots \\ \hat{y}(k+p|k) \end{bmatrix} = \begin{bmatrix} \Xi \Phi \\ \vdots \\ \Xi \Phi^p \end{bmatrix} \hat{x}(k) + \Omega^u \begin{bmatrix} \Delta u(k) \\ \vdots \\ \Delta u(k+p-1) \end{bmatrix} + \Omega^d \begin{bmatrix} \Delta d(k) \\ \vdots \\ \Delta d(k+p-1) \end{bmatrix}$$

– Additional measurement correction is NOT needed here: the state estimate $\hat{x}(k)$ already contains the past feedback measurement corrections.

Page 76: Jay H Lee_MPC Lecture Notes

What Are the Potential Advantages?

• Can handle unstable processes
  – Integrating processes, run-away processes
• Cross-channel measurement update
  – More effective update of output channels with delays or measurement problems, based on other channels
• Systematic handling of multi-rate measurements
• Optimal extrapolation of output error and filtering of noise (based on the given stochastic system model)

(Diagram: a process with delays, measurement difficulties, and slow sampling; outputs y1 and y2 driven by measured and unmeasured inputs, with early, robust update of one channel through the modeled correlation with the other.)

Page 77: Jay H Lee_MPC Lecture Notes

Optimization

Page 78: Jay H Lee_MPC Lecture Notes

Objective Function

• Minimization function: quadratic cost (as in DMC)
  – Consider only m input moves by assuming ∆u(k+j) = 0 for j ≥ m
  – Penalize the tracking error as well as the magnitudes of the adjustments

$$V(k) = \sum_{i=1}^{p} \big( y(k+i|k) - y^* \big)^T \Lambda^y \big( y(k+i|k) - y^* \big) + \sum_{i=0}^{m-1} \Delta u^T(k+i)\, \Lambda^u\, \Delta u(k+i)$$

• Use the prediction equation:

$$V(k) = \big( Y(k) - Y^* \big)^T \mathrm{diag}(\Lambda^y) \big( Y(k) - Y^* \big) + \Delta U_m^T(k)\, \mathrm{diag}(\Lambda^u)\, \Delta U_m(k)$$

Substituting $Y(k) = b(k) + L_u^m \Delta U_m(k)$ (with $L_u^m$ the first m columns of $L_u$) gives

$$V(k) = \Delta U_m^T(k)\, H\, \Delta U_m(k) + g^T(k)\, \Delta U_m(k) + c(k), \qquad c(k)\;\text{constant}$$

Page 79: Jay H Lee_MPC Lecture Notes

Constraints

Substitute the prediction equation and rearrange to

$$C\, \Delta U_m(k) \ge h(k)$$

Page 80: Jay H Lee_MPC Lecture Notes

Optimization Problem

• Quadratic Program

$$\min_{\Delta U_m(k)} \; \Delta U_m^T(k)\, H\, \Delta U_m(k) + g^T(k)\, \Delta U_m(k) \quad \text{such that} \quad C\, \Delta U_m(k) \ge h(k)$$

• Unconstrained solution

$$\Delta U_m(k) = -\tfrac{1}{2} H^{-1} g(k)$$

• Constrained solution
  – Must be solved numerically
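A minimal Python sketch contrasting the two cases, with hypothetical H, g, C, h; scipy's SLSQP stands in here for a dedicated QP solver (active set or interior point).

```python
import numpy as np
from scipy.optimize import minimize

H = np.array([[2.0, 0.5], [0.5, 1.0]])   # hypothetical Hessian (positive definite)
g = np.array([-1.0, 0.4])
C = np.array([[1.0, 0.0]])               # one illustrative constraint: dU_1 >= 0.1
h = np.array([0.1])

dU_unc = -0.5 * np.linalg.solve(H, g)    # analytical unconstrained solution

res = minimize(lambda z: z @ H @ z + g @ z, np.zeros(2), method="SLSQP",
               constraints={"type": "ineq", "fun": lambda z: C @ z - h})
dU_con = res.x                           # numerical constrained solution
print(dU_unc, dU_con)
```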

Page 81: Jay H Lee_MPC Lecture Notes

Quadratic Program

• Minimization of a quadratic function subject to linear constraints

• Convex and therefore fundamentally tractable
• Solution methods
  – Active set method: determination of the active set of constraints on the basis of the KKT conditions
  – Interior point method: use of a barrier function to “trap” the solution inside the feasible region; Newton iteration
• Solvers
  – Off-the-shelf software, e.g., QPSOL
  – Customization is desirable for large-scale problems

Page 82: Jay H Lee_MPC Lecture Notes

Two-Level Optimization

Steady-State Optimization (Linear Program)

$$\min_{u_s(k)}\; L\big( y_{k+\infty|k},\, u_s(k) \big) \quad \text{s.t.}\quad C \begin{bmatrix} y_{k+\infty|k} \\ u_s(k) \end{bmatrix} \ge c(k)$$

$$u_s(k) = u(k-1) + \Delta u(k) + \Delta u(k+1) + \cdots + \Delta u(k+m-1)$$

Steady-state prediction equation (state + feedforward measurement + feedback error):
$$y_{k+\infty|k} = b_s(k) + L_s\, \Delta u_s(k)$$

→ Optimal setting values (setpoints) $y^*_{k+\infty|k},\; u_s^*(k)$ are passed to the Dynamic Optimization (Quadratic Program) together with the dynamic prediction equation (sometimes input deviations are included in the quadratic objective function)

Page 83: Jay H Lee_MPC Lecture Notes

Use of Infinite Prediction Horizonand Stability

Page 84: Jay H Lee_MPC Lecture Notes

Use of ∞ Prediction Horizon – Why?

• Stability guarantee
  – The optimal cost function can be shown to be a control Lyapunov function
• Fewer parameters to tune
• More consistent, intuitive effect of the weight parameters
• Close connection with the classical optimal control methods, e.g., LQG control

Page 85: Jay H Lee_MPC Lecture Notes

Step Response Model Case

Finite-horizon cost:
$$V(k) = \sum_{i=1}^{p} \big( y(k+i|k) - y^* \big)^T \Lambda^y \big( y(k+i|k) - y^* \big) + \sum_{i=0}^{m-1} \Delta u^T(k+i)\, \Lambda^u\, \Delta u(k+i)$$

∞-horizon version: take p = m + n − 1 with the extra constraint $y_{k+m+n-1|k} = y^*$:
$$V(k) = \sum_{i=1}^{m+n-1} \big( y(k+i|k) - y^* \big)^T \Lambda^y \big( y(k+i|k) - y^* \big) + \sum_{i=0}^{m-1} \Delta u^T(k+i)\, \Lambda^u\, \Delta u(k+i)$$

The output must be at y* (settled) at the end of the horizon for the infinite-horizon cost to be bounded.

Page 86: Jay H Lee_MPC Lecture Notes

Additional Comments

• Previously, we assumed finite settling time.
• Can be generalized to state-space models
  – More complicated procedure to turn the ∞-horizon problem into a finite-horizon problem
  – Requires solving a Lyapunov equation to get the terminal cost matrix
  – Also, must make sure that output constraints will be satisfied beyond the finite horizon → construction of an output admissible set
• Use of a sufficiently large horizon (p ≈ m + the settling time) should have a similar effect
• Can we always satisfy the settling constraint?
  – y = y* may not be feasible due to input constraints or insufficient m → use the two-level approach

Page 87: Jay H Lee_MPC Lecture Notes

Two-Level Optimization

Steady-State Optimization (Linear Program or Quadratic Program)

Optimal setting values (setpoints): $y^*_{k+\infty|k},\; u_s^*(k)$

Constraint $\Delta u(k) + \Delta u(k+1) + \cdots + \Delta u(k+m-1) = \Delta u_s^* \;\rightarrow\; y_{k+m+n-1|k} = y^*_{k+\infty|k}$

The constraint $y_{k+m+n-1|k} = y^*_{k+\infty|k}$ is guaranteed to be feasible.

Dynamic Optimization (∞-horizon MPC)

Page 88: Jay H Lee_MPC Lecture Notes

Use of Nonlinear Model

Page 89: Jay H Lee_MPC Lecture Notes

Difficulty (1)

$$\dot{x} = f(x, u, d), \qquad y = g(x)$$

↓ Discretization (e.g., orthogonal collocation)

$$x(k+1) = F\big( x(k), u(k), d(k) \big), \qquad y(k) = g\big( x(k) \big)$$

Repeated composition gives the prediction:
$$y_{k+1|k} = g\big( F(x(k), u(k), d(k)) \big) + e(k)$$
$$y_{k+2|k} = g\big( F( F(x(k), u(k), d(k)),\, u(k+1),\, d(k) ) \big) + e(k)$$
$$\vdots$$
$$y_{k+p|k} = g\big( F( \cdots,\, u(k+p-1),\, d(k) ) \big) + e(k)$$

The prediction equation is nonlinear w.r.t. u(k), …, u(k+p−1) → Nonlinear Program (not so nice!)

Page 90: Jay H Lee_MPC Lecture Notes

Difficulty (2)

$$\dot{x} = f(x, u, d) + w, \qquad y = g(x) + \nu$$

State Estimation — Extended Kalman Filtering:
$$\hat{x}(k+1) = \int_{k}^{k+1} f(x, u, d)\, dt + K(k)\big( y_m(k) - g(\hat{x}(k)) \big)$$

• Computationally more demanding steps, e.g., calculation of K at each time step
• Based on linearization at each time step — not optimal, may not be stable
• Best practical solution at the current time
• Promising alternative: Moving Horizon Estimation (requires solving an NLP)
• Difficult to come up with an appropriate stochastic system model (no ID technique)

Page 91: Jay H Lee_MPC Lecture Notes

Practical Algorithm

EKF → $\hat{x}(k)$ → model integration with constant input u = u(k−1) and d = d(k):

$$\begin{bmatrix} \hat{y}_{k+1|k} \\ \hat{y}_{k+2|k} \\ \vdots \\ \hat{y}_{k+p|k} \end{bmatrix} = \underbrace{\begin{bmatrix} \bar{y}_{k+1|k} \\ \bar{y}_{k+2|k} \\ \vdots \\ \bar{y}_{k+p|k} \end{bmatrix}}_{\text{nonlinear model integration}} + \underbrace{\Omega^u \begin{bmatrix} \Delta u(k) \\ \Delta u(k+1) \\ \vdots \\ \Delta u(k+p-1) \end{bmatrix}}_{\text{linearized effect of future input adjustments}}$$

– $\Omega^u$ is a dynamic matrix based on the model linearized at the current state and input values
– The resulting linear prediction equation feeds a Linear or Quadratic Program

Page 92: Jay H Lee_MPC Lecture Notes

Additional Comments / Summary

• Some refinements to the “Practical Algorithm” are possible
  – Use the previously calculated input trajectory (instead of the constant input) in the integration and linearization step

– Iterate between integration/linearization and control input calculation

• Full-blown nonlinear MPC is still computationally prohibitive in most applications

Page 93: Jay H Lee_MPC Lecture Notes

Lecture 3

Linear Model Identification
- Model structure
- Parameter/model estimation
- Error analysis
- Plant testing
- Data pretreatment
- Model validation

Page 94: Jay H Lee_MPC Lecture Notes

System Identification

Building a dynamic system model using data obtained from the plant

Page 95: Jay H Lee_MPC Lecture Notes

Why Important?

• Almost all industrial MPC applications use an empirical model obtained through system identification

• Poor model → poor prediction → poor performance
• Up to 80% of the time is spent on this step
• Direct interaction with the plant

– Cost factor, safety issues, credibility issue

• Issues and decisions are sufficiently complicated that systematic procedures must be used

Page 96: Jay H Lee_MPC Lecture Notes

Steps and Decisions Involved

Raw Data → Pretreatment → Conditioned Data → ID Algorithm → Model → Validation (No → back to Plant Test; Yes → done)

Plant Test decisions:
• Test signal (shape, size of perturbation)
• Closed-loop or open-loop?
• One-input-at-a-time or simultaneous?
• How long? Etc.

Pretreatment: outlier removal, pre-filtering
ID Algorithm: model structure, (parameter) estimation algorithm
Validation: source of validation data, criterion

End Objective: Control

Page 97: Jay H Lee_MPC Lecture Notes

Model Structure

Page 98: Jay H Lee_MPC Lecture Notes

Model Structure (1)

• I/O Model

$$y(k) = \underbrace{G(q)\, u(k)}_{\text{effect of inputs}} + \underbrace{H(q)\, e(k)}_{\text{effect of disturbances, noise}}$$

Plant Dynamics (G) + Disturbance Model (H): inputs plus process and output noise combine to give the measured outputs.

– e(k): white noise sequence
– H models the auto- and cross-correlations of the residual (not physical cause-effect)
– Assume w.l.o.g. that H(0) = 1

Page 99: Jay H Lee_MPC Lecture Notes

SISO I/O Model Structure (1)

• FIR (past inputs only)
$$y(k) = b_1 u(k-1) + \cdots + b_m u(k-m) + e(k)$$
$$G(q) = b_1 q^{-1} + \cdots + b_m q^{-m}, \qquad H(q) = 1$$

• ARX (past inputs and outputs: “equation error”)
$$y(k) = a_1 y(k-1) + \cdots + a_n y(k-n) + b_1 u(k-1) + \cdots + b_m u(k-m) + e(k)$$
$$G(q) = \frac{b_1 q^{-1} + \cdots + b_m q^{-m}}{1 - a_1 q^{-1} - \cdots - a_n q^{-n}}, \qquad H(q) = \frac{1}{1 - a_1 q^{-1} - \cdots - a_n q^{-n}}$$

• ARMAX (moving average of the noise term)
$$y(k) = a_1 y(k-1) + \cdots + a_n y(k-n) + b_1 u(k-1) + \cdots + b_m u(k-m) + e(k) + c_1 e(k-1) + \cdots + c_n e(k-n)$$
$$G(q) = \frac{b_1 q^{-1} + \cdots + b_m q^{-m}}{1 - a_1 q^{-1} - \cdots - a_n q^{-n}}, \qquad H(q) = \frac{1 + c_1 q^{-1} + \cdots + c_n q^{-n}}{1 - a_1 q^{-1} - \cdots - a_n q^{-n}}$$

Page 100: Jay H Lee_MPC Lecture Notes

SISO I/O Model Structure (2)

• Output Error (no noise model, i.e., white noise error)
$$\tilde{y}(k) = a_1 \tilde{y}(k-1) + \cdots + a_n \tilde{y}(k-n) + b_1 u(k-1) + \cdots + b_m u(k-m); \qquad y(k) = \tilde{y}(k) + e(k)$$
$$G(q) = \frac{b_1 q^{-1} + \cdots + b_m q^{-m}}{1 - a_1 q^{-1} - \cdots - a_n q^{-n}}, \qquad H(q) = 1$$

• Box-Jenkins (more general than ARMAX)
$$G(q) = \frac{b_1 q^{-1} + \cdots + b_m q^{-m}}{1 - a_1 q^{-1} - \cdots - a_n q^{-n}}, \qquad H(q) = \frac{1 + c_1 q^{-1} + \cdots + c_n q^{-n}}{1 - d_1 q^{-1} - \cdots - d_n q^{-n}}$$

Page 101: Jay H Lee_MPC Lecture Notes

MIMO I/O Model Structure

• Inputs and outputs are vectors; coefficients are matrices.
• For example, the ARX model becomes
$$y(k) = A_1 y(k-1) + \cdots + A_n y(k-n) + B_1 u(k-1) + \cdots + B_m u(k-m) + e(k)$$
$$G(q) = \big( I - A_1 q^{-1} - \cdots - A_n q^{-n} \big)^{-1} \big( B_1 q^{-1} + \cdots + B_m q^{-m} \big)$$
$$H(q) = \big( I - A_1 q^{-1} - \cdots - A_n q^{-n} \big)^{-1}$$
where $A_i$ is an $n_y \times n_y$ matrix and $B_i$ is an $n_y \times n_u$ matrix.
• Identifiability becomes an issue
  – Different sets of coefficient matrices can give exactly the same G(q) and H(q) through pole/zero cancellations → problems in parameter estimation → requires special parameterization to avoid the problem

Page 102: Jay H Lee_MPC Lecture Notes

State Space Model

• Deterministic (output error structure)
$$x(k+1) = A x(k) + B u(k), \qquad y(k) = C x(k) + e(k)$$

• Combined Deterministic / Stochastic
$$x_d(k+1) = A_d x_d(k) + B_d u(k), \qquad y_d(k) = C_d x_d(k) \quad \text{(effect of deterministic input)}$$
$$x_s(k+1) = A_s x_s(k) + K e(k), \qquad y_s(k) = C_s x_s(k) + e(k) \quad \text{(auto- and cross-correlation of the residual)}$$
$$y(k) = y_d(k) + y_s(k)$$
or, in combined (ARMAX-like) form:
$$x(k+1) = A x(k) + B u(k) + K e(k), \qquad y(k) = C x(k) + e(k)$$

• Identifiability can be an issue here too
  – State coordinate transformation does not change the I/O relationship

Page 103: Jay H Lee_MPC Lecture Notes

Parameter (Model)Estimation

Page 104: Jay H Lee_MPC Lecture Notes

Overview

Data + Model Structure → Parameter (Model) Estimation → Model for Validation (with model structure selection feeding back)

Methods:
• Prediction Error Method
• IV Method
• Statistical Methods (MLE, Bayesian)
• Subspace ID Method
• Frequency Domain (ETFE)

Page 105: Jay H Lee_MPC Lecture Notes

Prediction Error Method

• Predominant method at the current time
• Developed by Ljung and coworkers
• Flexible
  – Can be applied to any model structure
  – Can be used in recursive form
• Well-developed theories and software tools
  – Book by Ljung; System ID Toolbox for MATLAB
• Computational complexity depends on the model structure
  – ARX, FIR → linear least squares
  – ARMAX, OE, BJ → nonlinear optimization
• Not easy to use for identifying multivariable models

Page 106: Jay H Lee_MPC Lecture Notes

Prediction Error Method

• Put the model in the predictor form

$$y(k) = G(q, \theta)\, u(k) + H(q, \theta)\, e(k) \;\;\rightarrow\;\; y_{k|k-1}(\theta) = \underbrace{\big( I - H^{-1}(q, \theta) \big)}_{\text{contains at least 1 delay}} y(k) + H^{-1}(q, \theta)\, G(q, \theta)\, u(k)$$

$$e(k, \theta) = y(k) - y_{k|k-1}(\theta) = H^{-1}(q, \theta) \big( y(k) - G(q, \theta)\, u(k) \big)$$

• Choose the parameter values to minimize the sum of squared prediction errors for the given data

$$\min_\theta \left\{ \frac{1}{N} \sum_{k=1}^{N} e^2(k, \theta) \right\}$$

  – ARX, FIR → linear least squares
  – ARMAX, OE, BJ → nonlinear least squares
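For the ARX case the minimization reduces to ordinary linear least squares. A minimal Python sketch on synthetic first-order data (all coefficients illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
u = rng.standard_normal(N)                 # test input
y = np.zeros(N)
for k in range(1, N):                      # "true" plant: y(k) = 0.8 y(k-1) + 0.5 u(k-1) + e(k)
    y[k] = 0.8 * y[k-1] + 0.5 * u[k-1] + 0.1 * rng.standard_normal()

Phi = np.column_stack([y[:-1], u[:-1]])    # regressors [y(k-1), u(k-1)]
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated (a1, b1):", theta)        # should be close to (0.8, 0.5)
```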

Page 107: Jay H Lee_MPC Lecture Notes

Subspace Method

• More recent development
• Dates back to the classical realization theories, but rediscovered and extended by several people
• Identifies a state-space model
• Some theories and software tools
• Computationally simple
  – Non-iterative, linear algebra
• Good for identifying multivariable models
  – No special parameterization is needed
• Not optimal in any sense
• May need a lot of data for good results
• May be combined with PEM
  – Use the SS method to obtain an initial guess for PEM

Page 108: Jay H Lee_MPC Lecture Notes

Main Idea of the SS-ID Method (1)

Assumed Form of the Underlying Plant

)()()()()()()1(

kkCxkykwkBukAxkx

ν+=++=+

)()()()()()()1(

kekCxkykKekBukAxkx

+=++=+

Innovation Form (Steady-State Kalman Filter)

Equivalent to the above in I/O sense

Identify A, B, C, K, Cov(e) within some similarity transformationWe are free to choose the state coordinates

Page 109: Jay H Lee_MPC Lecture Notes

Main Idea of the SS-ID Method (2)

$$\begin{bmatrix} y_{k|k-1} \\ \vdots \\ y_{k+n-1|k-1} \end{bmatrix} = \Gamma_o\, x(k) + \big( \text{term in the future inputs } u(k), \ldots, u(k+n-2) \big) + \begin{bmatrix} e_{k|k-1} \\ \vdots \\ e_{k+n-1|k-1} \end{bmatrix}$$

where the state (minimum storage) is a linear combination of past data,
$$x(k) = \begin{bmatrix} x_1(k) \\ \vdots \\ x_n(k) \end{bmatrix} = L_1 \begin{bmatrix} y(k-1) \\ \vdots \\ y(k-n) \end{bmatrix} + L_2 \begin{bmatrix} u(k-1) \\ \vdots \\ u(k-n) \end{bmatrix},$$
n is an upper bound on the state dimension, and
$$\Gamma_o = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}$$
is the extended observability matrix.

Page 110: Jay H Lee_MPC Lecture Notes

Main Idea of the SS-ID Method (3)

$$Y_0^+ = \underbrace{\big[ \Gamma_o L_1 \;\; \Gamma_o L_2 \big]}_{M} \begin{bmatrix} Y_0^- \\ U_0^- \end{bmatrix} + L_3 U_0^+ + E_0^+$$

• Find M through linear least squares
  – Consistent estimation since $E_0^+$ is independent of the regressors
  – Oblique projection of data matrices
• Perform SVD on M and find n as well as $\Gamma_o$:
$$M = \begin{bmatrix} Q_1 & Q_2 \end{bmatrix} \begin{bmatrix} \Sigma_1 & 0 \\ 0 & \approx 0 \end{bmatrix} \begin{bmatrix} P_1^T \\ P_2^T \end{bmatrix}, \qquad \Gamma_o = Q_1 \Sigma_1^{1/2}$$
The column space of $Q_1$ is the state space. Some variations exist among different algorithms in terms of picking the state basis.

Page 111: Jay H Lee_MPC Lecture Notes

Main Idea of the SS-ID Method(4)

• Obtain the data for x(k):
$$x(k) = \Gamma_o^{\dagger}\, M \begin{bmatrix} Y^-(k) \\ U^-(k) \end{bmatrix} \qquad (\dagger:\; \text{pseudo-inverse})$$
• Obtain the data for x(k+1) in a similar manner
• Obtain A, B and C through linear regression:
$$\begin{bmatrix} x(k+1) \\ y(k) \end{bmatrix} = \begin{bmatrix} A & B \\ C & 0 \end{bmatrix} \begin{bmatrix} x(k) \\ u(k) \end{bmatrix} + \underbrace{\begin{bmatrix} K e(k) \\ e(k) \end{bmatrix}}_{\text{residual}}$$
  – Consistent estimation since the residual is independent of the regressor
• Obtain K and Cov(e) by using the residual data
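A heavily simplified, noise-free SISO Python sketch of these steps — regression of future outputs on past data and future inputs, then an SVD to recover the state sequence. Real algorithms such as N4SID use oblique projections and weightings omitted here, and all system matrices below are illustrative.

```python
import numpy as np

def hankel(w, i, j):
    """i block rows of a Hankel matrix built from the scalar signal w."""
    return np.array([w[r:r + j] for r in range(i)])

# synthetic noise-free data from a known 2nd-order system (illustrative)
A0 = np.array([[0.9, 0.1], [0.0, 0.8]]); B0 = np.array([1.0, 0.5]); C0 = np.array([1.0, 0.0])
rng = np.random.default_rng(1)
N = 300; u = rng.standard_normal(N); y = np.zeros(N); x = np.zeros(2)
for k in range(N):
    y[k] = C0 @ x
    x = A0 @ x + B0 * u[k]

i = 10; j = N - 2 * i                                 # window length and column count
Yf = hankel(y[i:], i, j)                              # future outputs
Uf = hankel(u[i:], i, j)                              # future inputs
Wp = np.vstack([hankel(y, i, j), hankel(u, i, j)])    # past outputs and inputs

# regress future outputs on past data and future inputs: Yf ~ [L1 L2] Wp + L3 Uf
Theta, *_ = np.linalg.lstsq(np.vstack([Wp, Uf]).T, Yf.T, rcond=None)
M = Theta.T[:, :2 * i] @ Wp                           # part of Yf explained by the past

U_, s, Vt = np.linalg.svd(M, full_matrices=False)
n = int(np.sum(s > 1e-6 * s[0]))                      # order from significant singular values
X = np.diag(np.sqrt(s[:n])) @ Vt[:n]                  # state sequence estimate, one column per k
# A, B, C then follow from one more least-squares fit of [x(k+1); y(k)] on [x(k); u(k)]
```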

Page 112: Jay H Lee_MPC Lecture Notes

Properties

• N4SID (Van Overschee and De Moor)
  – Kalman filter interpretation
  – Proof of asymptotic unbiasedness of A, B, and C
  – Efficient algorithm using QR factorization
• CVA (Larimore)
  – Founded on statistical arguments
  – Same idea, but the criterion for choosing the state basis (Q1) differs a bit from N4SID — based on “correlation” between past I/O data and future output data, rather than minimization of the prediction error for the given data

Page 113: Jay H Lee_MPC Lecture Notes

Alternative

• MOESP (Verhaegen): $\Gamma_o \rightarrow A, C \rightarrow B$

Page 114: Jay H Lee_MPC Lecture Notes

Error Analysis

Page 115: Jay H Lee_MPC Lecture Notes

Error Types

• Bias: error due to structural mismatch
  – Bias = the error as # of data points → ∞
  – Independent of the # of data points collected
  – Bias distribution (e.g., in the frequency domain) depends on the input spectrum, pre-filtering of the data, etc.
  – Frequency-domain bias distribution under PEM — by Ljung
• Variance: error due to limited availability of data
  – Vanishes as # of data points → ∞
  – Depends on the number of parameters, the number of data points, S/N ratio, etc., but not on pre-filtering
  – Asymptotic distribution (as n, N → ∞):
$$\mathrm{cov}\big( \mathrm{vec}(\hat{G}_N) \big) \approx \frac{n}{N}\, \underbrace{\big( \Phi_u^T \big)^{-1} \otimes \Phi_d}_{\text{noise-to-signal ratio}}$$
• Main tradeoff
  – Richer structure (more parameters) → bias ↓, variance ↑

Page 116: Jay H Lee_MPC Lecture Notes

Plant Test for Data Generation

Page 117: Jay H Lee_MPC Lecture Notes

Test Signals

• Very important
  – Signal-to-noise ratio → distribution and size of the variance
  – Bias distribution
• Popular types
  – Multiple steps: power mostly in the low-frequency region → good estimation of steady-state gains (even with step disturbances) but generally poor estimation of high-frequency dynamics
  – PRBS: flat spectrum → good estimation of the entire frequency response, given the error also has a flat spectrum (often not true)
  – Combine steps w/ PRBS?

Page 118: Jay H Lee_MPC Lecture Notes

Multi-Input Testing (MIT) vs. Single Input Testing (SIT)

• MIT gives a better signal-to-noise ratio for a given testing time
• Control-relevant data generation requires MIT
• MIT can be necessary for identification of highly interactive systems (e.g., systems with large RGA)
• SIT is often preferred in practice because of its more predictable effect on the on-going operation

Page 119: Jay H Lee_MPC Lecture Notes

Open-Loop vs. Closed-Loop

C G0

d

y

u_

Dither Signalr

Location 1 Location 2

G0

d yPerturbation Signal r

u

Open-Loop Testing

Closed-Loop Testing

Page 120: Jay H Lee_MPC Lecture Notes

Pros and Cons of Closed-Loop Testing

• Pros
  – Safer, less damaging effect on the on-going operation
  – Generates data that are more relevant to closed-loop control
• Cons
  – Correlation between input perturbations and disturbances / noise through the feedback
  – Many algorithms can fail or give problems — they give “bias” unless the assumed noise structure is perfect

Page 121: Jay H Lee_MPC Lecture Notes

Important Points from Analysis

• External perturbations (“dither”) are necessary.
  – Perturbations due to error feedback hardly contribute to variance reduction (since they are correlated with the errors):
$$\mathrm{cov}\big( \mathrm{vec}(\hat{G}_N) \big) \approx \frac{n}{N} \big( \Phi_u^{r\,T} \big)^{-1} \otimes \Phi_d, \qquad \Phi_u^r:\; \text{portion of the input spectrum due to the dithering}$$
  – The level of the external perturbation signals also affects the size of the bias due to the feedback-induced correlation:
$$E\big[ \hat{G}_N \big] - G_0 = (H - H_0)\, \Phi_{ue}\, \Phi_u^{-1}$$
    where the correlation term $\Phi_{ue} \Phi_u^{-1}$ scales with the noise-to-signal ratio and the relative contribution of noise feedback to the input spectrum (output error spectrum).
• A specialized algorithm may be necessary to avoid bias.

Page 122: Jay H Lee_MPC Lecture Notes

Different Approaches to Model Identification with Closed-Loop Data

• Direct Approach
$$D^N = \{ y(i), u(i) \}_{i=1}^{N} \;\rightarrow\; \hat{G}_N \;(\text{and } \hat{H}_N)$$
• Indirect Approach
$$D^N = \{ y(i), r(i) \}_{i=1}^{N} \;\rightarrow\; \hat{T}^{yr}_N \;\rightarrow\; \hat{G}_N = \hat{T}^{yr}_N \big( I - \hat{T}^{yr}_N \big)^{-1} C^{-1}$$
• Joint I/O Approach
$$D^N = \{ y(i), u(i), r(i) \}_{i=1}^{N} \;\rightarrow\; \big( \hat{T}^{yr}_N, \hat{T}^{ur}_N \big) \;\rightarrow\; \hat{G}_N = \hat{T}^{yr}_N \big( \hat{T}^{ur}_N \big)^{-1}$$
• Two-Stage or Projection Approach
$$D_1^N = \{ u(i), r(i) \}_{i=1}^{N} \;\rightarrow\; \hat{T}^{ur}_N, \quad \tilde{u} = \hat{T}^{ur}_N r; \qquad D_2^N = \{ y(i), \tilde{u}(i) \}_{i=1}^{N} \;\rightarrow\; \hat{G}_N$$

Page 123: Jay H Lee_MPC Lecture Notes

Data Pretreatment

Page 124: Jay H Lee_MPC Lecture Notes

Main Issues (1)

• Time-consuming but very important
• Remove outliers
• Remove portions of the data corresponding to unusual disturbances or operating conditions
• Filter the data
  – Affects the bias distribution (emphasizes or de-emphasizes different frequency regions)
  – Does NOT improve the S/N ratio — often a misconception

Page 125: Jay H Lee_MPC Lecture Notes

Main Issues (2)

• Difference the data? (∆y(k) = y(k) − y(k−1), ∆u(k) = u(k) − u(k−1))
  – Removes trends (e.g., effect of step disturbances, setpoint changes) that can destroy the effectiveness of many ID methods (e.g., subspace ID)
  – Often used in practice
  – Also removes the input power in the low-frequency region (PRBS → zero input power at ω = 0)
  – Amplifies high-frequency parts of the data (e.g., noise), so low-pass filtering may be necessary
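A minimal Python sketch of differencing followed by a simple first-order low-pass filter to counteract the high-frequency amplification; the filter constant is illustrative.

```python
import numpy as np

def pretreat(y, alpha=0.8):
    """Difference the data, then low-pass filter the result."""
    dy = np.diff(y)                        # dy(k) = y(k) - y(k-1): removes trends
    filt = np.zeros_like(dy)               # first-order low-pass on the differenced data
    for k in range(1, len(dy)):
        filt[k] = alpha * filt[k-1] + (1 - alpha) * dy[k]
    return filt
```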

Page 126: Jay H Lee_MPC Lecture Notes

Model Validation

Page 127: Jay H Lee_MPC Lecture Notes

Overview

• Use fresh data, different from the data used for model building
• Various methods
  – Size of the prediction error
  – “Whiteness” of the prediction error
  – Cross-correlation tests (e.g., prediction error vs. inputs)
• Good prediction with test data but poor prediction with validation data
  – Sign of “overfit”
  – Reduce the order or use more compact structures like ARMAX (instead of ARX)

Page 128: Jay H Lee_MPC Lecture Notes

Concluding Remarks on Linear ID

• System ID is often the most expensive and difficult part of model-based controller design
• Involves many decisions that affect
  – Plant operation during testing
  – Eventual performance of the controller
• Good theories and systematic tools are available
• System ID can also be used for constructing monitoring models
  – Subspace identification
  – Trend model, not a causal model → active testing is not needed

Page 129: Jay H Lee_MPC Lecture Notes

Deterministic Multi-Stage Optimization

$$\min_{u_0, \ldots, u_{p-1}} \left\{ \sum_{j=0}^{p-1} \phi(x_j, u_j) + \bar{\phi}(x_p) \right\}$$

subject to
$$g(x_j, u_j) \ge 0 \quad \text{(path constraints)}$$
$$\bar{g}(x_p) \ge 0 \quad \text{(terminal constraints)}$$
$$x_{j+1} = f(x_j, u_j) \quad \text{(model constraints)}$$

with $\phi$ the stage-wise cost and $\bar{\phi}$ the terminal cost.

• General formulation for deterministic control and scheduling problems.
• Continuous and integer state / decision variables possible.
• In control, the p = ∞ case is solved typically.
• Uncertainty is not explicitly addressed.

Page 130: Jay H Lee_MPC Lecture Notes

Solution Approaches

• Analytical approach: ’50s–’70s
  – Derivation of a closed-form optimal policy $u_j = \mu^*(x_j)$ requires solution of the HJB equation (hard!)
• Numerical approach: ’80s–now
  – Math programming (LP, QP, NLP, MILP, etc.):
    • Fixed-parameter case solution
    • Computational limitation for large-size problems (e.g., when p = ∞)
  – Parametric programming:
    • General parameter-dependent solution (e.g., a lookup table)
    • Significantly higher computational burden
  – Practical solution:
    • Re-solve the problem on-line whenever parameters are updated or constraints are violated (e.g., in Model Predictive Control or Reactive Scheduling)

Page 131: Jay H Lee_MPC Lecture Notes

Stochastic Multi-Stage Decision Problem with Recourse

$$\min_{\mu_j} \; E\left\{ \sum_{j=0}^{\infty} \alpha^j\, \phi(x_j, u_j) \;\middle|\; u_j = \mu_j(x_j) \right\}$$

$$x_{j+1} = f_h(x_j, u_j, \omega_j), \qquad 0 < \alpha < 1 \quad \text{(discount factor; Markov system model)}$$
$$\Pr\big[ g(x_j, u_j) \ge 0 \big] \ge \zeta \quad \text{(chance constraint)}$$

• Next “holy grail” of control: a general form for control, scheduling, and other real-time decision problems in an uncertain, dynamic environment.
• No satisfactory solution approach is currently available.

Page 132: Jay H Lee_MPC Lecture Notes

Limitation of Stochastic Programming Approach

Simple case of 2 scenarios (↑ or ↓) per stage: from $x_k$, the tree branches to $x_{k+1}^{\uparrow}$ and $x_{k+1}^{\downarrow}$, then to $x_{k+2}^{\uparrow\uparrow}, x_{k+2}^{\uparrow\downarrow}, x_{k+2}^{\downarrow\uparrow}, x_{k+2}^{\downarrow\downarrow}$, and so on, with corresponding decisions $u_k;\; u_{k+1}^{\uparrow}, u_{k+1}^{\downarrow};\; u_{k+2}^{\uparrow\uparrow}, \ldots$

• Total number of decision variables = $(1 + 2 + 4 + \cdots + 2^{p-1})\, n_u$
• Number of branches to evaluate for each candidate decision = $2^p$

Page 133: Jay H Lee_MPC Lecture Notes

Stochastic Programming Approach — general case (S scenarios per stage)

• Total number of decision variables = $(1 + S + S^2 + \cdots + S^{p-1})\, n_u$
• Number of branches to evaluate for each decision candidate = $S^p$
• Not feasible for large S (large number of scenarios) and/or large p (large number of stages) — practically limited to two-stage problems with a small number of scenarios
• Current practical approach: evaluate the most likely branch(es) only. BUT highly limited!

Page 134: Jay H Lee_MPC Lecture Notes

Dynamic Programming (DP)

• The concept of cost-to-go
$$J^*(x_k) = E\left\{ \sum_{j=k+1}^{\infty} \alpha^{j-k-1}\, \phi\big( x_j, \mu^*(x_j) \big) \right\}$$
  – Represents future costs under (optimal) control
  – Parameterizes the solution as a function of the state x

• J*(x) is a solution to the Bellman equation:
$$J^*(x) = \min_{u \in U} E\big\{ \phi(x, u) + \alpha\, J^*\big( f_h(x, u) \big) \big\}$$
$$\mu^*(x) = \arg\min_{u} E\big\{ \phi(x, u) + \alpha\, J^*\big( f_h(x, u) \big) \big\}$$

Page 135: Jay H Lee_MPC Lecture Notes

Value Iteration Approach to Solving DP

$$J^*(x_k) = \min_{u(k)} E\big\{ \phi(x_k, u_k) + \alpha\, J^*\big( f_h(x_k, u_k) \big) \big\}$$

↓ sampling & discretization of the entire state space

Value iteration (repeat with i ← i + 1):
$$J^{i+1}(\xi) = \min_{u} E\big\{ \phi(\xi, u) + \alpha\, J^i(\hat{\xi}) \big\}$$

→ Curse of Dimensionality (state & action spaces)
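A minimal Python sketch of value iteration on a discretized scalar state/action grid; the dynamics f(x, u) = 0.9x + u, stage cost, and discount factor are all illustrative.

```python
import numpy as np

xs = np.linspace(-2, 2, 41)        # discretized state space
us = np.linspace(-1, 1, 11)        # discretized action space
alpha = 0.95                       # discount factor
J = np.zeros_like(xs)              # initial cost-to-go J^0

for _ in range(200):               # value iteration until convergence
    J_new = np.empty_like(J)
    for ix, x in enumerate(xs):
        # Bellman: J(x) = min_u [ phi(x,u) + alpha * J(f(x,u)) ]
        costs = [x**2 + 0.1 * u**2 + alpha * np.interp(0.9 * x + u, xs, J)
                 for u in us]      # interpolate J at the successor state
        J_new[ix] = min(costs)
    if np.max(np.abs(J_new - J)) < 1e-8:
        break
    J = J_new
```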

Page 136: Jay H Lee_MPC Lecture Notes

Approximate Dynamic Programming (ADP)

• The Bellman equation needs to be solved
  – Curse of dimensionality! Not suitable for high-dimensional systems
• Key idea of ADP
  – Find an approximate cost-to-go function for all x ∈ X
  – Use simulations under a known suboptimal policy to sample a very small “relevant” fraction of the states and initialize the cost-to-go value table
  – Iteratively improve the policy and cost-to-go function
    • Iterate over only the sampled points in the state space
    • Use interpolation to evaluate the cost-to-go values for non-sampled points

Page 137: Jay H Lee_MPC Lecture Notes

Approximate Dynamic Programming (ADP) — Approximate Value Iteration

Monte-Carlo simulations (closed-loop with suboptimal policies: MPC, PI, etc.) over sampled state points:
• State and input trajectories: $x_k^i, u_k^i$
• Initial cost-to-go: $J(x^i) = \sum_k \alpha^k \phi_k$

Iteration on the Bellman equation over the sampled state points (sample average), with $\tilde{J}^i(x = x_k) := J^i(x_k)$ interpolated between sampled points:
$$J^{i+1}(x) = \min_{u \in U} E\big[ \phi(x, u) + \alpha\, \tilde{J}^i\big( f(x, u) \big) \big], \qquad i \leftarrow i+1$$
→ converged solution $\tilde{J}^*$

On-line decision:
$$\mu(x) = u^* = \arg\min_{u} E\big[ \phi(x, u) + \alpha\, \tilde{J}^*\big( f(x, u) \big) \big]$$