
Sampled-Data Model Predictive Control for

Constrained Continuous Time Systems

Rolf Findeisen, Tobias Raff, and Frank Allgöwer

Institute for Systems Theory and Automatic Control, University of Stuttgart, Germany
{findeise,raff,allgower}@ist.uni-stuttgart.de

Summary. Typically one desires to control a nonlinear dynamical system in an optimal way, taking constraints on the states and inputs directly into account. Classically this problem falls into the field of optimal control. Often, however, it is difficult, if not impossible, to find a closed solution of the corresponding Hamilton-Jacobi-Bellman equation. One possible control strategy that overcomes this problem is model predictive control. In model predictive control the solution of the Hamilton-Jacobi-Bellman equation is avoided by repeatedly solving an open-loop optimal control problem for the current state, which is a considerably simpler task, and applying the resulting control open-loop for a short time. The purpose of this paper is to provide an introduction and overview to the field of model predictive control for continuous time systems. Specifically we consider the so-called sampled-data nonlinear model predictive control approach. After a short review of the main principles of model predictive control, some of the theoretical, computational and implementation aspects of this control strategy are discussed and illustrated using two example systems.

Key words. Model predictive control, constrained systems, sampled-data

1 Introduction

Many methods for the control of dynamical systems exist. Besides the question of stability, often the achieved performance as well as the satisfaction of constraints on the states and inputs are of paramount importance. One classical approach to take these points into account is the design of an optimal feedback controller. As is well known, however, it is often very hard, if not impossible, to derive a closed solution for the corresponding feedback controller. One possible approach to overcome this problem is the application of model predictive control (MPC), often also referred to as receding horizon control or moving horizon control. Basically, in model predictive control the optimal control problem is solved repeatedly at specific sampling instants for the current, fixed system state. The first part of the resulting open-loop input is applied to the system until the next sampling instant, at which the optimal control problem for the new system state is solved again. Since the optimal control problem is solved at every sampling instant only for one fixed initial condition, the solution is much easier to obtain than a closed solution of the Hamilton-Jacobi-Bellman partial differential equation (for all possible initial conditions) of the original optimal control problem.

In general one distinguishes between linear and nonlinear model predictive control (NMPC). Linear MPC refers to MPC schemes that are based on linear dynamical models of the system and in which linear constraints on the states and inputs and a quadratic cost function are employed. NMPC refers to MPC schemes that use nonlinear models for the prediction of the system behavior and that allow the consideration of non-quadratic cost functions and nonlinear constraints on the states and inputs. By now linear MPC is widely used in industrial applications [40, 41, 75, 77, 78]. For example, [78] reports more than 4500 applications spanning a wide range from chemicals to aerospace industries. Also many theoretical and implementation issues of linear MPC theory have been studied so far [55, 68, 75]. Many systems are, however, inherently nonlinear, and the application of linear MPC schemes then leads to poor performance of the closed loop. Driven by this shortcoming and the desire to directly use first-principles-based nonlinear models, there is a steadily increasing interest in the theory and application of NMPC. Over the recent years much progress in the area of NMPC has been made (see for example [1, 17, 68, 78]). However, there remain a series of open questions and hurdles that must be overcome before a theoretically well-founded practical application of NMPC is possible.
In this paper we focus on an introduction and overview of NMPC for continuous time systems with sampled state information, i.e. we consider the stabilization of continuous time systems by repeatedly applying input trajectories that are obtained from the solution of an open-loop optimal control problem at discrete sampling instants. In the following we refer to this shortly as sampled-data NMPC. In comparison to NMPC for discrete time systems (see e.g. [1, 17, 68]) or instantaneous NMPC [68], where the optimal input is recalculated at all times (no open-loop input signal is applied to the system), the inter-sampling behavior of the system while the open-loop input is applied must be taken into account, see e.g. [25, 27, 44, 45, 62].

In Section 2 we review the basic principle of NMPC. Before we focus on the theoretical questions, we shortly outline in Section 2.3 how the resulting open-loop optimal control problem can be solved. Section 3 contains a discussion on how stability in sampled-data NMPC can be achieved. Section 4 discusses robustness issues in NMPC and Section 5 considers the output feedback problem for NMPC. Before concluding in Section 8 we consider in Section 6 the sampled-data NMPC control of a simple nonlinear example system and in Section 7 the pendulum benchmark example considered throughout this book.


2 Principles of Sampled-Data Model Predictive Control

In model predictive control the input applied to the system (1) is given by the repeated solution of a (finite) horizon open-loop optimal control problem subject to the system dynamics and the state and input constraints: based on measurements obtained at a sampling time (in the following denoted by ti), the controller predicts the dynamic behavior of the system over the so-called control/prediction horizon Tp and determines the input such that an open-loop performance objective is minimized. Under the assumption that the prediction horizon spans to infinity and that there are no disturbances and no model-plant mismatch, one could apply the resulting input open-loop to the system and achieve (under certain assumptions) convergence to the origin. However, due to external disturbances, model-plant mismatch and the use of finite prediction horizons, the actual predicted state and the true system state differ. Thus, to counteract this deviation and to suppress the disturbances it is necessary to incorporate feedback. In model predictive control this is achieved by applying the obtained optimal open-loop input only until the next sampling instant, at which the whole process – prediction and optimization – is repeated (compare Figure 1), thus moving the prediction horizon forward.

[Figure: two time axes showing, at the sampling times ti and ti+1, the closed-loop state x and input u up to the current time, and the predicted state x̄ and open-loop input ū over the control/prediction horizon Tp.]

Fig. 1. Principle of model predictive control.

The whole procedure can be summarized by the following steps:

1. Obtain estimates of the current state of the system.
2. Obtain an admissible optimal input by minimizing the desired cost function over the prediction horizon, using the system model and the current state estimate for prediction.
3. Implement the obtained optimal input until the next sampling instant.
4. Continue with 1.
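As an illustration, the four steps above can be sketched as a simple receding-horizon loop. The scalar plant, the quadratic cost, and the brute-force "OCP solver" below are illustrative assumptions made only for this sketch, not from the paper; a real implementation would use a proper dynamic optimizer (see Section 2.3).

```python
import numpy as np

def f(x, u):
    # Illustrative unstable scalar plant x' = x + u (an assumption for the sketch)
    return x + u

def simulate(x0, u_seq, dt):
    # Euler prediction of the model over the horizon (the "prediction" of step 2)
    x, traj = x0, [x0]
    for u in u_seq:
        x = x + dt * f(x, u)
        traj.append(x)
    return np.array(traj)

def solve_ocp(x0, horizon_steps, dt, u_grid):
    # Crude stand-in for the open-loop optimal control problem:
    # brute-force search over constant admissible inputs u in U = [-2, 2]
    best_u, best_cost = u_grid[0], np.inf
    for u in u_grid:
        traj = simulate(x0, [u] * horizon_steps, dt)
        cost = dt * np.sum(traj[:-1] ** 2 + u ** 2)  # quadratic stage cost
        if cost < best_cost:
            best_u, best_cost = u, cost
    return [best_u] * horizon_steps

def mpc_loop(x0, n_samples, horizon_steps=40, dt=0.05):
    x = x0
    xs = [x]
    for _ in range(n_samples):
        # 1. "measure" the current state (exact here: nominal case)
        # 2. solve the open-loop problem for this fixed initial condition
        u_star = solve_ocp(x, horizon_steps, dt, np.linspace(-2.0, 2.0, 81))
        # 3. apply only the first part of the open-loop input
        x = x + dt * f(x, u_star[0])
        xs.append(x)
        # 4. continue with step 1 at the next sampling instant
    return np.array(xs)

xs = mpc_loop(x0=1.0, n_samples=60)
```

Even though each open-loop plan is only a constant input, re-solving at every sampling instant turns the scheme into a feedback that drives the unstable state toward the origin.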

Considering this control strategy, various questions arise, such as closed-loop stability, robustness to disturbances/model uncertainties, and the efficient solution of the resulting open-loop optimal control problem.


2.1 Mathematical Formulation of Sampled-Data NMPC

Throughout the paper we consider the stabilization of time-invariant nonlinear systems of the form

ẋ(t) = f(x(t), u(t)) a.e. t ≥ 0, x(0) = x0, (1)

where x ∈ R^n denotes the system state and u ∈ R^m is the control or input to the system. We assume that the vector field f : R^n × R^m → R^n is locally Lipschitz continuous with f(0, 0) = 0. The objective is to (optimally) stabilize the system subject to the input and state constraints u(t) ∈ U ⊂ R^m, x(t) ∈ X ⊆ R^n, ∀t ≥ 0, where U ⊂ R^m is assumed to be compact and X ⊆ R^n is assumed to be simply connected with (0, 0) ∈ X × U.

Remark 1. (Rate constraints on the inputs) If rate constraints

u̇(t) ∈ U̇, ∀t ≥ 0 (2)

on the inputs must be considered, they can be transformed to the given form by adding integrators before the inputs, see for example Section 7. Note, however, that this transforms the input constraint u ∈ U into constraints on the integrator states.

We denote the solution of (1) (if it exists), starting at a time t1 from a state x(t1) and applying a (piecewise continuous) input u : [t1, t2] → R^m, by x(τ; u(·), x(t1)), τ ∈ [t1, t2]. In sampled-data NMPC an open-loop optimal control problem is solved at the discrete sampling instants ti. We assume that these sampling instants are given by a partition π of the time axis:

Definition 1. (Partition) A partition is a series π = (ti), i ∈ N, of (finite) positive real numbers such that t0 = 0, ti < ti+1 and ti → ∞ for i → ∞. Furthermore, π̄ := sup_{i∈N}(ti+1 − ti) denotes the upper diameter of π and π̲ := inf_{i∈N}(ti+1 − ti) denotes the lower diameter of π.
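For a finite prefix of such a partition, the upper and lower diameters are simply the largest and smallest sampling intervals. A minimal sketch (the partition values are arbitrary examples, not from the paper):

```python
def diameters(times):
    # Upper/lower diameter of a (finite prefix of a) partition pi = (t_i):
    # the sup/inf of the sampling intervals t_{i+1} - t_i (Definition 1)
    gaps = [b - a for a, b in zip(times, times[1:])]
    return max(gaps), min(gaps)

pi = [0.0, 0.1, 0.25, 0.3, 0.5]   # t0 = 0, strictly increasing
upper, lower = diameters(pi)      # upper = 0.2, lower = 0.05
```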

Whenever t and ti occur together, ti should be taken as the closest previous sampling instant with ti < t. The input applied in between the sampling instants, i.e. in the interval [ti, ti+1), is in NMPC given by the solution of the open-loop optimal control problem

min_{ū(·) ∈ L∞([0, Tp])} J(x(ti), ū(·)) (3a)

subject to:

˙x̄(τ) = f(x̄(τ), ū(τ)), x̄(ti) = x(ti) (3b)

ū(τ) ∈ U, x̄(τ) ∈ X, τ ∈ [ti, ti + Tp] (3c)

x̄(ti + Tp) ∈ E. (3d)


Here the bar denotes predicted variables, i.e. x̄(·) is the solution of (3b) driven by the input ū(·) : [ti, ti + Tp] → U with the initial condition x(ti). The distinction between the real system state x of (1) and the predicted state x̄ in the controller is necessary since, due to the moving horizon nature, even in the nominal case the predicted states will differ from the real states at least after one sampling instant. As cost functional J minimized over the control horizon Tp ≥ π̄ > 0 we consider

J(x(ti), ū(·)) := ∫_{ti}^{ti+Tp} F(x̄(τ), ū(τ)) dτ + E(x̄(ti + Tp)), (4)

where the stage cost F : X × U → R is assumed to be continuous, satisfies F(0, 0) = 0, and is lower bounded by a positive semidefinite function αF : R^n → R⁺₀, i.e. αF(x) ≤ F(x, u) ∀(x, u) ∈ X × U. We furthermore assume that the autonomous system ẋ = f(x, 0) is zero-state detectable via αF(x), i.e. ∀x0 ∈ X, αF(x(τ; x0)) = 0 ∀τ ⇒ x(τ; x0) → 0 as τ → ∞, where x(τ; x0) denotes the solution of the system ẋ = f(x, 0) starting from x(0) = x0. The so-called terminal region constraint E and the so-called terminal penalty term E are typically used to enforce stability or to increase the performance of the closed loop, see Section 3. The solution of the optimal control problem (3) is denoted by ū*(·; x(ti)). It defines the open-loop input that is applied to the system until the next sampling instant ti+1:

u(t; x(ti)) = ū*(t; x(ti)), t ∈ [ti, ti+1). (5)

As noted above, the control u(t; x(ti)) is a feedback, since it is recalculated at each sampling instant using the new state measurement. We limit the presentation to input signals that are piecewise continuous and refer to an admissible input as:

Definition 2. (Admissible Input) An input u : [0, Tp] → R^m for a state x0 is called admissible, if it is: a) piecewise continuous, b) u(τ) ∈ U ∀τ ∈ [0, Tp], c) x(τ; u(·), x0) ∈ X ∀τ ∈ [0, Tp], d) x(Tp; u(·), x0) ∈ E.

We furthermore consider an admissible set of problem (3) as:

Definition 3. (Admissible Set) A set X ⊆ X is called admissible, if for all x0 ∈ X there exists a piecewise continuous input u : [0, Tp] → U such that a) x(τ; u(·), x0) ∈ X, τ ∈ [0, Tp] and b) x(Tp; u(·), x0) ∈ E.
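For a given admissible input, the cost functional (4) can be evaluated numerically by appending the running cost to the state and integrating both together. The scalar dynamics, stage cost, and terminal penalty below are illustrative assumptions for this sketch, not quantities from the paper:

```python
from scipy.integrate import solve_ivp

def cost_functional(x0, u_of_t, t0, Tp):
    """Evaluate J of the form (4) for one open-loop input on [t0, t0 + Tp]."""
    def f(x, u):                        # illustrative stable dynamics x' = -x + u
        return -x + u

    def augmented(t, z):                # z = [x, accumulated running cost]
        x, u = z[0], u_of_t(t)
        return [f(x, u), x**2 + u**2]   # stage cost F(x, u) = x^2 + u^2

    sol = solve_ivp(augmented, (t0, t0 + Tp), [x0, 0.0],
                    rtol=1e-8, atol=1e-10)
    x_end, running = sol.y[0, -1], sol.y[1, -1]
    return running + 10.0 * x_end**2    # terminal penalty E(x) = 10 x^2

J = cost_functional(x0=1.0, u_of_t=lambda t: 0.0, t0=0.0, Tp=2.0)  # ≈ 0.674
```

For u ≡ 0 the state is x(t) = e^{-t}, so the integral and terminal terms can be checked in closed form, which makes this a convenient sanity test for any cost implementation.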

Without further (possibly very strong) restrictions it is often not clear whether for a given x an admissible input exists, nor whether the minimum of (3) is attained. While the existence of an admissible input is related to constrained controllability, the existence of an optimal solution of (3) is in general nontrivial to answer. For simplicity of presentation we assume in the following that the set R denotes an admissible set that admits an optimal solution of (3), i.e. one obtains the following assumption:


Assumption 1 (Set R) There exists an admissible set R such that (3) admits for all x0 ∈ R an optimal (not necessarily unique) solution.

It is possible to derive existence results for (3) considering measurable inputs and imposing certain convexity and compactness conditions, see for example [36, 37, 73] and [4, 35, 82]. However, often it is not possible to check the necessary conditions a priori. The main reason for imposing Assumption 1 is the requirement that an optimal/feasible solution at one sampling instant should guarantee (under certain assumptions) the existence of an optimal/feasible solution at the next sampling instant (see Section 3). The optimal value of the cost functional (4) plays an important role in many considerations. It is typically denoted as the value function:

Definition 4. (Value function) The value function V(x) is defined as the minimal value of the cost for the state x: V(x) = J(x, ū*(·; x)).

The value function is for example used in the proofs of convergence and stability. It often serves as a “Lyapunov function”/decreasing function candidate, see Section 3 and [1, 68]. In comparison to sampled-data NMPC for continuous time systems, in instantaneous NMPC the input is defined by the solution of the optimal control problem (3) at all times: u(x(t)) = ū*(t; x(t)), i.e. no open-loop input is applied, see e.g. [67, 68]. Considering that the solution of the open-loop optimal control problem requires an often non-negligible time, this approach cannot be applied in practice. Besides the continuous time considerations, results for NMPC of discrete time systems are also available (see e.g. [1, 17, 68]). We do not go into further details here.

Remark 2. (Hybrid nature of sampled-data predictive control) Note that in sampled-data NMPC the input applied in between the recalculation instants ti and ti+1 is given by the solution of the open-loop optimal control problem (3) at time ti, i.e. the closed loop is given by

ẋ(t) = f(x(t), u(t; x(ti))). (6)

Thus, strictly speaking, the behavior of the system is not only defined by the current state. Rigorously one has to consider a hybrid system [43, 46, 74, 84] consisting of the “discrete” state x(ti) and the continuous state x(t). This is especially important for the stability considerations in Section 3, since the “discrete memory” x(ti) must be taken into account.

2.2 Inherent Characteristics and Problems of NMPC

One of the key problems in predictive control schemes is that the actual closed-loop input and states differ from the predicted open-loop ones, even if no model-plant mismatch and no disturbances are present. This stems from the fact that at the next sampling instant the (finite) prediction horizon moves forward, allowing more information to be considered and thus leading to a mismatch of the trajectories. The difference between the predicted and the closed-loop trajectories has two immediate consequences. Firstly, the actual goal, to compute a feedback such that the performance objective over an often desired infinite horizon of the closed loop is minimized, is not achieved. Secondly, there is in general no guarantee that the closed-loop system will be stable at all. It is indeed easy to construct examples for which the closed loop becomes unstable if a short finite horizon is chosen. Hence, when using finite prediction horizons, special attention is required to guarantee stability (see Section 3). Summarizing, the key characteristics and properties of NMPC are:

• NMPC allows the direct use of nonlinear models for prediction.
• NMPC allows the explicit consideration of state and input constraints.
• In NMPC a time domain performance criterion is minimized on-line.
• In NMPC the predicted behavior is in general different from the closed-loop behavior.
• For the application of NMPC an open-loop optimal control problem must be solved on-line.
• To perform the prediction the system states must be measured or estimated.

Remark 3. In this paper we mainly focus on NMPC for the stabilization of time-invariant continuous time nonlinear systems. However, note that NMPC is also applicable to a large class of other systems, e.g. discrete time systems, delay systems, time-varying systems, and distributed parameter systems; for more details see for example [1, 17, 68]. Furthermore, NMPC is also well suited for tracking problems or problems where one has to perform transfers between different steady states optimally, see e.g. [28, 58, 70].

Before we summarize the available stability results for sampled-data NMPC, we comment in the next section on the numerical solution of the open-loop optimal control problem.

2.3 Numerical Aspects of Sampled-Data NMPC

Predictive control circumvents the solution of the Hamilton-Jacobi-Bellman equation by solving the open-loop optimal control problem at every sampling instant only for the currently measured system state. An often intractable problem is replaced by a tractable one. In linear MPC the solution of the optimal control problem (3) can often be cast as a convex quadratic program, which can be solved efficiently. This is one of the main reasons for the practical success of linear MPC. In NMPC, however, at every sampling instant a general nonlinear open-loop optimal control problem (3) must be solved on-line. Thus one important precondition for the application of NMPC is the availability of reliable and efficient numerical dynamic optimization algorithms for the optimal control problem (3). Solving (3) numerically efficiently and fast is, however, not a trivial task and has attracted much research interest in recent years (see e.g. [2, 5, 6, 18, 22–24, 56, 64–66, 81, 83]). Typically so-called direct solution methods [6, 7, 76] are used, i.e. the original infinite dimensional problem is turned into a finite dimensional one by discretizing the input (and possibly also the state). Basically this is done by parameterizing the input (and possibly the states) finitely and solving/approximating the differential equations during the optimization. We do not go into further details and instead refer to [7, 22, 66]. However, we note that recent studies have shown that the usage of special dynamic optimizers and tailored NMPC schemes allows NMPC to be applied to practically relevant problems (see e.g. [2, 24, 29, 34, 65, 81]), even with today's computational power.
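A minimal single-shooting sketch of this idea: the input is parameterized as piecewise constant, the differential equation (with the running cost appended) is integrated for each candidate parameter vector, and a generic NLP solver enforces the input bounds. The scalar dynamics, cost weights, and horizon are illustrative assumptions, not from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def solve_ocp_shooting(x0, Tp=2.0, N=10, u_min=-2.0, u_max=2.0):
    """Direct single shooting for a problem of form (3), piecewise-constant input."""
    dt = Tp / N

    def f(x, u):                       # illustrative unstable dynamics x' = x + u
        return x + u

    def cost(u_params):
        x, J = x0, 0.0
        for k in range(N):             # integrate one interval per input parameter
            sol = solve_ivp(lambda t, z: [f(z[0], u_params[k]),
                                          z[0]**2 + u_params[k]**2],  # F(x, u)
                            (0.0, dt), [x, 0.0])
            x, J = sol.y[0, -1], J + sol.y[1, -1]
        return J + 10.0 * x**2         # terminal penalty E(x)

    res = minimize(cost, np.zeros(N), bounds=[(u_min, u_max)] * N,
                   method="L-BFGS-B")
    return res.x                       # optimal piecewise-constant input

u_star = solve_ocp_shooting(x0=1.0)    # first element is applied until t_1
```

Production codes replace this sketch with tailored integrators, analytic sensitivities, and multiple shooting or collocation, but the structure (parameterize, simulate, optimize) is the same.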

Remark 4. (Suboptimality and NMPC) Since the optimal control problem (3) is typically nonconvex, it is questionable whether the globally minimizing input can be found at all. While the usage of a non-optimal admissible input might lead to an increase in the cost, it is not crucial to find the global minimum for stability of the closed loop, as outlined in the next section.

3 Nominal Stability of Sampled-Data NMPC

As outlined, one elementary question in NMPC is whether a finite horizon NMPC strategy does guarantee stability of the closed loop. While a finite prediction and control horizon is desirable from an implementation point of view, the difference between the predicted state trajectory and the resulting closed-loop behavior can lead to instability. Here we review some central ideas how stability can be achieved. No attempt is made to cover all existing approaches and methods, especially those which consider instantaneous or discrete time NMPC. We also only consider the nominal case, i.e. it is assumed that no external disturbances act on the system and that there is no model mismatch between the system model used for prediction and the real system.

Stability by an infinite prediction horizon: The most intuitive way to achieve stability/convergence to the origin is to use an infinite horizon cost, i.e. Tp in the optimal control problem (3) is set to ∞. In this case the open-loop input and state trajectories resulting from (3) at a specific sampling instant coincide with the closed-loop trajectories of the nonlinear system due to Bellman's principle of optimality [3]. Thus, the remaining parts of the trajectories at the next sampling instant are still optimal (end pieces of optimal trajectories are optimal). Since the first part of the optimal trajectory has already been implemented, the cost for the remaining part, and thus the value function, is decreasing, which implies under mild conditions convergence of the states. Detailed derivations can for example be found in [51, 52, 67, 68].


Stability for finite prediction horizons: In the case of finite horizons the stability of the closed loop is not guaranteed a priori if no precautions are taken. By now a series of approaches exist that achieve closed-loop stability. In most of these approaches the terminal penalty E and the terminal region constraint E are chosen suitably to guarantee stability, or the standard NMPC setup is modified to achieve stability. The additional terms are not motivated by physical restrictions or performance requirements; they have the sole purpose of enforcing stability. Therefore, they are usually called stability constraints.

Stability via a zero terminal constraint: One possibility to enforce stability with a finite prediction horizon is to add the so-called zero terminal equality constraint at the end of the prediction horizon, i.e.

x̄(ti + Tp) = 0 (7)

is added to the optimal control problem (3) [9, 52, 67, 69]. This leads to stability of the closed loop, if the optimal control problem has a solution at t = 0. Similar to the infinite horizon case, feasibility at one sampling instant implies feasibility at the following sampling instants and a decrease in the value function. One disadvantage of a zero terminal constraint is that the predicted system state is forced to reach the origin in finite time. This leads to feasibility problems for short prediction/control horizon lengths, i.e. to small regions of attraction. Furthermore, from a computational point of view, an exact satisfaction of a zero terminal equality constraint does in general require an infinite number of iterations in the optimization and is thus not desirable. The main advantages of a zero terminal constraint are the straightforward application and the conceptual simplicity.

Dual-mode control: One of the first sampled-data NMPC approaches avoiding an infinite horizon or a zero terminal constraint is the so-called dual-mode NMPC approach [71]. Dual-mode is based on the assumption that a local (linear) controller is available for the nonlinear system. Based on this local linear controller, a terminal region and a quadratic terminal penalty term are determined and added to the open-loop optimal control problem, similar to E and E, such that: 1.) the terminal region is invariant under the local control law, 2.) the terminal penalty term E enforces a decrease in the value function. Furthermore, the prediction horizon is considered as an additional degree of freedom in the optimization. The terminal penalty term E can be seen as an approximation of the infinite horizon cost inside the terminal region E under the local linear control law. Note that dual-mode control is not strictly a pure NMPC controller, since the open-loop optimal control problem is only repeatedly solved until the system state enters the terminal set E, which is achieved in finite time. Once the system state is inside E, the control is switched to the local control law u = Kx, hence the name dual-mode NMPC. Thus the local control is utilized to establish asymptotic stability, while the NMPC feedback is used to increase the region of attraction of the local control law.


Based on the results in [71], it is shown in [12] that switching to the local control law is not necessary to establish stability.

Control Lyapunov function approaches: In the case that E is a global control Lyapunov function for the system, the terminal region constraint x̄(ti + Tp) ∈ E is actually not necessary. Even if the control Lyapunov function is not globally valid, convergence to the origin can be achieved [50], and it can be established that for increasing prediction horizon length the region of attraction of the infinite horizon NMPC controller is recovered [48, 50]. Approaches using a control Lyapunov function as terminal penalty term and no terminal region constraint are typically referred to as control Lyapunov function based NMPC approaches.

Unified conditions for convergence: Besides the outlined approaches there exists a series of approaches [11, 12, 14, 61, 71] that are based on the consideration of a (virtual) local control law that is able to stabilize the system inside the terminal region, and where the terminal penalty E provides an upper bound on the optimal infinite horizon cost. The following theorem covers most of the existing stability results. It establishes conditions for the convergence of the closed-loop states under sampled-data NMPC. It is a slight modification of the results given in [10, 11, 36]. The underlying idea is outlined here since it gives a basic impression of the general approach how convergence and stability are achieved in NMPC.

Theorem 1. (Convergence of sampled-data NMPC) Suppose that

(a) the terminal region E ⊆ X is closed with 0 ∈ E and the terminal penalty E(x) ∈ C¹ is positive semi-definite,

(b) ∀x ∈ E there exists an (admissible) input uE : [0, π̄] → U such that x̄(τ) ∈ E and

∂E/∂x f(x̄(τ), uE(τ)) + F(x̄(τ), uE(τ)) ≤ 0 ∀τ ∈ [0, π̄], (8)

(c) x(0) ∈ R.

Then for the closed-loop system (1), (5), x(t) → 0 for t → ∞.

Proof. See [26].

Loosely speaking, E is an F-conform local control Lyapunov function in the terminal set E. The terminal region constraint enforces feasibility at the next sampling instant and allows, similarly to the infinite horizon case, to show that the value function is strictly decreasing. Thus stability can be established. Note that this result is nonlocal in nature, i.e. there exists a region of attraction R which is at least of the size of E. Various ways to determine a suitable terminal penalty term and terminal region exist. Examples are the use of a control Lyapunov function as terminal penalty E [49, 50] or the use of a local nonlinear or linear control law to determine a suitable terminal penalty E and a terminal region E [11, 12, 14, 61, 71].
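For the common case of a quadratic terminal penalty E(x) = x'Px determined from a local linear control law u = Kx for a linearization ẋ = Ax + Bu with quadratic stage cost F(x, u) = x'Qx + u'Ru, condition (8) reduces (neglecting linearization errors) to the matrix inequality (A + BK)'P + P(A + BK) + Q + K'RK ⪯ 0. A sketch that computes such a P by solving this relation with equality as a Lyapunov equation; the matrices A, B, K, Q, R are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative linearized plant x' = Ax + Bu with a stabilizing local gain K
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-2.0, -2.0]])            # A + BK has eigenvalues -1, -1 (Hurwitz)
Q, R = np.eye(2), np.eye(1)             # quadratic stage cost weights

Ak = A + B @ K
# Solve (A+BK)' P + P (A+BK) = -(Q + K' R K), so that the decrease
# condition corresponding to (8) holds with equality under u = Kx.
P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))

residual = Ak.T @ P + P @ Ak + Q + K.T @ R @ K   # should be ~0
```

A sublevel set {x : x'Px ≤ α} with α small enough that u = Kx remains admissible and the linearization error is dominated then serves as a candidate terminal region.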


Remark 5. (Suboptimality) Note that we need the rather strict Assumption 1 on the set R to ensure the existence of a new optimal solution at ti+1 based on the existence of an optimal solution at ti. The existence of an admissible input at ti+1 is already guaranteed due to the existence of the local controller, i.e. condition (b). In principle the existence of an optimal solution at the next time instant is not really required for the convergence result. The admissible input, which is a concatenation of the remaining old input and the local control, already leads to a decrease in the cost function and thus convergence. To increase performance from time instant to time instant, one could require that the cost decreases more than the decrease resulting from an application of the “old” admissible control, i.e. feasibility implies convergence [12, 79].

Remark 6. (Stabilization of systems that require discontinuous inputs) In principle Theorem 1 allows to consider the stabilization of systems that can only be stabilized by feedback that is discontinuous in the state [36], e.g. nonholonomic mechanical systems. However, for such systems it is in general rather difficult to determine a suitable terminal region and a terminal penalty term. To weaken the assumptions in this case, it is possible to drop the continuous differentiability requirement on E, requiring merely that E is Lipschitz continuous in E. From Rademacher's theorem [16] it then follows that E is continuously differentiable almost everywhere and that (8) holds for almost all τ, and the proof remains nearly unchanged. More details can be found in [37].

Remark 7. (Special input signals) Basically it is also possible to consider only special classes of input signals, e.g. one could require that the input is piecewise continuous in between sampling instants, or that the input is parameterized as a polynomial in time or as a spline. Modifying Assumption 1 such that the optimal control problem possesses a solution for the considered input class, and such that condition (8) holds for the considered inputs, the proof of Theorem 1 remains unchanged. The consideration of such inputs can for example be of interest if only piecewise constant inputs can be implemented on the real system, or if the numerical on-line solution of the optimal control problem allows only the consideration of such inputs. One example of such an extension is the consideration of piecewise constant inputs as in [61, 62].

So far only conditions for the convergence of the states to the origin were outlined. In many control applications the question of asymptotic stability in the sense of Lyapunov is also of interest. Even though this is possible for the sampled-data setup considered here, we do not go into further details, see e.g. [26, 37]. Concluding, the nominal stability question of NMPC is by now well understood, and a series of NMPC schemes exist that guarantee closed-loop stability.


4 Robustness of Sampled-Data NMPC

The results reviewed so far are based on the assumption that the real system coincides with the model used for prediction, i.e. no model/plant mismatch or external disturbances are present. Clearly, this is very unrealistic, and the development of an NMPC framework to address robustness issues is of paramount importance. In general one distinguishes between the inherent robustness properties of NMPC and the design of NMPC controllers that take the uncertainty/disturbances directly into account.

Typically, NMPC schemes that take uncertainty acting on the system directly into account are based on game-theoretic considerations. Practically they often require the on-line solution of a min-max problem. A series of different approaches can be distinguished. We do not go into details here and instead refer to [8, 13, 38, 53, 54, 57, 59, 60].

Instead we are interested in the so called inherent robustness properties of sampled-data NMPC. By inherent robustness we mean the robustness of NMPC to uncertainties/disturbances without taking them directly into account. As shown, sampled-data NMPC possesses under certain conditions inherent robustness properties. This property stems from the close relation of NMPC to optimal control. Results on the inherent robustness of instantaneous NMPC can for example be found in [9, 63, 68]. Discrete time results are given in [42, 80], and results for sampled-data NMPC are given in [33, 71]. Typically these results consider additive disturbances of the following form:

ẋ = f(x, u) + p(x, u, w) (9)

where p : Rn × Rm × Rl → Rn describes the model uncertainty/disturbance, and where w ∈ W ⊂ Rl might be an exogenous disturbance acting on the system. However, assuming that f is locally Lipschitz in u, these results can be simply expanded to the case of input disturbances. This type of disturbance is of special interest, since it allows to capture the influence of the numerical solution of the open-loop optimal control problem. Further examples of input disturbances are neglected fast actuator dynamics, computational delays, or numerical errors in the solution of the underlying optimal control problem. For example, inherent robustness was used in [20, 21] to establish stability of an NMPC scheme that employs approximated solutions of the optimal control problem.

Summarizing, some preliminary results for the inherent robustness and the robust design of NMPC controllers exist. However, these results are either not implementable, since they require a high computational load, or they are not directly applicable due to their restrictive assumptions.


5 Output Feedback Sampled-Data NMPC

One of the key obstacles for the application of NMPC is that at every sampling instant ti the system state is required for prediction. However, often not all system states are directly accessible, i.e. only an output

y = h(x, u) (10)

is directly available for feedback, where y ∈ Rp are the measured outputs and where h : Rn × Rm → Rp maps the state and input to the output. To overcome this problem one typically employs a state observer for the reconstruction of the states. In principle, instead of the optimal feedback (5), the "disturbed" feedback

u(t; x̂(ti)) = u*(t; x̂(ti)), t ∈ [ti, ti+1) (11)

based on the state estimate x̂ is applied. Yet, due to the lack of a general nonlinear separation principle, stability is not guaranteed, even if the state observer and the NMPC controller are both stable. Several researchers have addressed this problem (see [32] for a review). The approach in [19] derives local uniform asymptotic stability of contractive NMPC in combination with a "sampled" state estimator. In [58], see also [80], asymptotic stability results for observer based discrete-time NMPC for "weakly detectable" systems are given. The results allow, in principle, to estimate a (local) region of attraction of the output feedback controller from Lipschitz constants. In [72] an optimization based moving horizon observer combined with a certain NMPC scheme is shown to lead to (semi-global) closed-loop stability. In [30, 31, 47] semi-global stability results for output feedback NMPC using high-gain observers are derived. Furthermore, in [32], based on the inherent robustness properties of NMPC as outlined in Section 4, conditions on the observer are derived that guarantee that the closed loop is semi-globally practically stable for a broad class of state feedback nonlinear model predictive controllers.

Even though a series of output feedback results for NMPC using observers for state recovery exist, most of these approaches are far away from being implementable. Thus, further research has to address this important question to allow for a practical application of NMPC.

6 A Simple Nonlinear Example

The following example is intended to illustrate some of the inherent properties of sampled-data NMPC and to show how Theorem 1 can be used to design a stabilizing NMPC controller that takes constraints into account. We consider the following second order system [39]

ẋ1(t) = x2(t) (12a)

ẋ2(t) = −x1(t) + x2(t) sinh(x1^2(t) + x2^2(t)) + u(t), (12b)


which should be stabilized with the bounded control u(t) ∈ U := {u ∈ R | |u| ≤ 1} ∀t ≥ 0, where the stage cost is given by

F(x, u) = x2^2 + u^2. (13)

According to Theorem 1 we achieve stability if we can find a terminal region E and a C1 terminal penalty E(x) such that (8) is satisfied. For this we consider the unconstrained infinite horizon optimal control problem for (12). One can verify that the control law

u∞(x) = −x2 e^(x1^2 + x2^2) (14)

minimizes the corresponding cost

J∞(x, u(·)) = ∫0^∞ ( x2^2(τ) + u^2(τ) ) dτ, (15)

and that the associated value function, which will be used as terminal penalty term, is given by

E(x) := V∞(x) = e^(x1^2 + x2^2) − 1. (16)
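As a quick numerical sanity check (a sketch added here, not part of the original development), one can verify that (14) and (16) satisfy the Hamilton-Jacobi-Bellman identity ∂V∞/∂x · f(x, u∞(x)) + F(x, u∞(x)) = 0 at arbitrary test points:

```python
import numpy as np

def f(x, u):
    # system dynamics (12)
    return np.array([x[1], -x[0] + x[1] * np.sinh(x[0]**2 + x[1]**2) + u])

def u_inf(x):
    # unconstrained optimal feedback (14)
    return -x[1] * np.exp(x[0]**2 + x[1]**2)

def grad_V(x):
    # gradient of the value function (16), V(x) = exp(x1^2 + x2^2) - 1
    e = np.exp(x[0]**2 + x[1]**2)
    return np.array([2.0 * x[0] * e, 2.0 * x[1] * e])

def hjb_residual(x):
    # dV/dx * f(x, u_inf(x)) + F(x, u_inf(x)) with stage cost F = x2^2 + u^2
    u = u_inf(x)
    return grad_V(x) @ f(x, u) + x[1]**2 + u**2

# the residual vanishes identically; check at random points
rng = np.random.default_rng(0)
for x in rng.uniform(-1.0, 1.0, size=(100, 2)):
    assert abs(hjb_residual(x)) < 1e-8
```

Expanding the terms by hand shows the cancellation exactly: 2·x2²·e·sinh(x1²+x2²) equals x2²(e² − 1) with e = e^(x1²+x2²), which cancels against the stage cost and the u∞² contribution.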

It remains to find a suitable terminal region. According to Theorem 1 (b), for all x ∈ E there must exist an open-loop input uε which satisfies the constraints such that (8) is satisfied. If we define E as

E := {x ∈ R^2 | E(x) ≤ α} (17)

we know that along solution trajectories of the closed-loop system controlled by u∞(x), i.e. ẋ = f(x, u∞(x)), the following holds:

∂E/∂x f(x, u∞(x)) + F(x, u∞(x)) = 0, (18)

however, α must be chosen such that u∞(x) ∈ U. It can be verified that for α = 1/β − 1, where β satisfies 1 − β e^(β^2) = 0, we have u∞(x) ∈ U ∀x ∈ E. The derived terminal penalty term E(x) and the terminal region E are designed to satisfy the conditions of Theorem 1, thus the resulting NMPC controller should be able to stabilize the closed loop.

The resulting NMPC controller with the prediction horizon set to Tp = 2 is compared to a feedback linearizing controller and the optimal controller (14) (where the input of both is limited to the set U by saturation). The feedback linearizing controller used is given by:

uFl(x) := −x2 (1 + sinh(x1^2 + x2^2)), (19)

which stabilizes the system globally if the input is unconstrained. The actually implemented input for the feedback linearizing controller (and the unconstrained optimal controller (14)) is given by


u(x) = sign(uFl(x)) min{1, |uFl(x)|}, (20)

where the sign operator is defined as usual, i.e. sign(x) := −1 for x < 0 and 1 for x ≥ 0. For the NMPC controller the sampling instants were given by an equidistant partition of the time axis, i.e. π = (ti) with ti+1 = ti + δ and t0 = 0, where the sampling time δ is δ = 0.1. The open-loop optimal control problem (3) is solved by a direct solution method. Specifically, the input signal is parameterized as piecewise constant with a time discretization of 0.05 over the prediction horizon, i.e. at every sampling instant an optimization problem with 40 free variables is solved. Figure 2 shows the simulation results in the phase plane for the initial conditions x1(0) = −1.115 and x2(0) = −0.2 for all three controllers.

Fig. 2. Phase plot x1 over x2 starting from the initial condition x(0) = [−1.115, −0.2] for the NMPC controller (black solid), the saturated feedback linearizing controller (dark gray solid) and the saturated optimal controller (gray solid). The inner ellipsoid (gray dashed) is the border of the terminal region E of the NMPC controller, while the outer curve (black dashed) marks the points for which the optimal controller u∞(x) just satisfies the input constraint (saturation not active).

Note that the initial conditions are such that for all controllers the input constraints are not active at the beginning. However, after some time the maximum applicable input is reached, i.e. the saturation in (20) is active. As can be seen, both the optimal controller and the feedback linearizing controller are not able to stabilize the system for the considered initial conditions. In comparison, the NMPC controller is able to stabilize the system while meeting the input constraints (see Figure 3).

Fig. 3. Simulation results starting from the initial condition x(0) = [−1.115, −0.2] for the NMPC controller (black solid), the saturated feedback linearizing controller (dark gray solid) and the saturated optimal controller (gray solid).

Note that inside of the terminal region the NMPC controller and the optimal control law u∞(x) coincide, since the constraints are not active and since (18) is satisfied with equality. Thus, the terminal penalty term E(x) can be seen as an approximation of the cost that appears up to infinity. As can be seen from this example, if a value function/Lyapunov function and a local controller as well as the corresponding region of attraction are known, NMPC can be utilized to increase the overall region of attraction of the closed loop while satisfying the input and state constraints.
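To make the setup above concrete, the following minimal sketch (illustrative only, not the authors' implementation; the solver choice SLSQP and the numerical overflow guards are ours) solves the open-loop optimal control problem by direct single shooting with a piecewise constant input parameterization, using the stage cost (13), the terminal penalty (16), and the input bound |u| ≤ 1. The terminal region size α = 1/β − 1 is computed numerically; for simplicity only the terminal penalty, not the terminal set constraint, is enforced:

```python
import numpy as np
from scipy.optimize import brentq, minimize

def f(x, u):
    # system (12); sinh argument clipped as a purely numerical safeguard
    s = min(x[0]**2 + x[1]**2, 30.0)
    return np.array([x[1], -x[0] + x[1] * np.sinh(s) + u])

def rk4_step(x, u, h):
    k1 = f(x, u); k2 = f(x + 0.5 * h * k1, u)
    k3 = f(x + 0.5 * h * k2, u); k4 = f(x + h * k3, u)
    return x + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def cost(u_seq, x0, h):
    # integrated stage cost F = x2^2 + u^2 (13) plus terminal penalty E (16)
    x, J = np.array(x0, dtype=float), 0.0
    for u in u_seq:
        J += (x[1]**2 + u**2) * h
        x = rk4_step(x, u, h)
        if not np.all(np.isfinite(x)) or x @ x > 1e6:
            return 1e9   # guard: diverged prediction, treat as very costly
    return J + np.exp(min(x @ x, 30.0)) - 1.0

def solve_ocp(x0, N=40, h=0.05):
    # direct single shooting: N piecewise constant inputs with |u| <= 1
    res = minimize(cost, np.zeros(N), args=(x0, h),
                   bounds=[(-1.0, 1.0)] * N, method="SLSQP")
    return res.x

# size of the terminal region: alpha = 1/beta - 1 with beta * exp(beta^2) = 1
beta = brentq(lambda b: 1.0 - b * np.exp(b**2), 0.1, 1.0)
alpha = 1.0 / beta - 1.0
```

At each sampling instant ti = iδ one would call solve_ocp for the current state and apply the first δ/h entries of the returned input sequence open-loop; with Tp = 2 and h = 0.05 this reproduces the 40 free variables mentioned above.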

7 Inverted Pendulum Benchmark Example

As a second example, underlining the achievable performance in the case of input and state constraints, we consider the benchmark inverted pendulum on a cart system

ẋ(t) = [ 0 −1 0; −1 0 1; 0 1 0 ] x(t) + [ 0; 0; 1 ] u(t) + [ 1; 0; 0 ] z(t) (21)

around its upright position. The variable x1 denotes the horizontal speed of the pendulum, x2 the horizontal displacement of the pendulum, and x3 the horizontal speed of the cart. The load z represents a horizontal force on the pendulum which is persistent with unknown but bounded magnitude. Furthermore, u is the force applied by the actuator on the cart, which is constrained in magnitude by |u| ≤ 1.25 and in slew rate by |du(t)/dt| ≤ 2 s−1. In order to take the slew rate constraint on the control input into account, the system (21) is augmented by an integrator at the control input. Thus, the input constraint |u| ≤ 1.25 is transformed to a state constraint on the new state, i.e. |ξ4| ≤ 1.25. With the state ξ = [ξ1, ξ2, ξ3, ξ4]T = [x1, x2, x3, u]T and the new control input v(t) = du(t)/dt one obtains the augmented system

ξ̇(t) = [ 0 −1 0 0; −1 0 1 0; 0 1 0 1; 0 0 0 0 ] ξ(t) + [ 0; 0; 0; 1 ] v(t) + [ 1; 0; 0; 0 ] z(t). (22)

Therefore, the constraints of the system (21) become the state constraint |ξ4| ≤ 1.25 and the input constraint |v(t)| ≤ 2 s−1 for the system (22). Note that these constraints of the system (22) can be cast in the optimal control problem (3).

Fig. 4. Inverted pendulum on a cart.

In the following, two control problems are considered. The control objective of the first problem is to track a reference signal r, while the control objective of the second problem is to stabilize the system under the influence of a disturbance z. For both control problems the stage cost is chosen as

F(ξ, v) = (ξ − ξs)T diag(10, 10, 10, 1) (ξ − ξs) + ε v²,

where ε is a small positive parameter, e.g. ε = 0.00001, and ξs is the set point, which depends on the reference signal r and the disturbance z. The parameter ε in the stage cost is chosen so small in order to recover the classical quadratic stage cost on the state x and on the input u. To guarantee closed-loop stability, the terminal cost E and the terminal region E are calculated off-line by a procedure as in the quasi-infinite horizon model predictive control scheme described in [12, 15]. The resulting terminal cost E is given by

E(ξ) = (ξ − ξs)T [ 142.6 −148.8 −67.0 −17.3; −148.8 169.0 79.2 21.1; −67.0 79.2 44.5 12.1; −17.3 21.1 12.1 5.3 ] (ξ − ξs)

and the terminal region E is given by

E = {ξ ∈ R^4 | E(ξ) ≤ 3.2}. (23)

Note that the design of the terminal penalty term and the terminal region constraint is rather easy here, since the system itself is linear. Furthermore, the control and prediction horizon is chosen as TP = 6 and the sampling time as δ = 0.1.
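The integrator augmentation that turns (21) into (22) can be sketched as follows (illustrative; the matrix names are ours):

```python
import numpy as np

# system (21): x' = A x + B u + E z
A = np.array([[0., -1., 0.],
              [-1., 0., 1.],
              [0., 1., 0.]])
B = np.array([[0.], [0.], [1.]])
E = np.array([[1.], [0.], [0.]])

def augment_with_input_integrator(A, B, E):
    """Append the integrator state xi4 = u, so that the new input is v = du/dt.

    Returns (Aa, Ba, Ea) of the augmented system (22)."""
    n = A.shape[0]
    Aa = np.zeros((n + 1, n + 1))
    Aa[:n, :n] = A
    Aa[:n, n:] = B          # the old input enters as the new state xi4
    Ba = np.zeros((n + 1, 1))
    Ba[n, 0] = 1.0          # v drives the integrator
    Ea = np.vstack([E, np.zeros((1, 1))])
    return Aa, Ba, Ea

Aa, Ba, Ea = augment_with_input_integrator(A, B, E)
```

Aa, Ba, Ea reproduce the matrices in (22); the magnitude bound on u becomes the state constraint |ξ4| ≤ 1.25, and the slew-rate bound becomes the input constraint |v| ≤ 2 s−1.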

7.1 Tracking

In the following the tracking problem is studied. The control objective is that the state variable x1 asymptotically tracks the reference signal r. However, the tracking problem cannot be directly solved via the NMPC controller with the optimal control problem (3). Therefore, the tracking problem was considered as a sequence of set point changes. The set points of the system (22) depend on the reference signal r, i.e. ξs = [r 0 r 0]T. Figure 5 shows the closed-loop system states x, the control input u and the reference signal r: the reference signal r is asymptotically tracked while satisfying the constraints.

7.2 Disturbance Attenuation

In the following the task is to stabilize the state x1 under a persistent disturbance z with unknown but bounded magnitude. It is assumed that the full state ξ can be measured, but not the disturbance z. Also in this control problem the NMPC controller with the optimal control problem (3) cannot be directly applied to stabilize the state x1 under the disturbance z. A typical approach to solve such disturbance attenuation problems in model predictive control is to estimate the disturbance z via an observer and to use the estimated disturbance ẑ in the prediction of the model predictive controller. The disturbance z can be estimated via the observer


Fig. 5. Simulation results of r (gray solid), x1 (black solid), x2 (gray solid), x3 (black solid), and u (black solid) for the tracking problem.

φ̂̇(t) = [ 0 −1 0 0 1; −1 0 1 0 0; 0 1 0 1 0; 0 0 0 0 0; 0 0 0 0 0 ] φ̂(t) + [ 0; 0; 0; 1; 0 ] v(t) + L (y(t) − ŷ(t))

ŷ(t) = [ 1 0 0 0 0; 0 1 0 0 0; 0 0 1 0 0; 0 0 0 1 0 ] φ̂(t), (24)

where φ = [ξT z]T is the augmented state. The observer gain L was chosen such that the disturbance z is estimated sufficiently fast in order to obtain a good performance. Figure 6 shows the closed-loop system states x, the control input u and the disturbance z. As can be seen, the state x1 is asymptotically stabilized under the disturbance z while satisfying the constraints.

In summary, in all considered cases NMPC shows good performance while satisfying the constraints.
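A minimal sketch of the disturbance observer (24) illustrates how a constant disturbance z is recovered; the gain L below is hypothetical (obtained by pole placement on the dual system), since the paper only states that L was chosen for sufficiently fast estimation:

```python
import numpy as np
from scipy.signal import place_poles

# augmented model used by the observer (24): phi = [xi1, xi2, xi3, xi4, z]
Ao = np.array([[0., -1., 0., 0., 1.],
               [-1., 0., 1., 0., 0.],
               [0., 1., 0., 1., 0.],
               [0., 0., 0., 0., 0.],
               [0., 0., 0., 0., 0.]])
Bo = np.array([0., 0., 0., 1., 0.])
C = np.hstack([np.eye(4), np.zeros((4, 1))])   # xi is measured, z is not

# hypothetical observer gain via pole placement on the dual system
poles = [-2.0, -2.2, -2.4, -2.6, -2.8]
L = place_poles(Ao.T, C.T, poles).gain_matrix.T

def simulate_observer(z_true, T=5.0, h=1e-3, v=0.0):
    """Euler simulation of the augmented plant (constant z) and observer (24)."""
    phi = np.zeros(5); phi[4] = z_true   # true augmented state
    phi_hat = np.zeros(5)                # observer state, started at zero
    for _ in range(int(T / h)):
        innovation = C @ phi - C @ phi_hat          # y - y_hat
        phi = phi + h * (Ao @ phi + Bo * v)
        phi_hat = phi_hat + h * (Ao @ phi_hat + Bo * v + L @ innovation)
    return phi_hat[4]                    # disturbance estimate z_hat
```

Since the estimation error e = φ − φ̂ obeys ė = (Ao − LC)e, the estimate ẑ converges to the true z irrespective of the applied control v.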

Fig. 6. Simulation results of z (gray solid), x1 (black solid), x2 (gray solid), x3 (black solid), and u (black solid) for the disturbance attenuation problem.

8 Conclusions

Model predictive control, especially linear model predictive control, is by now widely applied in practice. However, increasing productivity demands, tighter environmental regulations, higher quality specifications and demanding economical considerations require to operate processes over a wide range of operating conditions, for which linear models are often not adequate. This inadequacy has led in recent years to an increased theoretical and practical interest in NMPC.

In this paper we reviewed the main principles and the existing results of sampled-data NMPC for continuous time systems subject to constraints. As outlined, in NMPC an open-loop optimal control problem is solved repeatedly at fixed sampling instants considering the current system state, and the resulting control is applied open-loop for a short time. Since NMPC is based on an open-loop optimal control problem, it allows the direct consideration of a nonlinear system model and the inclusion of constraints on states and inputs. As outlined, a series of questions for NMPC, such as the stability of the closed loop, are by now well understood. Nevertheless, many open questions remain before NMPC can be applied successfully in practice.


References

1. F. Allgower, T.A. Badgwell, J.S. Qin, J.B. Rawlings, and S.J. Wright. Nonlinear predictive control and moving horizon estimation – An introductory overview. In P.M. Frank, editor, Advances in Control, Highlights of ECC'99, pages 391–449. Springer, London, 1999.
2. R.A. Bartlett, A. Wachter, and L.T. Biegler. Active set vs. interior point strategies for model predictive control. In Proc. Amer. Contr. Conf., pages 4229–4233, Chicago, Il, 2000.
3. R. Bellman. Dynamic Programming. Princeton University Press, Princeton, New Jersey, 1957.
4. L.D. Berkovitz. Optimal Control Theory. Springer-Verlag, New York, 1974.
5. L. Biegler. Efficient solution of dynamic optimization and NMPC problems. In F. Allgower and A. Zheng, editors, Nonlinear Predictive Control, pages 219–244. Birkhauser, Basel, 2000.
6. L.T. Biegler and J.B. Rawlings. Optimization approaches to nonlinear model predictive control. In W.H. Ray and Y. Arkun, editors, Proc. 4th International Conference on Chemical Process Control – CPC IV, pages 543–571. AIChE, CACHE, 1991.
7. T. Binder, L. Blank, H.G. Bock, R. Burlisch, W. Dahmen, M. Diehl, T. Kronseder, W. Marquardt, J.P. Schloder, and O. von Stryk. Introduction to model based optimization of chemical processes on moving horizons. In M. Groetschel, S.O. Krumke, and J. Rambau, editors, Online Optimization of Large Scale Systems: State of the Art, pages 295–339. Springer, Berlin, 2001.
8. R. Blauwkamp and T. Basar. A receding-horizon approach to robust output feedback control for nonlinear systems. In Proc. 38th IEEE Conf. Decision Contr., pages 4879–4884, San Diego, 1999.
9. C.C. Chen and L. Shaw. On receding horizon feedback control. Automatica, 18(3):349–352, 1982.
10. H. Chen. Stability and Robustness Considerations in Nonlinear Model Predictive Control. Fortschr.-Ber. VDI Reihe 8 Nr. 674. VDI Verlag, Dusseldorf, 1997.
11. H. Chen and F. Allgower. Nonlinear model predictive control schemes with guaranteed stability. In R. Berber and C. Kravaris, editors, Nonlinear Model Based Process Control, pages 465–494. Kluwer Academic Publishers, Dordrecht, 1998.
12. H. Chen and F. Allgower. A quasi-infinite horizon nonlinear model predictive control scheme with guaranteed stability. Automatica, 34(10):1205–1218, 1998.
13. H. Chen, C.W. Scherer, and F. Allgower. A game theoretic approach to nonlinear robust receding horizon control of constrained systems. In Proc. Amer. Contr. Conf., pages 3073–3077, Albuquerque, 1997.
14. W. Chen, D.J. Ballance, and J. O'Reilly. Model predictive control of nonlinear systems: Computational burden and stability. IEE Proceedings, Part D, 147(4):387–392, 2000.
15. W. Chen, D.J. Ballance, and J. O'Reilly. Optimisation of attraction domains of nonlinear MPC via LMI methods. In Proc. Amer. Contr. Conf., pages 3067–3072, Arlington, 2002.
16. F.H. Clark, Y.S. Leydaev, R.J. Stern, and P.R. Wolenski. Nonsmooth Analysis and Control Theory. Number 178 in Graduate Texts in Mathematics. Springer-Verlag, New York, 1998.


17. G. De Nicolao, L. Magni, and R. Scattolini. Stability and robustness of nonlinear receding horizon control. In F. Allgower and A. Zheng, editors, Nonlinear Predictive Control, pages 3–23. Birkhauser, Basel, 2000.
18. N.M.C. de Oliveira and L.T. Biegler. An extension of Newton-type algorithms for nonlinear process control. Automatica, 31(2):281–286, 1995.
19. S. de Oliveira Kothare and M. Morari. Contractive model predictive control for constrained nonlinear systems. IEEE Trans. Aut. Control, 45(6):1053–1071, 2000.
20. M. Diehl, R. Findeisen, F. Allgower, J.P. Schloder, and H.G. Bock. Stability of nonlinear model predictive control in the presence of errors due to numerical online optimization. In Proc. 43th IEEE Conf. Decision Contr., pages 1419–1424, Maui, 2003.
21. M. Diehl, R. Findeisen, H.G. Bock, J.P. Schloder, and F. Allgower. Nominal stability of the real-time iteration scheme for nonlinear model predictive control. IEE Control Theory Appl., 152(3):296–308, 2005.
22. M. Diehl, R. Findeisen, Z. Nagy, H.G. Bock, J.P. Schloder, and F. Allgower. Real-time optimization and nonlinear model predictive control of processes governed by differential-algebraic equations. J. Proc. Contr., 4(12):577–585, 2002.
23. M. Diehl, R. Findeisen, S. Schwarzkopf, I. Uslu, F. Allgower, H.G. Bock, and J.P. Schloder. An efficient approach for nonlinear model predictive control of large-scale systems. Part I: Description of the methodology. Automatisierungstechnik, 12:557–567, 2002.
24. M. Diehl, R. Findeisen, S. Schwarzkopf, I. Uslu, F. Allgower, H.G. Bock, and J.P. Schloder. An efficient approach for nonlinear model predictive control of large-scale systems. Part II: Experimental evaluation considering the control of a distillation column. Automatisierungstechnik, 1:22–29, 2003.
25. A.M. Elaiw and E. Gyurkovics. Multirate sampling and delays in receding horizon stabilization of nonlinear systems. In Proc. 16th IFAC World Congress, Prague, Czech Republic, 2005.
26. R. Findeisen. Nonlinear Model Predictive Control: A Sampled-Data Feedback Perspective. Fortschr.-Ber. VDI Reihe 8 Nr. 1087. VDI Verlag, Dusseldorf, 2005.
27. R. Findeisen and F. Allgower. Stabilization using sampled-data open-loop feedback – a nonlinear model predictive control perspective. In Proc. Symposium on Nonlinear Control Systems, NOLCOS'2004, Stuttgart, Germany, 2004.
28. R. Findeisen, H. Chen, and F. Allgower. Nonlinear predictive control for setpoint families. In Proc. Amer. Contr. Conf., pages 260–265, Chicago, 2000.
29. R. Findeisen, M. Diehl, I. Uslu, S. Schwarzkopf, F. Allgower, H.G. Bock, J.P. Schloder, and E.D. Gilles. Computation and performance assessment of nonlinear model predictive control. In Proc. 42th IEEE Conf. Decision Contr., pages 4613–4618, Las Vegas, 2002.
30. R. Findeisen, L. Imsland, F. Allgower, and B.A. Foss. Output feedback nonlinear predictive control – a separation principle approach. In Proc. of 15th IFAC World Congress, Barcelona, Spain, 2002. Paper ID 2204 on CD-ROM.
31. R. Findeisen, L. Imsland, F. Allgower, and B.A. Foss. Output feedback stabilization for constrained systems with nonlinear model predictive control. Int. J. of Robust and Nonlinear Control, 13(3-4):211–227, 2003.
32. R. Findeisen, L. Imsland, F. Allgower, and B.A. Foss. State and output feedback nonlinear model predictive control: An overview. Europ. J. Contr., 9(2-3):190–207, 2003.


33. R. Findeisen, L. Imsland, F. Allgower, and B.A. Foss. Towards a sampled-data theory for nonlinear model predictive control. In W. Kang, C. Borges, and M. Xiao, editors, New Trends in Nonlinear Dynamics and Control, volume 295 of Lecture Notes in Control and Information Sciences, pages 295–313, New York, 2003. Springer-Verlag.
34. R. Findeisen, Z. Nagy, M. Diehl, F. Allgower, H.G. Bock, and J.P. Schloder. Computational feasibility and performance of nonlinear model predictive control. In Proc. 6th European Control Conference ECC'01, pages 957–961, Porto, Portugal, 2001.
35. W.H. Fleming and R.W. Rishel. Deterministic and Stochastic Optimal Control. Springer, Berlin, 1982.
36. F.A. Fontes. A general framework to design stabilizing nonlinear model predictive controllers. Syst. Contr. Lett., 42(2):127–143, 2000.
37. F.A. Fontes. Discontinuous feedbacks, discontinuous optimal controls, and continuous-time model predictive control. Int. J. of Robust and Nonlinear Control, 13(3-4):191–209, 2003.
38. F.A. Fontes and L. Magni. Min-max predictive control of nonlinear systems using discontinuous feedback. IEEE Trans. Aut. Control, 48(10):1750–1755, 2003.
39. R. Freeman and J. Primbs. Control Lyapunov functions: New ideas from an old source. In Proc. 35th IEEE Conf. Decision Contr., pages 3926–3931, Kobe, Japan, December 1996.
40. J.B. Froisy. Model predictive control: Past, present and future. ISA Transactions, 33:235–243, 1994.
41. C.E. Garcia, D.M. Prett, and M. Morari. Model predictive control: Theory and practice – A survey. Automatica, 25(3):335–347, 1989.
42. G. Grimm, M.J. Messina, S. Tuna, and A.R. Teel. Model predictive control: For want of a local control Lyapunov function, all is not lost. IEEE Trans. Aut. Control, 50(5):546–558, 2005.
43. R. Grossman, A. Nerode, A. Ravn, and H. Rischel, editors. Hybrid Dynamical Systems. Springer-Verlag, New York, 1993.
44. L. Grune and D. Nesic. Optimization based stabilization of sampled-data nonlinear systems via their approximate discrete-time models. SIAM J. Contr. Optim., 42:98–122, 2003.
45. L. Grune, D. Nesic, and J. Pannek. Model predictive control for nonlinear sampled-data systems. In R. Findeisen, L.B. Biegler, and F. Allgower, editors, Assessment and Future Directions of Nonlinear Model Predictive Control, Lecture Notes in Control and Information Sciences, Berlin, 2006. Springer-Verlag. To appear.
46. L. Hou, A.N. Michel, and H. Ye. Some qualitative properties of sampled-data control systems. IEEE Trans. Aut. Control, 42(42):1721–1725, 1997.
47. L. Imsland, R. Findeisen, E. Bullinger, F. Allgower, and B.A. Foss. A note on stability, robustness and performance of output feedback nonlinear model predictive control. J. Proc. Contr., 13(7):633–644, 2003.
48. K. Ito and K. Kunisch. Asymptotic properties of receding horizon optimal control problems. SIAM J. Contr. Optim., 40(5):1585–1610, 2002.
49. A. Jadbabaie and J. Hauser. On the stability of receding horizon control with a general cost. IEEE Trans. Aut. Control, 50(5):674–678, 2005.
50. A. Jadbabaie, J. Yu, and J. Hauser. Unconstrained receding horizon control of nonlinear systems. IEEE Trans. Aut. Control, 46(5):776–783, 2001.


51. S.S. Keerthi and E.G. Gilbert. An existence theorem for discrete-time infinite-horizon optimal control problems. IEEE Trans. Aut. Control, 30(9):907–909, 1985.
52. S.S. Keerthi and E.G. Gilbert. Optimal infinite-horizon feedback laws for a general class of constrained discrete-time systems: Stability and moving-horizon approximations. J. Opt. Theory and Appl., 57(2):265–293, 1988.
53. M.V. Kothare, V. Balakrishnan, and M. Morari. Robust constrained model predictive control using linear matrix inequalities. Automatica, 32(10):1361–1379, 1996.
54. S. Lall and K. Glover. A game theoretic approach to moving horizon control. In D. Clarke, editor, Advances in Model-Based Predictive Control. Oxford University Press, 1994.
55. J.H. Lee and B. Cooley. Recent advances in model predictive control and other related areas. In J.C. Kantor, C.E. Garcia, and B. Carnahan, editors, Fifth International Conference on Chemical Process Control – CPC V, pages 201–216. American Institute of Chemical Engineers, 1996.
56. W.C. Li and L.T. Biegler. Multistep, Newton-type control strategies for constrained nonlinear processes. Chem. Eng. Res. Des., 67:562–577, 1989.
57. L. Magni, G. De Nicolao, R. Scatollini, and F. Allgower. Robust model predictive control for nonlinear discrete-time systems. Int. J. of Robust and Nonlinear Control, 13(3-4):229–246, 2003.
58. L. Magni, G. De Nicolao, and R. Scattolini. Output feedback and tracking of nonlinear systems with model predictive control. Automatica, 37(10):1601–1607, 2001.
59. L. Magni, G. De Nicolao, R. Scattolini, and F. Allgower. Robust receding horizon control for nonlinear discrete-time systems. In Proc. of 15th IFAC World Congress, Barcelona, Spain, 2001. Paper ID 759 on CD-ROM.
60. L. Magni, H. Nijmeijer, and A.J. van der Schaft. A receding-horizon approach to the nonlinear H∞ control problem. Automatica, 37(5):429–435, 2001.
61. L. Magni and R. Scattolini. State-feedback MPC with piecewise constant control for continuous-time systems. In Proc. 42th IEEE Conf. Decision Contr., pages 4625–4630, Las Vegas, 2002.
62. L. Magni and R. Scattolini. Model predictive control of continuous-time nonlinear systems with piecewise constant control. IEEE Trans. Aut. Control, 49(5):900–906, 2004.
63. L. Magni and R. Sepulchre. Stability margins of nonlinear receding-horizon control via inverse optimality. Syst. Contr. Lett., 32(4):241–245, 1997.
64. R. Mahadevan and F.J. Doyle III. Efficient optimization approaches to nonlinear model predictive control. Int. J. of Robust and Nonlinear Control, 13(3-4):309–329, 2003.
65. F. Martinsen, L.T. Biegler, and B.A. Foss. Application of optimization algorithms to nonlinear MPC. In Proc. of 15th IFAC World Congress, Barcelona, Spain, 2002. Paper ID 1245 on CD-ROM.
66. D.Q. Mayne. Optimization in model based control. In Proc. IFAC Symposium Dynamics and Control of Chemical Reactors, Distillation Columns and Batch Processes, pages 229–242, Helsingor, 1995.
67. D.Q. Mayne and H. Michalska. Receding horizon control of nonlinear systems. IEEE Trans. Aut. Control, 35(7):814–824, 1990.
68. D.Q. Mayne, J.B. Rawlings, C.V. Rao, and P.O.M. Scokaert. Constrained model predictive control: Stability and optimality. Automatica, 26(6):789–814, 2000.


69. E.S. Meadows, M.A. Henson, J.W. Eaton, and J.B. Rawlings. Receding horizon control and discontinuous state feedback stabilization. Int. J. Contr., 62(5):1217–1229, 1995.
70. H. Michalska. Trajectory tracking control using the receding horizon strategy. In Symposium on Control, Optimization and Supervision, CESA'96 IMACS Multiconference, pages 298–303, Lille, 1996.
71. H. Michalska and D.Q. Mayne. Robust receding horizon control of constrained nonlinear systems. IEEE Trans. Aut. Control, 38(11):1623–1633, 1993.
72. H. Michalska and D.Q. Mayne. Moving horizon observers and observer-based control. IEEE Trans. Aut. Control, 40(6):995–1006, 1995.
73. H. Michalska and R.B. Vinter. Nonlinear stabilization using discontinuous moving-horizon control. IMA Journal of Mathematical Control & Information, 11:321–340, 1994.
74. A.N. Michel. Recent trends in the stability analysis of hybrid dynamical systems. IEEE Trans. on Circuits and Systems, 45(1):120–133, 1999.
75. M. Morari and J.H. Lee. Model predictive control: Past, present and future. Comp. & Chem. Eng., 23(4/5):667–682, 1999.
76. R. Pytlak. Numerical Methods for Optimal Control Problems with State Constraints. Lecture Notes in Mathematics. Springer, Berlin, 1999.
77. S.J. Qin and T.A. Badgwell. An overview of nonlinear model predictive control applications. In F. Allgöwer and A. Zheng, editors, Nonlinear Predictive Control, pages 369–393. Birkhäuser, 2000.
78. S.J. Qin and T.A. Badgwell. A survey of industrial model predictive control technology. Control Engineering Practice, 11(7):733–764, July 2003.
79. P.O.M. Scokaert, D.Q. Mayne, and J.B. Rawlings. Suboptimal model predictive control (feasibility implies stability). IEEE Trans. Aut. Control, 44(3):648–654, 1999.
80. P.O.M. Scokaert, J.B. Rawlings, and E.S. Meadows. Discrete-time stability with perturbations: Application to model predictive control. Automatica, 33(3):463–470, 1997.
81. M.J. Tenny and J.B. Rawlings. Feasible real-time nonlinear model predictive control. In 6th International Conference on Chemical Process Control – CPC VI, AIChE Symposium Series, 98(326), pages 187–193, 2001.
82. R. Vinter. Optimal Control. Systems & Control: Foundations & Applications. Birkhäuser Verlag, Boston, 2000.
83. S.J. Wright. Applying new optimization algorithms to model predictive control. In J.C. Kantor, C.E. Garcia, and B. Carnahan, editors, Fifth International Conference on Chemical Process Control – CPC V, pages 147–155. AIChE Symposium Series, 93(316), 1996.
84. H. Ye, A.N. Michel, and L. Hou. Stability theory for hybrid dynamical systems. IEEE Trans. Aut. Control, 43(4):461–474, 1998.