
LMI introduction

Dec 07, 2015



Introduction to LMI theory, useful in control problems.
Transcript
Page 1: LMI introduction

Linear Matrix Inequalities in Control

Siep Weiland and Carsten Scherer

Dutch Institute of Systems and Control

Class 1

Siep Weiland and Carsten Scherer (DISC) Linear Matrix Inequalities in Control Class 1 1 / 59

Page 2: LMI introduction

Outline of Part I

1 Course organization
  Material and contact information
  Topics
  Homework and grading


Page 3: LMI introduction

Outline of Part II

2 Convex sets and convex functions
  Convex sets
  Convex functions

3 Why is convexity important?
  Examples
  Ellipsoidal algorithm
  Duality and convex programs

4 Linear Matrix Inequalities
  Definitions
  LMIs and convexity
  LMIs in control

5 A design example


Page 4: LMI introduction

Part I

Course organization


Page 5: LMI introduction

organization and addresses

All material and info is posted on the website: w3.ele.tue.nl/nl/cs/education/courses/DISClmi/

Lectures: December 15, 2008; January 5, 2009; January 12, 2009; January 19, 2009

How to contact us

Siep Weiland
Department of Electrical Engineering
Eindhoven University of Technology
P.O. Box 513; 5600 MB Eindhoven
Phone: +31.40.2475979; Email: [email protected]

Carsten Scherer
Delft Center for Systems and Control
Delft University of Technology
Mekelweg 2; 2628 CD Delft
Phone: +31-15-2785899; Email: [email protected]


Page 6: LMI introduction

course topics

Facts from convex analysis. LMIs: history, algorithms and software.

The role of LMIs in dissipativity, stability and nominal performance. Analysis results.

From LMI analysis to LMI synthesis. State-feedback and output-feedback synthesis algorithms.

From nominal to robust stability, robust performance and robust synthesis.

IQCs and multipliers. Relations to classical tests and to µ-theory.

Mixed control problems and parametrically-varying systems and control design.


Page 7: LMI introduction

homework and grading

Exercises

One exercise set is issued for every class.
All sets have options to choose from.
Please hand in within 2 weeks.

Grading

Your average grade over 4 homework sets.


Page 8: LMI introduction

Part II

Class 1


Page 9: LMI introduction

Merging control and optimization

In contrast to classical control, H∞ synthesis allows one to design optimal controllers. However, the H∞ paradigm is restricted:

Performance specs are in terms of the complete closed-loop transfer matrix. Sometimes only particular channels are relevant.

One measure of performance only. Often multiple specifications are imposed on the controlled system.

No incorporation of structured time-varying/nonlinear uncertainties.

Can only design LTI controllers.

Control vs. optimization

View the controller as the decision variable of an optimization problem. Desired specifications are constraints on the controlled closed-loop system.


Page 10: LMI introduction

Major goals for control and optimization

Distinguish easy from difficult problems. (Convexity is key!)

What are consequences of convexity in optimization?

What is robust optimization?

How to check robust stability by convex optimization?

Which performance measures can be dealt with?

How can controller synthesis be convexified?

What are limits for the synthesis of robust controllers?

How can we perform systematic gain scheduling?


Page 11: LMI introduction

Optimization problems

Casting optimization problems in mathematics requires

X : decision set

S ⊆ X : feasible decisions

f : S → R: cost function

f assigns to each decision x ∈ S a cost f(x) ∈ R.

Wish to select the decision x ∈ S that minimizes the cost f(x)


Page 12: LMI introduction

Optimization problems

1 What is the least possible cost? Compute the optimal value

fopt := inf_{x∈S} f(x) = inf{f(x) | x ∈ S} ≥ −∞

Convention: if S = ∅ then fopt = +∞.
Convention: if fopt = −∞ then the problem is said to be unbounded.

2 How to determine almost optimal solutions? For arbitrary ε > 0 find

xε ∈ S with fopt ≤ f(xε) ≤ fopt + ε.

3 Is there an optimal solution (or minimizer)? Does there exist

xopt ∈ S with fopt = f(xopt)?

We write: f(xopt) = min_{x∈S} f(x)

4 Can we calculate all optimal solutions? (Non-)uniqueness:

arg min_{x∈S} f(x) := {x ∈ S | fopt = f(x)}
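The four questions above can be made concrete on a small instance. A minimal numeric sketch (the cost f(x) = (x − 2)², the feasible set S = [3, 5] and the tolerance ε are my choices, not from the slides):

```python
from scipy.optimize import minimize_scalar

f = lambda x: (x - 2.0) ** 2              # convex cost, restricted to S = [3, 5]
res = minimize_scalar(f, bounds=(3.0, 5.0), method="bounded")
f_opt, x_opt = res.fun, res.x             # optimal value and minimizer

eps = 1e-3
x_eps = 3.0001                            # an ε-suboptimal feasible decision
assert f_opt <= f(x_eps) <= f_opt + eps   # question 2: almost optimal solution
print(round(x_opt, 4), round(f_opt, 4))   # minimizer 3.0, optimal value 1.0
```

Here the minimum is attained on the boundary of S, so fopt = f(xopt) and the infimum is a minimum.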


Page 16: LMI introduction

Recap: infimum and minimum of functions

Infimum of a function
Any f : S → R has an infimum L ∈ R ∪ {−∞}, denoted inf_{x∈S} f(x), defined by the properties:

L ≤ f(x) for all x ∈ S
L finite: for all ε > 0 there exists x ∈ S with f(x) < L + ε
L infinite (−∞): for all ε > 0 there exists x ∈ S with f(x) < −1/ε

Minimum of a function
If there exists x0 ∈ S with f(x0) = inf_{x∈S} f(x), we say that f attains its minimum on S and write L = min_{x∈S} f(x).
The minimum is uniquely defined by the properties:

L ≤ f(x) for all x ∈ S
There exists some x0 ∈ S with f(x0) = L


Page 17: LMI introduction

A classical result

Theorem (Weierstrass)

If f : S → R is continuous and S is a compact subset of the normed linear space X, then there exist xmin, xmax ∈ S such that for all x ∈ S

inf_{x∈S} f(x) = f(xmin) ≤ f(x) ≤ f(xmax) = sup_{x∈S} f(x)

Comments:

Answers problem 3 for "special" S and f

No clue on how to find xmin, xmax

No answer to the uniqueness issue

S is compact if every sequence xn ∈ S has a subsequence xnm that converges to a point x ∈ S

Continuity and compactness are overly restrictive!


Page 18: LMI introduction

Convex sets

Definition

A set S in a linear vector space X is convex if

x1, x2 ∈ S =⇒ αx1 + (1− α)x2 ∈ S for all α ∈ (0, 1)

The point αx1 + (1 − α)x2 with α ∈ (0, 1) is a convex combination of x1 and x2.

Definition

The point x ∈ X is a convex combination of x1, . . . , xn ∈ X if

x := ∑_{i=1}^n αi xi,   with αi ≥ 0 and ∑_{i=1}^n αi = 1

Note: set of all convex combinations of x1, . . . , xn is convex.


Page 20: LMI introduction

Examples of convex sets

(Figure: examples of convex sets and of non-convex sets.)


Page 21: LMI introduction

Basic properties of convex sets

Theorem

Let S and T be convex. Then

αS := {x | x = αs, s ∈ S} is convex

S + T := {x | x = s + t, s ∈ S, t ∈ T } is convex

closure of S and interior of S are convex

S ∩ T := {x | x ∈ S and x ∈ T } is convex.

x ∈ S is an interior point of S if there exists ε > 0 such that all y with ‖x − y‖ ≤ ε belong to S.
x ∈ X is a closure point of S ⊆ X if for all ε > 0 there exists y ∈ S with ‖x − y‖ ≤ ε.


Page 22: LMI introduction

Examples of convex sets

With a ∈ Rn\{0} and b ∈ R, the hyperplane

H = {x ∈ Rn | aᵀx = b}

and the half-space

H− = {x ∈ Rn | aᵀx ≤ b}

are convex.

The intersection of finitely many hyperplanes and half-spaces is a polyhedron. Any polyhedron is convex and can be described as

{x ∈ Rn | Ax ≤ b, Dx = e}

for suitable matrices A and D and vectors b, e.

A compact polyhedron is a polytope.


Page 23: LMI introduction

The convex hull

Definition

The convex hull of a set S ⊂ X is

co(S) := ∩{T | T is convex and S ⊆ T }

co(S) is convex for any set S.
co(S) is the set of all convex combinations of points of S.
The convex hull of finitely many points co(x1, . . . , xn) is a polytope. Moreover, any polytope can be represented in this way!

The latter property allows an explicit representation of polytopes. For example, {x ∈ Rn | a ≤ x ≤ b} is described by 2n inequalities but requires 2^n generators for its representation as a convex hull!
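The inequality/generator trade-off is easy to check numerically. A sketch for n = 3 (the unit box and the extra interior point are my example):

```python
import itertools
import numpy as np
from scipy.spatial import ConvexHull

n = 3
corners = np.array(list(itertools.product([0.0, 1.0], repeat=n)))  # 2^n = 8 corners
center = np.full((1, n), 0.5)                                      # an interior point
hull = ConvexHull(np.vstack([corners, center]))

print(len(hull.vertices))     # 8: exactly the 2^n corners act as generators
print(round(hull.volume, 6))  # 1.0: the hull is the whole unit box
```

The interior point is discarded by the hull computation: only the corner points are vertices, which is the "2^n generators" count, while the box itself needs only 2n = 6 half-space inequalities.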


Page 25: LMI introduction

Convex functions

Definition

A function f : S → R is convex if

S is convex and

for all x1, x2 ∈ S, α ∈ (0, 1) there holds

f(αx1 + (1− α)x2) ≤ αf(x1) + (1− α)f(x2)

We have

f : S → R convex =⇒ the sublevel sets {x ∈ S | f(x) ≤ γ} are convex for all γ ∈ R

This derives convex sets from convex functions.

The converse (⇐=) is not true!

f is strictly convex if < holds instead of ≤.

Page 26: LMI introduction

Examples of convex functions

Convex functions

f(x) = ax² + bx + c, convex if a > 0
f(x) = |x|
f(x) = ‖x‖
f(x) = sin x on [π, 2π]

Non-convex functions

f(x) = x³ on R
f(x) = −|x|
f(x) = √x on R+
f(x) = sin x on [0, π]
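These classifications can be spot-checked against the defining inequality f(αx1 + (1 − α)x2) ≤ αf(x1) + (1 − α)f(x2) on random samples. A sketch (sampling scheme and tolerance are mine) for one function from each column:

```python
import numpy as np

rng = np.random.default_rng(0)

def is_convex_on_samples(f, lo, hi, trials=1000):
    """Test the convexity inequality on random pairs and mixing weights."""
    x1 = rng.uniform(lo, hi, trials)
    x2 = rng.uniform(lo, hi, trials)
    a = rng.uniform(0.0, 1.0, trials)
    lhs = f(a * x1 + (1 - a) * x2)
    rhs = a * f(x1) + (1 - a) * f(x2)
    return bool(np.all(lhs <= rhs + 1e-12))

convex_abs = is_convex_on_samples(np.abs, -5.0, 5.0)
convex_sin = is_convex_on_samples(np.sin, 0.0, np.pi)
print(convex_abs, convex_sin)   # True False: sin is concave on [0, π]
```

A sampling test can only refute convexity, not prove it, but it catches the non-convex examples immediately.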


Page 27: LMI introduction

Affine sets

Definition

A subset S of a linear vector space is affine if x = αx1 + (1 − α)x2 belongs to S for every x1, x2 ∈ S and α ∈ R.

Geometric idea: line through any two points belongs to set

Every affine set is convex

S affine iff S = {x | x = x0 + m,m ∈M} with M a linear subspace


Page 28: LMI introduction

Affine functions

Definition

A function f : S → T is affine if

f(αx1 + (1− α)x2) = αf(x1) + (1− α)f(x2)

for all x1, x2 ∈ S and for all α ∈ R

Theorem

If S and T are finite dimensional, then f : S → T is affine if and only if

f(x) = f0 + T (x)

where f0 ∈ T and T : S → T is a linear map (a matrix).


Page 30: LMI introduction

Cones and convexity

Definition

A cone is a set K ⊂ Rn with the property that

x1, x2 ∈ K =⇒ α1x1 + α2x2 ∈ K for all α1, α2 ≥ 0.

Since x1, x2 ∈ K implies αx1 + (1 − α)x2 ∈ K for all α ∈ (0, 1), every cone is convex.

If S ⊂ X is an arbitrary set, then

K := {y ∈ Rn | 〈x, y〉 ≥ 0 for all x ∈ S}

is a cone. It is also denoted K = S∗ and called the dual cone of S.


Page 31: LMI introduction

Why is convexity interesting ???

Reason 1: absence of local minima

Definition

f : S → R. Then x0 ∈ S is a

local optimum if ∃ε > 0 such that

f(x0) ≤ f(x) for all x ∈ S with ‖x− x0‖ ≤ ε

global optimum if f(x0) ≤ f(x) for all x ∈ S

Theorem

If f : S → R is convex, then every local optimum x0 is a global optimum of f. If f is strictly convex, then the global optimum x0 is unique.


Page 33: LMI introduction

Why is convexity interesting ???

Reason 2: uniform bounds

Theorem

Suppose S = co(S0) and f : S → R is convex. Then the following are equivalent:

f(x) ≤ γ for all x ∈ S
f(x) ≤ γ for all x ∈ S0

Very interesting if S0 consists of a finite number of points, i.e., S0 = {x1, . . . , xn}. A finite test!!
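The finite test can be illustrated numerically: check γ at the generators only, then verify the bound on random points of the hull (the function, the square, and the sampling are my example):

```python
import numpy as np

f = lambda x: np.linalg.norm(x)                    # a convex function on R^2
S0 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # generators of a square

gamma = max(f(v) for v in S0)                      # the finite test: 4 evaluations

rng = np.random.default_rng(1)
w = rng.dirichlet(np.ones(len(S0)), size=2000)     # nonnegative weights summing to 1
samples = w @ S0                                   # random points of co(S0)
bound_holds = all(f(x) <= gamma + 1e-12 for x in samples)
print(bound_holds)   # True: the vertex bound is a uniform bound on the hull
```

Here γ = √2 is attained at the corner (1, 1), and no convex combination of the corners can exceed it.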


Page 34: LMI introduction

Why is convexity interesting ???

Reason 3: subgradients

Definition

A vector g = g(x0) ∈ Rn is a subgradient of f at x0 if

f(x) ≥ f(x0) + 〈g, x− x0〉

for all x ∈ S

Geometric idea: the graph of the affine function x ↦ f(x0) + 〈g, x − x0〉 is tangent to the graph of f at (x0, f(x0)).

Theorem

A convex function f : S → R has a subgradient at every interior point x0 of S.


Page 36: LMI introduction

Examples and properties of subgradients

If f is differentiable, then g = g(x0) = ∇f(x0) is a subgradient. So, for differentiable functions, every gradient is a subgradient.

The non-differentiable function f(x) = |x| has any real number g ∈ [−1, 1] as its subgradient at x0 = 0.

f(x0) is the global minimum of f if and only if 0 is a subgradient of f at x0.

Since

〈g, x − x0〉 > 0 =⇒ f(x) > f(x0),

all points in the half-space H := {x | 〈g, x − x0〉 > 0} can be discarded in searching for the minimum of f.

Used explicitly in ellipsoidal algorithm
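A quick numeric check of the |x| example (grid and tolerance are mine): every g ∈ [−1, 1] satisfies the subgradient inequality |x| ≥ |0| + g·(x − 0) at x0 = 0, while a slope outside that interval does not.

```python
import numpy as np

xs = np.linspace(-5.0, 5.0, 1001)
gs = np.linspace(-1.0, 1.0, 21)

# every g in [-1, 1] satisfies |x| >= g*x for all x: a valid subgradient at 0
all_subgradients = all(np.all(np.abs(xs) >= g * xs - 1e-12) for g in gs)
print(all_subgradients)                    # True

# g = 1.5 is not a subgradient at 0: the inequality fails for x > 0
fails = not np.all(np.abs(xs) >= 1.5 * xs)
print(fails)                               # True
```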


Page 37: LMI introduction

Ellipsoidal algorithm

Aim: Minimize a convex function f : Rn → R

Step 0  Let x0 ∈ Rn and P0 ≻ 0 be such that all minimizers of f are located in the ellipsoid

        E0 := {x ∈ Rn | (x − x0)ᵀ P0⁻¹ (x − x0) ≤ 1}.

        Set k = 0.

Step 1  Compute a subgradient gk of f at xk. If gk = 0 then stop, otherwise proceed to Step 2.

Step 2  All minimizers are contained in

        Hk := Ek ∩ {x | 〈gk, x − xk〉 ≤ 0}.

Step 3  Compute xk+1 ∈ Rn and Pk+1 ≻ 0 with minimal determinant det Pk+1 such that

        Ek+1 := {x ∈ Rn | (x − xk+1)ᵀ Pk+1⁻¹ (x − xk+1) ≤ 1}

        contains Hk.

Step 4  Set k to k + 1 and return to Step 1.
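The steps above can be sketched in code. This uses the standard central-cut minimal-volume update formulas for Step 3 (valid for n ≥ 2); the test function, starting ellipsoid, and iteration count are my choices:

```python
import numpy as np

def ellipsoid_min(f, subgrad, x0, P0, steps=200):
    """Central-cut ellipsoid method; returns the best center found."""
    n = len(x0)
    x, P = np.array(x0, float), np.array(P0, float)
    best_x, best_f = x.copy(), f(x)
    for _ in range(steps):
        g = np.asarray(subgrad(x), float)
        if np.allclose(g, 0.0):          # Step 1: zero subgradient, stop
            return x
        gt = g / np.sqrt(g @ P @ g)      # normalized subgradient
        # Step 3: minimal-volume ellipsoid containing the half-ellipsoid Hk
        x = x - (P @ gt) / (n + 1)
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(P @ gt, gt @ P))
        if f(x) < best_f:                # track the best center seen so far
            best_x, best_f = x.copy(), f(x)
    return best_x

# Example: minimize f(x) = (x1 - 1)^2 + (x2 + 2)^2; the subgradient is the gradient.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)])
x_best = ellipsoid_min(f, grad, x0=np.zeros(2), P0=25.0 * np.eye(2))
print(np.round(x_best, 2))   # close to the minimizer (1, -2)
```

Note that f(xk) is not monotone along the iterates, which is why the sketch tracks the best center rather than returning the last one.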


Page 38: LMI introduction

Ellipsoidal algorithm

Remarks ellipsoidal algorithm:

Convergence: f(xk) → inf_x f(x).

There exist explicit equations for xk, Pk, Ek such that the volume of Ek decreases by the factor e^{−1/(2n)}.

Simple, robust, easy to implement, but slow convergence.


Page 39: LMI introduction

Why is convexity interesting ???

Reason 4: Duality and convex programs

The set of feasible decisions is often described by equality and inequality constraints:

S = {x ∈ X | gk(x) ≤ 0, k = 1, . . . , K, hℓ(x) = 0, ℓ = 1, . . . , L}

Primal optimization:

Popt = inf_{x∈S} f(x)

If one of the index sets K or L is infinite: semi-infinite optimization

If both index sets K and L are finite: nonlinear program


Page 40: LMI introduction

Why is convexity interesting ???

Examples: saturation constraints, safety margins, physically meaningful variables, constitutive and balance equations all assume the form S.

linear program:

f(x) = cᵀx, g(x) = g0 + Gx, h(x) = h0 + Hx

quadratic program:

f(x) = xᵀQx, g(x) = g0 + Gx, h(x) = h0 + Hx

quadratically constrained quadratic program:

f(x) = xᵀQx + 2sᵀx + r,  gj(x) = xᵀQj x + 2sjᵀx + rj,  h(x) = h0 + Hx
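A small linear program in exactly this shape (the data are mine): minimize cᵀx with the inequality g(x) = 1 − x1 − x2 ≤ 0 and x ≥ 0.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])
# g(x) = 1 - x1 - x2 <= 0 rewritten in linprog's A_ub x <= b_ub form:
res = linprog(c,
              A_ub=np.array([[-1.0, -1.0]]), b_ub=np.array([-1.0]),
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, round(res.fun, 6))   # optimum at x = (1, 0) with cost 1.0
```

All the weight goes on the cheaper variable x1, which is the expected corner solution of an LP.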


Page 44: LMI introduction

Upper and lower bounds for convex programs

Primal optimization problem

Popt = inf_{x∈X} f(x)  subject to  g(x) ≤ 0, h(x) = 0

Can we obtain bounds on the optimal value Popt?

Upper bound on optimal value
For any x0 ∈ S we have

Popt ≤ f(x0)

which defines an upper bound on Popt.


Page 46: LMI introduction

Upper and lower bounds for convex programs

Lower bound on optimal value
Let x ∈ S. Then for arbitrary y ≥ 0 and z we have

L(x, y, z) := f(x) + 〈y, g(x)〉 + 〈z, h(x)〉 ≤ f(x)

and, in particular,

ℓ(y, z) := inf_{x∈X} L(x, y, z) ≤ inf_{x∈S} L(x, y, z) ≤ inf_{x∈S} f(x) = Popt,

so that

Dopt := sup_{y≥0, z} ℓ(y, z) = sup_{y≥0, z} inf_{x∈X} L(x, y, z) ≤ Popt

defines a lower bound for Popt.


Page 48: LMI introduction

Duality and convex programs

Some terminology:

Lagrange function: L(x, y, z)
Lagrange dual cost: ℓ(y, z)
Lagrange dual optimization problem:

Dopt := sup_{y≥0, z} ℓ(y, z)

Remarks:

ℓ(y, z) is computed by solving an unconstrained optimization problem. It is a concave function.

The dual problem is a concave maximization problem. Its constraints are simpler than those of the primal problem.

Main question: when is Dopt = Popt?


Page 49: LMI introduction

Example of duality

Primal Linear Program

Popt = inf_x cᵀx  subject to  x ≥ 0, b − Ax = 0

Lagrange dual cost

ℓ(y, z) = inf_x [ cᵀx − yᵀx + zᵀ(b − Ax) ]
        = bᵀz  if c − Aᵀz − y = 0,  and −∞ otherwise

Dual Linear Program

Dopt = sup_z bᵀz  subject to  y = c − Aᵀz ≥ 0
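The primal/dual pair can be checked numerically on a small instance (the data A, b, c are mine):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0, 3.0])

# primal: min cᵀx subject to x >= 0, Ax = b
primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3, method="highs")
# dual: max bᵀz subject to c - Aᵀz >= 0 (maximize by minimizing -bᵀz)
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)], method="highs")

P_opt, D_opt = primal.fun, -dual.fun
print(round(P_opt, 6), round(D_opt, 6))   # 1.0 1.0: the two optimal values agree
```

Both values are 1.0, an instance of the strong duality discussed on the following slides.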


Page 51: LMI introduction

Karush-Kuhn-Tucker and duality

We need the following property:

Definition

Suppose f, g convex and h affine.
(g, h) satisfy the constraint qualification if there exists x0 in the interior of X with g(x0) ≤ 0, h(x0) = 0 such that gj(x0) < 0 for all component functions gj that are not affine.

Example: (g, h) satisfies constraint qualification if g and h are affine.


Page 52: LMI introduction

Karush-Kuhn-Tucker and duality

Theorem (Karush-Kuhn-Tucker)

If (g, h) satisfies the constraint qualification, then we have strong duality:

Dopt = Popt.

There exist yopt ≥ 0 and zopt such that Dopt = ℓ(yopt, zopt).
Moreover, xopt is an optimal solution of the primal optimization problem and (yopt, zopt) is an optimal solution of the dual optimization problem, if and only if

1 g(xopt) ≤ 0, h(xopt) = 0,

2 yopt ≥ 0 and xopt minimizes L(x, yopt, zopt) over all x ∈ X and

3 〈yopt, g(xopt)〉 = 0.
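The three conditions can be verified on a toy problem (problem and multiplier are my choices): minimize f(x) = x² subject to g(x) = 1 − x ≤ 0, whose solution is xopt = 1 with multiplier yopt = 2.

```python
from scipy.optimize import minimize_scalar

f = lambda x: x ** 2
g = lambda x: 1.0 - x            # the constraint g(x) <= 0, i.e. x >= 1
x_opt, y_opt = 1.0, 2.0          # claimed primal solution and multiplier

feasible = g(x_opt) <= 0                     # condition 1: primal feasibility
L = lambda x: f(x) + y_opt * g(x)            # Lagrangian at y_opt
x_star = minimize_scalar(L).x                # condition 2: x_opt minimizes L
slack = y_opt * g(x_opt)                     # condition 3: complementary slackness
print(feasible, round(x_star, 6), slack)     # True 1.0 0.0
```

The Lagrangian here is L(x) = (x − 1)² + 1, so its unconstrained minimizer is exactly xopt and the constraint is active with zero slack.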


Page 55: LMI introduction

Karush-Kuhn-Tucker and duality

Remarks:

Very general result, strong tool in convex optimization

The dual problem is often simpler to solve; (yopt, zopt) is called a Kuhn-Tucker point.

The triple (xopt, yopt, zopt) exists if and only if it defines a saddle point of the Lagrangian L, in that

L(xopt, y, z) ≤ L(xopt, yopt, zopt) ≤ L(x, yopt, zopt)

for all x, y ≥ 0 and z, where L(xopt, yopt, zopt) = Popt = Dopt.


Page 56: LMI introduction

Linear Matrix Inequalities

Definition

A linear matrix inequality (LMI) is an expression

F (x) = F0 + x1F1 + . . . + xnFn ≺ 0

where

x = col(x1, . . . , xn) is a vector of real decision variables,

Fi = Fiᵀ are real symmetric matrices, and

≺ 0 means negative definite, i.e.,

F(x) ≺ 0 ⇔ uᵀF(x)u < 0 for all u ≠ 0
        ⇔ all eigenvalues of F(x) are negative
        ⇔ λmax(F(x)) < 0

F is an affine function of the decision variables.


Page 57: LMI introduction

Recap: Hermitian and symmetric matrices

Definition

For a real or complex matrix A the inequality A ≺ 0 means that A isHermitian and negative definite.

A is Hermitian if A = A∗ (the complex conjugate transpose). If A is real this amounts to A = Aᵀ and we call A symmetric.

The sets of n × n Hermitian and symmetric matrices are denoted Hn and Sn.

All eigenvalues of Hermitian matrices are real.

By definition, a Hermitian matrix A is negative definite if

u∗Au < 0 for all complex vectors u ≠ 0

A is negative definite if and only if all its eigenvalues are negative.

A ≼ B, A ≻ B and A ≽ B are defined and characterized analogously.


Page 58: LMI introduction

Simple examples of LMI’s

1 + x < 0

1 + x1 + 2x2 < 0

[1 0]      [ 2 −1]      [1 0]
[0 1] + x1 [−1  2] + x2 [0 0]  ≺ 0

All the same with ≼ 0, ≻ 0 and ≽ 0.

Only very simple cases can be treated analytically.

Need to resort to numerical techniques!
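Numerically, checking the 2×2 matrix LMI above at a given point amounts to an eigenvalue computation (the trial points are my choices):

```python
import numpy as np

F0 = np.eye(2)
F1 = np.array([[2.0, -1.0], [-1.0, 2.0]])
F2 = np.array([[1.0, 0.0], [0.0, 0.0]])

def lam_max(x1, x2):
    """Largest eigenvalue of F(x) = F0 + x1 F1 + x2 F2; F(x) ≺ 0 iff it is < 0."""
    return float(np.linalg.eigvalsh(F0 + x1 * F1 + x2 * F2).max())

feasible_point = lam_max(-2.0, 1.0) < 0   # x = (-2, 1) makes F(x) ≺ 0
origin_ok = lam_max(0.0, 0.0) < 0         # x = 0 gives F(0) = I, not ≺ 0
print(feasible_point, origin_ok)          # True False
```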


Page 59: LMI introduction

Main LMI problems

1 LMI feasibility problem: Test whether there exist x1, . . . , xn such that F(x) ≺ 0.

2 LMI optimization problem: Minimize f(x) over all x for which the LMI F(x) ≺ 0 is satisfied.

How is this solved?
F(x) ≺ 0 is feasible iff min_x λmax(F(x)) < 0, so the feasibility test involves minimizing the function

x ↦ λmax(F(x))

This is possible because this function is convex!

There exist efficient algorithms (Interior point, ellipsoid).
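A sketch of this approach on the 2×2 example LMI from an earlier slide, using a general-purpose minimizer on x ↦ λmax(F(x)) (real LMI software uses interior-point methods instead; the starting point is my choice):

```python
import numpy as np
from scipy.optimize import minimize

F0 = np.eye(2)
F1 = np.array([[2.0, -1.0], [-1.0, 2.0]])
F2 = np.array([[1.0, 0.0], [0.0, 0.0]])

lam_max = lambda x: float(np.linalg.eigvalsh(F0 + x[0] * F1 + x[1] * F2).max())

res = minimize(lam_max, x0=np.zeros(2), method="Nelder-Mead")
found_feasible = lam_max(res.x) < 0
print(found_feasible)   # True: a point with λmax(F(x)) < 0 exists, so the LMI is feasible
```

As soon as the minimizer finds any point with a negative value, feasibility is settled; the exact minimum is not needed.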


Page 61: LMI introduction

Linear Matrix Inequalities

Definition

(More general:) A linear matrix inequality is an inequality

F(X) ≺ 0

where F is an affine function mapping a finite-dimensional vector space X to the set H of Hermitian matrices.

This allows defining matrix-valued LMIs.

F affine means F(X) = F0 + T(X) with T a linear map (a matrix).

With {Xj} a basis of X, any X ∈ X can be expanded as X = ∑_{j=1}^n xj Xj, so that

F(X) = F0 + T(X) = F0 + ∑_{j=1}^n xj Fj   with Fj = T(Xj),

which is the standard form.

Page 62: LMI introduction

Why are LMI’s interesting?

Reason 1: LMI’s define convex constraints on x, i.e.,S := {x | F (x) ≺ 0} is convex.

Indeed, F (αx1 + (1− α)x2) = αF (x1) + (1− α)F (x2) ≺ 0.

Reason 2: Solution set of multiple LMI’s

F1(x) ≺ 0, . . . , Fk(x) ≺ 0

is convex and representable as one single LMI

F (x) =

F1(x) 0 . . . 0

0. . . 0

0 . . . 0 Fk(x)

≺ 0

Allows to combine LMI’s!

Reason 3: Incorporate affine constraints such asF (x) ≺ 0 and Ax = bF (x) ≺ 0 and x = Ay + b for some yF (x) ≺ 0 and x ∈ S with S an affine set.
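The block-diagonal combination of Reason 2 is easy to verify numerically: a block-diagonal symmetric matrix is negative definite exactly when each diagonal block is (the sample blocks are mine):

```python
import numpy as np
from scipy.linalg import block_diag

A1 = np.array([[-1.0, 0.3], [0.3, -2.0]])   # value of F1(x): negative definite
A2 = np.array([[-0.5]])                     # value of F2(x): negative definite

F = block_diag(A1, A2)                      # the combined block-diagonal matrix
combined = bool(np.linalg.eigvalsh(F).max() < 0)
print(combined)   # True: F ≺ 0 exactly when every diagonal block is ≺ 0
```

The eigenvalues of the block-diagonal matrix are the union of the blocks' eigenvalues, which is why the stacked LMI is equivalent to the individual ones.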


Page 65: LMI introduction

An affinely constrained LMI is again an LMI

Combine the LMI constraint F(x) ≺ 0 with x ∈ S, S affine:

Write S = x0 + M with M a linear subspace of dimension k

Let {ej}, j = 1, . . . , k, be a basis for M

Write F(x) = F0 + T(x) with T linear

Then any x ∈ S reads x = x0 + Σ_{j=1}^k x̃j ej, so that

F(x) = F0 + T(x0 + Σ_{j=1}^k x̃j ej) = [F0 + T(x0)] (constant) + Σ_{j=1}^k x̃j T(ej) (linear)
     = G0 + x̃1 G1 + . . . + x̃k Gk = G(x̃)

Result: x̃ is unconstrained and x̃ has lower dimension than x !!
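The elimination above can be reproduced numerically. The following sketch (NumPy, with hypothetical random data for F, x0, and the basis vectors) builds G from samples of F and confirms F(x0 + E x̃) = G(x̃):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 2

# Hypothetical affine F(x) = F0 + sum_j x_j * Fj with random symmetric Fj
F0 = np.diag([-1.0, -2.0, -3.0])
Fs = [rng.standard_normal((3, 3)) for _ in range(n)]
Fs = [M + M.T for M in Fs]

def F(x):
    return F0 + sum(xj * Fj for xj, Fj in zip(x, Fs))

# Affine set S = x0 + span of the columns of E (the basis {ej})
x0 = rng.standard_normal(n)
E = rng.standard_normal((n, k))

# Reduced map G(xt) = G0 + sum_j xt_j * Gj, with Gj = T(ej) = F(x0 + ej) - F(x0)
G0 = F(x0)
Gs = [F(x0 + E[:, j]) - F(x0) for j in range(k)]

def G(xt):
    return G0 + sum(tj * Gj for tj, Gj in zip(xt, Gs))

xt = rng.standard_normal(k)
assert np.allclose(F(x0 + E @ xt), G(xt))
```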


Why are LMI’s interesting?

Reason 4: conversion of nonlinear constraints into linear ones

Theorem (Schur complement)

Let F be an affine function partitioned as

F(x) = [ F11(x), F12(x); F21(x), F22(x) ],  with F11(x) square.

Then

F(x) ≺ 0 ⟺ F11(x) ≺ 0 and F22(x) − F21(x)[F11(x)]⁻¹F12(x) ≺ 0
         ⟺ F22(x) ≺ 0 and F11(x) − F12(x)[F22(x)]⁻¹F21(x) ≺ 0.
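A quick numerical sanity check of the theorem (a NumPy sketch with hypothetical block data, not part of the course material):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical symmetric block matrix F = [[F11, F12], [F12', F22]]
F11 = -2.0 * np.eye(2)
F12 = 0.1 * rng.standard_normal((2, 2))
F22 = -3.0 * np.eye(2)
F = np.block([[F11, F12], [F12.T, F22]])

def is_neg_def(M):
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

# F < 0  <=>  F11 < 0 and the Schur complement of F11 in F is < 0
schur = F22 - F12.T @ np.linalg.inv(F11) @ F12
assert is_neg_def(F)
assert is_neg_def(F11) and is_neg_def(schur)
```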


First examples in control

Example 1: Stability

Verify stability through feasibility:

ẋ = Ax asymptotically stable ⟺ [ −X, 0; 0, AᵀX + XA ] ≺ 0 feasible

Here X = Xᵀ defines a Lyapunov function V(x) := xᵀXx for the flow ẋ = Ax.
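For a concrete (hypothetical) Hurwitz matrix A, a suitable X can be obtained from the Lyapunov equation, after which the block LMI is verifiably feasible. A SciPy sketch:

```python
import numpy as np
from scipy.linalg import block_diag, solve_continuous_lyapunov

# Hypothetical Hurwitz A (eigenvalues -1 and -2)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])

# Solve A'X + XA = -I; stability of A guarantees X = X' > 0
X = solve_continuous_lyapunov(A.T, -np.eye(2))

# The block-diagonal LMI diag(-X, A'X + XA) is then feasible (negative definite)
L = block_diag(-X, A.T @ X + X @ A)
assert np.all(np.linalg.eigvalsh(L) < 0)
assert np.all(np.linalg.eigvalsh(X) > 0)
```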


First examples in control

Example 2: Joint stabilization

Given (A1, B1), . . . , (Ak, Bk), find F such that (A1 + B1F), . . . , (Ak + BkF) are asymptotically stable.

Equivalent to finding F, X1, . . . , Xk such that, for j = 1, . . . , k:

[ −Xj, 0; 0, (Aj + BjF)Xj + Xj(Aj + BjF)ᵀ ] ≺ 0    not an LMI!!

Sufficient condition: set X = X1 = . . . = Xk and K = FX, which yields

[ −X, 0; 0, AjX + XAjᵀ + BjK + KᵀBjᵀ ] ≺ 0    an LMI!!

Recover the feedback as F = KX⁻¹.
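The substitution K = FX is a pure change of variables: for any X = Xᵀ ≻ 0, the linearized block equals the closed-loop Lyapunov expression. A NumPy sketch with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 2

# Hypothetical data: any X = X' > 0 and any K of compatible size
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
R = rng.standard_normal((n, n))
X = R @ R.T + np.eye(n)                 # X positive definite
K = rng.standard_normal((m, n))
F = K @ np.linalg.inv(X)                # recover feedback F = K X^{-1}

# Linearized expression equals the closed-loop Lyapunov expression
lin = A @ X + X @ A.T + B @ K + K.T @ B.T
cl = (A + B @ F) @ X + X @ (A + B @ F).T
assert np.allclose(lin, cl)
```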


First examples in control

Example 3: Eigenvalue problem

Given F : V → S affine, minimize over all x

f(x) = λmax(F(x)).

Observe that, with γ > 0, and using the Schur complement:

λmax(Fᵀ(x)F(x)) < γ² ⟺ (1/γ)Fᵀ(x)F(x) − γI ≺ 0 ⟺ [ −γI, Fᵀ(x); F(x), −γI ] ≺ 0

We can define

y := (x, γ);  G(y) := [ −γI, Fᵀ(x); F(x), −γI ];  g(y) := γ,

then G is affine in y and min_x f(x) = min_{G(y)≺0} g(y).
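The equivalence used above says the block matrix is negative definite exactly when γ exceeds the largest singular value of F(x). A NumPy sketch with a hypothetical value of F(x):

```python
import numpy as np

rng = np.random.default_rng(3)
Fx = rng.standard_normal((3, 3))        # hypothetical value of F(x)

sigma = np.linalg.norm(Fx, 2)           # sqrt(lambda_max(F(x)' F(x)))

# lambda_max(F'F) < gamma^2  <=>  [[-gamma I, F'], [F, -gamma I]] < 0
def block(gamma):
    return np.block([[-gamma * np.eye(3), Fx.T],
                     [Fx, -gamma * np.eye(3)]])

# Negative definite just above sigma, not negative definite just below
assert np.all(np.linalg.eigvalsh(block(sigma + 0.1)) < 0)
assert not np.all(np.linalg.eigvalsh(block(sigma - 0.1)) < 0)
```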


Truss topology design

[Figure: truss layout]


Trusses

Trusses consist of straight members ('bars') connected at joints.

One distinguishes free and fixed joints.

Connections at the joints can rotate.

The loads (or the weights) are assumed to be applied at the free joints.

This implies that all internal forces are directed along the members (so no bending forces occur).

The construction reacts according to the principles of statics: the sum of the forces in any direction, and the moments of the forces about any joint, are zero.

This results in a displacement of the joints and a new tension distribution in the truss.

Many applications (roofs, cranes, bridges, space structures, . . . ) !!

Design your own bridge


Truss topology design

Problem features:

Connect nodes by N bars of lengths ℓ = col(ℓ1, . . . , ℓN) (fixed) and cross sections s = col(s1, . . . , sN) (to be designed).

Impose bounds on the cross sections, ak ≤ sk ≤ bk, and on the total volume, ℓᵀs ≤ v (and hence an upper bound on the total weight of the truss). Let a = col(a1, . . . , aN) and b = col(b1, . . . , bN).

Distinguish fixed and free nodes.

Apply external forces f = col(f1, . . . , fM) to some free nodes. These result in node displacements d = col(d1, . . . , dM).

The mechanical model defines the relation A(s)d = f, where A(s) ≻ 0 is the stiffness matrix, which depends linearly on s.

Goal:

Maximize stiffness or, equivalently, minimize the elastic energy fᵀd.


Truss topology design

Problem

Find s ∈ R^N which minimizes the elastic energy fᵀd subject to the constraints

A(s) ≻ 0,  A(s)d = f,  a ≤ s ≤ b,  ℓᵀs ≤ v

Data: total volume v > 0, node forces f, bounds a and b, lengths ℓ, and symmetric matrices A1, . . . , AN that define the linear stiffness matrix A(s) = s1A1 + . . . + sNAN.

Decision variables: cross sections s and displacements d (both vectors).

Cost function: the stored elastic energy d ↦ fᵀd.

Constraints:

Semi-definite constraint: A(s) ≻ 0
Nonlinear equality constraint: A(s)d = f
Linear inequality constraints: a ≤ s ≤ b and ℓᵀs ≤ v.


From truss topology design to LMI’s

First eliminate the affine equality constraint A(s)d = f:

minimize f ᵀ(A(s))⁻¹f
subject to A(s) ≻ 0, ℓᵀs ≤ v, a ≤ s ≤ b

Push the objective to the constraints with an auxiliary variable γ:

minimize γ
subject to γ > f ᵀ(A(s))⁻¹f, A(s) ≻ 0, ℓᵀs ≤ v, a ≤ s ≤ b

Apply the Schur lemma to linearize:

minimize γ
subject to [ γ, fᵀ; f, A(s) ] ≻ 0, ℓᵀs ≤ v, a ≤ s ≤ b

Note that the latter is an LMI optimization problem, as all constraints on s are formulated as LMI's!!
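The Schur-complement step can be verified numerically: γ > fᵀA(s)⁻¹f exactly when the bordered matrix is positive definite. A NumPy sketch with a hypothetical stiffness matrix and load:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical stiffness matrix A(s) > 0 and load vector f
M = rng.standard_normal((4, 4))
As = M @ M.T + np.eye(4)
f = rng.standard_normal(4)

compliance = f @ np.linalg.solve(As, f)   # f' A(s)^{-1} f

def is_pos_def(S):
    return bool(np.all(np.linalg.eigvalsh(S) > 0))

# gamma > f' A(s)^{-1} f  <=>  [[gamma, f'], [f, A(s)]] > 0 (Schur complement)
for gamma in (compliance + 0.1, compliance - 0.1):
    bordered = np.block([[np.array([[gamma]]), f[None, :]],
                         [f[:, None], As]])
    assert is_pos_def(bordered) == (gamma > compliance)
```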


Yalmip coding for LMI optimization problem

Equivalent LMI optimization problem:

minimize γ
subject to [ γ, fᵀ; f, A(s) ] ≻ 0, ℓᵀs ≤ v, a ≤ s ≤ b

The following YALMIP code solves this problem:

gamma = sdpvar(1,1);                          % auxiliary variable
x = sdpvar(N,1,'full');                       % cross sections
lmi = set([gamma f'; f A*diag(x)*A'] > 0);    % Schur-complement LMI
lmi = lmi + set(l'*x <= v);                   % volume bound
lmi = lmi + set(a <= x <= b);                 % cross-section bounds
options = sdpsettings('solver','csdp');
solvesdp(lmi, gamma, options);
s = double(x);


Result: optimal truss

[Figure: optimized truss layout]


Useful software:

General purpose MATLAB interface Yalmip

Free code developed by J. Löfberg.

Run yalmipdemo.m for a comprehensive introduction.
Run yalmiptest.m to test your settings.

Yalmip uses the usual Matlab syntax to define optimization problems.
Basic commands: sdpvar, set, sdpsettings and solvesdp.
Truly easy to use!!!

Yalmip needs to be connected to a solver for semi-definite programming. Many solvers exist:

SeDuMi, PENOPT, OOQP, DSDP, CSDP, MOSEK

Alternative: Matlab's LMI toolbox for dedicated control applications.


Gallery

Joseph-Louis Lagrange (1736)
Aleksandr Mikhailovich Lyapunov (1857)