Computational Optimization - Sequential Quadratic Programming (NW Chapter 18)

# Computational Optimization - Rensselaer Polytechnic Institute, bennek/class/compopt/slides-08/SQP-08.pdf, 2008-04-11

Feb 03, 2021

• Computational Optimization

NW Chapter 18

Basic Idea: QPs with equality constraints are easy - for any guess of the active constraints, you just have to solve a system of equations. So why not solve the general problem as a series of constrained QPs?

Which QP should be used?

• Use KKT

Problem

$$
\text{NLP:}\qquad \min_x f(x) \quad \text{s.t.}\quad g_i(x) = 0,\ i \in E
$$

Use Lagrangian

$$
\begin{aligned}
L(x,\lambda) &= f(x) - \lambda' g(x) \\
\nabla_x L(x,\lambda) &= \nabla f(x) - \nabla g(x)\,\lambda = 0 \\
\nabla_\lambda L(x,\lambda) &= -g(x) = 0
\end{aligned}
$$

• Solve for primal and dual using Newton's Method

Newton step

$$
\begin{bmatrix} x_{k+1} \\ \lambda_{k+1} \end{bmatrix}
= \begin{bmatrix} x_k \\ \lambda_k \end{bmatrix}
+ \begin{bmatrix} p \\ \nu \end{bmatrix}
$$

where

$$
\nabla^2 L(x_k,\lambda_k)\begin{bmatrix} p \\ \nu \end{bmatrix}
= -\nabla L(x_k,\lambda_k)
\;\Longrightarrow\;
\begin{bmatrix}
\nabla^2_{xx} L(x_k,\lambda_k) & -\nabla g(x_k) \\
-\nabla g(x_k)' & 0
\end{bmatrix}
\begin{bmatrix} p \\ \nu \end{bmatrix}
=
\begin{bmatrix} -\nabla_x L(x_k,\lambda_k) \\ g(x_k) \end{bmatrix}
$$
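The Newton KKT system above is just a linear solve. A minimal sketch (the function name and calling convention are my own, not from the slides):

```python
import numpy as np

def newton_kkt_step(H, G, grad_L, g):
    """Solve one Newton step on the first-order KKT conditions.

    H      : n x n Hessian of the Lagrangian at (x_k, lam_k)
    G      : n x m matrix whose columns are constraint gradients, grad g(x_k)
    grad_L : gradient of L with respect to x at (x_k, lam_k)
    g      : g(x_k)

    Solves  [ H   -G ] [p]   [-grad_L]
            [-G'   0 ] [v] = [   g   ]
    and returns the primal step p and the dual step v.
    """
    n, m = G.shape
    K = np.block([[H, -G], [-G.T, np.zeros((m, m))]])
    rhs = np.concatenate([-grad_L, g])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]
```

Fed the quantities evaluated at the starting point of the worked example later in these slides, this reproduces the first-iteration step p ≈ (0.142, -0.156), ν ≈ -0.0048.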

• SQP

The equations above are the first-order KKT conditions of

$$
\begin{aligned}
\min_p\quad & \nabla_x L(x_k,\lambda_k)'\,p + \tfrac{1}{2}\,p'\,\nabla^2_{xx} L(x_k,\lambda_k)\,p \\
\text{s.t.}\quad & g(x_k) + \nabla g(x_k)'\,p = 0
\end{aligned}
$$

NOTE: $\nu$ is the Lagrange multiplier for the constraints.

Lagrangian of QP approximation

$$
\hat{L}(p,\nu) = \nabla_x L(x_k,\lambda_k)'\,p + \tfrac{1}{2}\,p'\,\nabla^2_{xx} L(x_k,\lambda_k)\,p
- \nu'\bigl(g(x_k) + \nabla g(x_k)'\,p\bigr)
$$

KKT

$$
\begin{aligned}
\nabla_p \hat{L}(p,\nu) &= \nabla_x L(x_k,\lambda_k) + \nabla^2_{xx} L(x_k,\lambda_k)\,p - \nabla g(x_k)\,\nu = 0 \\
g(x_k) + \nabla g(x_k)'\,p &= 0
\end{aligned}
$$

• Local SQP Algorithm

First shot at an algorithm:

    while not done
        solve the QP subproblem for (p, ν)
        add (p, ν) to the iterate

Like any Newton's method, it only works if started close enough to the solution.

• Local SQP Algorithm - Try on (you verify the solution)

$$
\min f(x_1,x_2) = e^{3x_1+4x_2} \quad \text{s.t.}\quad g(x_1,x_2) = x_1^2 + x_2^2 - 1 = 0
$$

Solution: $x^* = [-3/5,\,-4/5]'$, $\lambda^* = -\tfrac{5}{2}e^{-5}$

Start: $x_0 = [-0.7,\,-0.7]'$, $\lambda_0 = -0.01$

$$
\begin{aligned}
\nabla f(x) &= e^{3x_1+4x_2}\begin{bmatrix} 3 \\ 4 \end{bmatrix}, &
\nabla^2 f(x) &= e^{3x_1+4x_2}\begin{bmatrix} 9 & 12 \\ 12 & 16 \end{bmatrix}, \\
g(x_0) &= x_1^2 + x_2^2 - 1 = -0.02, &
\nabla g(x_0) &= \begin{bmatrix} 2x_1 \\ 2x_2 \end{bmatrix}
= \begin{bmatrix} -1.4 \\ -1.4 \end{bmatrix},
\qquad \nabla^2 g(x) = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}
\end{aligned}
$$

$$
\nabla_x L = \nabla f - \lambda\,\nabla g = \begin{bmatrix} 0.008340 \\ 0.015786 \end{bmatrix},
\qquad
\nabla^2_{xx} L = \nabla^2 f - \lambda\,\nabla^2 g
= \begin{bmatrix} 0.08702 & 0.08936 \\ 0.08936 & 0.13915 \end{bmatrix}
$$

• First iteration - Solve

$$
\begin{bmatrix}
\nabla^2_{xx} L(x_k,\lambda_k) & -\nabla g(x_k) \\
-\nabla g(x_k)' & 0
\end{bmatrix}
\begin{bmatrix} p \\ \nu \end{bmatrix}
=
\begin{bmatrix} -\nabla_x L(x_k,\lambda_k) \\ g(x_k) \end{bmatrix}
$$

$$
\begin{bmatrix}
0.08702 & 0.08936 & 1.4 \\
0.08936 & 0.13915 & 1.4 \\
1.4 & 1.4 & 0
\end{bmatrix}
\begin{bmatrix} p_1 \\ p_2 \\ \nu \end{bmatrix}
=
\begin{bmatrix} -0.008340 \\ -0.015786 \\ -0.020000 \end{bmatrix}
\quad\Longrightarrow\quad
\begin{bmatrix} p_1 \\ p_2 \\ \nu \end{bmatrix}
=
\begin{bmatrix} 0.14196 \\ -0.15624 \\ -0.004808 \end{bmatrix}
$$

• New point

Yields

$$
x_1 = x_0 + p = \begin{bmatrix} -0.55804 \\ -0.85624 \end{bmatrix},
\qquad
\lambda_1 = \lambda_0 + \nu = -0.014808
$$

• Iteration history

| k | x_k              | λ_k    | ‖∇L‖  | ‖g‖   |
|---|------------------|--------|-------|-------|
| 0 | (-.7000, -.7000) | -.0100 | 2e-2  | 2e-2  |
| 1 | (-.5580, -.8562) | -.0148 | 2e-3  | 5e-2  |
| 2 | (-.6077, -.7978) | -.0168 | 2e-4  | 6e-3  |
| 3 | (-.5999, -.8001) | -.0168 | 2e-6  | 7e-5  |
| 4 | (-.6000, -.8000) | -.0168 | 2e-9  | 3e-8  |
| 5 | (-.6000, -.8000) | -.0168 | 2e-16 | 4e-15 |
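The entire local iteration on this example fits in a few lines. A sketch of the "first shot" algorithm (full Newton steps, no globalization; the function name is mine), using the problem data above:

```python
import numpy as np

# Local SQP on the example:
#   min exp(3*x1 + 4*x2)   s.t.   x1^2 + x2^2 - 1 = 0
# Full Newton steps with no line search -- fine here because the
# starting point is close to the solution.
def sqp_example(x, lam, iters=6):
    for _ in range(iters):
        e = np.exp(3.0 * x[0] + 4.0 * x[1])
        grad_f = e * np.array([3.0, 4.0])
        hess_f = e * np.array([[9.0, 12.0], [12.0, 16.0]])
        g = np.array([x[0] ** 2 + x[1] ** 2 - 1.0])
        G = np.array([[2.0 * x[0]], [2.0 * x[1]]])   # grad g, as a column
        grad_L = grad_f - lam * G[:, 0]
        H = hess_f - lam * 2.0 * np.eye(2)           # hess of g is 2*I
        K = np.block([[H, -G], [-G.T, np.zeros((1, 1))]])
        sol = np.linalg.solve(K, np.concatenate([-grad_L, g]))
        x, lam = x + sol[:2], lam + sol[2]
    return x, lam

x_star, lam_star = sqp_example(np.array([-0.7, -0.7]), -0.01)
```

Six iterations from x0 = (-0.7, -0.7), λ0 = -0.01 land on x* = (-0.6, -0.8), λ* = -(5/2)e^{-5}, matching the table.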

• Inequalities

The local form is easily extended to inequalities. Just solve this QP at each iteration:

$$
\begin{aligned}
\min_p\quad & \nabla_x L(x_k,\lambda_k)'\,p + \tfrac{1}{2}\,p'\,\nabla^2_{xx} L(x_k,\lambda_k)\,p \\
\text{s.t.}\quad & g_i(x_k) + \nabla g_i(x_k)'\,p = 0, \quad i \in E \\
& g_i(x_k) + \nabla g_i(x_k)'\,p \ge 0, \quad i \in I
\end{aligned}
$$

NOTE: $\nu$ is the Lagrange multiplier for the constraints.

• But….

The local form is not used in practice because it inherits the classic Newton flaws:

- May not converge
- Expensive
- Linearization may be infeasible

Similar remedies as in the unconstrained case:

- Add a line search
- Modified factorization to force the Hessian positive definite
- Quasi-Newton approximations

• Merit Functions

A line search needs to worry about both the objective value and feasibility. Use a line search, but on a merit function that forces the iterates toward feasibility.

Inexact (quadratic penalty):

$$
M_2(x) = f(x) + \rho\, g(x)'g(x) = f(x) + \rho \sum_i g_i(x)^2
$$

Exact ($\ell_1$ penalty):

$$
M_1(x) = f(x) + \rho\, \|g(x)\|_1 = f(x) + \rho \sum_i |g_i(x)|
$$
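A sketch of a backtracking line search on the exact ($\ell_1$) merit function. This uses a plain merit-decrease test; practical codes use a sufficient-decrease condition based on the merit function's directional derivative. Helper names are my own:

```python
import numpy as np

def l1_merit(f, g, x, rho):
    """Exact (l1) merit function M(x) = f(x) + rho * sum_i |g_i(x)|."""
    return f(x) + rho * np.sum(np.abs(g(x)))

def linesearch_merit(f, g, x, p, rho, shrink=0.5, max_tries=30):
    """Backtrack the step length alpha until the merit function decreases."""
    m0 = l1_merit(f, g, x, rho)
    alpha = 1.0
    for _ in range(max_tries):
        if l1_merit(f, g, x + alpha * p, rho) < m0:
            break
        alpha *= shrink
    return alpha
```

On the worked example with ρ = 1, the full first SQP step from x0 = (-0.7, -0.7) actually *increases* the merit (it increases ‖g‖), so the search cuts it to α = 0.5 even though full Newton steps converge fine from there - a small taste of how a merit function can reject good steps.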

• Plus Usual Tricks

- Use modified Cholesky to ensure descent directions
- Use a quasi-Newton approximation of the Hessian of the Lagrangian
- Can add linearization of g(x) for inequality constraints too
- Change things a bit (see damped Newton)

• General Problem Case

The same approach works with inequalities.

Use QP:

$$
\begin{aligned}
\min_p\quad & \nabla_x L(x_k,\lambda_k)'\,p + \tfrac{1}{2}\,p'\,\nabla^2_{xx} L(x_k,\lambda_k)\,p \\
\text{s.t.}\quad & g_i(x_k) + \nabla g_i(x_k)'\,p = 0, \quad i \in E \\
& g_i(x_k) + \nabla g_i(x_k)'\,p \ge 0, \quad i \in I
\end{aligned}
$$

Merit function:

$$
M(x) = f(x) + \rho_1 \sum_{i \in E} |g_i(x)| + \rho_2 \sum_{i \in I} \max\bigl(0, -g_i(x)\bigr)
$$
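The mixed merit function above, as a sketch in code (function and argument names are mine): equality constraints are penalized by their absolute violation, inequality constraints only when they are violated, i.e. when g_i(x) < 0.

```python
import numpy as np

def general_merit(f, g_eq, g_ineq, x, rho1, rho2):
    """l1-type merit for the general problem: |g_i| for i in E,
    max(0, -g_i) for i in I (only violated inequalities count)."""
    return (f(x)
            + rho1 * np.sum(np.abs(g_eq(x)))
            + rho2 * np.sum(np.maximum(0.0, -g_ineq(x))))
```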

• Devil in the details

Check out the practical SQP algorithm: Algorithm 18.3, Practical Line Search SQP Algorithm, in NW.

• Trust Region Works Great

We only trust the approximation locally, so limit the step to this region by adding a constraint to the QP:

$$
\begin{aligned}
\min_p\quad & \nabla_x L(x_k,\lambda_k)'\,p + \tfrac{1}{2}\,p'\,\nabla^2_{xx} L(x_k,\lambda_k)\,p \\
\text{s.t.}\quad & g(x_k) + \nabla g(x_k)'\,p = 0 \\
& \|p\| \le \Delta_k \quad \text{(trust region)}
\end{aligned}
$$

No stepsize needed!

• How to pick the trust region?

Give it a try by solving the QP:

- Things go better than expected (great decrease): increase the trust region.
- Things go as expected (okay decrease): keep the trust region the same.
- Things go worse than expected (insufficient decrease): shrink the trust region and try again.

Use the $\ell_2$ merit function to decide whether things are okay.
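These rules are easy to state in code. A sketch with conventional thresholds (0.25 / 0.75 and the shrink/grow factors are standard textbook choices, not taken from the slides); `ratio` is the actual merit decrease divided by the decrease the QP model predicted:

```python
def update_radius(ratio, delta, shrink=0.25, grow=2.0):
    """Update the trust-region radius from the agreement ratio.

    ratio : actual decrease / predicted decrease of the merit function
    delta : current trust-region radius
    Returns (new_delta, step_accepted).
    """
    if ratio < 0.25:            # worse than expected: shrink and retry
        return shrink * delta, False
    if ratio > 0.75:            # better than expected: expand the region
        return grow * delta, True
    return delta, True          # about as expected: keep the radius
```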

• Devil in the details

Check out the practical SQP algorithm: Algorithm 18.4, Byrd et al. Trust Region Algorithm, in NW.

Also might want to add a filter.

• SQP Methods

- Good when the number of active constraints is close to the number of variables
- Require few function evaluations (compared to augmented Lagrangian methods)
- Robust on badly scaled problems
- Very successful and widely used in practice (SNOPT and FILTER)
- Can suffer from the Maratos effect (the merit function eliminates good steps)
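For a readily available SQP code, SciPy ships one: the SLSQP method of `scipy.optimize.minimize` (Kraft's sequential least squares programming). A sketch on the slide deck's example problem:

```python
import numpy as np
from scipy.optimize import minimize

# Solve  min exp(3*x1 + 4*x2)  s.t.  x1^2 + x2^2 = 1  with SLSQP.
res = minimize(
    lambda x: np.exp(3.0 * x[0] + 4.0 * x[1]),
    x0=np.array([-0.7, -0.7]),
    method="SLSQP",
    constraints=[{"type": "eq",
                  "fun": lambda x: x[0] ** 2 + x[1] ** 2 - 1.0}],
)
```

`res.x` lands on (-0.6, -0.8), the same x* as the worked example. (SNOPT and FILTER are separate codes with their own interfaces.)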

• NLP Family of Algorithms - Basic Methods

Sequential Linear Programming

Augmented Lagrangian