Page 1: Chapter 9.  Interior Point Methods


Chapter 9. Interior Point Methods

Three major variants:
- Affine scaling algorithm: easy concept, good performance
- Potential reduction algorithm: poly time
- Path following algorithm: poly time, good performance, theoretically elegant

Page 2: Chapter 9.  Interior Point Methods


9.4 The primal path following algorithm

min  c'x                     max  p'b
s.t. Ax = b                  s.t. p'A + s' = c'
     x ≥ 0                        s ≥ 0

Nonnegativity makes the problem difficult, hence we use a barrier function in the objective and consider the problem without the nonnegativity constraints (in the affine space Ax = b, or p'A + s' = c' for the dual).

Barrier function:  Bμ(x) = c'x - μ Σ_{j=1}^n log x_j,   μ > 0

Bμ(x) → +∞ if x_j → 0 for some j

Solve  min Bμ(x),  s.t. Ax = b   (9.15)

Bμ(x) is strictly convex, hence has a unique minimum point if a minimum exists.
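As an illustration (not part of the original slides), here is a minimal numpy sketch of the barrier function for an assumed small instance; the helper name is made up for this sketch. It shows Bμ(x) growing without bound as one coordinate approaches the boundary x_j = 0.

import numpy as np

def barrier(c, x, mu):
    """Primal log-barrier B_mu(x) = c'x - mu * sum_j log x_j (requires x > 0)."""
    x = np.asarray(x, dtype=float)
    if np.any(x <= 0):
        return np.inf                      # the barrier is +infinity outside the positive orthant
    return float(c @ x - mu * np.sum(np.log(x)))

# Assumed data: a cost vector c and interior points drifting toward the boundary x1 = 0.
c = np.array([1.0, 2.0])
for x1 in [1.0, 0.1, 0.01, 1e-6]:
    print(x1, barrier(c, np.array([x1, 1.0]), mu=0.5))   # value grows without bound as x1 -> 0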

Page 3: Chapter 9.  Interior Point Methods


ex)  min x,  s.t. x ≥ 0

Bμ(x) = x - μ log x

Bμ'(x) = 1 - μ/x = 0   ⇒   minimum at x = μ

[Figure: plot of Bμ(x) = x - μ log x and of -μ log x against x > 0; the minimum of Bμ is attained at x = μ.]
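A tiny numerical check of this example (added here, assuming μ = 0.3): evaluate Bμ(x) = x - μ log x on a grid and confirm that the minimizer is approximately x = μ.

import numpy as np

mu = 0.3
x = np.linspace(1e-4, 2.0, 20001)      # fine grid over x > 0
B = x - mu * np.log(x)                 # one-dimensional barrier objective
print(x[np.argmin(B)])                 # ~0.3, matching the analytic minimizer x = mu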

Page 4: Chapter 9.  Interior Point Methods


min Bμ(x) = c'x - μ Σ_{j=1}^n log x_j

s.t. Ax = b

Let x(μ) denote the optimal solution for a given μ > 0.

x(μ), as μ varies, is called the central path

( hence the name path following )

It can be shown that lim_{μ→0} x(μ) = x*, an optimal solution to the LP.

When μ = ∞, x(μ) is called the analytic center.

For the dual problem, the barrier problem is

max p'b + μ Σ_{j=1}^n log s_j,   s.t. p'A + s' = c'      (9.16)

( equivalent to min -p'b - μ Σ_{j=1}^n log s_j, minimizing a convex function )
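To make the central path concrete, here is a small sketch (added for illustration, using an assumed toy problem min c'x s.t. x1 + x2 = 1, x ≥ 0 with c = (1, 2)): it computes x(μ) for decreasing μ by bisection on the one-dimensional optimality condition, and shows x(μ) moving from near the analytic center (0.5, 0.5) toward the optimal vertex (1, 0).

import numpy as np

c = np.array([1.0, 2.0])      # assumed toy cost; the optimum of min c'x s.t. x1+x2=1, x>=0 is x* = (1, 0)

def x_of_mu(mu, tol=1e-12):
    """Central-path point x(mu) for min c'x s.t. x1 + x2 = 1, x >= 0.
    On the segment x2 = 1 - x1 the barrier derivative
    phi'(x1) = (c1 - c2) - mu/x1 + mu/(1 - x1) is increasing, so bisect on it."""
    dphi = lambda t: (c[0] - c[1]) - mu / t + mu / (1.0 - t)
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dphi(mid) < 0:
            lo = mid
        else:
            hi = mid
    x1 = 0.5 * (lo + hi)
    return np.array([x1, 1.0 - x1])

for mu in [10.0, 1.0, 0.1, 0.01, 0.001]:
    print(mu, x_of_mu(mu))     # x(mu) -> (1, 0) as mu -> 0; for large mu it is near the analytic center (0.5, 0.5)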

Page 5: Chapter 9.  Interior Point Methods


Figure 9.4: The central path and the analytic center

[Figure: points x(10), x(1), x(0.1), x(0.01) on the central path approaching the optimal solution x*, starting from the analytic center; c denotes the cost vector.]

Page 6: Chapter 9.  Interior Point Methods


Results from nonlinear programming

(NLP)  min  f(x)
       s.t. g_i(x) ≤ 0,  i = 1, … , m
            h_i(x) = 0,  i = 1, … , p

f, g_i, h_i : R^n → R, all twice continuously differentiable

( gradient is given as a column vector )

Thm (Karush 1939, Kuhn-Tucker 1951, first order necessary optimality condition)

If x* is a local minimum for (NLP) and some condition (called a constraint qualification) holds at x*, then there exist u ∈ R^m_+ and v ∈ R^p such that

(1) ∇f(x*) + Σ_{i=1}^m u_i ∇g_i(x*) + Σ_{i=1}^p v_i ∇h_i(x*) = 0

(2) u ≥ 0,  g_i(x*) ≤ 0, i = 1, … , m,  Σ_{i=1}^m u_i g_i(x*) = 0

(3) h_i(x*) = 0,  i = 1, … , p

Page 7: Chapter 9.  Interior Point Methods


Remark: (2) is the complementary slackness (CS) condition, and it implies that u_i = 0 for each non-active constraint g_i.

(1) says ∇f(x*) is a nonnegative linear combination of the -∇g_i(x*) for the active constraints, plus a linear combination of the ∇h_i(x*).

( compare to the strong duality theorem on p. 173 and its figure )

The CS conditions for LP are the KKT conditions.

KKT conditions are necessary conditions for optimality, but they are also sufficient in some situations. One such case is when the objective function is convex and the constraints are linear, which includes our barrier problem.

Page 8: Chapter 9.  Interior Point Methods


Deriving KKT for barrier problem:

min Bμ(x) = c'x - μ Σ_{j=1}^n log x_j,   s.t. Ax = b   ( x > 0 )

∇f(x) = c - μX^{-1}e,    ∇h_i(x) = a_i

( a_i is the i-th row vector of A, expressed as a column vector; X^{-1} = diag( 1/x_1, … , 1/x_n ); e is the vector having 1 in all components. )

Using the (Lagrangian) multiplier p_i for h_i(x), we get c - μX^{-1}e = A'p

( ignoring the sign of p )

Note that h_i(x) = a_i'x - b_i and ∇h_i(x) = a_i.   ( h(x) = Ax - b : R^n → R^m )

If we define s = μX^{-1}e, the KKT conditions become

A'p + s = c,   Ax = b,   XSe = μe,   ( x > 0, s > 0 ),

where S = diag( s_1, … , s_n ).
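A quick numpy check of these perturbed KKT conditions (added here, with assumed random data; the variable names are only illustrative): pick x > 0 and p, set s = μX^{-1}e, and choose c and b so that the first two conditions hold; the residuals of all three conditions are then zero.

import numpy as np

rng = np.random.default_rng(0)
m, n, mu = 2, 4, 0.5

# Build data for which the barrier KKT conditions hold by construction.
A = rng.standard_normal((m, n))
x = rng.uniform(0.5, 2.0, n)          # interior primal point, x > 0
p = rng.standard_normal(m)            # dual vector
s = mu / x                            # s = mu * X^{-1} e, so XSe = mu*e automatically
c = A.T @ p + s                       # choose c so that A'p + s = c
b = A @ x                             # choose b so that Ax = b

# Residuals of A'p + s = c, Ax = b, XSe = mu*e:
print(np.linalg.norm(A.T @ p + s - c))    # ~0
print(np.linalg.norm(A @ x - b))          # ~0
print(np.linalg.norm(x * s - mu))         # ~0 (XSe = mu*e componentwise)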

Page 9: Chapter 9.  Interior Point Methods


For the dual barrier problem,

min -p'b - μ Σ_{j=1}^n log s_j,   s.t. p'A + s' = c'   ( s > 0 )

∇f(p, s) = ( -b ; -μS^{-1}e ),    ∇h_i(p, s) = ( A_i ; e_i )

( A_i is the i-th column vector of A and e_i is the i-th unit vector. )

Using the (Lagrangian) multiplier -x_i for h_i(p, s), we get

( -b ; -μS^{-1}e ) = Σ_{i=1}^n (-x_i) ( A_i ; e_i )

Now Σ_i x_i e_i = Xe, hence we have the conditions

A'p + s = c,   Ax = b,   XSe = μe,   ( x > 0, s > 0 ),

which are the same conditions we obtained from the primal barrier function.

Page 10: Chapter 9.  Interior Point Methods


The conditions are given in the text as

Ax(μ) = b,   x(μ) ≥ 0

A'p(μ) + s(μ) = c,   s(μ) ≥ 0      (9.17)

X(μ)S(μ)e = μe,

where X(μ) = diag( x_1(μ), … , x_n(μ) ),  S(μ) = diag( s_1(μ), … , s_n(μ) ).

Note that when μ = 0, they are the primal feasibility, dual feasibility, and complementary slackness conditions.

Lemma 9.5: If x*, p*, and s* satisfy conditions (9.17), then they are optimal solutions to problems (9.15) and (9.16)

Page 11: Chapter 9.  Interior Point Methods


Pf) Let x*, p*, and s* satisfy (9.17), and let x be an arbitrary vector that satisfies x > 0 and Ax = b. Then

Bμ(x) = c'x - μ Σ_{j=1}^n log x_j

      = c'x - (p*)'(Ax - b) - μ Σ_{j=1}^n log x_j

      = (s*)'x + (p*)'b - μ Σ_{j=1}^n log x_j

      ≥ nμ + (p*)'b - μ Σ_{j=1}^n log( μ / s_j* ),

since s_j* x_j - μ log x_j attains its minimum at x_j = μ / s_j*.

Equality holds iff x_j = μ / s_j* = x_j*.

Hence Bμ(x*) ≤ Bμ(x) for all feasible x. In particular, x* is the unique optimal solution and x* = x(μ).

Similarly for p* and s* for dual barrier problem.
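As an informal numerical check of Lemma 9.5 (added here, with assumed random data): build (x*, p*, s*) satisfying (9.17) as in the earlier sketch, then verify that Bμ(x*) ≤ Bμ(x) for feasible perturbations x = x* + t d with Ad = 0.

import numpy as np

rng = np.random.default_rng(1)
m, n, mu = 2, 5, 0.7

# Point on the central path, built so that (9.17) holds exactly (assumed data).
A = rng.standard_normal((m, n))
x_star = rng.uniform(0.5, 2.0, n)
p_star = rng.standard_normal(m)
s_star = mu / x_star
c = A.T @ p_star + s_star
b = A @ x_star

def B(x):
    return c @ x - mu * np.sum(np.log(x))

# Directions d with A d = 0, taken from the null space of A (last right-singular vectors).
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[m:]                      # rows span null(A) when A has full row rank

for _ in range(5):
    d = null_basis.T @ rng.standard_normal(n - m)
    t = 0.4 * np.min(x_star / np.maximum(np.abs(d), 1e-12))   # keep x_star + t*d > 0
    x = x_star + t * d
    print(B(x) - B(x_star) >= -1e-10)    # True: x* minimizes the barrier over {x > 0 : Ax = b}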

Page 12: Chapter 9.  Interior Point Methods


Primal path following algorithm

Starting from some μ^0 and primal and dual feasible x^0 > 0, s^0 > 0, p^0, find the solution of the barrier problem iteratively while driving μ → 0.

To solve the barrier problem, we use a quadratic approximation (2nd order Taylor expansion) of the barrier function and use the minimizer of the approximate function as the next iterate.

The Taylor expansion is

Bμ(x + d) ≈ Bμ(x) + Σ_{i=1}^n ( ∂Bμ(x)/∂x_i ) d_i + (1/2) Σ_{i,j=1}^n ( ∂²Bμ(x)/∂x_i ∂x_j ) d_i d_j

          = Bμ(x) + ( c' - μe'X^{-1} ) d + (1/2) μ d'X^{-2} d

Also need to satisfy A(x + d) = b   ⇒   Ad = 0

Page 13: Chapter 9.  Interior Point Methods


Using the KKT conditions, the solution to this problem is

d(μ) = ( I - X²A'(AX²A')^{-1}A ) ( Xe - (1/μ) X²c )

p(μ) = (AX²A')^{-1} A ( X²c - μXe )

The duality gap is c’x – p’b = ( p’A + s’ )x – p’(Ax) = s’x

Hence stop the algorithm when (s^k)'x^k < ε.

Need a scheme to obtain an initial feasible solution.
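A direct numpy transcription of the two formulas above (added for illustration, with an assumed random instance; the helper name is made up). It also checks that Ad(μ) = 0, so the step keeps Ax = b.

import numpy as np

def primal_newton_direction(A, c, x, mu):
    """d(mu) and p(mu) from the closed-form solution of the quadratic barrier subproblem."""
    X = np.diag(x)
    X2 = X @ X
    M = A @ X2 @ A.T                                  # A X^2 A'
    e = np.ones(len(x))
    p = np.linalg.solve(M, A @ (X2 @ c - mu * X @ e))
    d = (np.eye(len(x)) - X2 @ A.T @ np.linalg.solve(M, A)) @ (X @ e - X2 @ c / mu)
    return d, p

# Assumed small instance with an interior point x.
rng = np.random.default_rng(2)
A = rng.standard_normal((2, 4))
x = rng.uniform(0.5, 2.0, 4)
c = rng.standard_normal(4)
d, p = primal_newton_direction(A, c, x, mu=1.0)
print(np.linalg.norm(A @ d))        # ~0: the direction stays in the affine space Ax = b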

Page 14: Chapter 9.  Interior Point Methods


The primal path following algorithm

1. (Initialization) Start with some primal and dual feasible x^0 > 0, s^0 > 0, p^0, and set k = 0.

2. (Optimality test) If (s^k)'x^k < ε, stop; else go to Step 3.

3. Let X_k = diag( x_1^k, … , x_n^k ),  μ^{k+1} = α μ^k  ( 0 < α < 1 ).

4. (Computation of directions) Solve the linear system

   μ^{k+1} X_k^{-2} d - A'p = μ^{k+1} X_k^{-1} e - c,
   Ad = 0,

   for p and d.

5. (Update of solutions) Let

   x^{k+1} = x^k + d,
   p^{k+1} = p,
   s^{k+1} = c - A'p.

6. Let k := k+1 and go to Step 2.
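Putting the steps above together, here is a compact numpy sketch of the primal path following loop (added for illustration; the toy instance, α = 0.9, ε = 1e-6, and a small damping of the step to keep x > 0 are assumed choices, and the directions are computed from the closed-form d(μ), p(μ) instead of solving Step 4's linear system directly).

import numpy as np

def primal_path_following(A, c, x, p, mu, alpha=0.9, eps=1e-6, max_iter=1000):
    """Sketch of the primal path following loop: shrink mu, take one Newton step of the barrier problem."""
    n = len(x)
    for _ in range(max_iter):
        s = c - A.T @ p
        if s @ x < eps:                                   # Step 2: duality gap below tolerance
            break
        mu *= alpha                                       # Step 3: mu^{k+1} = alpha * mu^k
        X = np.diag(x)
        X2 = X @ X
        M = A @ X2 @ A.T                                  # A X_k^2 A'
        p = np.linalg.solve(M, A @ (X2 @ c - mu * x))     # p(mu), using X e = x
        d = (np.eye(n) - X2 @ A.T @ np.linalg.solve(M, A)) @ (x - X2 @ c / mu)
        neg = d < 0                                       # damp the step to keep x > 0
        t = min(1.0, 0.95 * np.min(-x[neg] / d[neg])) if np.any(neg) else 1.0
        x = x + t * d                                     # (the slide's Step 5 takes the full step d)
    return x, p, c - A.T @ p

# Assumed toy instance: min x1 + 2*x2  s.t.  x1 + x2 = 1,  x >= 0  (optimal solution (1, 0)).
A = np.array([[1.0, 1.0]])
c = np.array([1.0, 2.0])
x0 = np.array([0.5, 0.5])                                 # primal feasible interior point
p0 = np.array([0.0])                                      # then s0 = c - A'p0 = c > 0: dual feasible
print(primal_path_following(A, c, x0, p0, mu=1.0)[0])     # approximately [1, 0]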

Page 15: Chapter 9.  Interior Point Methods


9.5 The primal-dual path following algorithm

Find Newton directions both in the primal and dual space.

Instead of finding the minimum of a quadratic approximation of the barrier function, it finds a solution of the KKT system:

Ax(μ) = b,   ( x(μ) ≥ 0 )

A'p(μ) + s(μ) = c,   ( s(μ) ≥ 0 )      (9.26)

X(μ)S(μ)e = μe

System of nonlinear equations because of the last ones. Let F: R^r → R^r. Want z* such that F(z*) = 0.

We use the first order Taylor approximation around z^k,

F( z^k + d ) ≈ F(z^k) + J(z^k)d.

Here J(z^k) is the r × r Jacobian matrix whose (i, j)-th element is ∂F_i(z)/∂z_j evaluated at z = z^k.

Page 16: Chapter 9.  Interior Point Methods


Try to find d that satisfies

F(z^k) + J(z^k)d = 0

d is called a Newton direction.

Here F(z) is given by

F(z) = ( Ax - b ;  A'p + s - c ;  XSe - μe ),

and the Newton system F(z^k) + J(z^k)d = 0 reads

[ A     0    0   ] [ d_x^k ]     [ b - Ax^k          ]
[ 0     A'   I   ] [ d_p^k ]  =  [ c - A'p^k - s^k   ]
[ S_k   0    X_k ] [ d_s^k ]     [ μ^k e - X_k S_k e ]

Since (x^k, p^k, s^k) is primal and dual feasible, this is equivalent to

A d_x^k = 0                                   (9.28)

A'd_p^k + d_s^k = 0                           (9.29)

S_k d_x^k + X_k d_s^k = μ^k e - X_k S_k e     (9.30)

Page 17: Chapter 9.  Interior Point Methods


The solution to the previous system is

d_x^k = D_k ( I - P_k ) v^k(μ^k),

d_p^k = -( A D_k² A' )^{-1} A D_k v^k(μ^k),

d_s^k = D_k^{-1} P_k v^k(μ^k),

where

D_k² = X_k S_k^{-1},

P_k = D_k A' ( A D_k² A' )^{-1} A D_k,

v^k(μ^k) = X_k^{-1} D_k ( μ^k e - X_k S_k e ).

Also limit the step length to ensure xk+1 > 0, sk+1 > 0.
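For comparison (an addition, not from the slides), the same directions computed from the projection formulas above and checked against (9.28)-(9.30) on assumed data; the helper names are only illustrative.

import numpy as np

def newton_directions_projection(A, x, s, mu):
    """Primal-dual Newton directions via D_k, P_k, v_k (the closed-form above)."""
    e = np.ones(len(x))
    D = np.diag(np.sqrt(x / s))               # D_k^2 = X_k S_k^{-1}
    AD = A @ D
    M = AD @ AD.T                             # A D_k^2 A'
    v = (mu * e - x * s) / np.sqrt(x * s)     # v_k = X_k^{-1} D_k (mu*e - X_k S_k e)
    Pv = AD.T @ np.linalg.solve(M, AD @ v)    # P_k v without forming P_k explicitly
    dx = D @ (v - Pv)
    dp = -np.linalg.solve(M, AD @ v)
    ds = np.linalg.solve(D, Pv)
    return dx, dp, ds

# Check that the directions satisfy (9.28)-(9.30) for assumed data.
rng = np.random.default_rng(3)
A = rng.standard_normal((2, 4))
x = rng.uniform(0.5, 2.0, 4)
s = rng.uniform(0.5, 2.0, 4)
dx, dp, ds = newton_directions_projection(A, x, s, mu=0.1)
print(np.linalg.norm(A @ dx),
      np.linalg.norm(A.T @ dp + ds),
      np.linalg.norm(s * dx + x * ds - (0.1 - x * s)))   # all ~0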

Page 18: Chapter 9.  Interior Point Methods


The primal-dual path following algorithm

1. (Initialization) Start with some feasible x^0 > 0, s^0 > 0, p^0, and set k = 0.

2. (Optimality test) If (s^k)'x^k < ε, stop; else go to Step 3.

3. (Computation of Newton directions) Let

   μ^k = (s^k)'x^k / n,
   X_k = diag( x_1^k, … , x_n^k ),
   S_k = diag( s_1^k, … , s_n^k ).

   Solve the linear system (9.28)-(9.30) for d_x^k, d_p^k, and d_s^k.

4. (Find step lengths) Let

   β_P^k = min{ 1,  α min_{ i : (d_x^k)_i < 0 } ( -x_i^k / (d_x^k)_i ) },

   β_D^k = min{ 1,  α min_{ i : (d_s^k)_i < 0 } ( -s_i^k / (d_s^k)_i ) },

   where 0 < α < 1.

Page 19: Chapter 9.  Interior Point Methods


(continued)

5. (Solution update) Update the solution vectors according to

   x^{k+1} = x^k + β_P^k d_x^k,

   p^{k+1} = p^k + β_D^k d_p^k,

   s^{k+1} = s^k + β_D^k d_s^k.

6. Let k := k+1 and go to Step 2.
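A compact numpy sketch of the whole primal-dual loop above (added for illustration; the toy instance, α = 0.99, and ε = 1e-8 are assumed choices, and the target μ^k is scaled by an assumed centering factor σ = 0.1 so that the duality gap actually decreases, whereas the slide's Step 3 uses μ^k = (s^k)'x^k / n as written). It reuses the projection formulas for the Newton directions.

import numpy as np

def primal_dual_path_following(A, c, x, p, s, sigma=0.1, alpha=0.99, eps=1e-8, max_iter=100):
    """Primal-dual path following sketch; sigma is an assumed centering factor applied to
    mu^k = (s^k)'x^k / n so the duality gap shrinks (the slide's Step 3 has no such factor)."""
    n = len(x)
    e = np.ones(n)

    def step_length(z, dz):                                # Step 4: min{1, alpha * min(-z_i / dz_i)}
        neg = dz < 0
        return min(1.0, alpha * np.min(-z[neg] / dz[neg])) if np.any(neg) else 1.0

    for _ in range(max_iter):
        if s @ x < eps:                                    # Step 2: optimality test
            break
        mu = sigma * (s @ x) / n                           # Step 3, scaled by the assumed sigma
        D = np.diag(np.sqrt(x / s))                        # D_k^2 = X_k S_k^{-1}
        AD = A @ D
        M = AD @ AD.T                                      # A D_k^2 A'
        v = (mu * e - x * s) / np.sqrt(x * s)              # v_k(mu_k)
        Pv = AD.T @ np.linalg.solve(M, AD @ v)             # P_k v without forming P_k
        dx = D @ (v - Pv)                                  # directions from the Page 17 formulas
        dp = -np.linalg.solve(M, AD @ v)
        ds = np.linalg.solve(D, Pv)
        bP, bD = step_length(x, dx), step_length(s, ds)    # Step 4
        x, p, s = x + bP * dx, p + bD * dp, s + bD * ds    # Step 5
    return x, p, s

# Assumed toy instance: min x1 + 2*x2  s.t.  x1 + x2 = 1,  x >= 0  (optimal solution (1, 0)).
A = np.array([[1.0, 1.0]])
c = np.array([1.0, 2.0])
x0, p0 = np.array([0.5, 0.5]), np.array([0.0])
s0 = c - A.T @ p0
x_opt, p_opt, s_opt = primal_dual_path_following(A, c, x0, p0, s0)
print(x_opt, s_opt @ x_opt)                                # approximately [1, 0] and a tiny duality gap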

Page 20: Chapter 9.  Interior Point Methods


Infeasible primal-dual path following methods

A variation of primal-dual path following.

Starts from x^0 > 0, s^0 > 0, p^0, which is not necessarily feasible for either the primal or the dual, i.e. Ax^0 ≠ b and/or A'p^0 + s^0 ≠ c.

The iteration is the same as in primal-dual path following, except that feasibility is not maintained at each iteration.

Excellent performance.
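The slide only sketches the idea. As an illustration (an assumption following the standard infeasible interior point setup, not a formula given in these slides), the Newton system keeps the same matrix as in (9.28)-(9.30), but the first two blocks of the right-hand side become the primal and dual residuals instead of zero.

import numpy as np

def infeasible_newton_directions(A, b, c, x, p, s, mu):
    """Newton step for Ax = b, A'p + s = c, XSe = mu*e at a possibly infeasible
    (but positive) point: the feasibility residuals appear on the right-hand side."""
    m, n = A.shape
    K = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)       ],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)      ],
    ])
    rhs = np.concatenate([b - A @ x,                    # primal residual (zero in the feasible method)
                          c - A.T @ p - s,              # dual residual (zero in the feasible method)
                          mu * np.ones(n) - x * s])     # centering part, as in (9.30)
    z = np.linalg.solve(K, rhs)
    return z[:n], z[n:n + m], z[n + m:]                 # (dx, dp, ds)

# After one full Newton step from an infeasible point, the linear residuals vanish:
rng = np.random.default_rng(5)
A = rng.standard_normal((2, 4)); b = rng.standard_normal(2); c = rng.standard_normal(4)
x = rng.uniform(0.5, 2.0, 4); p = rng.standard_normal(2); s = rng.uniform(0.5, 2.0, 4)
dx, dp, ds = infeasible_newton_directions(A, b, c, x, p, s, mu=1.0)
print(np.linalg.norm(A @ (x + dx) - b), np.linalg.norm(A.T @ (p + dp) + (s + ds) - c))  # both ~0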

Page 21: Chapter 9.  Interior Point Methods


Self-dual method

Alternative method to find an initial feasible solution without using big-M.

Given an initial, possibly infeasible point (x^0, p^0, s^0) with x^0 > 0 and s^0 > 0, consider the problem

minimize    ( (x^0)'s^0 + 1 ) θ

subject to  Ax - bτ + b̄θ = 0

            -A'p + cτ - c̄θ - s = 0                          (9.33)

            b'p - c'x + z̄θ - κ = 0

            -b̄'p + c̄'x - z̄τ = -( (x^0)'s^0 + 1 )

            x ≥ 0,  τ ≥ 0,  s ≥ 0,  κ ≥ 0,

where b̄ = b - Ax^0,  c̄ = c - A'p^0 - s^0,  z̄ = c'x^0 + 1 - b'p^0.

This LP is self-dual.

Note that ( x, p, s, τ, κ, θ ) = ( x^0, p^0, s^0, 1, 1, 1 ) is a feasible interior solution to (9.33).
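A quick numpy check of this claim (added for illustration, with an assumed random instance and starting point): build b̄, c̄, z̄ and evaluate the constraint residuals of (9.33) at ( x, p, s, τ, κ, θ ) = ( x^0, p^0, s^0, 1, 1, 1 ).

import numpy as np

rng = np.random.default_rng(4)
m, n = 2, 4
A = rng.standard_normal((m, n))
b, c = rng.standard_normal(m), rng.standard_normal(n)

# Arbitrary starting point with x0 > 0, s0 > 0 (no feasibility required).
x0 = rng.uniform(0.5, 2.0, n)
s0 = rng.uniform(0.5, 2.0, n)
p0 = rng.standard_normal(m)

b_bar = b - A @ x0
c_bar = c - A.T @ p0 - s0
z_bar = c @ x0 + 1 - b @ p0

# Evaluate (9.33) at (x, p, s, tau, kappa, theta) = (x0, p0, s0, 1, 1, 1).
x, p, s, tau, kappa, theta = x0, p0, s0, 1.0, 1.0, 1.0
r1 = A @ x - b * tau + b_bar * theta
r2 = -A.T @ p + c * tau - c_bar * theta - s
r3 = b @ p - c @ x + z_bar * theta - kappa
r4 = -b_bar @ p + c_bar @ x - z_bar * tau + (x0 @ s0 + 1)
print(np.linalg.norm(r1), np.linalg.norm(r2), abs(r3), abs(r4))   # all ~0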

Page 22: Chapter 9.  Interior Point Methods


Since both the primal and dual are feasible, they have optimal solutions and the optimal value is 0.

The primal-dual path following method finds an optimal solution ( x*, p*, s*, τ*, κ*, θ* ) that satisfies

θ* = 0,   x* + s* > 0,   τ* + κ* > 0,

(s*)'x* = 0,   τ*κ* = 0

( satisfies strict complementarity )

Can find an optimal solution or determine unboundedness depending on the values of τ* and κ* (see Thm 9.8).

Running time:

worst case:  O( √n log( ε^0 / ε ) )

observed:    O( log n · log( ε^0 / ε ) )

( ε^0 denotes the initial duality gap )