Escaping Local Optima

Jan 17, 2016
Page 1: Escaping Local Optima

Escaping Local Optima

Page 2: Escaping Local Optima

Where are we?

Optimization methods

  Complete solutions:
    Exhaustive search
    Hill climbing

  Partial solutions:
    Exhaustive search
    Branch and bound
    Greedy
    Best first
    A*
    Divide and Conquer
    Dynamic programming

Page 3: Escaping Local Optima

Where are we going?

Optimization methods

  Complete solutions:
    Exhaustive search
    Hill climbing
    --> improved methods based on complete solutions

  Partial solutions:
    Exhaustive search
    Branch and bound
    Greedy
    Best first
    A*
    Divide and Conquer
    Dynamic programming

Page 4: Escaping Local Optima

Escaping local optima

Many strategies, including:

• Simulated annealing
• Tabu search
• and many others…

some examples -->

Page 5: Escaping Local Optima

(0) Basic Hill climbing

determine initial solution s
while s is not a local optimum
    choose s' in N(s) such that f(s') > f(s)
    s = s'
return s
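A minimal Python sketch of this loop, assuming the caller supplies the fitness f and the neighbourhood function N (the toy objective in the usage lines is purely illustrative):

import random

def hill_climb(s, f, N):
    """Basic hill climbing: keep moving to a strictly better neighbour
    until s has no better neighbour, i.e. s is a local optimum."""
    while True:
        better = [sp for sp in N(s) if f(sp) > f(s)]
        if not better:
            return s                 # local optimum reached
        s = random.choice(better)    # any improving neighbour will do

# usage: maximise f(x) = -(x - 7)^2 over the integers, neighbours are x-1 and x+1
if __name__ == "__main__":
    print(hill_climb(0, lambda x: -(x - 7) ** 2, lambda x: [x - 1, x + 1]))  # -> 7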

Page 6: Escaping Local Optima

(1) Randomized Hill climbing

determine initial solution s; bestS = s
while termination condition not satisfied
    with probability p
        choose neighbour s' at random (uniform)
    else
        choose s' with f(s') > f(s)      // climb if possible
        or s' with max f(s') over N(s)
    s = s'
    if (f(s) > f(bestS)) bestS = s
return bestS
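A Python sketch along the same lines; the value of p, the fixed iteration budget, and the decision to always take the best neighbour in the "else" branch are assumptions for illustration:

import random

def randomized_hill_climb(s, f, N, p=0.1, iters=1000):
    """With probability p move to a uniformly random neighbour, otherwise
    take the neighbour with maximal fitness; remember the best solution seen."""
    best = s
    for _ in range(iters):              # termination condition: fixed budget
        nbrs = N(s)
        if random.random() < p:
            s = random.choice(nbrs)     # random move (may go downhill)
        else:
            s = max(nbrs, key=f)        # greedy move over N(s)
        if f(s) > f(best):
            best = s
    return best

if __name__ == "__main__":
    print(randomized_hill_climb(0, lambda x: -(x - 7) ** 2, lambda x: [x - 1, x + 1]))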

Page 7: Escaping Local Optima

(2) Variable Neighbourhood

determine initial solution s
i = 1
repeat
    choose neighbour s' in Ni(s) with max(f(s'))
    if (f(s') > f(s))
        s = s'
        i = 1        // restart in first neighbourhood
    else
        i = i + 1    // go to next neighbourhood
until i > iMax
return s
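A Python sketch, assuming the neighbourhoods N1..NiMax are supplied as a list of functions; the two toy neighbourhoods in the usage lines are only an illustration:

def variable_neighbourhood_search(s, f, neighbourhoods):
    """Take the best neighbour in the current neighbourhood; on improvement restart
    from the first neighbourhood, otherwise move on to the next one."""
    i = 0                                           # 0-based index into N1..NiMax
    while i < len(neighbourhoods):
        s_new = max(neighbourhoods[i](s), key=f)    # best neighbour in Ni(s)
        if f(s_new) > f(s):
            s, i = s_new, 0                         # restart in first neighbourhood
        else:
            i += 1                                  # go to next neighbourhood
    return s

if __name__ == "__main__":
    f = lambda x: -(x - 7) ** 2
    Ns = [lambda x: [x - 1, x + 1], lambda x: [x - 5, x + 5]]
    print(variable_neighbourhood_search(0, f, Ns))  # -> 7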

Page 8: Escaping Local Optima

Stochastic local search

Many other important algorithms address the problem of avoiding the trap of local optima
(a possible source of project topics).

M&F focus on only two:
• simulated annealing
• tabu search

Page 9: Escaping Local Optima

Simulated annealing metaphor:

slow cooling of liquid metals to allow the crystal structure to align properly

the “temperature” T is slowly lowered to reduce random movement of solution s in the solution space

Page 10: Escaping Local Optima

Simulated Annealing

determine initial solution s; bestS = s
T = T0
while termination condition not satisfied
    choose s' in N(s) probabilistically
    if (s' is "acceptable")          // function of T
        s = s'
        if (f(s) > f(bestS)) bestS = s
    update T
return bestS
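A Python sketch of this loop. The geometric cooling schedule and the Metropolis-style acceptance test used here are common choices and are assumptions on my part; the slides define "acceptable" via the sigmoid shown on the following pages.

import math
import random

def simulated_annealing(s, f, N, T0=100.0, alpha=0.95, iters=2000):
    """Pick a neighbour at random, accept it with a probability that depends on T,
    lower T, and remember the best solution seen."""
    best, T = s, T0
    for _ in range(iters):                  # termination condition: fixed budget
        s_new = random.choice(N(s))         # choose s' in N(s) probabilistically
        delta = f(s_new) - f(s)
        if delta > 0 or random.random() < math.exp(delta / T):   # "acceptable"?
            s = s_new
            if f(s) > f(best):
                best = s
        T = max(alpha * T, 1e-9)            # update T (geometric cooling, never exactly 0)
    return best

if __name__ == "__main__":
    print(simulated_annealing(0, lambda x: -(x - 7) ** 2, lambda x: [x - 1, x + 1]))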

Page 11: Escaping Local Optima

Accepting a new solution

- acceptance is more likely if f(s') > f(s)
- as execution proceeds, the probability of accepting s' with f(s') < f(s) decreases
  (the behaviour becomes more like hill climbing)

determine initial solution s; bestS = s
T = T0
while termination condition not satisfied
    choose s' in N(s) probabilistically
    if (s' is "acceptable")          // function of T
        s = s'
        if (f(s) > f(bestS)) bestS = s
    update T
return bestS


Page 12: Escaping Local Optima

the acceptance function

p(accept s') = 1 / (1 + e^((f(s) - f(s')) / T))

[Figure: probability of acceptance of s' with simulated annealing, plotted against the
improvement f(s') - f(s) (range -100 to 100), one curve for each of
T = 1, 5, 10, 20, 50, 100, 1000, 10000. As T is lowered, the curve sharpens from
nearly flat at 0.5 toward a step at f(s') - f(s) = 0.]

*sometimes p = 1 is used whenever f(s') - f(s) > 0
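The same acceptance rule in Python, with a few sample points standing in for the curves of the figure (the exact plotted values are not recoverable here):

import math

def accept_probability(f_s, f_s_new, T):
    """p = 1 / (1 + e^((f(s) - f(s')) / T)): 0.5 for equal fitness,
    approaching 1 for large improvements and 0 for large deteriorations;
    larger T flattens the curve."""
    return 1.0 / (1.0 + math.exp((f_s - f_s_new) / T))

if __name__ == "__main__":
    for T in (1, 10, 100):
        pts = [round(accept_probability(0, d, T), 3) for d in (-20, 0, 20)]
        print(f"T={T}: p(-20), p(0), p(+20) =", pts)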

Page 13: Escaping Local Optima

Simulated annealing with SAT

algorithm SA-SAT, p. 123

propositions: P1, ..., Pn

expression: F = D1 ∧ D2 ∧ ... ∧ Dk

where each clause Di is a disjunction of propositions and negated propositions,
e.g., Px ∨ ~Py ∨ Pz ∨ ~Pw

fitness function: number of true clauses
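A sketch of this fitness function in Python; the clause representation as (proposition, sign) pairs is my choice for illustration, not the book's:

def sat_fitness(assignment, clauses):
    """Number of true clauses. assignment maps proposition name -> bool;
    each clause is a list of (proposition, is_positive) literals."""
    return sum(
        any(assignment[p] == positive for p, positive in clause)
        for clause in clauses
    )

# example: the single clause Px v ~Py v Pz v ~Pw
if __name__ == "__main__":
    clause = [("Px", True), ("Py", False), ("Pz", True), ("Pw", False)]
    print(sat_fitness({"Px": False, "Py": True, "Pz": False, "Pw": True}, [clause]))  # 0
    print(sat_fitness({"Px": True,  "Py": True, "Pz": False, "Pw": True}, [clause]))  # 1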

Page 14: Escaping Local Optima

Inner iteration (the truth assignments on the right trace an example with 4 propositions):

assign random truth set                      TFFT
repeat for i = 1 to 4
    flip truth of prop i                     FFFT
    evaluate                                 FTFT
    decide to keep (or not)                  FFTT
    the changed value                        FFTF
reduce T                                     FFTT
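A Python sketch of one inner pass (not M&F's exact SA-SAT code): each proposition is flipped in turn and the flip is kept with the sigmoid acceptance probability from the earlier page; the caller then reduces T and repeats.

import math
import random

def sa_sat_inner_pass(assignment, clauses, T):
    """Try flipping each proposition once; keep each flip with probability
    1 / (1 + e^((old - new) / T)), otherwise undo it."""
    def fitness(a):
        return sum(any(a[p] == positive for p, positive in c) for c in clauses)
    for prop in assignment:                      # "for i = 1 to n"
        old = fitness(assignment)
        assignment[prop] = not assignment[prop]  # flip truth of prop i
        new = fitness(assignment)                # evaluate
        keep = 1.0 / (1.0 + math.exp((old - new) / T))
        if random.random() >= keep:              # decide to keep (or not)
            assignment[prop] = not assignment[prop]   # undo the flip
    return assignment

if __name__ == "__main__":
    clause = [("Px", True), ("Py", False), ("Pz", True), ("Pw", False)]
    a = {"Px": False, "Py": True, "Pz": False, "Pw": True}   # clause currently false
    print(sa_sat_inner_pass(a, [clause], T=1.0))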

Page 15: Escaping Local Optima

Tabu search (taboo)

always looks for the best solution, but some choices (neighbours) are ineligible (tabu)

ineligibility is based on recent moves: once a neighbour edge is used, it is tabu for a few iterations

the search does not stop at a local optimum

Page 16: Escaping Local Optima

Symmetric TSP example

set of 9 cities {A, B, C, D, E, F, G, H, I}
neighbour definition based on 2-opt* (27 neighbours)

current sequence:
B - D - A - I - H - F - E - C - G - B

move to a 2-opt neighbour:
B - E - F - H - I - A - D - C - G - B

edges B-E and D-C are now tabu,
i.e., the next 2-opt swap cannot involve these edges

*the example in the book uses 2-swap, p. 131
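A small Python helper for the 2-opt move used here, reversing the segment between two cut points; the indices in the usage line reproduce the slide's move:

def two_opt(tour, i, j):
    """Return the 2-opt neighbour obtained by reversing tour[i..j].
    The tour is closed: the last city connects back to the first."""
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

if __name__ == "__main__":
    tour = list("BDAIHFECG")                 # B-D-A-I-H-F-E-C-G(-B)
    print("-".join(two_opt(tour, 1, 6)))     # B-E-F-H-I-A-D-C-G, as on the slide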

Page 17: Escaping Local Optima

TSP example (algorithm p. 133)

how long will an edge be tabu?  3 iterations
how to track and restore eligibility?
  a data structure storing the tabu status of the 9*8/2 = 36 edges

current tour: B - D - A - I - H - F - E - C - G - B

recency-based memory (remaining tabu iterations per edge):

      A  B  C  D  E  F  G  H
   I  0  0  0  0  0  0  0  2
   H  0  0  0  0  0  1  0
   G  0  1  0  0  0  0
   F  0  0  0  0  0
   E  0  0  3  0
   D  2  3  0
   C  0  0
   B  0
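One possible Python data structure for this recency-based memory: a dict keyed by unordered pairs of endpoints, storing only the non-zero counts (the class and method names are mine, not the book's):

class TabuList:
    """Recency-based memory: edge -> remaining tabu iterations.
    Edges whose count is 0 (most of the 9*8/2 = 36 entries) are simply absent."""
    def __init__(self, tenure=3):
        self.tenure = tenure
        self.remaining = {}

    def add(self, a, b):
        self.remaining[frozenset((a, b))] = self.tenure   # e.g. B-E and D-C after a move

    def is_tabu(self, a, b):
        return self.remaining.get(frozenset((a, b)), 0) > 0

    def tick(self):
        """Call once per iteration to restore eligibility as counts reach 0."""
        for edge in list(self.remaining):
            self.remaining[edge] -= 1
            if self.remaining[edge] == 0:
                del self.remaining[edge]

if __name__ == "__main__":
    tl = TabuList(tenure=3)
    tl.add("B", "E")
    print(tl.is_tabu("E", "B"))   # True: the pair is unordered
    tl.tick(); tl.tick(); tl.tick()
    print(tl.is_tabu("E", "B"))   # False: eligibility restored after 3 iterations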

Page 18: Escaping Local Optima

procedure tabu search
begin
    tries <- 0
    repeat
        generate a tour
        count <- 0
        repeat
            identify a set T of 2-opt moves
            select best admissible move from T
            make appropriate 2-opt
            update tabu list and other vars
            if new tour is best-so-far for a given tries
                update local best tour information
            count <- count + 1
        until count == ITER
        tries <- tries + 1
        if best-so-far for given tries is best-so-far (for all 'tries')
            update global best information
    until tries == MAX-TRIES
end
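A compact Python sketch of this double loop for the symmetric TSP. The tenure of 3, the small ITER and MAX_TRIES defaults, the edge bookkeeping, and the fact that it minimises tour length (where the earlier pseudocode maximises a fitness f) are all illustrative assumptions:

import itertools
import random

def two_opt(tour, i, j):
    """2-opt move: reverse the segment tour[i..j]."""
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def tour_length(tour, dist):
    """Length of a closed tour; dist is a nested dict of pairwise distances."""
    return sum(dist[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

def tabu_search_tsp(cities, dist, ITER=50, MAX_TRIES=5, tenure=3):
    n = len(cities)
    global_best = None
    for _ in range(MAX_TRIES):                     # outer "tries" loop: random restarts
        tour = random.sample(cities, n)            # generate a tour
        local_best = list(tour)
        tabu = {}                                  # edge -> remaining tabu iterations
        for _ in range(ITER):                      # inner "count" loop
            # identify the set T of 2-opt moves whose removed edges are not tabu
            moves = [(i, j) for i, j in itertools.combinations(range(n), 2)
                     if not tabu.get(frozenset((tour[(i - 1) % n], tour[i])), 0)
                     and not tabu.get(frozenset((tour[j], tour[(j + 1) % n])), 0)]
            if not moves:
                break                              # everything tabu: give up this try
            # select the best admissible move and make the 2-opt
            best_move = min(moves, key=lambda m: tour_length(two_opt(tour, *m), dist))
            new_tour = two_opt(tour, *best_move)
            # update the tabu list: age old entries, then mark the two new edges
            for e in list(tabu):
                tabu[e] -= 1
                if tabu[e] == 0:
                    del tabu[e]
            i, j = best_move
            tabu[frozenset((new_tour[(i - 1) % n], new_tour[i]))] = tenure
            tabu[frozenset((new_tour[j], new_tour[(j + 1) % n]))] = tenure
            tour = new_tour
            if tour_length(tour, dist) < tour_length(local_best, dist):
                local_best = list(tour)            # update local best tour information
        if global_best is None or tour_length(local_best, dist) < tour_length(global_best, dist):
            global_best = local_best               # update global best information
    return global_best

if __name__ == "__main__":
    cities = list("ABCD")
    xy = {"A": (0, 0), "B": (0, 1), "C": (1, 1), "D": (1, 0)}
    dist = {a: {b: ((xy[a][0] - xy[b][0]) ** 2 + (xy[a][1] - xy[b][1]) ** 2) ** 0.5
                for b in cities} for a in cities}
    print(tabu_search_tsp(cities, dist))           # typically a shortest tour of the square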

Page 19: Escaping Local Optima

applying 2-opt with tabu

from the table, some edges of the current tour are tabu:
B - D - A - I - H - F - E - C - G - B

2-opt can only consider:
  A-I and F-E
  A-I and C-G
  F-E and C-G

      A  B  C  D  E  F  G  H
   I  0  0  0  0  0  0  0  2
   H  0  0  0  0  0  1  0
   G  0  1  0  0  0  0
   F  0  0  0  0  0
   E  0  0  3  0
   D  2  3  0
   C  0  0
   B  0
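This step checked in Python: collect the non-tabu edges of the current tour and list the pairs a 2-opt move may still remove; the tabu entries below are the non-zero cells of the table:

import itertools

def admissible_edge_pairs(tour, tabu):
    """Pairs of current-tour edges that a 2-opt move may remove (both non-tabu).
    tabu maps frozenset({city, city}) -> remaining tabu iterations."""
    n = len(tour)
    edges = [frozenset((tour[k], tour[(k + 1) % n])) for k in range(n)]
    free = [e for e in edges if tabu.get(e, 0) == 0]
    return list(itertools.combinations(free, 2))

if __name__ == "__main__":
    tour = list("BDAIHFECG")                          # B-D-A-I-H-F-E-C-G(-B)
    tabu = {frozenset(pair): t for pair, t in
            [("IH", 2), ("HF", 1), ("GB", 1), ("EC", 3), ("DA", 2), ("DB", 3)]}
    for e1, e2 in admissible_edge_pairs(tour, tabu):
        # prints the three pairs from the slide (endpoints shown alphabetically)
        print("".join(sorted(e1)), "and", "".join(sorted(e2)))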

Page 20: Escaping Local Optima

importance of parameters

once the algorithm is designed, it must be “tuned” to the problem:
  selecting the fitness function and neighbourhood definition
  setting values for parameters

this is usually done experimentally

Page 21: Escaping Local Optima

procedure tabu search
begin
    tries <- 0
    repeat
        generate a tour
        count <- 0
        repeat
            identify a set T of 2-opt moves
            select best admissible move from T
            make appropriate 2-opt
            update tabu list and other vars
            if new tour is best-so-far for a given tries
                update local best tour information
            count <- count + 1
        until count == ITER
        tries <- tries + 1
        if best-so-far for given tries is best-so-far (for all 'tries')
            update global best information
    until tries == MAX-TRIES
end