Local Search
Pat Riddle, 2012 Semester 2
Patricia J Riddle
Adapted from slides by Stuart Russell, http://aima.cs.berkeley.edu/instructors.html
Jan 05, 2016
Local search algorithms
In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution
State space = set of "complete" configurations
Find optimal configuration, e.g., TSP
Find configuration satisfying constraints, e.g., timetable, n-queens
In such cases, we can use local search algorithms (iterative improvement algorithms)
keep a single "current" state, try to improve it
Constant space, suitable for online as well as offline search
Example: n-queens
Put n queens on an n × n board with no two queens on the same row, column, or diagonal
Move a queen to reduce number of conflicts
Almost always solves n-queens problems almost instantaneously, even for very large n, e.g., n = 1 million
Hill-climbing search: 8-queens problem
h = number of pairs of queens that are attacking each other, either directly or indirectly
h = 17 for the above state
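The heuristic h can be computed directly. A minimal sketch in Python, assuming the common encoding where state[c] gives the row of the queen in column c (so column conflicts cannot occur):

```python
from itertools import combinations

def conflicts(state):
    """h: number of pairs of queens attacking each other.

    state[c] is the row of the queen in column c, so two queens can
    only attack along a shared row or a shared diagonal.
    """
    return sum(r1 == r2 or abs(r1 - r2) == abs(c1 - c2)
               for (c1, r1), (c2, r2) in combinations(enumerate(state), 2))
```

On a solved board h = 0; with all eight queens on one diagonal every pair attacks, giving C(8,2) = 28.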
Hill-climbing search: 8-queens problem
A local minimum with h = 1
Example: Travelling Salesperson Problem
Start with any complete tour, perform pairwise exchanges
Variants of this approach get within 1% of optimal very quickly with thousands of cities
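As an illustration (not necessarily the exact variant the slide measures), a pairwise-exchange improver can be sketched with 2-opt moves, which reverse a segment of the tour and keep the reversal whenever it shortens the total length; the distance matrix in the usage below is made up for the example:

```python
def tour_length(tour, dist):
    """Total length of a closed tour under distance matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeat pairwise edge exchanges (segment reversals) while they help."""
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                new = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(new, dist) < tour_length(tour, dist):
                    tour, improved = new, True
    return tour
```

Starting from any complete tour, this local improver converges to a tour no 2-opt move can shorten, which in practice is close to optimal.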
Hill-climbing search
"Like climbing Everest in thick fog with amnesia"
Hill-climbing search
Problem: depending on initial state, can get stuck in local maxima
Hill-climbing with 8-queens
• Gets stuck 86% of the time
• Solves only 14% of problem instances
– 4 steps on average when it succeeds
– 3 steps on average when it gets stuck
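The numbers above come from plain steepest-ascent hill climbing. A minimal sketch for n-queens (minimising h, so "uphill" means fewer conflicts; the one-queen-per-column encoding is an assumption):

```python
from itertools import combinations

def conflicts(state):
    # h: pairs of attacking queens (state[c] = row of the queen in column c)
    return sum(r1 == r2 or abs(r1 - r2) == abs(c1 - c2)
               for (c1, r1), (c2, r2) in combinations(enumerate(state), 2))

def hill_climb(state):
    """Steepest ascent: repeatedly jump to the best neighbour (move one
    queen within its column); stop when no neighbour improves h."""
    state, n = list(state), len(state)
    while True:
        best, best_h = state, conflicts(state)
        for col in range(n):
            for row in range(n):
                if row != state[col]:
                    nb = state[:col] + [row] + state[col + 1:]
                    if conflicts(nb) < best_h:
                        best, best_h = nb, conflicts(nb)
        if best is state:          # local minimum (if h > 0, we are stuck)
            return state, best_h
        state = best
```

Because it halts at the first state with no better neighbour, it returns h = 0 only when lucky, matching the 14% success rate above.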
Hill climbing fixes
Sideways Moves
Allow sideways moves to escape from shoulders (flat regions where the best successor is no better than the current state)
But they can loop forever on flat local maxima, so limit the number of consecutive sideways steps
Allowing 100 consecutive sideways moves in 8-queens:
Now solves 94% of problem instances (up from 14%)
Average 21 moves when it succeeds
Average 64 moves when it fails
Stochastic Hill Climbing
• Choose randomly among the uphill moves
• Or probability of selection based on steepness of move
• Converges more slowly, but can find better solutions
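One way to sketch the steepness-weighted variant; the particular weighting (proportional to the improvement in h) is an assumed design choice, and the n-queens encoding is the same one-queen-per-column assumption as before:

```python
import random
from itertools import combinations

def conflicts(state):
    return sum(r1 == r2 or abs(r1 - r2) == abs(c1 - c2)
               for (c1, r1), (c2, r2) in combinations(enumerate(state), 2))

def stochastic_step(state, rng=random):
    """One step of stochastic hill climbing: pick randomly among the
    uphill moves, biased towards steeper improvements."""
    h0 = conflicts(state)
    uphill = []
    for col in range(len(state)):
        for row in range(len(state)):
            if row != state[col]:
                nb = state[:col] + [row] + state[col + 1:]
                h = conflicts(nb)
                if h < h0:
                    uphill.append((nb, h0 - h))   # weight = steepness
    if not uphill:
        return None                               # local minimum
    moves, weights = zip(*uphill)
    return list(rng.choices(moves, weights=weights)[0])
```

Unlike steepest ascent, repeated calls from the same state can take different paths, which is what lets it sometimes find better solutions.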
First Choice Hill-climbing
• Implementation of stochastic hill-climbing
• Generates successors randomly until it finds one which is uphill
– Very good when a state has many successors
– Don't waste time generating them all
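A sketch of one first-choice step for n-queens; the try limit is an arbitrary assumption, as is the queen-per-column encoding:

```python
import random
from itertools import combinations

def conflicts(state):
    return sum(r1 == r2 or abs(r1 - r2) == abs(c1 - c2)
               for (c1, r1), (c2, r2) in combinations(enumerate(state), 2))

def first_choice_step(state, max_tries=200, rng=random):
    """Generate random successors one at a time; take the first uphill one,
    never enumerating the full neighbourhood."""
    h0 = conflicts(state)
    for _ in range(max_tries):
        col = rng.randrange(len(state))
        row = rng.randrange(len(state))
        nb = state[:col] + [row] + state[col + 1:]
        if conflicts(nb) < h0:
            return nb
    return None   # probably at a local minimum
```

Each step costs one random successor evaluation at a time instead of all n × (n − 1) neighbours, which is the point when states have many successors.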
Random-restart hill climbing
Runs a series of hill-climbing searches from randomly generated initial states
Overcomes local maxima
Trivially complete: with probability approaching 1, a restart will eventually generate a goal state as the initial state
8-queens with Random Restarts
p is the probability of success
Expected number of restarts is 1/p
8 queens, no sideways: p = 0.14, so ~7 restarts
8 queens, sideways: p = 0.94, so ~1.06 restarts
Expected number of steps = (1 × cost of success) + ((1 − p)/p) × cost of failure
8 queens, no sideways: (1 × 4) + ((0.86/0.14) × 3) ≈ 22 steps
8 queens, sideways: (1 × 21) + ((0.06/0.94) × 64) ≈ 25 steps
For 3 million queens, random restart finds solutions in under a minute
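Random restarts just wrap the climber in a loop. A sketch for n-queens; the inner climber here is the steepest-ascent version, an assumption since the slides do not fix one:

```python
import random
from itertools import combinations

def conflicts(state):
    return sum(r1 == r2 or abs(r1 - r2) == abs(c1 - c2)
               for (c1, r1), (c2, r2) in combinations(enumerate(state), 2))

def hill_climb(state):
    """Steepest-ascent hill climbing; returns (final state, final h)."""
    state, n = list(state), len(state)
    while True:
        best, best_h = state, conflicts(state)
        for col in range(n):
            for row in range(n):
                nb = state[:col] + [row] + state[col + 1:]
                if conflicts(nb) < best_h:
                    best, best_h = nb, conflicts(nb)
        if best is state:
            return state, best_h
        state = best

def random_restart(n=8, rng=None):
    """Restart from fresh random states until a conflict-free board appears."""
    rng = rng or random.Random()
    restarts = 0
    while True:
        state, h = hill_climb([rng.randrange(n) for _ in range(n)])
        if h == 0:
            return state, restarts
        restarts += 1
```

With p ≈ 0.14 per attempt, the loop needs about 7 restarts on average, matching the analysis above.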
Hill Climbing Summary
• The success of hill climbing depends very much on the shape of the state-space landscape.
Simulated annealing search
Idea: escape local maxima by allowing some "bad" moves but gradually decrease their size and frequency
Properties of simulated annealing search
One can prove: if T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1
Intuition: Shake hard enough to dislodge from local minima but not hard enough to dislodge from global minima
Widely used in VLSI layout, airline scheduling, etc
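A minimal sketch of the standard schedule-driven algorithm, in the minimisation framing that matches the shaking intuition: a worse move of size dE is accepted with probability exp(−dE / T). The geometric cooling schedule and the toy objective in the usage are assumptions, not from the slides:

```python
import math
import random

def simulated_annealing(state, energy, neighbour, t0=1.0, cooling=0.995,
                        t_min=1e-3, rng=None):
    """Accept every downhill move; accept an uphill move of size dE with
    probability exp(-dE / T), where T shrinks geometrically each step."""
    rng = rng or random.Random()
    t, best = t0, state
    while t > t_min:
        nxt = neighbour(state, rng)
        dE = energy(nxt) - energy(state)
        if dE < 0 or rng.random() < math.exp(-dE / t):
            state = nxt
            if energy(state) < energy(best):
                best = state
        t *= cooling
    return best

# Toy usage (assumed example): minimise x^2 over the integers,
# with neighbours one step left or right.
best = simulated_annealing(10, lambda x: x * x,
                           lambda x, rng: x + rng.choice([-1, 1]),
                           rng=random.Random(0))
```

Early on (large T) almost any move is accepted, shaking the state out of local minima; as T falls the search behaves more and more like pure hill climbing.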
Local beam search
Keep track of k states rather than just one, choose top k of all their successors
Start with k randomly generated states
At each iteration, all the successors of all k states are generated
If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
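The steps above can be sketched for n-queens as follows; the iteration cap and the queen-per-column encoding are assumptions:

```python
import random
from itertools import combinations

def conflicts(state):
    return sum(r1 == r2 or abs(r1 - r2) == abs(c1 - c2)
               for (c1, r1), (c2, r2) in combinations(enumerate(state), 2))

def local_beam_search(k, n=8, max_iters=50, rng=None):
    """Keep k states; each round, pool every successor of every state
    and retain the k best of the whole pool."""
    rng = rng or random.Random()
    states = [[rng.randrange(n) for _ in range(n)] for _ in range(k)]
    for _ in range(max_iters):
        if min(conflicts(s) for s in states) == 0:
            break                                  # goal found
        pool = [s[:col] + [row] + s[col + 1:]
                for s in states
                for col in range(n) for row in range(n) if row != s[col]]
        states = sorted(pool, key=conflicts)[:k]   # best k of all successors
    return min(states, key=conflicts)
```

Because the k survivors are drawn from one shared pool, a state with good successors can contribute several of them, which is the information-passing the next slide contrasts with independent restarts.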
Same as k random restarts? NO
In a local beam search, useful information is passed among the parallel search threads.
But the k states can quickly become concentrated in a small region of the state space
Stochastic Beam Search
Choose k successors randomly, biased towards good ones
Observe the close analogy to natural selection!
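The selection step can be sketched as fitness-weighted sampling; the particular weighting 1/(1 + h) is an assumed choice, any function that favours low h would do:

```python
import random

def sample_beam(successors, h, k, rng=None):
    """Stochastic beam search selection: keep k successors chosen at
    random, with better (lower-h) successors more likely to survive."""
    rng = rng or random.Random()
    weights = [1.0 / (1 + h(s)) for s in successors]
    return rng.choices(successors, weights=weights, k=k)
```

As in natural selection, fitter successors leave more "offspring" in the next beam, but weaker ones still survive occasionally, preserving diversity.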
Genetic algorithms 1
= stochastic local beam search + generate successors from pairs of states
Genetic algorithms 2
A successor state is generated by combining two parent states
Start with k randomly generated states (population)
A state is represented as a string over a finite alphabet (often a string of 0s and 1s)
Evaluation function (fitness function). Higher values for better states.
Produce the next generation of states by selection, crossover, and mutation
Genetic Algorithms 3
GAs frequently use states encoded as strings (GPs use programs - trees)
Crossover helps iff substrings are meaningful components
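A minimal GA sketch over bit strings, combining the three operators above: selection proportional to fitness, single-point crossover, and occasional one-bit mutation. The population size, rates, and the count-the-ones fitness in the test are assumptions for illustration:

```python
import random

def genetic_algorithm(population, fitness, mutation_rate=0.1,
                      generations=100, rng=None):
    """Evolve a population of equal-length strings over the alphabet {0, 1}."""
    rng = rng or random.Random()
    for _ in range(generations):
        # Small epsilon avoids a zero total weight if every fitness is 0
        weights = [fitness(ind) + 1e-9 for ind in population]
        nxt = []
        for _ in population:
            # Selection: parents chosen with probability proportional to fitness
            p1, p2 = rng.choices(population, weights=weights, k=2)
            # Crossover: child takes a prefix of p1 and the suffix of p2
            cut = rng.randrange(1, len(p1))
            child = p1[:cut] + p2[cut:]
            # Mutation: occasionally flip one bit
            if rng.random() < mutation_rate:
                i = rng.randrange(len(child))
                child = child[:i] + ('1' if child[i] == '0' else '0') + child[i + 1:]
            nxt.append(child)
        population = nxt
    return max(population, key=fitness)
```

Crossover pays off here exactly when prefixes and suffixes are meaningful building blocks of the solution, as the slide notes.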