Artificial Intelligence 2006, 28 Aug. 2006
Đặng Xuân Hà <dxha at hau1.edu.vn>

Trí tuệ nhân tạo (Artificial Intelligence)
Chapter 4. Informed search

Đặng Xuân Hà
DSE, FIT, HAU1
Office phone: 8276346; Ext.: 132
Office location: Room 317, Administration building
Email: dxha at hau1.edu.vn; dangxuanha at gmail.com
Website: http://www.hau1.edu.vn/it/dxha

Review: Tree search
A search strategy is defined by picking the order of node expansion.

Review: Uninformed tree search
Breadth-first search: shallowest node first
Depth-first search: deepest node first
Uniform-cost search: least-cost node first
Depth-limited search: depth-first search with a depth limit
Iterative deepening search: depth-limited search with an increasing limit
d = 24: IDS expands too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes
Relaxed problems
A problem with fewer restrictions on the actions is called a relaxed problem.
The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.
If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the length of the shortest solution.
If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the length of the shortest solution.
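The two heuristics can be sketched directly. A minimal sketch in Python, assuming a tuple-of-9 state encoding (tiles 1-8 plus 0 for the blank) and the goal layout below; both are illustrative choices, not from the slides:

```python
# Illustrative 8-puzzle heuristics; the state encoding and goal are assumptions.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def h1(state):
    """Misplaced tiles: exact cost if a tile could move anywhere."""
    return sum(1 for i, tile in enumerate(state)
               if tile != 0 and tile != GOAL[i])

def h2(state):
    """Manhattan distance: exact cost if a tile could move to any
    adjacent square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        g = GOAL.index(tile)
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total
```

Since each misplaced tile needs Manhattan distance at least 1, h2 dominates h1, which is why A*(h2) expands fewer nodes.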
Local search algorithms
In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution
State space = set of "complete" configurations.
Find a configuration satisfying constraints, e.g., n-queens.
In such cases, we can use local search algorithms: keep a single "current" state, try to improve it
Example: n-queens
Put n queens on an n × n board with no two queens on the same row, column, or diagonal
Move queens to reduce conflicts
Hill-climbing search
"Like climbing Everest in thick fog with amnesia"
Hill-climbing search
Problem: depending on the initial state, it can get stuck in local maxima
Hill-climbing search: 8-queens problem
h = number of pairs of queens that are attacking each other, either directly or indirectly
Successor function: move a single queen to another square in the same column
h = 17 for the state shown
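The heuristic on this slide is easy to compute. A sketch, assuming the common encoding where entry c of the state tuple is the row of the queen in column c (one queen per column, matching the successor function above):

```python
from itertools import combinations

def attacking_pairs(state):
    """h from the slide: number of queen pairs attacking each other.
    state[c] = row of the queen in column c (encoding assumed here)."""
    return sum(1 for (c1, r1), (c2, r2) in combinations(enumerate(state), 2)
               if r1 == r2 or abs(r1 - r2) == abs(c1 - c2))
```

A successor then differs from the current state in a single entry, i.e., one queen moved within its column.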
Drawbacks of hill climbing
Ridge: a sequence of local maxima that is difficult for greedy algorithms to navigate.
Plateau: an area of the state space where the evaluation function is flat.
On the 8-queens problem, hill climbing gets stuck 86% of the time.
Hill-climbing search: 8-queens problem
A local minimum with h = 1: no better successors
Hill-climbing variations
Stochastic hill climbing: random selection among the uphill moves; the selection probability can vary with the steepness of the uphill move.
First-choice hill climbing: like stochastic hill climbing, but generates successors randomly until one better than the current state is found.
Random-restart hill climbing: conducts a series of hill-climbing searches from random initial states, to avoid getting stuck in local maxima.
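Random-restart hill climbing can be sketched for n-queens. The steepest-descent inner loop, the conflict-count objective, and all helper names here are illustrative assumptions:

```python
import random
from itertools import combinations

def conflicts(state):
    # Number of attacking queen pairs (state[c] = row of queen in column c).
    return sum(1 for (c1, r1), (c2, r2) in combinations(enumerate(state), 2)
               if r1 == r2 or abs(r1 - r2) == abs(c1 - c2))

def hill_climb(state):
    """Steepest descent on the conflict count; stops at a local minimum."""
    while True:
        best, best_h = state, conflicts(state)
        for col in range(len(state)):
            for row in range(len(state)):
                if row != state[col]:
                    s = state[:col] + (row,) + state[col + 1:]
                    h = conflicts(s)
                    if h < best_h:
                        best, best_h = s, h
        if best == state:              # no better successor: local minimum
            return state
        state = best

def random_restart(n, seed=0):
    """Restart from fresh random states until a conflict-free board appears."""
    rng = random.Random(seed)
    while True:
        result = hill_climb(tuple(rng.randrange(n) for _ in range(n)))
        if conflicts(result) == 0:
            return result
```

Since each restart on 8-queens succeeds roughly 14% of the time (per the 86% figure above), only a handful of restarts are needed on average.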
Simulated annealing search
Physical analogy: shake the surface so the ball escapes local minima, then gradually decrease the shaking so it settles into the global minimum.
(Figure: surface with a local minimum and the global minimum)
Simulated annealing search
Idea: escape local maxima by allowing some "bad" moves but gradually decrease their frequency
Implementation:
Randomly select a move instead of always selecting the best move.
Accept a bad move with probability p < 1.
p decreases over time.
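The loop above can be sketched generically. The exponential cooling `schedule` and the toy integer-maximization example are assumptions for illustration; the acceptance rule exp(delta / T) is the standard one:

```python
import math
import random

def simulated_annealing(initial, neighbor, value, schedule, seed=0):
    """Maximize value(state); accept a worse move with probability
    exp(delta / T), which shrinks as the temperature T cools."""
    rng = random.Random(seed)
    current = initial
    t = 1
    while True:
        T = schedule(t)
        if T < 1e-9:                        # effectively frozen: stop
            return current
        nxt = neighbor(current, rng)
        delta = value(nxt) - value(current)
        if delta > 0 or rng.random() < math.exp(delta / T):
            current = nxt
        t += 1

# Toy example (an assumption): maximize -(x - 3)^2 over the integers.
best = simulated_annealing(
    initial=50,
    neighbor=lambda x, rng: x + rng.choice((-1, 1)),
    value=lambda x: -(x - 3) ** 2,
    schedule=lambda t: 10 * 0.95 ** t,      # exponential cooling
)
```

Early on, high T accepts almost any move; as T falls, only uphill moves survive, so the search ends as plain hill climbing near the optimum.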
Properties of simulated annealing search
One can prove: If T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1
Widely used in VLSI layout, airline scheduling, etc
Local beam search
Keep track of k states, rather than the single state of hill climbing
Start with k randomly generated states
At each iteration, all the successors of all k states are generated
If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
Comparison to random-restart hill climbing: information is shared among the k search threads. If one state generates a good successor but the others do not, the search effectively says "come here, the grass is greener!"
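The iteration described above can be sketched generically; the toy integer-walk example at the bottom is an assumption for illustration:

```python
import heapq
import random

def local_beam_search(k, initial_states, successors, value, is_goal,
                      max_iters=1000):
    """Successors of all k states compete in one shared pool;
    only the k best survive each iteration."""
    states = list(initial_states)
    for _ in range(max_iters):
        for s in states:
            if is_goal(s):
                return s
        pool = [c for s in states for c in successors(s)]
        if not pool:
            break
        states = heapq.nlargest(k, pool, key=value)
    return max(states, key=value)

# Toy example (an assumption): walk integers toward a target of 100.
rng = random.Random(0)
found = local_beam_search(
    k=3,
    initial_states=[rng.randrange(50) for _ in range(3)],
    successors=lambda s: [s - 1, s + 1],
    value=lambda s: -abs(s - 100),
    is_goal=lambda s: s == 100,
)
```

Note the contrast with k independent restarts: all successors go into one pool, so the k survivors cluster wherever progress is best.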
Genetic algorithms
Variant of local beam search with sexual recombination.
Genetic algorithms
A successor state is generated by combining two parent states
Start with k randomly generated states (population)
A state is represented as a string over a finite alphabet (often a string of 0s and 1s)
Evaluation function (fitness function): higher values for better states.
Produce the next generation of states by selection, crossover, and mutation
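The selection/crossover/mutation loop can be sketched for 8-queens. The population size, mutation rate, and digit-string encoding below are illustrative assumptions:

```python
import random
from itertools import combinations

N = 8
MAX_FIT = N * (N - 1) // 2       # 28 non-attacking pairs when solved

def fitness(state):
    """Number of non-attacking pairs (state[c] = row of queen in column c)."""
    attacks = sum(1 for (c1, r1), (c2, r2) in combinations(enumerate(state), 2)
                  if r1 == r2 or abs(r1 - r2) == abs(c1 - c2))
    return MAX_FIT - attacks

def genetic_algorithm(pop_size=100, p_mutate=0.1, max_gens=2000, seed=0):
    rng = random.Random(seed)
    pop = [tuple(rng.randrange(N) for _ in range(N)) for _ in range(pop_size)]
    for _ in range(max_gens):
        best = max(pop, key=fitness)
        if fitness(best) == MAX_FIT:
            return best              # solved: all 28 pairs non-attacking
        weights = [fitness(s) for s in pop]
        nxt = []
        for _ in range(pop_size):
            x, y = rng.choices(pop, weights=weights, k=2)  # selection
            cut = rng.randrange(1, N)                      # crossover point
            child = list(x[:cut] + y[cut:])
            if rng.random() < p_mutate:                    # mutation
                child[rng.randrange(N)] = rng.randrange(N)
            nxt.append(tuple(child))
        pop = nxt
    return max(pop, key=fitness)
```

Selection weight is proportional to fitness, matching the percentages on the next slide; crossover splices two parent strings at a random cut point.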
Genetic algorithms
Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28)
Selection probability is proportional to fitness: 24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, etc.
Genetic algorithms
References
Slides provided by Prof. Russell at http://aima.cs.berkeley.edu/