Lecture Notes on AI-NN Chapter 5 Information Processing & Utilization

Mar 26, 2015


Audrey Thomas
Page 1: Lecture Notes on AI-NN Chapter 5 Information Processing & Utilization.

Lecture Notes on AI-NN

Chapter 5

Information Processing & Utilization

Page 2

Categories of Information Processing

-- Problem-Solving

-- Game-Playing

-- Theorem-Proving

-- Logic-Deduction

Page 3

Section 5.1 Problem-Solving

§5-1 Introduction

Description of a problem:

Problem defining: the start and goal conditions;
Rule defining: a set of IF-THEN rules;
Strategy finding: controlling the application of the rules.

Page 4

Example: The Water Jug Problem

Initial Base: There are 2 jugs, a 4-kilo jug and a 3-kilo jug. Neither has any measurement marks on it.
Rule Base:
(1) There is a pump that can be used to fill a jug with water;
(2) You can pour water from a jug onto the ground or into the other jug.
Question: How can exactly 2 kilos of water be put into the 4-kilo jug?

Page 5

Representation and Solution:

A state is written as (kilos in the 4-kilo jug, kilos in the 3-kilo jug). One solution sequence is:

(0,0) --R1--> (0,3) --R2--> (3,0) --R1--> (3,3) --R2--> (4,2) --R2--> (0,2) --R2--> (2,0)

Page 6

It is clear that the Production System is a suitable means of representation for Problem-Solving.

Procedure PRODUCTION

1. DATA ← initial database
2. Until DATA satisfies the termination condition, do:
 i) begin
 ii) select some rule, R, in the set of rules that can be applied to DATA
 iii) DATA ← result of applying R to DATA
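The PRODUCTION loop can be sketched concretely for the water jug problem. This is a minimal sketch: the six successor rules below (fill, empty, pour) are an elaboration of the two rule families in the text, and FIFO rule application is just one possible control strategy.

```python
from collections import deque

# Production rules for the water jug problem.
# A state is (kilos in the 4-kilo jug, kilos in the 3-kilo jug).
def successors(state):
    x, y = state
    results = set()
    results.add((4, y))                             # R1: fill the 4-kilo jug
    results.add((x, 3))                             # R1: fill the 3-kilo jug
    results.add((0, y))                             # R2: empty the 4-kilo jug onto the ground
    results.add((x, 0))                             # R2: empty the 3-kilo jug onto the ground
    t = min(x, 3 - y); results.add((x - t, y + t))  # R2: pour the 4-kilo jug into the 3-kilo jug
    t = min(y, 4 - x); results.add((x + t, y - t))  # R2: pour the 3-kilo jug into the 4-kilo jug
    results.discard(state)                          # drop rule applications that change nothing
    return results

def production(start, terminated):
    """DATA <- initial database; apply rules until the termination condition holds."""
    open_list = deque([start])
    parent = {start: None}
    while open_list:
        data = open_list.popleft()
        if terminated(data):
            path = []                               # trace the rule applications back to start
            while data is not None:
                path.append(data)
                data = parent[data]
            return path[::-1]
        for nxt in successors(data):
            if nxt not in parent:
                parent[nxt] = data
                open_list.append(nxt)
    return None

path = production((0, 0), lambda s: s[0] == 2)
```

With FIFO selection this finds a shortest rule sequence; one such seven-state sequence is the solution shown on the previous page.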

Page 7

In most AI applications, the information available to the control strategy is usually not sufficient to permit selection of the most appropriate rule at every stage. The operation of an AI production system can thus be characterized as a SEARCH PROCESS in which rules are tried until some sequence of them is found that produces a database satisfying the termination condition.

Further, if the database of the problem to be solved is represented by means of a graph, the search is called GRAPH-SEARCH.

Page 8

Procedure GRAPH-SEARCH

1. Create a search graph, G, consisting solely of the start node, S. Put S on OPEN. (OPEN: a list of nodes just generated but not yet examined.)
2. Create a list, CLOSED, that is initially empty. (CLOSED: a list of nodes already examined.)
3. LOOP: if OPEN is empty, exit with failure.
4. Select the first node on OPEN and remove it from OPEN to CLOSED. Call it node n.
5. Examine n: if n is a goal node, exit with success. The solution is obtained by tracing a path along the pointers from n back to S in G.

Page 9

6. Expand n (apply a rule to n), generating the set, M, of its successors that are not ancestors of n. Install the members of M as successors of n.
7. Establish a pointer to n from each member of M that was not already on either OPEN or CLOSED, and add those members of M to OPEN. For each member of M that was already on OPEN or CLOSED, decide whether or not to redirect its pointer to n. For each member of M already on CLOSED, decide for each of its descendants in G whether or not to redirect its pointer.
8. Reorder the list OPEN according to a certain rule.
9. Go LOOP.

Page 10

[Figure: redirection during GRAPH-SEARCH; a graph with start node S and nodes 1 (= n) through 6, with nodes on CLOSED and nodes on OPEN distinguished, and the pointers that need to be redirected to n shown.]

Page 11

The crucial factor in the search process is the ordering regulation, which determines how the node to expand next is selected.

The search efficiency depends on how much problem information is used in node selection. According to the use of problem information in node selection, search strategies can be divided into:

a) Blind Search, and

b) Heuristic Search

Page 12

§5-1-1 Blind Search on Tree

1) Breadth-First Search. Node ordering: FIFO.

Procedure BFS
1. Put start node s on OPEN. Set pointer P=0.
2. If OPEN=NIL, exit with failure.
3. Select the first node on OPEN. Put it on CLOSED. Call it node n.
4. If n=g, exit with success. The solution is obtained by tracing a path along the pointers from g to s in G.
5. Expand n. If it has no successors, go to step 2. Otherwise, generate all its successors, add them successively to the end of OPEN, establish pointers from them to n, and go to step 2.
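Procedure BFS can be sketched in a few lines; the small directed graph below is invented purely for illustration.

```python
from collections import deque

# A tiny directed graph, invented for illustration; 's' is the start, 'g' the goal.
GRAPH = {'s': ['a', 'b'], 'a': ['c', 'd'], 'b': ['g'], 'c': [], 'd': [], 'g': []}

def bfs(s, g):
    open_list = deque([s])       # OPEN: generated but not yet examined (FIFO)
    closed = set()               # CLOSED: nodes already examined
    pointer = {s: None}          # pointers back toward s
    while open_list:             # step 2: OPEN = NIL means failure
        n = open_list.popleft()  # step 3: select the first node on OPEN
        closed.add(n)
        if n == g:               # step 4: success; trace pointers from g to s
            path = []
            while n is not None:
                path.append(n)
                n = pointer[n]
            return path[::-1]
        for succ in GRAPH[n]:    # step 5: add successors to the END of OPEN
            if succ not in pointer:
                pointer[succ] = n
                open_list.append(succ)
    return None
```

Because OPEN is served first-in, first-out, nodes are examined level by level, which is what guarantees a shortest path.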

Page 13

Example: BFS

[Figure: the breadth-first search tree for the 8-puzzle, start state (2 8 3 / 1 6 4 / 7 _ 5) at node 1 = S and goal state (1 2 3 / 8 _ 4 / 7 6 5) at node 46 = g; nodes are numbered in order of generation and the shortest solution path is marked. See Nilsson p.71, Fu p.37.]

Page 14

Comments on BFS:

It is guaranteed to find an optimal solution because of its systematic search feature.

The major weakness of BFS is its inability to use information related to the problem, and thus:
a) it requires a large memory to store the great number of nodes;
b) it requires a great amount of work to examine the great number of nodes;
c) as a result, BFS has low search efficiency.

Page 15

§5-1-2 Depth-First Search. Node ordering: LIFO.

Procedure DFS
1. Put start node s on OPEN. Set d(s)=0, P=0.
2. If OPEN=NIL, exit with failure.
3. Select the first node on OPEN. Put it on CLOSED. Call it node n.
4. If n=g, exit with success.
5. If d(n)=d_B (the depth bound), go to step 2.
6. If n is not expandable, go to step 2.
7. Expand node n, generating all its successors. Establish pointers from them to n. Set d(successor)=d(n)+1. Add them to the front of OPEN in any order, then go to step 2.
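Procedure DFS differs from the BFS sketch only in placing successors at the front of OPEN and in the depth bound d_B; the toy graph is again invented for illustration.

```python
# A tiny directed graph, invented for illustration; 's' is the start, 'g' the goal.
GRAPH = {'s': ['a', 'b'], 'a': ['c', 'd'], 'b': ['g'], 'c': [], 'd': [], 'g': []}

def dfs(s, g, d_bound):
    open_list = [(s, 0)]          # OPEN holds (node, depth); d(s) = 0
    pointer = {s: None}
    while open_list:              # step 2: OPEN = NIL means failure
        n, d = open_list.pop(0)   # step 3: select the first node on OPEN
        if n == g:                # step 4: success; trace pointers back to s
            path = []
            while n is not None:
                path.append(n)
                n = pointer[n]
            return path[::-1]
        if d == d_bound:          # step 5: depth bound reached, back up
            continue
        succs = [m for m in GRAPH[n] if m not in pointer]
        for m in succs:           # step 7: set pointers, d(successor) = d(n) + 1,
            pointer[m] = n        #         successors go to the FRONT of OPEN
        open_list = [(m, d + 1) for m in succs] + open_list
    return None
```

With d_bound = 5 the goal (at depth 2) is found; with d_bound = 1 it is missed, illustrating how a too-small bound loses the goal node.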

Page 16

Example: DFS, with depth bound d_B = 5

[Figure: the depth-first search tree for the same 8-puzzle, start state (2 8 3 / 1 6 4 / 7 _ 5); nodes are numbered 1 (= S) through 31 in order of generation, with backtracking at the depth bound, and the solution path is marked. See Nilsson p.70, Fu p.42.]

Page 17

Compared with BFS, DFS has the following features:
1) If d_B is too small, the goal node may be missed; if it is too large, a greater amount of storage is needed.
2) DFS may find a goal faster than BFS, but the solution path may not be the shortest one if there is more than one goal node.
3) DFS can often be carried out using a reasonably small amount of storage.

Page 18

§5-1-3 Informed (Heuristic) Search on Tree

(1) General Remarks
-- The weakness of blind search: it ignores the information associated with the problem when selecting the node to expand next.
-- Solution: try to use the heuristic information in node ordering on OPEN -- Heuristic Search.
-- The heuristic information is used in terms of an Evaluation Function, f(·), mapping each node n to a real number f(n) that measures how promising n is.

Page 19

For any node n on a tree, let
g*(n) be the actual cost of a minimal-cost path from s to n;
h*(n) be the cost of a minimal-cost path from n to g;
f*(n) = g*(n) + h*(n) be the cost of an optimal path from s to g constrained to go through n.

Let again
g be an estimate of g*, h be an estimate of h*, and
f(n) = g(n) + h(n) be an estimate of f*(n), which can be used as an evaluation function for ordering nodes on OPEN.

Page 20

Practically,
g(n): the sum of the arc costs encountered while tracing the pointers from n to s;
h(n): a function based on heuristic information from the problem, hence called the Heuristic Function.

The practical regulation is

If h(n) is very high, node n may be ignored; If h(n) is low, node n may be chosen to expand next.

Page 21

(2) Algorithm A and Algorithm A* on Trees

Algorithm A is a special tree search that uses the evaluation function f(n) to order the nodes on OPEN and always selects for expansion the node with the lowest value of f(n).

The key for Algorithm A is the setting of h and g:
when h=0, g=d: it reduces to BFS;
when h=0, g=0: random search;
when h=1/d, g=0: DFS;
when h>h*: the optimal path may be lost;
when h≤h*: some search may be redundant, but the optimal path can be found.

Page 22

Algorithm A with h(n) ≤ h*(n) for all n is Algorithm A*.

Thus BFS is an Algorithm A*, and A* can always find a minimal-length path to a goal.

Informedness of Algorithm A*:

A1*: f1(n) = g1(n) + h1(n); A2*: f2(n) = g2(n) + h2(n), with h1(n) ≤ h*(n) and h2(n) ≤ h*(n).
A2* is more informed than A1* iff h*(n) ≥ h2(n) > h1(n) for all non-goal nodes n.

Page 23

Example of A*: the 8-Puzzle Problem
Let f(n) = g(n) + h(n) = d(n) + w(n), where
d(n): the depth of node n in the search tree;
w(n): the number of misplaced digits at node n.

[Figure: the A* search tree for the 8-puzzle with start state (2 8 3 / 1 6 4 / 7 _ 5) and goal (1 2 3 / 8 _ 4 / 7 6 5); each node is annotated with f = d + w, from 0+4=4 at the start node to 5+0=5 at the goal, and 13 of the 27 nodes generated are expanded.]
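The run above can be reproduced with a short A* sketch using f(n) = d(n) + w(n). The state encoding and the tie-breaking counter below are implementation choices, not part of the original procedure.

```python
import heapq
import itertools

GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)    # row-major board, 0 marks the blank
START = (2, 8, 3, 1, 6, 4, 7, 0, 5)

def w(state):
    """h(n) = number of misplaced digits (the blank is not counted)."""
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def successors(state):
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            lst = list(state)
            lst[i], lst[j] = lst[j], lst[i]   # slide a digit into the blank
            yield tuple(lst)

def a_star(start):
    tie = itertools.count()                  # tie-breaker so the heap never compares states
    open_list = [(w(start), next(tie), 0, start)]
    parent = {start: None}
    best_d = {start: 0}
    while open_list:
        f, _, d, state = heapq.heappop(open_list)   # lowest f(n) = d(n) + w(n) first
        if state == GOAL:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in best_d or d + 1 < best_d[nxt]:
                best_d[nxt] = d + 1
                parent[nxt] = state
                heapq.heappush(open_list, (d + 1 + w(nxt), next(tie), d + 1, nxt))
    return None

path = a_star(START)
```

Since w never overestimates, the first time the goal is popped the five-move optimal path has been found.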

Page 24

Algorithm A* with h(n) = w(n) is more informed than BFS, which uses h(n) = 0.

h(n) = w(n) is a lower bound on the exact number of steps needed to reach the goal, h*(n); hence this is an Algorithm A*.

However, w(n) does not provide a good enough estimate of h*(n). The information about the "following order" of the digits is not utilized in w(n).

Page 25

A better estimate of h*(n) is h(n) = P(n) + 3S(n), where
P(n): the sum of the absolute distances that each digit is from "home";
S(n): a sequence score obtained by counting 2 for each non-central digit that is not properly followed, 1 if there is a digit in the center, and 0 for the other digits.

E.g., for s = (2 1 6 / 4 _ 8 / 7 5 3) and g = (1 2 3 / 8 _ 4 / 7 6 5), we have
P(s) = (3×1) + (3×2) + (1×3) + (1×0) = 12 (digits 1, 2, 5 at distance 1; digits 3, 4, 8 at distance 2; digit 6 at distance 3; digit 7 at distance 0)
S(s) = 8×2 = 16.
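These two quantities can be checked with a short script. Reading "not properly followed" as "the next perimeter cell, clockwise, does not hold the successor digit" is the interpretation assumed here; the blank is written as 0.

```python
# s and g from the example; boards are row-major tuples, 0 is the blank.
S_STATE = (2, 1, 6,
           4, 0, 8,
           7, 5, 3)
GOAL = (1, 2, 3,
        8, 0, 4,
        7, 6, 5)
PERIM = (0, 1, 2, 5, 8, 7, 6, 3)    # perimeter cells in clockwise order

def P(state):
    """Sum of the absolute (city-block) distances of each digit from home."""
    total = 0
    for digit in range(1, 9):
        r1, c1 = divmod(state.index(digit), 3)
        r2, c2 = divmod(GOAL.index(digit), 3)
        total += abs(r1 - r2) + abs(c1 - c2)
    return total

def S(state):
    """Sequence score: 2 per improperly-followed perimeter digit,
    1 for a digit in the center, 0 otherwise."""
    score = 0
    for k, cell in enumerate(PERIM):
        d = state[cell]
        if d == 0:
            continue
        following = state[PERIM[(k + 1) % 8]]
        if following != d % 8 + 1:   # the proper successor of 8 wraps to 1
            score += 2
    if state[4] != 0:
        score += 1
    return score
```

For the example state every perimeter digit is improperly followed, giving S = 16, while the goal itself scores P = S = 0.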

Page 26

By using this h(n), we have f(n) = g(n) + P(n) + 3S(n) with g(n) = d(n). On the above problem this finds the same optimal path, but with fewer nodes expanded:

[Figure: the A* search tree for the 8-puzzle using f(n) = d(n) + P(n) + 3S(n); the same solution path is found, with 11 of the 13 nodes generated expanded.]

Page 27

Since 0 ≤ w(n) ≤ h*(n) while P(n) + 3S(n) may exceed h*(n), we were not guaranteed to find an optimal path; the solution path found here just happens to be minimal.

Summary: from the example above we can see that the key in heuristic search is to determine the form of the estimation function f(n) by utilizing heuristic information.

As is seen, the crucial difference between blind search and heuristic search is the ordering regulation: in heuristic search, the node with the smallest value of the evaluation function is chosen for expansion first.

Page 28

(3) Algorithm A* for Graph Search

1. Put s on OPEN. Set g(s)=0, f(s)=h(s), P=0, CLOSED=NIL.
2. If OPEN=NIL, exit with failure.
3. Take the first node on OPEN, call it Best-Node (BN), and put BN on CLOSED.
4. If BN=g, exit with success.
5. If BN is not expandable, go to step 2.
6. Expand BN, generating its successors (SUCs), and for each SUC do:
 (1) Set the pointer: SUC → BN.
 (2) Compute g(SUC) = g(BN) + c(BN, SUC).
 (3) If SUC is an old node (OLD) on OPEN, add OLD to the list of BN's successors.

Page 29

 If g(SUC) < g(OLD), reset OLD's parent link to point to BN and record g(SUC) in place of g(OLD). If g(SUC) ≥ g(OLD), do nothing.
 (4) If SUC is an old node (OLD) on CLOSED, add OLD to the list of BN's successors and do the same as in step 6(3), setting the parent link and g and f values appropriately. However, if g(SUC) < g(OLD), the improvement must be propagated to OLD's successors.
7. Go to step 2.

Page 30

(4) Heuristic Power

The total cost of a heuristic search consists of two parts:
(a) path cost = (path length) × (unit length cost);
(b) search cost, spent searching for the solution path.

[Figure: path cost (a) and search cost (b) plotted against informedness.]

Page 31

(5) Measures of Heuristic Performance

(a) Penetrance, P, of a Search

P is the extent to which the search has focused toward a goal rather than wandered off:

P = L / T, where
L: the length of the path found to the goal;
T: the total number of nodes generated during the search (including the goal node but not the start node).

Hence P can be considered a measure of search efficiency.

Page 32

(b) Effective Branching Factor, B, of a Search

B is defined by the equation
B + B^2 + … + B^L = T (the total number of nodes)

Hence
T = B(B^L - 1) / (B - 1)
and
P = L / T = L(B - 1) / (B(B^L - 1))

where the following assumptions are made:
(1) the depth of the search tree = the length of the path, L;
(2) T = the number of nodes generated during the search;
(3) B is a constant for all nodes in the tree.
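Given measured L and T, B can be recovered numerically, since the left-hand side of the defining equation is increasing in B. A bisection sketch (the function names are invented):

```python
def effective_branching_factor(L, T, tol=1e-9):
    """Solve B + B**2 + ... + B**L = T for B by bisection (a sketch)."""
    def total(B):
        return sum(B ** i for i in range(1, L + 1))
    lo, hi = 1.0 + 1e-12, float(T)     # total(B) is increasing, so bisect
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < T:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def penetrance(L, B):
    """P = L(B - 1) / (B(B**L - 1)), from the formulas above."""
    return L * (B - 1) / (B * (B ** L - 1))
```

For example, with L = 2 and T = 6 the solver returns B = 2 (since 2 + 4 = 6), and the penetrance formula then agrees with L/T = 1/3.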

Page 33

Home-Works
1. Propose two h functions for the traveling salesman problem. Is either of these h functions a lower bound on h*? Which of them would result in more efficient search? Apply Algorithm A with these h functions to the TSP.

2. Use the evaluation function f(n)=d(n)+w(n) with algorithm A to search backward from the goal to the start node in the 8-puzzle problem.

3. Discuss ways in which an h function might be improved during a search.

Page 34

§ 5-2 Game-Playing: AND/OR Graph Search

I. Game-Playing and AO Graph

Two-person, zero-sum, perfect-information games.

Example: Grundy's Game. Two players, Max and Min, have a pile of pennies. The first player, Min, divides the original pile into two piles that must be unequal. Each player thereafter alternately does the same to some single pile when it is his turn to play. The game proceeds until every pile has either just one penny or two. The player who first cannot play is the loser.

Page 35

[Figure: the game graph for Grundy's game, drawn level by level; the winning path for Max is marked. This is an AND/OR graph.]

Level 0, Min to move: 7
Level 1, Max to move: 6,1  5,2  4,3
Level 2, Min to move: 5,1,1  4,2,1  3,2,2  3,3,1
Level 3, Max to move: 4,1,1,1  3,2,1,1  2,2,2,1
Level 4, Min to move: 3,1,1,1,1  2,2,1,1,1
Level 5, Max to move: 2,1,1,1,1,1

Page 36

From Max's point of view, a win must be obtainable from all of Min's successors and from at least one of Max's successors. It is an AND/OR graph.

In AND/OR graphs, there are hyper-arcs connecting a parent node with a set of its successors.

The hyper-arcs are called connectors.

Each k-connector is directed from a parent node to a set of k successor nodes.

Page 37

II. Features of AND/OR Graph Search

The choice of the node to expand next must depend not only on the f value of that node itself, but also on whether that node is part of the current best path from the initial node. E.g.,

[Figure: a node A with successors B, C, D (h = 5, 3, 4); after expanding the successors (tip nodes E through J with values 5, 10, 3, 4, 3, 4), the backed-up values show that the node with the lowest f value need not lie on the current best path.]

Page 38

Thus, to search an AND/OR graph, three things need to be done at each step:
1) Traverse the graph, starting at the initial node and following the current best path, and accumulate the set of nodes on that path that have not yet been expanded.
2) Pick one of these nodes and expand it. Add its successors to the graph and compute f for each of them.
3) Change the f value of the newly expanded node to reflect the new information provided by its successors, and propagate this change backward through the graph. At each node visited while going up the graph, decide which of its successor arcs is the most promising and mark it as part of the current best path.

Page 39

[Figure: successive snapshots of an AND/OR tree rooted at A with successors B, C, D; as tip nodes (E, F, G, H) are expanded and their f values backed up, the marked connectors change.] This may cause the current best path to change.

Page 40

Thus, an important feature of AND/OR graph search is that one must search for a solution graph each time from the start node to the terminal nodes, and must frequently check whether the start node is solvable.

The definition of a solvable node in an AND/OR graph:
1) A terminal node is solvable;
2) An OR node is solvable iff at least one of its successors is solvable;
3) An AND node is solvable iff all of its successors are solvable.
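The three-clause definition translates directly into a recursive check. The node-labeling scheme and the example graph below are invented for illustration, and an acyclic graph is assumed so that the recursion terminates.

```python
def solvable(node, graph):
    """Apply the definition: terminal nodes are solvable, OR nodes need one
    solvable successor, AND nodes need all successors solvable."""
    kind, succs = graph[node]
    if kind == 'terminal':
        return True
    if kind == 'or':
        return any(solvable(s, graph) for s in succs)
    return all(solvable(s, graph) for s in succs)    # 'and' node

# An invented AND/OR graph: each entry is (node kind, successors).
GRAPH = {
    'root': ('or',  ['a', 'b']),
    'a':    ('and', ['t1', 'u']),   # unsolvable: u is a dead end
    'b':    ('and', ['t1', 't2']),  # solvable: both successors are terminal
    'u':    ('or',  []),            # an OR node with no successors is unsolvable
    't1':   ('terminal', []),
    't2':   ('terminal', []),
}
```

Here the root is solvable through connector b even though connector a fails, which is exactly the OR/AND asymmetry in the definition.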

Page 41

III. Procedure AO*
(1) Create a search graph, G, consisting solely of the start node, s. Compute h(s). If s is a terminal node, h(s)=0; label s SOLVED.
(2) Until s is labeled SOLVED, do:
(3) Begin
(4) Compute a partial solution graph, G', in G by tracing down the marked connectors in G from s.
(5) Select any non-terminal tip node, n, of G'.
(6) Expand n, generating all its successors, and install these in G as successors of n. For each successor, n_j, not already occurring in G, compute h(n_j). Label SOLVED any of these successors that are terminal nodes.

Page 42

(7) Create a singleton set of nodes, S, consisting just of node n.
(8) Until S is empty, do:
(9) Begin
(10) Remove from S a node m such that m has no descendants in G occurring in S.
(11) Revise the cost h(m) for m as follows: for each connector i directed from m to a set of nodes {n_1i, …, n_ki}, compute h_i(m) = c_i + h(n_1i) + … + h(n_ki). Set h(m) to the minimum over all outgoing connectors of h_i(m), and mark the connector through which this minimum is achieved, erasing the previous marking if different.

Page 43

If all of the successors through this connector are labeled SOLVED, then label node m SOLVED.
(12) If m has been labeled SOLVED, or if the revised cost of m is different from its just-previous cost, then add to S all those parents of m such that m is one of their successors through a marked connector.
(13) End
(14) End

Page 44

IV. Searching the Game Tree: the MinMax Procedure

1. Localized Solution

It is usually impossible to decide on a best move by searching an entire tree, due to combinatorial explosion. Instead, we must merely try to find a good first move based on a local search that is bounded by artificial termination conditions.

After the search is artificially terminated, the estimate of the "best" first move can be made by applying a static evaluation function to the tips of the search tree.

Page 45

2. Some Conventions

Two-person, zero-sum, complete-information games:
(a) 2 players: Max and Min.
(b) Try to find a winning strategy for Max.
(c) Max moves first, and the players alternate thereafter.
(d) The top node of a game tree is of depth 0.
(e) Nodes at even-numbered depths are called Max nodes; at a Max node it is Max's move next.
(f) The artificial termination condition is a certain depth of search given in advance.
(g) Game positions favorable to Max cause the evaluation function to have positive values, while positions favorable to Min cause f to have negative values.

Page 46

Rules:
(a) If Max were to choose among tip nodes, he would choose the node having the largest evaluation. Thus a Max node is assigned a backed-up value equal to the maximum of the evaluations of its tip nodes.
(b) If Min were to choose among tip nodes, he would choose the node having the smallest evaluation. Thus a Min node is assigned a backed-up value equal to the minimum of the evaluations of its tip nodes.
(c) After the parents of all tip nodes have been assigned backed-up values, we back up values another level.
(d) Continue to back up values, level by level, until all successors of the start node are assigned backed-up values.
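These back-up rules can be sketched as a recursive MinMax over a game tree written as nested lists; the tree and its tip values below are invented for illustration.

```python
def minimax(node, is_max):
    """Back up values: a tip returns its static evaluation, a Max node the
    maximum of its successors' values, a Min node the minimum."""
    if not isinstance(node, list):
        return node                     # tip node: static evaluation
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# Max to move at the root; the three sublists are Min nodes.
tree = [[3, 5], [2, 9], [0, 1]]
```

With Max at the root the Min nodes back up 3, 2, and 0, so the root's backed-up value is 3 and the best first move is the leftmost one.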

Page 47

Example: the Tic-Tac-Toe Game

The player who first places his pieces in a straight line in the matrix is the winner.

Suppose that Max plays ×, Min plays ○, and Max plays first. A BFS is conducted with some revisions:
-- Artificial termination condition: depth bound = 2;
-- A static evaluation function for position p is defined:
 e(p) = N(×) - N(○), if p is not a winning position;
 e(p) = +∞, if p is a winning position for Max;
 e(p) = -∞, if p is a winning position for Min,
where N(m) is the number of complete lines still open for mark m.

Page 48

The process is assumed to be as shown below:

[Figure: the first stage of the search; Max and Min nodes are distinguished, tip positions carry static values such as 6-5, 5-5, and 4-5, the depth-1 backed-up values are -1, -2, and 1, and the best move is marked.]

Page 49

[Figure: a later stage of the search; tip values such as 4-3, 3-3, and 4-2 back up to 0 and 1, and a best move for Max and another, equally good, best move are marked.]

Page 50

[Figure: the final stage of the search; several tip positions are wins for Min and evaluate to -∞, among them the successors generated below nodes A, B, C, and D, and the best move is marked.]

Page 51

4. The α-β Procedure

The MinMax procedure needs to be improved: it completely separates tree generation from position evaluation. Only after tree generation is completed does position evaluation begin. This may result in a grossly inefficient strategy. See the last figure: if a tip node is evaluated as soon as it is generated, then after node A is generated and evaluated there is no need to generate and evaluate nodes B, C, D; Min will choose A immediately. We can assign A's parent the backed-up value -∞ and proceed with the search without generating B, C, D and their successors.

Page 52

Another Possible Saving

Consider the first step. Suppose that
-- DFS is employed, with artificial stopping condition d_B = 2;
-- whenever a tip node is generated, its evaluation is computed;
-- whenever a position can be given a backed-up value, this value is computed.

[Figure: the start node (Max) with successors A and B; the tip values below A (6-5, 5-5, 6-5, 5-5, 4-5) give the start node a lower bound of -1, and B's first successor (4-6) gives B an upper bound of -2.]

Page 53

Consider the situation after node A and all its successors have been generated, but before node B is generated. The backed-up value of the start node is bounded from below by -1; this lower bound is an α value for the start node.

Next, B and its first successor are generated. The backed-up value of node B is bounded from above by -2; this upper bound is a β value. Because β < α, we can discontinue the search below B.

Page 54

It is obvious that:
(a) the α values of Max nodes can never decrease, and
(b) the β values of Min nodes can never increase.

Thus we have the rules:
(1) Search can be discontinued below any Min node having a β value not greater than the α value of any of its Max node ancestors. The final backed-up value of this Min node can be set to its β value.
(2) Search can be discontinued below any Max node having an α value not smaller than the β value of any of its Min node ancestors. The final backed-up value of this Max node can be set to its α value.

Page 55

The values of α and β are computed during the search:
(1) The α value of a Max node is set equal to the current largest final backed-up value of its successors.
(2) The β value of a Min node is set equal to the current smallest final backed-up value of its successors.

When search is discontinued under rule (1), we call it an α-cutoff; under rule (2), a β-cutoff.

The procedure terminates when all successors of the start node have been given final backed-up values, and the best first move is then the one creating that successor having the highest backed-up value.
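The α-β procedure itself can be sketched recursively. This "fail-hard" formulation, in which a node simply returns its α or β value when a cutoff occurs, is one common way to realize the rules above; the test tree is invented.

```python
def alphabeta(node, alpha, beta, is_max):
    """Depth-first back-up with alpha (lower bound) and beta (upper bound)."""
    if not isinstance(node, list):
        return node                     # tip node: static evaluation
    if is_max:
        for child in node:
            alpha = max(alpha, alphabeta(child, alpha, beta, False))
            if alpha >= beta:           # beta-cutoff below this Max node
                break
        return alpha
    for child in node:
        beta = min(beta, alphabeta(child, alpha, beta, True))
        if beta <= alpha:               # alpha-cutoff below this Min node
            break
    return beta

tree = [[3, 5], [2, 9], [0, 1]]
value = alphabeta(tree, float('-inf'), float('inf'), True)
```

On this tree the tips 9 and 1 are never examined: once the first Min node backs up 3, the values 2 and 0 in its siblings trigger α-cutoffs, yet the root value agrees with plain MinMax.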

Page 56

Employing the α-β procedure always results in finding a move that is equally as good as the move that would have been found by the simple MinMax method searching to the same depth.

The only difference is that the α-β procedure usually finds a best first move with much less search.

Efficiency of the α-β Procedure

In order to perform the α-β procedure, at least some part of the search tree must be generated to maximum depth, because α and β values must be based on the static values of tip nodes. Hence, DFS is usually used.

Page 57

The final backed-up value of the start node is identical to the static value of one of the tip nodes. If this tip node were reached first in a DFS, the number of cutoffs would be maximal.

Suppose that a tree has depth D and that every node (except the tip nodes) has exactly B successors. Such a tree has precisely B^D tip nodes. Suppose again that an α-β procedure ideally generates successors in the order of their true backed-up values: the lowest-valued successor first for Min nodes and the largest-valued successor first for Max nodes. Then the number of tip nodes, N, needed for the ideal cutoffs can be given as

Page 58

N = 2B^(D/2) - 1, for even D;
N = B^((D+1)/2) + B^((D-1)/2) - 1, for odd D,

which is about the same as the number of tip nodes that would have been generated at depth D/2 without cutoffs.
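The formula can be wrapped in a small helper and compared against the B^D tips of the full tree (a sketch; the function names are invented):

```python
def ideal_tip_nodes(B, D):
    """Tip nodes examined under ideally ordered alpha-beta search."""
    if D % 2 == 0:
        return 2 * B ** (D // 2) - 1
    return B ** ((D + 1) // 2) + B ** ((D - 1) // 2) - 1

def total_tip_nodes(B, D):
    """Tip nodes of the full uniform tree, examined without cutoffs."""
    return B ** D
```

For B = 3 and D = 4, the ideal procedure examines 17 tip nodes instead of 81, on the order of the 3^2 = 9 tips of a tree of depth D/2.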

Page 59

Exercises

1. The game "Nim" is played as follows: two players alternate in removing one, two, or three pennies from a stack initially containing five pennies. The player who picks up the last penny loses. Show, by drawing the game graph, that the player who has the second move can always win. Can you think of a simple characterization of the winning strategy?
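The claim can be checked by brute force. The recursion below simply encodes "whoever takes the last penny loses"; the closing comment states the kind of simple characterization the exercise asks for.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def mover_wins(n):
    """True iff the player to move on a stack of n pennies can force a win."""
    if n == 1:
        return False          # forced to take the last penny and lose
    # Taking all n pennies (when n <= 3) loses outright, so only moves
    # leaving at least one penny can be winning moves.
    return any(not mover_wins(n - k) for k in (1, 2, 3) if k < n)

# mover_wins(5) is False: the first player loses, so the second player can
# always win. More generally, the mover loses exactly when n % 4 == 1.
```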