

Epsilon-Transformation: Exploiting Phase Transitions to Solve Combinatorial Optimization Problems - Initial Results*

Weixiong Zhang and Joseph C. Pemberton

Computer Science Department

University of California, Los Angeles
Los Angeles, CA 90024

Email: {zhang, pemberto}@cs.ucla.edu

Abstract

It has been shown that there exists a transition in the average-case complexity of searching a random tree, from exponential to polynomial in the search depth. We develop a state-space transformation method, called ε-transformation, that makes use of this complexity transition to find a suboptimal solution. The expected number of random tree nodes expanded by branch-and-bound (BnB) using ε-transformation is cubic in the search depth, and the relative error of the solution cost compared to the optimal solution cost is bounded by a small constant. We also present an iterative version of ε-transformation that can be used to find both optimal and suboptimal solutions. Depth-first BnB (DFBnB) using iterative ε-transformation significantly improves upon truncated DFBnB on random trees with large branching factors and deep goal nodes, finding better solutions sooner on average. On the asymmetric traveling salesman problem, DFBnB using ε-transformation outperforms a well-known local search method, and DFBnB using iterative ε-transformation is superior to truncated DFBnB.

Introduction

It has been observed that phase transitions exist in many intelligent systems (Huberman & Hogg 1987) and combinatorial problems (Cheeseman, Kanefsky, & Taylor 1991; Karp & Pearl 1983; McDiarmid 1990; McDiarmid & Provan 1991; Mitchell, Selman, & Levesque 1992; Zhang & Korf 1992; 1993; 1994). A phase transition is a dramatic change in some problem property as some order parameter changes across a critical point. For example, water changes from a liquid to a solid when the temperature drops below the freezing point.

The earliest evidence of computational phase transitions was the phase transition of a tree-search problem (Karp & Pearl 1983), which has recently been studied in detail (McDiarmid 1990; McDiarmid & Provan 1991; Zhang & Korf 1992; 1993; 1994). The problem is to find an optimal goal node of the following random tree.

*This research was supported by NSF Grant No. IRI-9119825, a grant from Rockwell International, a GTE graduate fellowship (1992-93), and a UCLA Chancellor's Dissertation Year Fellowship (1993-94).

Definition 1 (McDiarmid & Provan 1991) A random tree T(b, d) is a tree with depth d, and independent and identically distributed (i.i.d.) random branching factors with mean b. Nonnegative edge costs are bounded i.i.d. random variables. The cost of a node is the sum of the edge costs on the path from the root to that node. An optimal goal node is a minimum-cost node at depth d.
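As a concrete illustration, the random tree of Definition 1 can be simulated directly. The sketch below is our own (not code from the paper); for simplicity it fixes the branching factor at exactly b and draws integer edge costs that are zero with probability p0, then enumerates every depth-d node and reports the optimal goal cost C*:

```python
import random

def goal_costs(b, d, p0, max_cost, rng):
    """Enumerate the depth-d (goal) node costs of one random tree T(b, d).
    Each edge cost is 0 with probability p0, otherwise uniform on
    {1, ..., max_cost}; a node's cost is the sum of the edge costs on
    the path from the root, as in Definition 1."""
    costs = [0]                      # the root has cost 0
    for _ in range(d):               # give every frontier node b children
        costs = [c + (0 if rng.random() < p0 else rng.randint(1, max_cost))
                 for c in costs for _ in range(b)]
    return costs

rng = random.Random(0)
goals = goal_costs(b=2, d=6, p0=0.3, max_cost=10, rng=rng)
C_star = min(goals)                  # optimal goal cost of this tree
```

Note that this brute-force enumeration visits all b^d goal nodes, which is exactly the exponential blow-up that BnB pruning (and, later, ε-transformation) is meant to avoid; it is only for building intuition on tiny trees.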

Best-first search (BFS) and depth-first branch-and- bound (DFBnB) can be used to search these random trees. Both are special cases of the general branch-and- bound (BnB) technique. See (Pearl 1984; Korf 1989; Kumar 1992) for the details of these algorithms.

It turns out that the cost of an optimal goal node of T(b, d), and the expected complexity of BFS and DFBnB on T(b, d), experience phase transitions. The order parameter that determines these transitions is the expected number of children of a node whose cost is the same as that of their parent, which are called same-cost children. This is the same as the expected number of zero-cost edges emanating from a node. If p₀ is the probability that an edge has cost zero, then bp₀ is the expected number of same-cost children of a node. When bp₀ increases from less than one to greater than one, the expected cost of the optimal goal node of T(b, d) changes from a linear function of d to a constant, and the expected time complexity of BFS and DFBnB decreases from exponential in d to at most cubic in d. These phase transitions are summarized by the following lemma, and illustrated by Figure 1.

Lemma 1 (McDiarmid 1990; McDiarmid & Provan 1991; Zhang & Korf 1993; 1994) Let C* be the optimal goal cost of T(b, d) with b > 1, and let NB and ND be the expected numbers of nodes expanded by BFS and DFBnB on T(b, d), respectively. As d → ∞, (1) when bp₀ < 1, C*/d → α almost surely¹, where α is a constant, and NB = ND = Θ(βᵈ) for a constant β > 1; (2) when bp₀ = 1, C*/(log log d) → 1 almost surely, NB = O(d²), and ND = O(d³); and (3) when bp₀ > 1, C* is almost surely bounded, NB = Θ(d), and

¹A sequence of random variables Xₙ is said to converge almost surely (with probability one) to X if P(lim_{n→∞} Xₙ = X) = 1 (Rényi 1970).

Neural Networks 895

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.


ND = O(d²). □

Figure 1: Phase transitions of tree search problems. (The figure plots the order parameter bp₀: below the transition boundary, the optimal goal cost is linear in d and the complexity is exponential in d; above it, the optimal goal cost is bounded by a constant and the complexity is polynomial in d.)

Many practical search problems, such as planning and scheduling, require computation exponential in the search depth, even in the average case. However, we usually do not need optimal solutions, but rather ones that have a satisfactory quality and can be found quickly.

In this paper, we develop a state-space transformation method, called ε-transformation, that can be used by a search algorithm, such as BnB, to find suboptimal solutions quickly. This method makes use of the phase transition in Figure 1. We analyze its average-case performance. We also present an iterative version of ε-transformation for finding both suboptimal and optimal solutions. Finally, we evaluate the performance of both methods on random trees and the asymmetric traveling salesman problem.

Epsilon-Transformation

ε-transformation is based on the following very simple observation of Figure 1. For a random tree T(b, d), if we can increase the expected number of same-cost children of a node so that bp₀ ≥ 1, then the expected complexity of finding an optimal goal node becomes polynomial in d. This can be accomplished by raising the probability p₀ of zero-cost edges, since the branching factor b is usually fixed by the structure of the state space. However, increasing p₀ means obtaining a better node-cost function (Zhang & Korf 1994), which requires more information about the problem, and is generally impractical. By sacrificing solution quality, however, we can transform the problem of finding an optimal solution with exponential average computation into the problem of finding a suboptimal solution with polynomial average computation by artificially increasing p₀. This is illustrated by Figure 2.

We increase p₀ by setting some non-zero edge costs to zero. To reduce the amount of information lost, and to improve the expected solution quality, we only set to zero those edge costs that are below a particular value ε. This is why we call our method ε-transformation. ε is set to the smallest value such that a suboptimal goal node can be found in polynomial average time.

Figure 2: Transform a difficult problem to an easy one. (The plot shows the polynomial region in terms of the probability of a zero-cost edge and the branching factor b.)

Definition 2 For a constant ε, an ε-tree Tε(b, d) of a random tree T(b, d) is the same as T(b, d), except that those edge costs in T(b, d) that are less than or equal to ε are set to zero in Tε(b, d), and the node costs are updated accordingly. The edge and node costs of T(b, d) are referred to as actual values, and the edge and node costs of Tε(b, d) are called face values.
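In code, the transformation of Definition 2 is a one-line map over edge costs. This sketch is our own illustration (the function name is hypothetical): it turns actual edge values into face values.

```python
def face_values(actual_edge_costs, eps):
    """Definition 2: edge costs <= eps become 0 in the eps-tree; all
    other edge costs keep their actual value."""
    return [0 if c <= eps else c for c in actual_edge_costs]

# The T(2, 2) example of Figure 3(b) uses eps = 0.25:
print(face_values([0.1, 0.25, 0.4, 0.7], 0.25))  # [0, 0, 0.4, 0.7]
```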

ε-transformation converts one random tree to another one with an adjusted edge-cost distribution, i.e., with an increased probability of a zero-cost edge. Let f(x) be the density function and F(x) be the distribution of edge costs. Then the probability that an edge has cost less than or equal to ε is F(ε) = ∫₀^ε f(t) dt, which is also the probability pε that an edge of Tε(b, d) has cost zero. Figure 3(a) illustrates how ε-transformation adjusts an edge-cost density function. Figure 3(b) shows a T(2, 2) and its corresponding Tε(2, 2) with ε = 0.25, where the numbers in the nodes and on the edges are node costs and edge costs, respectively. The optimal goal node of an ε-tree is not necessarily the optimal goal node of its original tree; thus ε-transformation is not guaranteed to find an optimal goal node.

After the transformation, BFS or DFBnB can be used to find an optimal goal node of Tε(b, d), and return the actual value of this goal node. For simplicity, we call BnB, BFS, or DFBnB using ε-transformation ε-BnB, ε-BFS, or ε-DFBnB, respectively.

In order for ε-BnB to run in polynomial average time, the value of ε is chosen such that bpε ≥ 1. To maximize the solution quality, we select the minimum ε that satisfies bpε ≥ 1. That is, we choose

ε* = min{ε | bpε ≥ 1}, where pε = F(ε). (1)

When ε = ε*, we use the term ε*-transformation. The performance of ε*-transformation is summarized by the following theorem.
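With an empirical sample of edge costs standing in for F, Equation 1 amounts to taking an order statistic. The sketch below is our own (the paper defines ε* against the true distribution): it returns the smallest sampled cost ε with b·F(ε) ≥ 1.

```python
def epsilon_star(b, edge_cost_samples):
    """Equation 1 against the empirical distribution: the smallest
    sampled cost eps such that b * F(eps) >= 1, where F(eps) is the
    fraction of samples <= eps."""
    xs = sorted(edge_cost_samples)
    n = len(xs)
    for i, x in enumerate(xs):
        if b * (i + 1) >= n:         # b * F(x) >= 1 with F(x) = (i+1)/n
            return x
    return xs[-1]

# With b = 4 children per node we need F(eps) >= 1/4, the 25th percentile:
print(epsilon_star(4, [1, 2, 3, 4, 5, 6, 7, 8]))  # 2
```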



Figure 3: An example of ε-transformation. (a) adjusting an edge-cost density (initial vs. adjusted density); (b) transforming a random tree (the original random tree, its ε-tree, and the optimal goal node).

Theorem 1 On a random tree T(b, d) with bp₀ < 1, as d → ∞, ε*-BnB runs in expected time that is at most cubic in d, and finds a goal node whose relative solution cost error ((C − C*)/C*, where C is the solution cost and C* is the optimal solution cost) is almost surely a constant less than or equal to (δ/α − 1), where α is the constant defined in Lemma 1, and δ = E[edge cost x | x ≤ ε*].

Proof: See (Zhang & Pemberton 1994). □

A useful feature of ε-transformation is that a tradeoff can be made between the average search efficiency and the average solution quality. Solutions with higher (lower) average costs can be produced with less (greater) average computation by using a larger (smaller) value of ε.

Learning ε and Actual-Value Pruning

The value of ε is a function of the branching factor b and the edge-cost distribution F. For practical problems, b and F are generally not available. Nevertheless, the value of ε can be learned on-line during search. Consider DFBnB as an example. If DFBnB examines the children of the current node in increasing order of their face values (node ordering), and breaks ties in favor of a node with a lower actual node cost, then the first leaf node reached is the same whether ε-transformation is used or not. DFBnB can sample the branching factors and edge costs of all nodes along the path to the first leaf node, and use them to estimate b and F. As the search proceeds, the estimates of b and F can be refined and used to update the value of ε.
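The on-line scheme above can be sketched as follows (our reading of the text, with hypothetical names): estimate b as the mean of the sampled branching factors and F as the empirical distribution of the sampled edge costs, then pick the smallest sampled cost that pushes the expected number of same-cost children to at least one.

```python
def learn_epsilon(branching_samples, edge_cost_samples):
    """Estimate b and F from values sampled along the first DFBnB path,
    then return the smallest sampled cost eps with b_hat * F_hat(eps) >= 1."""
    b_hat = sum(branching_samples) / len(branching_samples)
    xs = sorted(edge_cost_samples)
    n = len(xs)
    for i, x in enumerate(xs):
        if b_hat * (i + 1) / n >= 1.0:   # empirical version of b * F(eps) >= 1
            return x
    return xs[-1]

# Sampled branching factors averaging 4, and eight sampled edge costs:
print(learn_epsilon([3, 5, 4, 4], [1, 2, 3, 4, 5, 6, 7, 8]))  # 2
```

Re-calling this with the growing samples as the search proceeds is one way to realize the refinement the text describes.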

ε-BnB can also use actual-value pruning, which prevents BnB from exploring an interior node if its actual value exceeds the actual value v of the best goal node found up to that point. Such a pruned interior node cannot lead to a goal node with an actual value less than v. Intuitively, one might expect actual-value pruning to improve the efficiency of BnB without affecting the solution quality. However, actual-value pruning reduces the opportunity to update the face-value upper bound, consequently causing some nodes with higher face values to be expanded that are not visited by ε-BnB without actual-value pruning. Overall, ε-BnB with and without actual-value pruning explore different parts of the search tree. Their relative effect on runtime and solution quality depends on their relative pruning power and the specific problem instance. Our results on random trees show that ε*-DFBnB with actual-value pruning runs longer but finds better solutions than ε*-DFBnB without actual-value pruning.
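The interplay between the two pruning rules can be seen in a toy DFBnB over an explicit tree. This sketch is our own construction (not the paper's implementation): it tracks the face value and actual value of each path, always prunes interior nodes on the face-value bound, optionally applies actual-value pruning, and returns the actual value of the goal that is best in face terms.

```python
def eps_dfbnb(children, eps, actual_prune=True):
    """Depth-first BnB on the eps-tree.  children(node) yields
    (child, actual_edge_cost) pairs; nodes without children are goals.
    Face values treat edge costs <= eps as zero."""
    best = [float('inf'), float('inf')]   # (face, actual) of best goal so far

    def visit(node, actual, face):
        kids = children(node)
        if not kids:                                  # goal node
            if (face, actual) < tuple(best):
                best[0], best[1] = face, actual
            return
        if face >= best[0]:                           # face-value bound
            return
        if actual_prune and actual >= best[1]:        # actual-value pruning
            return
        for child, cost in kids:
            visit(child, actual + cost, face + (0 if cost <= eps else cost))

    visit('root', 0, 0)
    return best[1]          # actual value of the face-optimal goal

tree = {'root': [('a', 1), ('b', 3)],
        'a': [('a1', 2), ('a2', 0)],
        'b': [('b1', 0), ('b2', 5)]}
print(eps_dfbnb(lambda n: tree.get(n, []), eps=1))  # 1
```

On this tiny tree both variants reach the same goal; the text's point is that on larger trees the two rules prune different regions, so quality and runtime can diverge.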

Iterative ε-Transformation

If we need a better solution than can be guaranteed by ε-transformation, then we can use an ε that is less than ε*. In order to determine the largest value of ε that satisfies a given error bound, we need to know the optimal solution cost, which in general is not available.

We suggest an algorithmic approach to address this issue, which is called iterative ε-BnB. Iterative ε-BnB performs a sequence of BnB searches with a series of ε-transformations, where the value of ε is reduced over successive iterations. The first iteration performs ε*-BnB. Within each iteration, BnB keeps track of the largest actual edge cost encountered that is strictly less than ε, among all those that are set to zero. Call this value fv_max. At the end of an iteration, if the cost of the solution found is less than the required solution cost by comparing it to some lower bound, then the algorithm stops. Otherwise, a new value of ε is calculated. The algorithm is then repeated until a satisfactory solution is found.

The most conservative way to update ε is to set ε = fv_max. It can easily be shown that if edge costs are integers bounded by a constant, then iterative ε-BnB that uses ε = fv_max in the next iteration expands asymptotically the same number of nodes as ε-BnB that uses the exact value of ε for finding a solution of the required quality. In general, however, a small reduction in the value of ε may only cause a few new nodes to be explored in the subsequent iteration, which in turn may lead to a large number of iterations, and consequently a large node-regeneration overhead. Alternatively, we may decrease the value of ε by a larger amount, such as ε = fv_max/2.
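The ε schedule this produces can be sketched in isolation (our own illustration; the BnB searches themselves are elided, and the function name is hypothetical). Each iteration records fv_max, the largest positive edge cost the current ε set to zero, and the next iteration uses ε = fv_max/2:

```python
def epsilon_schedule(edge_costs_seen, eps0):
    """Sketch of the iterative eps-BnB schedule only.  Each iteration
    records fv_max, the largest positive edge cost that the current eps
    set to zero, and the next iteration uses eps = fv_max / 2."""
    eps, schedule = eps0, [eps0]
    while True:
        zeroed = [c for c in edge_costs_seen if 0 < c <= eps]
        if not zeroed:            # nothing left to un-zero: eps has bottomed out
            break
        eps = max(zeroed) / 2     # the fv_max / 2 update rule
        schedule.append(eps)
    return schedule

print(epsilon_schedule([1, 2, 4, 8, 16], eps0=16))  # [16, 8.0, 4.0, 2.0, 1.0, 0.5]
```

With bounded integer costs the halving rule converges geometrically, which is the text's argument for preferring fv_max/2 over the conservative fv_max update when many iterations would otherwise be needed.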

Experimental Study

In this section, we identify the conditions under which ε-transformation and iterative ε-transformation are effective. To this end, we compare ε-DFBnB and iterative ε-DFBnB with other approximation algorithms.

Iterative E-DFBnB can be used in the same way as truncated DFBnB (Ibaraki et al. 1983; Zhang 1993) to find approximate and optimal solutions. Truncated DFBnB is a DFBnB that terminates prematurely when the total available computation has been exhausted.



Figure 4: Iterative ε-DFBnB vs. truncated DFBnB. (a) b = 2, d = 10; (b) b = 10, d = 20. Edge costs are from {0, 1, 2, ..., 2¹⁶ − 1}; the horizontal axes show the number of node expansions on a logarithmic scale, the vertical axes show the relative solution cost error, and the curves compare truncated DFBnB with ε-DFBnB with and without actual-value pruning.

The best solution found up to that point can then be taken as an approximation. The main difference between these two algorithms is that the territory explored by iterative ε-DFBnB is generally smaller than the territory explored by truncated DFBnB, although iterative ε-DFBnB may re-expand a node many times.

Local search (Johnson 1990; Kanellakis & Papadimitriou 1980; Lin & Kernighan 1973) is a well-known approximation method for many difficult combinatorial problems. Starting at an initial solution, such as one generated by a polynomial-time approximation algorithm, local search continuously improves the current solution by local perturbations, until no further improvement can be made. This process may be invoked many times with different initial solutions. A serious drawback of local search is that it cannot determine if the best solution found so far is optimal, unless the optimal solution cost is already known.

Random Trees

We ran both iterative ε-DFBnB and truncated DFBnB on the same set of random trees, and recorded the total number of node expansions whenever either algorithm updated its current best solution. We then measured their performance as the average solution cost for a given number of node expansions, since expanding a node is the primary operation. In our experiments, the value of ε was updated to fv_max/2 after each iteration.

Figure 4 shows our results on uniform random trees T(b = 2, d = 10) and T(b = 10, d = 20). The edge costs are uniformly chosen from {0, 1, 2, ..., 2¹⁶ − 1}. The results are averaged over 1000 trials. The horizontal axes, on a logarithmic scale, are the average number of node expansions, and the vertical axes are the average relative goal cost error. Figure 4 indicates that iterative ε-DFBnB without actual-value pruning is slightly better than with actual-value pruning. Compared to truncated DFBnB, iterative ε-DFBnB finds a better solution with the same average number of node expansions. For instance, at 1000 node expansions in Figure 4(b), the relative error for iterative ε-DFBnB is 10.4%, while the relative error for truncated DFBnB is 40.4%. The results also show that when the branching factor and tree depth are increased (from Figure 4(a) to 4(b)), iterative ε-DFBnB further outperforms truncated DFBnB.

The relative advantage of iterative ε-transformation also depends on the edge-cost distribution. Specifically, the relative improvement in average solution cost of ε-DFBnB over truncated DFBnB decreases when the probability of a zero-cost edge is increased.

Asymmetric Traveling Salesman Problem

The asymmetric traveling salesman problem (ATSP) is an NP-hard combinatorial problem (Garey & Johnson 1979). Given n cities and an asymmetric matrix (c_{i,j}) that defines a cost between each pair of cities, the ATSP is to find a minimum-cost tour that visits each city exactly once and returns to the starting city. The ATSP can be optimally solved by BnB, using the solution cost of the related assignment problem (AP) (Papadimitriou & Steiglitz 1982) as a monotonic heuristic function. The state space of the ATSP under BnB is a tree without duplicate nodes. See (Balas & Toth 1985) for a description of the method.
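To make the AP relaxation concrete, the sketch below (our own brute-force illustration for tiny instances; real solvers use the O(n³) Hungarian method) computes the AP cost and the optimal tour cost. Every tour is a cyclic assignment, so the AP cost never exceeds the tour cost, which is what makes it a valid lower bound for BnB.

```python
from itertools import permutations

def ap_cost(c):
    """Assignment problem: cheapest assignment i -> p(i) over all
    permutations p (self-assignments blocked by a large diagonal)."""
    n = len(c)
    return min(sum(c[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def atsp_cost(c):
    """Optimal ATSP tour cost by brute force over visiting orders."""
    n = len(c)
    return min(sum(c[t[i]][t[(i + 1) % n]] for i in range(n))
               for t in permutations(range(n)))

BIG = 10**6                       # forbid staying in the same city
c = [[BIG, 1, 2],
     [3, BIG, 4],
     [5, 6, BIG]]
print(ap_cost(c), atsp_cost(c))   # 10 10  (the AP bound is tight here)
```

For n ≥ 4 the AP optimum may consist of subtours rather than one tour, which is where the bound becomes strict and BnB branching on subtours comes in.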

In our implementation of ε-DFBnB and iterative ε-DFBnB, we used the sampling method described above to learn the value of ε* for the first iteration. In each subsequent iteration, the value of ε was set to fv_max/2. From our experiments, ε-DFBnB without actual-value pruning performs worse than ε-DFBnB with actual-value pruning, and thus we present the results of ε-DFBnB with actual-value pruning.

We used many different cost matrices in our experiments. Our data show that iterative ε-DFBnB finds better solutions sooner than truncated DFBnB on average, and that local search performs much worse than ε-DFBnB and truncated DFBnB. Figure 5(a) compares ε-DFBnB with truncated DFBnB on 500-city random ATSPs, where the costs c_{i,j} are uniformly chosen from {0, 1, 2, ..., 2¹⁶ − 1}. The results are averaged over 100 trials. The horizontal axis is the CPU time on a Sun4/sparc460 workstation, and the vertical axis is the average relative solution cost error with respect to the AP lower bound.

Figure 5: Iterative ε-DFBnB vs. truncated DFBnB and local search on the asymmetric TSP. (a) iterative ε-DFBnB vs. truncated DFBnB on 500-city ATSPs with c_{i,j} uniformly chosen from {0, 1, 2, ..., 2¹⁶ − 1}: relative error vs. time (sec.); (b) ε*-DFBnB vs. local search on random matrices with c_{i,j} in {0, 1, 2, ..., 2¹⁶ − 1}: relative error vs. number of cities.

We also compared ε*-DFBnB with a local search method (Kanellakis & Papadimitriou 1980), which was applied five times for each problem instance in our experiments. The five different initial tours were generated by the nearest-neighbor, nearest-insertion, farthest-insertion, greedy, and patching algorithms (Johnson 1990; Karp 1979). We used random cost matrices and matrices converted from no-wait flowshop scheduling for four machines, which is NP-hard (Papadimitriou & Kanellakis 1980). No-wait flowshop scheduling involves determining a sequence for processing a set of jobs where each job must be handled by a set of machines in the same preset order. The objective is a sequence that minimizes a cost function, such as the total completion time, which was used in our experiments. The no-wait constraint additionally requires the next machine to be available when a job is ready for it. The scheduling problem instances were generated by uniformly choosing the processing time of a job on a machine from {0, 1, 2, ..., 2¹⁶ − 1}. We then converted them into ATSPs using the method in (Reddi & Ramamoorthy 1972).

Local search runs much longer than ε*-DFBnB on average for the problem instances we considered, because we used five initial tours for local search. Figure 5(b) shows the solution quality, expressed as the average tour cost error relative to the AP lower bound, versus the number of cities. Each data point is averaged over 100 trials. The results show that ε*-DFBnB outperforms local search: it finds better solutions than local search on average, even though local search was allowed to use more computation.

Related Work

Phase transitions of heuristic search were originally revealed by Karp and Pearl (Karp & Pearl 1983). Their results have been extended by McDiarmid and Provan (McDiarmid 1990; McDiarmid & Provan 1991), and by Zhang and Korf (Zhang & Korf 1993; 1994), to random trees with arbitrary branching factors and real-valued edge costs. Huberman and Hogg (Huberman & Hogg 1987) argued that phase transitions are universal in large intelligent systems. Cheeseman et al. (Cheeseman, Kanefsky, & Taylor 1991) empirically showed that phase transitions exist in many NP-hard combinatorial optimization problems.

In their seminal paper, Karp and Pearl (Karp & Pearl 1983) also proposed an algorithm that finds a suboptimal goal node of a tree most of the time, but may sometimes fail, and runs in expected time linear in the tree depth. McDiarmid and Provan (McDiarmid 1990; McDiarmid & Provan 1991) extended Karp and Pearl's approximation algorithm to a general random tree. One problem with Karp and Pearl's algorithm is that it is incomplete, meaning that it is not guaranteed to find a goal node. Furthermore, the algorithm uses parameters that depend on the optimal goal cost, which is generally unknown, and hence their algorithm is difficult to apply in practice.

It is well known in the operations research community that approximate solutions can be obtained by prematurely terminating DFBnB, taking the best solution found so far as an approximation. This method is also referred to as truncated DFBnB (Zhang 1993), which we adopted in this paper. The earliest study of this method that we found was made by Ashour (Ashour 1970). Ibaraki et al. (Ibaraki et al. 1983) systematically studied approximation methods based on BnB, which they called suboptimal BnB algorithms.

Conclusions

We have presented a new method, called ε-transformation, that can be used by branch-and-bound (BnB) to find approximate solutions to combinatorial problems. This method is a state-space transformation, which exploits the computational phase transitions of tree search problems. On a random tree, ε-BnB runs



in expected time that is cubic in the search depth, and finds a suboptimal goal node whose expected relative solution cost error is bounded by a small constant. We also developed an iterative version of ε-transformation to find both approximate and optimal solutions.

On random trees with large numbers of distinct edge costs, large branching factors, and deep goal nodes, iterative ε-DFBnB outperforms truncated DFBnB, finding better solutions sooner on average. On the asymmetric traveling salesman problem, ε-DFBnB outperforms a local search method, and iterative ε-DFBnB is superior to truncated DFBnB.

Overall, we recommend that ε-transformation be used for problems whose search trees have a small probability of a zero-cost edge and large branching factors.

To our knowledge, ε-transformation is the first attempt to exploit phase transitions in order to solve combinatorial problems. Since phase transitions exist in many intelligent systems and combinatorial problems, we hope that the idea of ε-transformation can be carried over to other problems and search methods.

Acknowledgment

The authors are grateful to Colin McDiarmid and Judea Pearl for helpful discussions, and to the anonymous reviewers for comments. Special thanks to Rich Korf for support, discussions, and comments.

References

Ashour, S. 1970. An experimental investigation and comparative evaluation of flow-shop scheduling techniques. Operations Research 18:541-545.

Balas, E., and Toth, P. 1985. Branch and bound methods. In The Traveling Salesman Problem. Essex: John Wiley and Sons. 361-401.

Cheeseman, P.; Kanefsky, B.; and Taylor, W. M. 1991. Where the really hard problems are. In Proc. 12th IJCAI, 331-337.

Garey, M. R., and Johnson, D. S. 1979. Computers and Intractability. New York, NY: Freeman.

Huberman, B. A., and Hogg, T. 1987. Phase transitions in artificial intelligence systems. Artificial Intelligence 33:155-171.

Ibaraki, T.; Muro, S.; Murakami, T.; and Hasegawa, T. 1983. Using branch-and-bound algorithms to obtain suboptimal solutions. Zeitschrift für Operations Research 27:177-202.

Johnson, D. S. 1990. Local optimization and the traveling salesman problem. In Proc. 17th Intern. Colloquium on Automata, Languages and Programming.

Kanellakis, P. C., and Papadimitriou, C. H. 1980. Local search for the asymmetric traveling salesman problem. Operations Research 28:1086-1099.

Karp, R. M., and Pearl, J. 1983. Searching for an optimal path in a tree with random costs. Artificial Intelligence 21:99-117.

Karp, R. M. 1979. A patching algorithm for the nonsymmetric traveling-salesman problem. SIAM J. Comput. 8:561-573.

Korf, R. E. 1989. Search: A survey of recent results. In Exploring Artificial Intelligence. Morgan Kaufmann. 197-237.

Kumar, V. 1992. Search, branch-and-bound. In Encyclopedia of Artificial Intelligence. New York: Wiley-Interscience, 2nd edition. 1468-1472.

Lin, S., and Kernighan, B. W. 1973. An effective heuristic algorithm for the traveling salesman problem. Operations Research 21:498-516.

McDiarmid, C. J. H., and Provan, G. M. A. 1991. An expected-cost analysis of backtracking and non-backtracking algorithms. In Proc. 12th IJCAI, 172-177.

McDiarmid, C. J. H. 1990. Probabilistic analysis of tree search. In Disorder in Physical Systems. Oxford Science. 249-260.

Mitchell, D.; Selman, B.; and Levesque, H. 1992. Hard and easy distributions of SAT problems. In Proc. 10th AAAI, 459-465.

Papadimitriou, C. H., and Kanellakis, P. 1980. Flowshop scheduling with limited temporary storage. J. of ACM 27:533-549.

Papadimitriou, C. H., and Steiglitz, K. 1982. Combinatorial Optimization: Algorithms and Complexity. Englewood Cliffs, NJ: Prentice-Hall.

Pearl, J. 1984. Heuristics. Reading, MA: Addison-Wesley.

Reddi, S., and Ramamoorthy, C. 1972. On the flow-shop sequencing problem with no wait in process. Operational Research Quarterly 23:323-331.

Rényi, A. 1970. Probability Theory. Amsterdam: North-Holland.

Zhang, W., and Korf, R. E. 1992. An average-case analysis of branch-and-bound with applications: Summary of results. In Proc. 10th AAAI, 545-550.

Zhang, W., and Korf, R. E. 1993. Depth-first vs. best-first search: New results. In Proc. 11th AAAI, 769-775.

Zhang, W., and Korf, R. E. 1994. Performance of linear-space search algorithms. Artificial Intelligence, to appear.

Zhang, W., and Pemberton, J. C. 1994. Epsilon-transformation: Exploiting phase transitions to solve combinatorial optimization problems. Technical Report UCLA-CSD-940003, Computer Science Department, University of California, Los Angeles, CA.

Zhang, W. 1993. Truncated branch-and-bound: A case study on the asymmetric TSP. In Working Notes of AAAI-93 Spring Symp.: AI and NP-Hard Problems, 160-166.
