META-HEURISTICS 98: THEORY & APPLICATIONS

33 MEMORY ADAPTIVE REASONING & GREEDY ASSIGNMENT TECHNIQUES FOR THE CAPACITATED MINIMUM SPANNING TREE PROBLEM

Erik Rolland 1, Raymond A. Patterson 2, and Hasan Pirkul 2

1 Department of Accounting & Management Information Systems, Fisher College of Business, The Ohio State University, Columbus, Ohio 43210, USA. [email protected]

2 School of Management, The University of Texas at Dallas, Richardson, Texas 75083-0688, USA. [email protected], [email protected]

Abstract: This paper investigates the effects of adding randomization to a memory-based heuristic. The algorithms we propose are applied to the Capacitated Minimum Spanning Tree problem (CMST), and we study the combined effects of simultaneously applying a memory-based and a randomization-based heuristic to the CMST. The paper uses the Adaptive Reasoning Technique (ART) and concepts from the greedy randomized adaptive search procedure (GRASP) for solving the CMST. The resulting hybrid procedure is tested against the stand-alone Esau-Williams heuristic, as well as the stand-alone greedy assignment technique. We find that randomization does not constructively add to the memory-based procedure: ART alone typically outperforms all other approaches in solution quality while expending a modest amount of computational effort.

33.1 INTRODUCTION

The capacitated minimum spanning tree problem (CMST) plays an important role in the design of backbone telecommunications networks. The problem is NP-complete [18], and various solution methods have been proposed in the literature [4, 10, 13, 3, 20, 19]. In a recent paper, the Adaptive Reasoning Technique (ART), a new type of adaptive memory-based learning procedure, was described for this problem [19]. This paper extends that research by studying the combination of randomized greedy heuristics with the ART memory-based approach.

This paper is structured as follows: the next section describes the CMST and previous literature. The greedy and memory-based solution procedures are described in Section 33.3. Section 33.4 presents the computational results, and conclusions are summarized in Section 33.5.

33.2 PROBLEM DESCRIPTION AND PREVIOUS RESEARCH

The CMST is the problem of connecting a set of demand nodes to a central node through a minimum-cost tree network. Given a graph G(V, E), where V is the set of demand nodes (with associated demand vector Dv), E is the set of possible edges in the graph (with associated arc cost vector Ce and a capacity K), and V* is a designated goal node, the objective is to find a minimum-cost spanning tree rooted at node V* in which no sub-tree branching from node V* contains more than K nodes. Several formulations have been proposed for this problem [7, 8, 9, 10].
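The capacity constraint can be stated concretely: with unit demands (as in the computational study below), every sub-tree hanging off the goal node may contain at most K nodes. The following is a minimal Python sketch of that feasibility check; the function name and the edge-list representation are our own illustration, not part of the paper.

```python
from collections import defaultdict

def subtree_sizes_ok(edges, root, K):
    """Check the CMST capacity constraint for unit demands: every
    subtree hanging off the goal (root) node holds at most K nodes."""
    adj = defaultdict(list)
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)

    def count(node, parent):
        # Count the nodes in the subtree rooted at `node`, entered from `parent`.
        return 1 + sum(count(nxt, node) for nxt in adj[node] if nxt != parent)

    return all(count(child, root) <= K for child in adj[root])

# A 5-node tree rooted at node 0 with capacity K = 2:
tree = [(0, 1), (1, 2), (0, 3), (3, 4)]
print(subtree_sizes_ok(tree, 0, 2))   # both subtrees hold 2 nodes -> True
```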

One of the first, and best-known, heuristics for the CMST was proposed by Esau and Williams [4], denoted EW. This procedure starts with an "empty" spanning tree (no arcs are used) and then adds arcs in a greedy fashion, based on a modified cost structure and without violating the capacity constraint, until it finds a feasible spanning tree. The EW modified cost for an arc (i, j) is the arc cost (cij) less the minimum cost of connecting directly to the goal node (min{ciV*, cjV*}). Altinkemer and Gavish [1, 2] presented the Parallel Savings Heuristic (PSH), which produces reasonably good solution values for the CMST. Hall [13] presents a cutting plane algorithm that often produces good solutions but typically requires excessive CPU time. Gouveia and Martins [11] also use a cutting plane algorithm, which may require extraordinarily long CPU times.
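The EW modified cost is simple to state in code. A short sketch follows; the cost matrix is an illustrative example of our own, not taken from the paper's test data.

```python
def ew_tradeoff(c, i, j, root):
    """Esau-Williams modified cost ("tradeoff") for arc (i, j): the arc
    cost less the cheaper of the two direct connections to the goal node.
    The most negative tradeoff marks the most attractive greedy merge."""
    return c[i][j] - min(c[i][root], c[j][root])

# Toy symmetric costs with goal node 0 (illustrative numbers only):
c = {0: {0: 0, 1: 5, 2: 6},
     1: {0: 5, 1: 0, 2: 2},
     2: {0: 6, 1: 2, 2: 0}}
print(ew_tradeoff(c, 1, 2, 0))   # 2 - min(5, 6) = -3: merging 1 and 2 saves cost
```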

Given the limitations and shortcomings of the cutting-plane and heuristic solution procedures, meta-heuristic algorithms are a viable alternative. Sharaiha et al. [20] developed a tabu search procedure for solving the CMST. Their algorithm is designed specifically for the CMST, using techniques related to problem structure to improve on the solutions. Amberg et al. [3] proposed various tabu search-based procedures, as well as a simulated annealing approach, which produce excellent results for the CMST.

Numerous heuristics for solving combinatorial optimization problems are based on greedy selection schemes. These heuristics greedily assign values to decision variables, but typically prohibit assignments that violate the problem constraints. The EW procedure discussed above is exactly such a technique. The problem with greedy techniques is that, early in the selection process, they may assign values to decision variables in a way that eliminates later selections which would lead to a better objective function value. There are at least two approaches that can help circumvent this weakness in a greedy procedure:

1. Use memory to inhibit poor selections by the greedy procedure. While iteratively applying the greedy procedure, memory keeps track of recent choices so that poor selections can be prohibited. This assumes that the greedy heuristic is executed repeatedly, with memory updated from one iteration to the next.

2. Avoid always taking the best greedy option by randomly selecting among the first-, second-, third-, or lower-ranked greedy options.

Second-order heuristics [15, 16, 12] have previously been proposed for network optimization and the CMST. These approaches share some constructs similar to those found in ART, but without the dynamic memory functions of ART. Second-order heuristics were found to be successful for the CMST in [16].

Patterson et al. [19] demonstrated a different use of memory to manipulate the EW greedy heuristic procedure. Additional problem constraints are found through a learning and memory process for the CMST.

Hart and Shogan [14] proposed an algorithm for the vehicle routing problem based on randomized greedy selection, and compared the cardinality implementation to the percentage implementation of randomized greedy selection for the Clarke and Wright heuristic. These ideas were later refined by Feo and Resende [5, 6], and form the basis for the greedy randomized adaptive search procedure (GRASP). With GRASP, the first-best greedy choice is not always made. Instead, a random choice is made among the first-, second-, third-, or lower-ranked greedy choices. Furthermore, a local search component is added. GRASP has been found to be a very efficient solution approach for numerous combinatorial optimization problems [6, 17]. It is the purpose of this paper to investigate the application of a modified and simplified version of GRASP to the CMST, and to study the combined effects of simultaneously using a memory-based heuristic and this so-called greedy assignment technique to solve the CMST.

33.3 CONSTRAINT-BASED ADAPTIVE REASONING AND GRASP

For most problems there exists a greedy heuristic that produces solutions to the problem. We know that for matroid problems (such as the uncapacitated version of the minimum spanning tree problem), a greedy heuristic always produces the optimal solution. Many greedy heuristics are constructive; that is, they build up complete solutions from scratch, one solution component at a time. When studying the behavior of a greedy heuristic, such as the EW procedure [4] for the CMST, one will often see that the heuristic really only makes a handful of crucial mistakes [15]. These mistakes can be divided into two categories: primary mistakes, which are due to the greedy selection criteria, and secondary mistakes, which are due to limited choices caused by earlier primary mistakes. The primary mistakes are the crucial ones and, given an opportunity, are the ones we would prevent.


ART is an adaptive reasoning technique that attempts to "learn" about the primary mistakes of a heuristic. In the ART framework, the greedy heuristic is executed repeatedly, and on each new execution we probabilistically introduce constraints that may prohibit certain solution elements from being considered by the greedy heuristic. That is, a prohibition of the edge (i, j) sets the corresponding decision variable to zero (xij = 0, where xij = 1 if the edge from node i to node j is included in the solution and xij = 0 otherwise). The prohibitions may last for more than one iteration, so at any time we may have a collection of active constraints due to the prohibited solution elements. The length of a prohibition is determined probabilistically, but is often a function of the cost of the solution element. The active constraints are held in a short-term memory. A long-term memory holds information about which constraints were in the active memory for the best set of solutions (for example, the best ten solutions found so far). We can augment these two memory functions with some basic principles of learning. For example, we can impose a certain degree of memory loss during the execution of the algorithm (some amount of memory loss is often important, even in human memory, since it enables us to explore "old territory" in light of newly discovered evidence). We can also impose a "propensity to learn" control variable, which can be varied over the execution time of ART (this is similar to a principle observed in humans: we are less "willing" to learn as we grow older). The end result of the ART algorithm is a set of prohibitions that, when used in conjunction with a greedy algorithm, enables us to find an optimal, or close to optimal, solution. The execution cycle of ART is as follows:

1. The EW heuristic is used to solve the CMST, subject to any additional non-redundant problem constraints created by ART. The additional constraints are determined in the same manner in which tabu restrictions are set in tabu search (the ART method as applied to the CMST alone is fully described by Patterson et al. [19]).

2. The CMST solution found in Step 1 is evaluated:

(a) Additional non-redundant constraints are added and set to be in effect for a certain number of time periods (based on the parameters of the learning principles).

(b) Additional non-redundant constraints whose time duration has expired are removed.

(c) The long-term memory is updated to hold the 10 best solutions found.

(d) Elements of learning are invoked as necessary.

3. Repeat from Step 1 until a pre-set maximum number of iterations is reached.
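The execution cycle above can be sketched compactly in code. Everything named here (the function signature, the purely random prohibition duration) is our illustrative assumption; the authors' implementation is in Fortran 77 and is fully described in [19].

```python
import random

def art(solve_with_ew, max_iters=350, p_prohibit=0.02, max_duration=8, seed=0):
    """Minimal sketch of the ART execution cycle.  `solve_with_ew(prohibited)`
    stands in for the EW primary heuristic: it must return
    (solution_edges, objective) while avoiding every edge in `prohibited`."""
    rng = random.Random(seed)
    short_term = {}                  # active memory: edge -> expiry iteration
    long_term = []                   # (objective, active constraints) of best solutions
    best_obj, best_sol = float("inf"), None
    for it in range(max_iters):
        # Step 1: solve subject to the currently active (non-expired) constraints.
        active = frozenset(e for e, expiry in short_term.items() if expiry > it)
        sol, obj = solve_with_ew(active)
        # Step 2a: probabilistically prohibit solution elements for a few
        # iterations (the paper ties the duration to arc cost; here it is random).
        for e in sol:
            if rng.random() < p_prohibit:
                short_term[e] = it + 1 + rng.randrange(max_duration)
        # Step 2b happens implicitly: expired entries drop out of `active`.
        # Step 2c: long-term memory keeps constraints behind the 10 best solutions.
        long_term = sorted(long_term + [(obj, active)], key=lambda t: t[0])[:10]
        if obj < best_obj:
            best_obj, best_sol = obj, sol
    return best_obj, best_sol
```

A toy primary heuristic (any callable with the assumed signature) can be plugged in to exercise the loop; the learning elements of Step 2d are omitted here for brevity.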

A depiction of ART is given in Figure 33.1, whereas a more detailed flowchart is given in Figure 33.2. Figure 33.1 depicts the conceptual view of ART. The four main components of ART are as follows:


[Figure 33.1: conceptual view of ART. Components: Learning Principles; Long-Term Memory (constraints for best solutions); Active Memory (non-redundant constraints); Primary Heuristics (constructive greedy procedures: the Esau-Williams heuristic); Output Solution. Cycle: impose active memory, update active memory, update long-term memory.]

Figure 33.1 ART Components and Execution Cycle.

[Figure 33.2: flowchart of ART. Start with Cycle = 0. Each cycle sets Cycle = Cycle + 1 and a learning rate of 2 * 1.8^(Cycle - 1); the CMST is solved with the EW heuristic (with or without GRASP), and the solution is evaluated, creating and removing constraints. If there is no improvement in the best feasible solution in the last 15 iterations of the cycle, the memory duration is shortened for some proportion of memory and the learning ingrainment rate is reduced by 10%. After 5 cycles, memory is reset to the best solution and ingrained by an extra 10% for all active constraints. Stop once n phases have been executed.]

Figure 33.2 ART: A Detailed View.


1. Learning principles are applied to an EW solution to determine which decision variables are prohibited and for how long. These principles are loosely based on observations of human learning. The learning rules are based on common-sense notions of memory and learning, such as: forgetting; reinforcing stronger memories through ingrainment; not trying to teach an old dog new tricks; variation in the willingness to learn over time; and variation in the time duration that learning is remembered.

2. A long-term memory is formed from the short-term memory associated with the set of best solutions. The long-term memory holds a summary of the non-redundant constraints found in the best solutions, and as such is a characterization of the combination of additional constraints that are fruitful for the problem at hand.

3. A short-term memory holds the iteration at which a constraint on a decision variable expires. Such a constraint is similar to a tabu restriction that eliminates a decision variable from consideration. This memory is set when a decision variable is selected by the primary heuristic. With a given probability, the short-term memory prohibits the use of the decision variable for a certain number of iterations. The time duration of the prohibition depends on both the cost associated with the arc and some degree of randomness. Since the EW primary heuristic makes few mistakes, the probability of actually prohibiting an arc is typically low for the CMST (the actual starting parameter used is 2%, and it is increased by a factor of 1.8 for each phase of the algorithm; see Figure 33.2).

4. A constructive primary heuristic (EW) is used iteratively to solve the problem subject to the evolving set of additional problem constraints. The constraints that have been added prohibit particular decision variables from being included in the primary heuristic's selection process. The ART program alters the greedy selection sequence made by the primary heuristic algorithm. By altering a choice in the solution path of the greedy heuristic (through additional constraints), the subsequent choices made by the greedy heuristic are altered.
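The escalation schedule in component 3 (a 2% starting probability, multiplied by 1.8 each phase; cf. the learning-rate box in Figure 33.2) can be written down directly. The function name is our own illustration:

```python
def prohibition_probability(phase, base=0.02, growth=1.8):
    """Per-arc probability of creating a prohibition in a given phase:
    starts at `base` (2%) and is multiplied by `growth` (1.8) each phase."""
    return base * growth ** (phase - 1)

print(round(prohibition_probability(1), 4))  # 0.02
print(round(prohibition_probability(3), 4))  # 0.0648
```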

Greedy Assignment Technique. According to Kontoravdis and Bard [17], "greedy randomized adaptive search procedures combine greedy heuristics, randomization, and local search. The computations are performed in two steps or phases. The first step includes the sequential construction of feasible solutions. At each iteration, all feasible moves are ranked according to an adaptive greedy function that takes into account the present state and one is randomly selected from a restricted candidate list. In the second step, local search is used to arrive at a local optimum." Our interpretation of GRASP for the CMST omits the local search procedure and includes only step one of traditional GRASP implementations. An alternative implementation would be to add a local search procedure to the top solutions found using both ART and GRASP. It is anticipated that, in general, a starting position with lower-cost solutions before the application of a local search procedure will result in lower-cost solutions after the local search procedure is applied. Thus, GRASP as implemented in this paper does not include the local search procedure. A detailed outline of our greedy assignment technique (retained under the acronym GRASP) follows.

With GRASP, the first-best greedy choice is not always made [14, 5, 6, 17]. Rather, a candidate list of the top greedy choices is created for each decision in the heuristic, from which the selection is randomly chosen. Thus, for the CMST, a simple GRASP implementation can be obtained by introducing randomized selection into EW: one selects the arc randomly among the n best alternatives (where n can be determined experimentally). This is referred to as n-GRASP. The basic EW procedure uses n = 1. When combining ART with GRASP, the modified GRASP-capable EW heuristic replaces the traditional EW heuristic inside the ART framework (with n to be determined experimentally). No other modification to ART is necessary. A standalone GRASP procedure is then obtained by simply disabling the memory within ART. Thus, testing the GRASP procedure implies running the GRASP-capable EW procedure numerous times within the ART looping framework. The minimum number of repeated executions of the GRASP-enabled EW is 350, where n is constant within this execution cycle. The maximum number of repeated executions depends on how often a new "best solution" is found within the ART looping framework. Allowing executions of the GRASP-enabled EW to continue as long as improvements are found facilitates a fair comparison with ART.
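The n-GRASP selection rule amounts to ranking the feasible arcs by the greedy tradeoff and drawing uniformly from the n best. A sketch with illustrative data of our own (the arc names and savings values are not from the paper):

```python
import random

def n_grasp_select(candidates, tradeoff, n, rng=random):
    """n-GRASP selection for EW: rank feasible arcs by the greedy tradeoff
    (most negative first) and pick uniformly among the n best, rather than
    always taking the best.  With n = 1 this reduces to the plain EW choice."""
    ranked = sorted(candidates, key=tradeoff)
    return rng.choice(ranked[:n])

arcs = [(1, 2), (1, 3), (2, 3)]
savings = {(1, 2): -3.0, (1, 3): -1.0, (2, 3): -2.5}
print(n_grasp_select(arcs, savings.get, 1))   # n = 1 is deterministic: (1, 2)
```

With n = 2 the call returns either (1, 2) or (2, 3), which is the extra search variance the paper studies.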

33.4 COMPUTATIONAL RESULTS

EW [4] is extremely fast, but does not produce high-quality solutions. The EW, ART, GRASP, and ART-GRASP hybrid heuristics were coded in Fortran 77 and executed on a Silicon Graphics INDY (R4000-based MIPS) computer with 32 MB of memory. All problems were solved with unit demand data. Twenty well-known problem sets from the OR-Library1 were obtained. Amberg et al. [3] and Sharaiha et al. [20] both obtained good results on these problems using tabu search. The lower bounds on the optimal solutions (which in most cases are the optimal solutions) were kindly provided to us by Luis Gouveia ([10] and personal communication based on his research in progress). Some of these lower bounds are newer than the bounds reported in Sharaiha et al. [20]. The tc problem instances have the goal node in the center of the graph, whereas the te problem instances have the goal node at the end (corner) of the node scatter. The problems consist of 40-node and 80-node graphs. The 40-node problems were solved using capacities of 3, 5, and 10, whereas the 80-node problems were solved with capacities of 5, 10, and 20. Thus, a total of 60

1 http://mscmga.ms.ic.ac.uk/info.html


Problem                     Lower          Gap
ID        Capacity   EW     Bound   Opt    EW    A2G    2G   A3G    3G   ART
tc40–1        3      774     742    Opt   4.31  1.21  0.67  0.40  0.40  0.00
tc40–2        3      749     717          4.46  1.53  1.53  1.39  1.53  0.98
tc40–3        3      728     716          1.68  1.12  1.12  0.98  1.12  0.42
tc40–4        3      804     775          3.74  1.42  0.52  0.65  0.90  0.65
tc40–5        3      760     741    Opt   2.56  0.54  0.54  0.81  0.54  0.00
tc40–1        5      595     586    Opt   1.54  0.34  1.19  1.02  1.37  0.34
tc40–2        5      588     578          1.73  0.87  0.87  0.87  0.87  0.87
tc40–3        5      602     577    Opt   4.33  1.39  1.39  1.39  1.39  0.69
tc40–4        5      645     617    Opt   4.54  0.00  1.30  0.16  1.13  0.00
tc40–5        5      615     600          2.50  1.50  1.50  1.83  1.50  0.83
tc40–1       10      516     498    Opt   3.61  3.21  2.81  2.81  1.61  0.00
tc40–2       10      505     490    Opt   3.06  2.65  2.24  1.63  2.24  0.41
tc40–3       10      517     500    Opt   3.40  0.00  1.60  1.60  1.60  0.00
tc40–4       10      524     512    Opt   2.34  0.78  1.37  1.56  1.17  0.39
tc40–5       10      540     504    Opt   7.14  2.78  4.76  2.38  1.19  0.00

Avg 40:                                   3.40  1.29  1.56  1.30  1.24  0.37

tc80–1        5     1182    1094          8.04  5.48  6.67  4.84  6.12  3.75
tc80–2        5     1170    1090          7.34  4.50  5.50  5.87  5.87  3.76
tc80–3        5     1131    1067          6.00  8.15  9.37  3.75  4.97  2.72
tc80–4        5     1151    1070          7.57  3.93  5.51  6.17  5.51  4.39
tc80–5        5     1338    1268          5.52  2.68  4.02  2.68  4.10  2.52
tc80–1       10      920     878          4.78  3.19  3.76  4.21  3.76  2.28
tc80–2       10      917     875          4.80  3.20  3.66  4.11  3.54  1.83
tc80–3       10      916     869          5.41  3.80  4.72  4.49  4.37  2.42
tc80–4       10      915     863          6.03  3.82  5.79  4.75  5.33  3.36
tc80–5       10     1069     998          7.11  4.71  5.51  5.71  5.71  4.01
tc80–1       20      856     834    Opt   2.64  1.44  1.92  3.36  1.92  0.48
tc80–2       20      836     820    Opt   1.95  1.95  1.46  2.93  1.71  0.73
tc80–3       20      856     828    Opt   3.38  1.93  2.66  2.90  2.66  1.45
tc80–4       20      866     820    Opt   5.61  1.71  3.17  1.95  3.17  0.98
tc80–5       20      971     916    Opt   6.00  4.26  4.69  4.59  5.02  2.73

Avg 80:                                   5.48  3.65  4.56  4.15  4.25  2.49

Avg 40–80:                                4.44  2.47  3.06  2.73  2.74  1.43

Table 33.1 Results for the tc problems

known problem instances were solved. The 60 problem instances are all defined on fully connected graphs.

Complete results for the center goal node problems (tc) are presented in Tables 33.1 and 33.2. The average gap between the bounds and the EW procedure was 4.44%, compared to 2.47% for the size-2 GRASP-ART hybrid, 3.06% for size-2 GRASP, 2.73% for the size-3 GRASP-ART hybrid, 2.74% for 3-GRASP, and 1.43% for ART.

Complete results for the edge goal node problems (te) are presented in Tables 33.3 and 33.4. The gaps were 5.57% for EW, compared to 3.44% for the size-2 GRASP-ART hybrid, 3.21% for size-2 GRASP, 4.31% for the size-3 GRASP-ART hybrid, 4.04% for 3-GRASP, and 2.48% for ART. Across both te and tc, the average gap results were 5.01% for EW, compared to 2.95% for the size-2 GRASP-ART hybrid, 3.14% for size-2 GRASP, 3.52% for the size-3 GRASP-ART hybrid, 3.39% for 3-GRASP, and 1.95% for ART.
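The tabulated gaps are consistent with the usual percentage deviation from the lower bound; for instance, tc40–1 at capacity 3 has EW value 774 against bound 742, giving (774 - 742)/742, or about 4.31%, which matches Table 33.1. As a sketch:

```python
def gap_percent(heuristic_value, lower_bound):
    """Percentage gap between a heuristic solution value and the lower
    bound, matching the entries in Tables 33.1 and 33.3."""
    return 100.0 * (heuristic_value - lower_bound) / lower_bound

print(round(gap_percent(774, 742), 2))  # 4.31  (tc40-1, capacity 3, EW)
print(round(gap_percent(595, 586), 2))  # 1.54  (tc40-1, capacity 5, EW)
```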

The results indicate that ART often improves on GRASP for a particular candidate-list size, but that solution results are generally better when ART is used by itself. In other words, GRASP weakens ART by making less-than-first-best decisions. The ART memory function appears to be a more effective way of altering the deterministic heuristic process than GRASP alone or the GRASP-ART hybrids.


Problem               CPU time 1
ID        Capacity   A2G    2G   A3G    3G   ART
tc40–1        3       39    44    38    43    39
tc40–2        3       39    44    38    43    37
tc40–3        3       38    43    39    44    38
tc40–4        3       40    45    40    45    39
tc40–5        3       40    44    39    44    39
tc40–1        5       42    47    41    48    38
tc40–2        5       41    47    44    47    38
tc40–3        5       41    46    42    47    38
tc40–4        5       42    47    42    46    37
tc40–5        5       40    45    42    46    38
tc40–1       10       43    47    45    50    37
tc40–2       10       44    48    43    48    39
tc40–3       10       43    47    42    47    39
tc40–4       10       42    49    42    49    40
tc40–5       10       43    48    44    49    41

Avg:                  41    46    41    46   38.5

tc80–1        5      276   331   284   324   264
tc80–2        5      303   325   263   310   272
tc80–3        5      277   332   273   320   271
tc80–4        5      280   319   268   325   261
tc80–5        5      296   327   287   332   258
tc80–1       10      289   329   288   344   274
tc80–2       10      289   334   281   337   273
tc80–3       10      287   335   289   338   272
tc80–4       10      278   333   289   345   284
tc80–5       10      292   351   285   339   284
tc80–1       20      292   344   283   349   270
tc80–2       20      294   343   288   339   266
tc80–3       20      286   334   288   347   281
tc80–4       20      291   345   299   349   278
tc80–5       20      297   339   284   345   270

Avg:                 289   335   283   336  271.9

Avg tc:              165   190   162   191  155.2

1 In seconds on a 133 MHz R4000 SGI Indigo

Table 33.2 CPU times for the tc problems

Problem                     Lower          Gap
ID        Capacity   EW     Bound   Opt    EW    A2G    2G   A3G    3G   ART
te40–1        3     1215    1190          2.10  0.67  0.34  0.25  0.50  0.08
te40–2        3     1134    1103          2.81  0.73  2.45  1.63  1.63  1.00
te40–3        3     1146    1115    Opt   2.78  1.70  0.99  1.35  1.17  0.36
te40–4        3     1153    1132          1.86  0.71  0.18  1.50  0.62  0.62
te40–5        3     1147    1104          3.89  1.54  1.36  1.99  1.72  1.09
te40–1        5      857     830          3.25  2.29  2.53  2.77  3.73  2.05
te40–2        5      839     792          5.93  1.64  3.03  2.65  3.66  1.01
te40–3        5      820     797          2.89  1.63  1.00  1.38  2.26  2.76
te40–4        5      854     814          4.91  1.23  2.70  3.81  1.84  2.21
te40–5        5      816     784          4.08  2.42  2.42  3.57  1.79  1.15
te40–1       10      658     596         10.40  6.71  8.05  6.04  8.72  5.70
te40–2       10      632     573         10.30  4.19  3.49  6.28  5.06  0.70
te40–3       10      596     568          4.93  2.64  2.29  4.23  4.05  1.94
te40–4       10      638     596          7.05  1.68  0.34  3.36  2.35  1.34
te40–5       10      597     572    Opt   4.37  2.27  3.50  3.15  3.50  1.22

Avg 40:                                   4.77  2.14  2.31  2.93  2.84  1.55

te80–1        5     2604    2531          2.88  1.46  1.58  2.53  2.45  1.34
te80–2        5     2633    2522          4.40  3.41  2.58  3.81  3.77  1.94
te80–3        5     2723    2593          5.01  3.39  2.82  4.47  3.47  2.93
te80–4        5     2624    2539          3.35  2.72  2.80  4.33  3.31  1.89
te80–5        5     2593    2458          5.49  1.30  1.83  3.62  2.48  0.90
te80–1       10     1746    1631          7.05  4.90  5.15  6.38  7.42  3.92
te80–2       10     1748    1602          9.11  7.12  7.37  7.80  7.68  5.24
te80–3       10     1828    1660         10.12  5.66  5.72  6.33  6.81  4.58
te80–4       10     1685    1614          4.40  4.03  3.97  4.03  4.71  3.41
te80–5       10     1712    1586          7.94  5.42  4.98  5.30  6.62  3.91
te80–1       20     1330    1256          5.89  5.02  5.10  8.12  6.37  4.38
te80–2       20     1289    1201          7.33  7.74  4.91  7.91  5.58  4.41
te80–3       20     1340    1257          6.60  6.44  4.69  6.60  6.44  3.74
te80–4       20     1343    1247          7.70  8.34  5.37  9.78  8.02  6.26
te80–5       20     1334    1231          8.37  4.14  2.84  4.22  3.49  2.19

Avg 80:                                   6.38  4.74  4.11  5.68  5.24  3.40

Avg 40–80:                                5.57  3.44  3.21  4.31  4.04  2.48

Avg tc–te:                                5.01  2.95  3.14  3.52  3.39  1.95

Table 33.3 Results for the te problems


Problem               CPU time 1
ID        Capacity   A2G    2G   A3G    3G   ART
te40–1        3       39    43    39    44    36
te40–2        3       40    44    39    44    37
te40–3        3       39    44    39    44    36
te40–4        3       40    43    39    44    37
te40–5        3       39    44    39    43    38
te40–1        5       41    46    43    47    38
te40–2        5       43    47    43    49    40
te40–3        5       43    48    44    47    38
te40–4        5       45    48    43    47    38
te40–5        5       42    46    42    47    39
te40–1       10       43    47    45    50    37
te40–2       10       42    48    44    48    42
te40–3       10       42    47    43    49    41
te40–4       10       43    49    43    50    41
te40–5       10       44    48    45    51    40

Avg:                41.6  46.3  41.9  46.9  38.5

te80–1        5      277   321   287   330   260
te80–2        5      266   323   271   330   264
te80–3        5      267   315   268   335   270
te80–4        5      273   324   268   328   271
te80–5        5      289   318   269   332   255
te80–1       10      298   347   297   339   277
te80–2       10      286   326   278   350   274
te80–3       10      293   350   285   354   282
te80–4       10      286   339   294   349   264
te80–5       10      278   333   292   330   294
te80–1       20      288   354   285   338   276
te80–2       20      289   343   307   354   273
te80–3       20      291   350   303   345   292
te80–4       20      296   345   288   342   268
te80–5       20      296   348   295   367   299

Avg:                 285   336   286   342  274.6

Avg te:              163   191   164   194  156.6

Avg te–tc:           189   220   188   223  179.5

1 In seconds on a 133 MHz R4000 SGI Indigo

Table 33.4 CPU times for the te problems


33.5 SUMMARY AND CONCLUSIONS

This paper examines the combination of memory-based and GRASP-based heuristics applied to the CMST. Our experimentation suggests that GRASP, and the combination of GRASP with ART, does not lead to an overall improvement in solution quality compared to ART alone. However, better solutions were found in some cases when using standalone GRASP and GRASP with ART, which shows that the additional variance GRASP introduces into the search through the solution space has some value. Nonetheless, comparison of ART with and without GRASP indicates that average solution quality generally deteriorates when GRASP is added to ART, and that stand-alone ART on average does not benefit from GRASP.

Opportunities for future work include examining the effect of using the percentage version of GRASP instead of the cardinality version used in this paper. In addition, the number of GRASP choices available at each decision point in the EW heuristic can be varied during each iteration of the EW heuristic. For example, we could decrease the number of GRASP choices as the EW heuristic proceeds, resulting in more randomness at the front end of the EW algorithm. Alternatively, the number of GRASP choices could be varied (e.g., randomly) over the many iterations of the EW heuristic.

Acknowledgements

We are grateful to Luis Gouveia for providing us with the bounds for the tc and te test problems. Moreover, the helpful comments and clarifications of Stefan Voß are greatly appreciated. Any remaining errors are our own.

References

[1] Altinkemer, K. and B. Gavish (1986) Parallel savings heuristics for the topological design of local access tree networks, Proc. of the IEEE Conference on Communications.

[2] Altinkemer, K. and B. Gavish (1988) Heuristics with constant error guarantees for the design of tree networks, Management Science, 34, 331–341.

[3] Amberg, A., W. Domschke, and S. Voß (1996) Capacitated minimum spanning trees: Algorithms using intelligent search, Combinatorial Optimization: Theory and Practice, 1, 9–39.

[4] Esau, L.R. and K.C. Williams (1966) On teleprocessing systems design, Part II – A method for approximating the optimal network, IBM Systems Journal, 5(3), 142–147.

[5] Feo, T.A. and M.G.C. Resende (1989) A probabilistic heuristic for a computationally difficult set covering problem, Operations Research Letters, 8, 67–71.


[6] Feo, T.A. and M.G.C. Resende (1995) Greedy randomized adaptive search procedures, Journal of Global Optimization, 6, 109–133.

[7] Gavish, B. (1982) Topological design of centralized computer networks – Formulations and algorithms, Networks, 12, 355–377.

[8] Gavish, B. (1983) Formulations and algorithms for the capacitated minimal directed tree problem, Journal of the Association for Computing Machinery, 30, 118–132.

[9] Gouveia, L. (1993) A comparison of directed formulations for the capacitated minimum spanning tree problem, Telecommunication Systems, 1, 51–76.

[10] Gouveia, L. (1995) A 2n constraint formulation for the capacitated minimum spanning tree problem, Operations Research, 43, 130–141.

[11] Gouveia, L. and P. Martins (1995) An extended flow-based formulation for the capacitated minimal spanning tree, presented at the Third ORSA Telecommunications Conference, Boca Raton, FL.

[12] Gouveia, L. and M.J. Lopes (1997) Using generalized capacitated trees for designing the topology of local access networks, Telecommunication Systems, 7, 315–337.

[13] Hall, L. (1996) Experience with a cutting plane algorithm for the capacitated spanning tree problem, INFORMS Journal on Computing, 8, 219–234.

[14] Hart, J.P. and A.W. Shogan (1987) Semi-greedy heuristics: An empirical study, Operations Research Letters, 6, 107–114.

[15] Karnaugh, M. (1976) A new class of algorithms for multipoint network optimization, IEEE Transactions on Communications, 24, 500–505.

[16] Kershenbaum, A., R. Boorstyn, and R. Oppenheim (1980) Second-order greedy algorithms for centralized network design, IEEE Transactions on Communications, 22, 1835–1838.

[17] Kontoravdis, G. and J.F. Bard (1995) A GRASP for the vehicle routing problem with time windows, ORSA Journal on Computing, 7, 10–23.

[18] Papadimitriou, C.H. (1978) The complexity of the capacitated tree problem, Networks, 4, 217–230.

[19] Patterson, R., E. Rolland, and H. Pirkul (1997) A memory adaptive reasoning technique for solving the capacitated minimum spanning tree problem, Working Paper, University of California, Riverside.


[20] Sharaiha, Y.M., M. Gendreau, G. Laporte, and I.H. Osman (1997) A tabu search algorithm for the capacitated minimum spanning tree problem, Networks, 29, 161–171.