33 MEMORY ADAPTIVE REASONING & GREEDY ASSIGNMENT TECHNIQUES FOR THE CAPACITATED MINIMUM SPANNING TREE PROBLEM

Erik Rolland 1, Raymond A. Patterson 2 and Hasan Pirkul 2

1 Department of Accounting & Management Information Systems, Fisher College of Business, The Ohio State University, Columbus, Ohio 43210, USA.

2 School of Management, The University of Texas at Dallas, Richardson, Texas 75083-0688, USA.

Abstract: It is the purpose of this paper to investigate the effects of adding randomization to a memory-based heuristic. The algorithms we propose are applied to the Capacitated Minimum Spanning Tree problem (CMST), and we study the combined effects of simultaneously applying a memory-based and a random-based heuristic to the CMST. This paper uses the Adaptive Reasoning Technique (ART) and concepts from the greedy randomized adaptive search procedure for solving the CMST. The resulting hybrid procedure is tested against the stand-alone Esau-Williams heuristic procedure, as well as the stand-alone greedy assignment technique. We find that randomization does not constructively add to the memory-based procedure, as ART alone typically outperforms all other approaches in terms of solution quality, while expending a modest amount of computational effort.

33.1 INTRODUCTION

The capacitated minimum spanning tree problem (CMST) plays an important role in the design of backbone telecommunications networks. This problem is NP-complete [18], and various solution methods have been proposed in the literature [4, 10, 13, 3, 20, 19]. In a recent paper, Adaptive Reasoning Technique
486 META-HEURISTICS 98: THEORY & APPLICATIONS
(ART: a new type of adaptive memory-based learning procedure) was described for this problem [19]. This paper extends that previous research by studying the combination of randomized greedy heuristics with the ART memory-based approach.
This paper is structured as follows: the next section outlines the description of the CMST and previous literature. The greedy and memory-based solution procedures are described in Section 33.3. Section 33.4 presents the computational results, and conclusions are summarized in Section 33.5.
33.2 PROBLEM DESCRIPTION AND PREVIOUS RESEARCH
The CMST is the problem of connecting a set of demand nodes to a central node through a minimum-cost tree network. Given a graph G(V,E), where V is the set of demand nodes (with an associated demand vector Dv), E is the set of possible edges in the graph (with an associated arc cost vector Ce and a capacity K), and V* is a designated goal node, the objective is to find a minimum-cost spanning tree rooted at node V* in which no sub-tree branching from node V* contains more than K nodes. Several formulations have been proposed for this problem [7, 8, 9, 10].
One of the first, and best-known, heuristics for the CMST was proposed by Esau and Williams [4], denoted EW. This procedure starts with an "empty" spanning tree (no arcs are used) and then adds arcs in a greedy fashion, based on a modified cost structure and without violating the capacity constraint, until it finds a feasible spanning tree. The EW modified cost for an arc (i, j) is the arc cost (cij) less the minimum cost of connecting directly to the goal node (min{ciV*, cjV*}). Altinkemer and Gavish [1, 2] presented the Parallel Savings Heuristic (PSH) for the CMST, which produces reasonably good solution values. Hall [13] presents a cutting plane algorithm that often produces good solutions but typically requires excessive CPU time. Gouveia and Martins [11] also use a cutting plane algorithm, which may require extraordinarily long CPU time.
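As a concrete illustration, the EW tradeoff rule described above can be sketched in a few lines. This is a simplified, hedged reading of the procedure (unit demands, a single capacity K on the number of nodes per subtree); the function and variable names are ours, not from the paper.

```python
from itertools import combinations

def esau_williams(cost, root, K):
    """Simplified Esau-Williams sketch for the CMST with unit demands.

    cost -- symmetric matrix of arc costs; root -- index of the goal node V*;
    K    -- maximum number of nodes allowed in any subtree hanging off the root.
    Returns (list of tree arcs, total cost).
    """
    nodes = [v for v in range(len(cost)) if v != root]
    comp = {v: frozenset([v]) for v in nodes}   # component containing each node
    gate = {comp[v]: v for v in nodes}          # node whose direct root link each component keeps
    edges = []
    while True:
        best_t, best = 0.0, None
        for i, j in combinations(nodes, 2):
            if comp[i] == comp[j] or len(comp[i]) + len(comp[j]) > K:
                continue
            # paper's modified cost: arc cost less the cheaper direct link to the goal node
            t = cost[i][j] - min(cost[i][root], cost[j][root])
            if t < best_t:                      # only accept a genuine saving
                best_t, best = t, (i, j)
        if best is None:
            break
        i, j = best
        gi, gj = gate.pop(comp[i]), gate.pop(comp[j])
        merged = comp[i] | comp[j]
        # the merged component keeps the cheaper of the two existing root links
        gate[merged] = gi if cost[gi][root] <= cost[gj][root] else gj
        for v in merged:
            comp[v] = merged
        edges.append((i, j))
    tree = edges + [(g, root) for g in gate.values()]
    return tree, sum(cost[i][j] for i, j in tree)
```

On a small instance the procedure merges nodes whose connecting arc is cheap relative to their direct root links, and stops merging a pair as soon as the capacity K would be exceeded.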
Given the limitations and shortcomings of the cutting-plane and heuristic solution procedures, a meta-heuristic algorithm is a viable solution approach. Sharaiha et al. [20] developed a tabu search procedure for solving the CMST. Their algorithm is designed specifically for the CMST, using techniques related to problem structure to improve on the solutions. Amberg et al. [3] proposed various tabu search-based procedures, as well as a simulated annealing approach, which produce excellent results for the CMST.
Numerous heuristics for solving combinatorial optimization problems are based on greedy selection schemes. These heuristics greedily assign values to decision variables, but typically prohibit assignments that violate the problem constraints. The EW procedure discussed above is exactly such a technique. The problem with such greedy techniques is typically that they incorrectly assign values to decision variables early in the selection process, eliminating later selections that might lead to a better objective function value for
the problem. There are at least two approaches that can help circumvent this weakness in a greedy procedure:
1. Use memory to inhibit poor selections by the greedy procedure. While iteratively applying a greedy procedure, memory can be used to keep track of recent choices, in order to prohibit poor selections by the greedy procedure. This assumes that the greedy heuristic is executed repeatedly, and that memory is updated from one iteration to the next.
2. Avoid always selecting the best greedy option by choosing randomly among the first-, second-, third-, or lower-ranked greedy options.
Second-order heuristics [15, 16, 12] have previously been proposed for network optimization and the CMST. These approaches share some constructs similar to those found in ART, but without the dynamic memory functions of ART. Second-order heuristics were found to be successful for the CMST in [16].
Patterson et al. [19] demonstrated a different use of memory to manipulate the EW greedy heuristic procedure: additional problem constraints are found through a learning and memory process for the CMST.
Hart and Shogan [14] proposed an algorithm for the vehicle routing problem based on randomized greedy selection, and compared the cardinality implementation to the percentage implementation of randomized greedy selection for the Clarke and Wright heuristic. These ideas were later refined by Feo and Resende [5, 6], and form the basis for the greedy randomized adaptive search procedure (GRASP). With GRASP, the first-best greedy choice is not always made. Instead, a random choice is made among the first, second, third, or worse best greedy choices. Furthermore, a local search component is added. GRASP has been found to be a very efficient solution approach for numerous combinatorial optimization problems [6, 17]. It is the purpose of this paper to investigate the application of a modified and simplified version of GRASP to the CMST, and to study the combined effects of simultaneously using a memory-based heuristic and this so-called greedy assignment technique to solve the CMST.
33.3 CONSTRAINT-BASED ADAPTIVE REASONING AND GRASP
For most problems there exists a greedy heuristic that produces solutions for the problem. We know that for matroid problems (such as the uncapacitated version of the minimum spanning tree problem), a greedy heuristic always produces the optimal solution. Many greedy heuristics are constructive; that is, they build up complete solutions from scratch, one solution component at a time. While studying the behavior of a greedy heuristic, such as the EW procedure [4] for the CMST, one will often see that the greedy heuristic really only makes a handful of crucial mistakes [15]. These mistakes can be divided into two categories: primary mistakes, which are due to the greedy selection criteria, and secondary mistakes, which are due to limited choices caused by earlier primary mistakes. The primary mistakes are the crucial ones, and, given an opportunity, these are the mistakes that we would prevent.
ART is an adaptive reasoning technique that attempts to "learn" about the primary mistakes of a heuristic. In the ART framework, the greedy heuristic is executed repeatedly, and for each new execution we probabilistically introduce constraints that may prohibit certain solution elements from being considered by the greedy heuristic. That is, a prohibition of the edge (i, j) leads to the corresponding decision variable being set to zero (xij = 0, where xij = 1 if the edge from node i to node j is included in the solution and xij = 0 otherwise). The prohibitions may last for more than one iteration, and as such we may at any time have a collection of active constraints due to the prohibited solution elements. The length of the prohibition is determined probabilistically, but can often be a function of the cost of the solution element. The active constraints are held in a short-term memory. A long-term memory holds information regarding which constraints were in the active memory for the best set of solutions (for example, for the best ten solutions found so far). We can augment these two memory functions with some basic principles of learning. For example, we can impose a certain degree of memory loss during the execution of the algorithm (some amount of memory loss is often important, even in human memory, since it enables us to explore "old territory" in light of newly discovered evidence). We can also impose a "propensity to learn" control variable, which can be varied over the execution time of ART (this is similar to a principle observed in humans: we are less "willing" to learn as we grow older). The end result of the ART algorithm is a set of prohibitions that, when used in conjunction with a greedy algorithm, enables us to find an optimal, or close to optimal, solution. The execution cycle of ART is as follows:
1. The EW heuristic is used to solve the CMST, subject to any additional non-redundant problem constraints created by ART. The additional constraints are determined in the same manner in which tabu restrictions are set in tabu search (the ART method as applied to the CMST sub-problem alone is fully described by Patterson et al. [19]).
2. The CMST solution found in Step 1 is evaluated:
(a) Additional non-redundant constraints are added, and set to be in effect for a certain number of time periods (based on the parameters of the learning principles).
(b) Additional non-redundant constraints whose time duration has expired are removed.
(c) A long-term memory is updated to hold the 10 best found solutions.
(d) Elements of learning are invoked as necessary.
3. Repeat from Step 1 until a pre-set maximum number of iterations has been reached.
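The execution cycle above can be sketched as a short loop. This is an illustrative reading only: the probabilities, prohibition durations, and memory structures here are our own simplifications (and `solve_greedy` is a hypothetical callback standing in for the constrained EW heuristic); the paper's actual parameter schedule appears in Figure 33.2.

```python
import random

def art(solve_greedy, iterations=350, prohibit_prob=0.02, memory_size=10, rng=None):
    """Illustrative ART loop.

    solve_greedy(prohibited_arcs) -> (arcs, cost) runs the primary heuristic
    (e.g. EW) subject to the currently prohibited solution elements.
    """
    rng = rng or random.Random(0)
    active = {}                    # short-term memory: prohibited arc -> expiry iteration
    long_term = []                 # long-term memory: (cost, constraint set) of best runs
    best_cost, best_sol = float('inf'), None
    for it in range(iterations):
        # memory loss: drop prohibitions whose duration has expired
        active = {a: t for a, t in active.items() if t > it}
        sol, cost = solve_greedy(set(active))
        if cost < best_cost:
            best_cost, best_sol = cost, sol
        # record which constraints were active, keep only the best few
        long_term.append((cost, frozenset(active)))
        long_term = sorted(long_term, key=lambda x: x[0])[:memory_size]
        # probabilistically prohibit elements of the current solution
        for arc in sol:
            if rng.random() < prohibit_prob:
                active[arc] = it + rng.randint(2, 8)   # duration partly random
    return best_sol, best_cost
```

Plugging in any greedy solver that respects the prohibition set reproduces the overall shape of the cycle: solve, evaluate, add and expire constraints, repeat.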
A conceptual depiction of ART is given in Figure 33.1, and a more detailed flow chart in Figure 33.2. The four main components of ART are as follows:
[Figure 33.1 ART Components and Execution Cycle. The primary heuristic (a constructive greedy procedure, here the Esau-Williams heuristic) runs with the active memory of non-redundant constraints imposed on it; the active memory is then updated, the long-term memory stores the constraints for the best solutions, learning principles govern both memories, and the best solution is output.]
[Figure 33.2 ART: A Detailed View. Flow chart: starting at Cycle = 0, each cycle sets the learning rate to 2 × 1.8^(Cycle − 1) and repeatedly solves the CMST with the EW heuristic (with or without GRASP), evaluating each solution and creating and removing constraints. If no improvement in the best feasible solution occurs in the last 15 iterations of a cycle, the memory duration is shortened for some proportion of memory and the learning ingrainment rate is reduced by 10%. After 5 cycles, memory is reset to the best solution and all active constraints are ingrained by an extra 10%; the procedure stops once n phases have been executed.]
1. Learning principles are applied to an EW solution to determine which decision variables are prohibited and for how long. These principles are loosely based on observations of human learning. The learning rules rest on common-sense notions of memory and learning such as: forgetting; reinforcing stronger memories through ingrainment; not trying to teach an old dog new tricks; variation in the willingness to learn over time; and variation in the time duration that learning is remembered.
2. A long-term memory is formed from the short-term memory associated with the set of best solutions. The long-term memory holds a summary of the non-redundant constraints found in the best solutions, and as such is a characterization of the combination of additional constraints that are fruitful for the problem at hand.
3. A short-term memory holds the iteration at which a constraint on a decision variable expires. This constraint is similar to a tabu restriction that eliminates a decision variable from consideration. The memory is set when a decision variable is selected by the primary heuristic. With a given probability, the short-term memory prohibits the use of the decision variable for a certain number of iterations. The time duration of the prohibition depends on both the cost associated with the arc and some degree of randomness. Since the EW primary heuristic makes few mistakes, the probability of actually prohibiting an arc is typically low for the CMST (the starting parameter used is 2%, increased by a factor of 1.8 for each phase of the algorithm; see Figure 33.2).
4. A constructive primary heuristic (EW) is used iteratively to solve the problem subject to the evolving set of additional problem constraints. The constraints that have been added prohibit particular decision variables from being included in the primary heuristic's selection process. The ART program alters the greedy selection sequence made by the primary heuristic algorithm: by altering a choice in the solution path of the greedy heuristic (through additional constraints), the subsequent choices made by the greedy heuristic are altered.
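The prohibition-probability schedule mentioned in item 3 can be written out as a one-line helper. The 2% base and 1.8 growth factor come from the text above; the function name is ours.

```python
def prohibition_probability(phase, base=0.02, factor=1.8):
    """Probability of prohibiting a selected arc in a given phase (1-indexed).

    Phase 1 uses the 2% starting parameter; each subsequent phase
    multiplies it by 1.8, as described in the text and Figure 33.2.
    """
    return base * factor ** (phase - 1)
```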
Greedy Assignment Technique

According to Kontoravdis and Bard [17], "greedy randomized adaptive search procedures combine greedy heuristics, randomization, and local search. The computations are performed in two steps or phases. The first step includes the sequential construction of feasible solutions. At each iteration, all feasible moves are ranked according to an adaptive greedy function that takes into account the present state and one is randomly selected from a restricted candidate list. In the second step, local search is used to arrive at a local optimum." Our interpretation of GRASP for the CMST omits the local search procedure and includes only step one of traditional GRASP implementations. An alternative implementation would be to add a local search procedure to the top solutions found using both ART and GRASP. It is anticipated that, in general, a starting position with lower-cost solutions
before the application of a local search procedure will result in lower-cost solutions after the local search procedure is applied. Thus, GRASP as implemented in this paper does not include the local search procedure. A detailed outline of our greedy assignment technique (retained here under the acronym GRASP) follows.
With GRASP, the first-best greedy choice is not always made [14, 5, 6, 17]. Rather, a candidate list of the top greedy choices is created for each decision in the heuristic, from which the selection is made randomly. For the CMST, a simple GRASP implementation can thus be obtained by introducing randomized selection into EW: one selects the arc randomly among the n best alternatives (where n can be determined experimentally). This is referred to as n-GRASP; the basic EW procedure uses n = 1. When combining ART with GRASP, the modified GRASP-capable EW heuristic replaces the traditional EW heuristic inside the ART framework (with n determined experimentally); no other modification to ART is necessary. A stand-alone GRASP procedure is then obtained by simply disabling the memory within ART. Testing the GRASP procedure therefore means running the GRASP-capable EW procedure numerous times within the ART looping framework: the minimum number of repeated executions of the GRASP-enabled EW is 350, with n constant within this execution cycle. The maximum number of repeated executions depends on how often a new "best solution" is found within the ART looping framework. Allowing executions of the GRASP-enabled EW to continue as long as improvements are found facilitates a fair comparison with ART.
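The cardinality-based n-GRASP selection rule can be sketched as follows. This is a minimal illustration with names of our own choosing; `tradeoff` stands in for the EW modified cost of each candidate arc.

```python
import random

def grasp_select(candidates, tradeoff, n=2, rng=random):
    """Pick uniformly at random among the n best candidates.

    The n lowest-tradeoff candidates form the restricted candidate list (RCL);
    with n = 1 this reduces to the pure greedy (plain EW) choice.
    """
    rcl = sorted(candidates, key=tradeoff)[:n]
    return rng.choice(rcl)
```

Inside the EW loop, the deterministic arg-min over feasible arcs would simply be replaced by a call such as `grasp_select(feasible_arcs, tradeoff, n)`.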
33.4 COMPUTATIONAL RESULTS
EW [4] is extremely fast, but does not produce high-quality solutions. The EW, ART, GRASP, and ART-GRASP hybrid heuristics were coded in Fortran 77 and executed on a Silicon Graphics INDY (R4000-based MIPS) computer with 32 MB of memory. All problems were solved with unit demand data. Twenty well-known problem sets were obtained from the OR-Library.1 Amberg et al. [3] and Sharaiha et al. [20] both obtained good results on these sets based on tabu search. The lower bounds on the optimal solutions (which in most cases are the optimal solutions) were kindly provided to us by Luis Gouveia ([10] and personal communication based on his research in progress). Some of these lower bounds are newer than the bounds reported in Sharaiha et al. [20]. The tc problem instances have the goal node in the center of the graph, whereas the te problem instances have the goal node at the end (corner) of the node scatter. The problems consist of 40-node and 80-node graphs. The 40-node problems were solved using arc capacities of 3, 5, and 10, whereas the 80-node problems were solved with arc capacities of 5, 10, and 20. Thus, a total of 60
1 http://mscmga.ms.ic.ac.uk/info.html
[Tables 33.1-33.4 report, for each problem ID and capacity: the EW solution value, the lower bound, whether the bound is known to be optimal, and the percentage gaps for EW, A2G, 2G, A3G, 3G, and ART.]
known problem instances were solved. The 60 problem instances are all fully connected graphs.
Complete results for the center goal node problems tc are presented in Tables 33.1 and 33.2. The average gap between the bounds and the EW procedure was 4.44%, compared to 2.47% for the size-2 GRASP-ART hybrid, 3.06% for size-2 GRASP, 2.73% for the size-3 GRASP-ART hybrid, 2.74% for 3-GRASP, and 1.43% for ART.
Complete results for the edge goal node problems te are presented in Tables 33.3 and 33.4. The gaps were 5.57% for EW, compared to 3.44% for the size-2 GRASP-ART hybrid, 3.21% for size-2 GRASP, 4.31% for the size-3 GRASP-ART hybrid, 4.04% for 3-GRASP, and 2.48% for ART. Across both te and tc, the average gaps were 5.01% for EW, compared to 2.95% for the size-2 GRASP-ART hybrid, 3.14% for size-2 GRASP, 3.52% for the size-3 GRASP-ART hybrid, 3.39% for 3-GRASP, and 1.95% for ART.
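For reference, the percentage gaps quoted above are computed relative to the lower bound, assuming the standard definition; the helper name is ours.

```python
def pct_gap(heuristic_value, lower_bound):
    """Percentage by which a heuristic solution exceeds the lower bound."""
    return 100.0 * (heuristic_value - lower_bound) / lower_bound
```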
The results indicate that ART often improves on GRASP for a particular candidate-list size, but that solution results are generally better when ART is used by itself. In other words, GRASP weakens ART by making less-than-first-best decisions. The ART memory function appears to be a more effective way of altering the deterministic heuristic process than GRASP alone or the GRASP-ART hybrids.
33.5 CONCLUSIONS

This paper examines the combination of memory-based and GRASP-based heuristics applied to the CMST. Our experimentation suggests that GRASP, and the combination of GRASP with ART, does not lead to an overall improvement in solution quality as compared to ART alone. However, better solutions were found in some cases when using stand-alone GRASP and GRASP with ART, which shows that the additional variance GRASP introduces into the search through the solution space has some value. Nevertheless, comparison of ART with and without GRASP indicates that average solution quality generally deteriorates when GRASP is added, and that stand-alone ART on average does not benefit from GRASP.
Opportunities for future work include examining the effect of using the percentage version of GRASP instead of the cardinality version used in this paper. In addition, the number of GRASP choices available at each decision point in the EW heuristic could be varied during each iteration of the EW heuristic. For example, we could decrease the number of GRASP choices as the EW heuristic proceeds, resulting in more randomness at the front end of the EW algorithm. Alternatively, the number of GRASP choices could be varied (e.g., randomly) over the many iterations of the EW heuristic.
Acknowledgements
We are grateful to Luis Gouveia for providing us with the bounds for the tc and te test problems. Moreover, the helpful comments and clarifications of Stefan Voß are greatly appreciated. Remaining errors are our own.
References
[1] Altinkemer, K. and B. Gavish (1986) Parallel saving heuristics for the topological design of local access tree networks, Proceedings of the IEEE Conference on Communications.

[2] Altinkemer, K. and B. Gavish (1988) Heuristics with constant error guarantees for the design of tree networks, Management Science, 34, 331–341.

[3] Amberg, A., W. Domschke, and S. Voß (1996) Capacitated minimum spanning trees: Algorithms using intelligent search, Combinatorial Optimization: Theory and Practice, 1, 9–39.

[4] Esau, L.R. and K.C. Williams (1966) On teleprocessing systems design, Part II – A method for approximating the optimal network, IBM Systems Journal, 5(3), 142–147.

[5] Feo, T.A. and M.G.C. Resende (1989) A probabilistic heuristic for a computationally difficult set covering problem, Operations Research Letters, 8, 67–71.
[6] Feo, T.A. and M.G.C. Resende (1995) Greedy randomized adaptive search procedures, Journal of Global Optimization, 6, 109–133.

[7] Gavish, B. (1982) Topological design of centralized computer networks – Formulations and algorithms, Networks, 12, 355–377.

[8] Gavish, B. (1983) Formulations and algorithms for the capacitated minimal directed tree problem, Journal of the Association for Computing Machinery, 30, 118–132.

[9] Gouveia, L. (1993) A comparison of directed formulations for the capacitated minimum spanning tree problem, Telecommunications Systems, 1, 51–76.

[10] Gouveia, L. (1995) A 2n constraint formulation for the capacitated minimum spanning tree problem, Operations Research, 43, 130–141.

[11] Gouveia, L. and P. Martins (1995) An extended flow based formulation for the capacitated minimal spanning tree, presented at the Third ORSA Telecommunications Conference, Boca Raton, FL.

[12] Gouveia, L. and M.J. Lopes (1997) Using generalized capacitated trees for designing the topology of local access networks, Telecommunication Systems, 7, 315–337.

[13] Hall, L. (1996) Experience with a cutting plane algorithm for the capacitated spanning tree problem, INFORMS Journal on Computing, 8, 219–234.

[14] Hart, J.P. and A.W. Shogan (1987) Semi-greedy heuristics: An empirical study, Operations Research Letters, 6, 107–114.

[15] Karnaugh, M. (1976) A new class of algorithms for multipoint network optimization, IEEE Transactions on Communications, 24, 500–505.

[16] Kershenbaum, A., R. Boorstyn, and R. Oppenheim (1980) Second-order greedy algorithms for centralized network design, IEEE Transactions on Communications, 22, 1835–1838.

[17] Kontoravdis, G. and J.F. Bard (1995) A GRASP for the vehicle routing problem with time windows, ORSA Journal on Computing, 7, 10–23.

[18] Papadimitriou, C.H. (1978) The complexity of the capacitated tree problem, Networks, 4, 217–230.

[19] Patterson, R., E. Rolland, and H. Pirkul (1997) A memory adaptive reasoning technique for solving the capacitated minimum spanning tree problem, Working Paper, University of California, Riverside.
[20] Sharaiha, Y.M., M. Gendreau, G. Laporte, and I.H. Osman (1997) A tabu search algorithm for the capacitated minimum spanning tree problem, Networks, 29, 161–171.