
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 33, NO. 2, APRIL 2004 925

MAGMA: A Multiagent Architecture for Metaheuristics

Michela Milano and Andrea Roli

Abstract—In this work, we introduce a multiagent architecture called the MultiAGent Metaheuristic Architecture (MAGMA) conceived as a conceptual and practical framework for metaheuristic algorithms. Metaheuristics can be seen as the result of the interaction among different kinds of agents: The basic architecture contains three levels, each hosting one or more agents. Level-0 agents build solutions, level-1 agents improve solutions, and level-2 agents provide the high-level strategy. In this framework, classical metaheuristic algorithms can be smoothly accommodated and extended. The basic three-level architecture can be enhanced with the introduction of a fourth level of agents (level-3 agents) coordinating lower-level agents. With this additional level, MAGMA can also describe, in a uniform way, cooperative search and, in general, any combination of metaheuristics.

We describe the entire architecture, the structure of agents in each level in terms of tuples, and the structure of their coordination as a labeled transition system. We propose this perspective with the aim to achieve a better and clearer understanding of metaheuristics, obtain hybrid algorithms, suggest guidelines for a software engineering-oriented implementation, and for didactic purposes. Some specializations of the general architecture will be provided in order to show that existing metaheuristics [e.g., greedy randomized adaptive procedure (GRASP), ant colony optimization (ACO), iterated local search (ILS), memetic algorithms (MAs)] can be easily described in our framework. We describe cooperative search and large neighborhood search (LNS) in the proposed framework exploiting level-3 agents. We show also that a simple hybrid algorithm, called guided restart ILS, can be easily conceived as a combination of existing components in our framework.

Index Terms—Agents, combinatorial optimization, constraint satisfaction, metaheuristics.

I. INTRODUCTION

METAHEURISTIC techniques are used to solve hard problems (generally NP-hard problems) instead of, or in conjunction with, exact algorithms. When the problem size becomes very large, exact algorithms might be too computationally expensive. In these cases, approximate algorithms (which do not guarantee to find an optimal solution) are a very effective alternative to complete methods. Furthermore, the usefulness of metaheuristics has been recently proved in successful combination with complete methods (like complete search strategies in constraint programming [26], [55]). Metaheuristics are approximate algorithms which encompass and combine constructive methods, local search strategies, local optima escaping strategies, and population-based search. They include, but are not restricted to, tabu search [33], simulated annealing [46], evolutionary computation [11], memetic algorithms (MAs) [53], scatter search [32], ant colony optimization (ACO) [14], [21], greedy randomized adaptive procedure (GRASP) [23], iterated local search (ILS) [64], [65], and variable neighborhood search [35], [36]. Surveys and current research on metaheuristics can be found in [1], [8], [14], [71].

Manuscript received July 30, 2002; revised February 7, 2003. This work was supported by the SOCS project, funded by the CEC, under Contract IST-2001-32530. This paper was recommended by Associate Editor F. Gomide.

M. Milano is with the DEIS—University of Bologna, 40136 Bologna, Italy (e-mail: [email protected]).

A. Roli is with the University “G. D’Annunzio,” Chieti, Italy, and also with the DEIS—University of Bologna, 40136 Bologna, Italy (e-mail: [email protected]).

Digital Object Identifier 10.1109/TSMCB.2003.818432

In this paper, we revisit metaheuristics from a multiagent perspective and provide a framework called the MultiAGent Metaheuristic Architecture (MAGMA), where existing and hybrid metaheuristic algorithms can be smoothly accommodated. We first provide the motivations of this paper in Section II. In Section III, we survey the most commonly used classifications of metaheuristics. Section IV defines basic concepts for the definition of metaheuristics. Then, we describe MAGMA in Section V and provide a formal definition of the agents and their coordination in Section VI. In Section VII, we show some specializations of MAGMA that describe some among the best-known metaheuristic algorithms. We describe the MAGMA architecture of GRASP, ACO, ILS, and MAs. In addition, cooperative search and large neighborhood search (LNS) are considered. We also describe a hybrid algorithm, called guided restart ILS, which can be easily implemented in our framework. Finally, we present related work and some conclusions.

II. MOTIVATIONS

Classifications of metaheuristics are useful when they help to outline structural similarities and differences among algorithms. Once some common key properties have been recognized, a general description can be formulated, and the algorithms of that class can be defined as specializations of it. A metaheuristics framework can be useful for

• comparing existing algorithms;
• designing (new) hybrid algorithms;
• supporting software engineering.

In this work, we first try to identify the common principles and basic components underlying metaheuristic algorithms. These components will be encapsulated in communicating software components with clear structure and interface. Components are organized in levels. Components belonging to lower levels are simple and, in some cases, simply react to events, while components belonging to higher levels have deliberative capabilities, embed complex strategies, and behave autonomously. For these reasons, in our system, components are called agents, even if the starting point of the agent view of our architecture is more oriented to software engineering rather than to distributed artificial intelligence and multiagent systems [74]. In fact, our

1083-4419/04$20.00 © 2004 IEEE



architecture can be implemented as a centralized system where, instead of agents, we have software components such as objects.

We define a four-level architecture. In each level, one or more agents act: solution builders belong to the first level, solution improvers to the second, strategic agents to the third, and coordinating agents to the fourth. Agents belonging to the same level have the same structure. We will clearly define the structure of each agent and its interface. Agents belonging to different levels communicate to exchange results and information. We define the coordination scheme among agents as a labeled transition system.

The distinction and separation between different roles and levels is important for the comprehension of metaheuristics, their comparison, their design, and their implementation. In fact, in this way we are able to bring together flexibility in computing, possibly distributed processing, and heterogeneous forms of realization. The proposed framework indeed fulfills the above-mentioned features of a metaheuristic framework. First, it enables comparison of existing algorithms from a structural perspective, as far as the number of agents, their structure, and their communications are concerned. For example, algorithms providing a feedback from higher levels to lower ones are those enabling a deliberative capability, like a form of learning or adaptation in the metaheuristic algorithm. Indeed, the feedback loop means that the search history is exploited to dynamically balance intensification and diversification, to adaptively tune parameters, and to apply learning techniques, like the ones used in [19], [44], [52]. In this perspective, the framework can be useful for teaching metaheuristics, with the aim of describing the basic algorithmic components, their interfaces, and the effect of communication.

Second, the architecture can be helpful in the design of (new) hybrid algorithms. For instance, we can start from an existing algorithm and add a further communication link between two levels. The new information exchange can be used to influence the behavior of one or more agents (e.g., exploitation of the search history). We may also add or substitute one or more agents in any level. Another possibility, strongly used in cooperative search, concerns the design of hybrid algorithms putting together existing metaheuristics. Cooperative search has been proved useful for solving large-scale optimization problems and multicriteria optimization problems.

Finally, we can support software engineering, since we isolate basic components that can be easily composed and reused. In fact, we can change only one part of the architecture by maintaining the interface and changing the implementation, without rewriting the whole application. The user can now compose his/her own strategy by putting together software components, without starting from scratch each time an algorithm has to be implemented. We believe this is the first step toward the integration of metaheuristic components in open systems, where the ever-growing need of extra functionalities and integration with other types of software is strongly required.

III. METAHEURISTIC ALGORITHMS CLASSIFICATIONS

There are different ways to classify and describe metaheuristic algorithms. Perhaps the most familiar and easiest is the description based on the algorithm origins. There are nature-inspired algorithms, such as genetic algorithms and ant algorithms, and non-nature-inspired ones, such as tabu search and ILS. This classification is quite rough for two main reasons: i) recent hybrid algorithms, e.g., MAs [53], do not completely fit either class or, in a sense, fit both at the same time, and ii) this classification is not helpful to compare algorithms.

Algorithm 1 Adaptive Memory Programming
Initialize the memory
while stopping condition not met do
    Generate a new provisory solution s using data stored in the memory
    Improve s with a local search; let s' be the improved solution
    Update the memory using pieces of knowledge brought by s'
end while

Another way to classify metaheuristics is by differentiating them between single-solution and population heuristics. The first are commonly called trajectory methods and encompass local search-based metaheuristics, such as tabu search, ILS, and variable neighborhood search. They all share the property of describing a trajectory (continuous or not) in the search space during the search process. Population metaheuristics, conversely, generate search processes that dynamically describe the evolution of a set of points in the search space. The most important population-based methods are evolutionary computation, MAs, scatter search, and ACO.

A very interesting framework is called adaptive memory programming (AMP) [68]. AMP describes a class of metaheuristics as algorithms which use a memory to store information collected during the search process; this information is used to bias the solution construction and the future search (hence the term adaptive). The AMP high-level algorithm, as it appears in [68], is reported in Algorithm 1.

Within this general framework, several metaheuristics can be described, such as tabu search, ACO, and scatter search. Tabu search explicitly handles a short-term memory (the tabu list), which is used to avoid cycling among the same set of states. In population-based metaheuristics, memory is implicitly embedded in shared data structures (the population itself in evolutionary algorithms and pheromone trails in ACO). AMP clearly points out the importance of the use of memory during the solution construction and the search. Nevertheless, the chosen level of abstraction only allows the concepts of memory, stored information, and solutions to be introduced informally, and its usefulness is limited to a general description of algorithms which make use of search history. AMP cannot be used as a framework which helps to design hybrid algorithms, as its descriptive level is too general.
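The AMP loop of Algorithm 1 can be sketched in Python on a toy problem (maximize the number of 1-bits in a string); the frequency-based memory, the function names, and the parameter values below are our own illustration, not part of [68]:

```python
import random

def amp_onemax(n=20, iters=50, seed=0):
    """Illustrative Adaptive Memory Programming loop on a toy problem:
    maximize the number of 1-bits. The memory stores a per-position
    frequency of 1s in improved solutions and biases construction."""
    rng = random.Random(seed)
    memory = [0.5] * n                       # initialize the memory
    best_f = 0
    for _ in range(iters):                   # while stopping condition not met
        # generate a provisory solution using data stored in the memory
        s = [1 if rng.random() < p else 0 for p in memory]
        # improve it with a (very small) local search: flip the first 0-bit
        if 0 in s:
            s[s.index(0)] = 1
        # update the memory using knowledge brought by the improved solution
        memory = [0.9 * p + 0.1 * b for p, b in zip(memory, s)]
        best_f = max(best_f, sum(s))
    return best_f
```

Here the memory biases future constructions toward bit patterns seen in improved solutions; the tabu list and the pheromone trails play the analogous role in tabu search and ACO.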

Other ways of describing metaheuristic algorithms have been proposed. For example, in [37], a framework for evolutionary computation algorithms is discussed, where evolutionary algorithms are described depending on seven main features: individual representation, evolution process, neighborhood structure, information sources, infeasibility, intensification, and diversification.



Fig. 1. Conceptual process from the original problem to the representation used in the algorithm. The model definition is algorithm dependent: Solution space, neighborhood structure, and objective function have to be tailored to the chosen metaheuristics.

IV. PROBLEM, MODELS, AND FITNESS LANDSCAPE

In this section, we introduce the problem classes metaheuristics usually attack and we briefly discuss some design issues of problem modelization. Furthermore, we introduce the fitness landscape, which is an important component of our definition of search process.

A. Search Problem Formalization

The widest class of problems metaheuristics deal with are constraint satisfaction problems (CSPs) and combinatorial optimization problems (COPs). They can be defined by

• a set of variables X1, ..., Xn, representing problem entities;
• variable domains D1, ..., Dn, i.e., values that variables can assume;
• constraints among variables, limiting the combinations of feasible assignments;
• an objective function to be optimized (minimized or maximized).

In CSPs, the objective function is constant, as they are solved by any feasible solution, whilst for COPs it is usually a nontrivial function. There are important variants of these problems: MAX-CSPs [29], in which the objective is to maximize the number of satisfied constraints (or a weighted sum of them), and multicriteria COPs [34], [72], in which more than one criterion has to be optimized.

A model is a nonambiguous and formal definition of the problem: which are the main entities, the constraints, the costs (or values) associated with configurations of entities, and the goal.

Often the problem entities are directly modeled and used by the algorithm. The choice of a specific model is, in general, “algorithm-oriented” (see Fig. 1), because it has to be tailored to the algorithm that will be used to solve the problem. This holds both for systematic constructive methods and metaheuristics. For example, a problem modeled via binary variables and linear inequalities will be effectively solved through integer programming techniques, while the same model is not in general the best one for a constraint programming finite-domain solver. In the metaheuristics context, if, for instance, simple genetic algorithms are used, solutions may be coded as binary strings. The model may also define an alternative objective function, for example by introducing penalties for infeasible solutions, or by using smoothing functions to shape the search landscape. Observe that the search process may also dynamically change models (e.g., by swapping between two different objective functions [5]).

B. Fitness Landscapes

The local search process can be viewed as an exploration of a landscape aimed at finding a global optimum, or, at least, a “good” local optimum.

A fitness landscape is defined by a triple (S, N, F), where

• S is the set of solutions (or states);
• N : S → 2^S is the neighborhood function that defines the neighborhood structure, by assigning to every s ∈ S a set of states N(s) ⊆ S;
• F is the objective function, which in this specific case is called the fitness function.

The fitness landscape (FL) can be interpreted as a graph (see Fig. 2) in which nodes are solutions (with their fitness value) and arcs represent the neighborhood relation between states.

The neighborhood function implicitly defines an operator φ that takes a state s1 and transforms it into another state s2 ∈ N(s1). Conversely, given an operator φ, it is possible to define a neighborhood of a state s1:

Nφ(s1) = {s2 | s2 can be obtained by one application of φ on s1}.

Usually, the operator is symmetric: if s2 is a neighbor of s1, then s1 is a neighbor of s2. In a graph representation (like the one depicted in Fig. 2), undirected arcs represent symmetric neighborhood structures. A desirable property of the neighborhood structure is to allow a path between every pair of nodes (i.e., the neighborhood is strongly optimally connected) or at least from any node to an optimum (i.e., the neighborhood is weakly optimally connected). Nevertheless, there are some exceptions



Fig. 2. Example of undirected graph representing a fitness landscape. Each node is associated with a solution s and its corresponding fitness value F(s). Arcs represent transitions between states by means of φ. Undirected arcs correspond to symmetric neighborhood structures.

of effective neighborhood structures that do not enjoy this property [54].

The notion of FL and neighborhood enables us to view local search algorithms as search processes in a graph. The search starts from an initial node and explores the graph, moving from a node to one of its neighbors, until it reaches a termination condition.
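This graph-walk view can be written down directly. The sketch below assumes a best-improvement strategy and a toy integer landscape; both choices are our own illustration, not the paper's:

```python
def hill_climb(start, neighbors, fitness, max_steps=1000):
    """Local search as a walk on the fitness-landscape graph: repeatedly
    move to the best neighbor while it improves on the current state.
    `neighbors` plays the role of N, `fitness` the role of F."""
    s = start
    for _ in range(max_steps):
        best = max(neighbors(s), key=fitness, default=s)
        if fitness(best) <= fitness(s):
            return s           # local optimum: no improving neighbor
        s = best
    return s

# Toy landscape: states 0..10, neighbors differ by 1, fitness peaks at 7.
neighbors = lambda s: [x for x in (s - 1, s + 1) if 0 <= x <= 10]
fitness = lambda s: -(s - 7) ** 2
print(hill_climb(0, neighbors, fitness))  # -> 7
```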

There exists another definition of fitness landscape [42], which is more formal and general and can be used here as well. However, for the purpose of this contribution, it is sufficient to know that this formal definition can deal with states (nodes of the search graph) composed of populations of solutions. The key point is the introduction of multisets, which are sets with possible repetitions of elements. A multiset substitutes a single solution, and the operator transforms a multiset into another one. Furthermore, the operator is defined as a function φ : M(S) × M(S) → [0, 1], which assigns a probability to each possible transition between states.1 This definition of FL enables population heuristics, like genetic algorithms, to be dealt with in the same way as simple local search algorithms.

There are some important design issues in developing a search algorithm over a FL: the solution representation, the neighborhood structure, and the fitness function. Furthermore, there are some ways to cope with constraints. For example, it is possible to map a CSP or a COP into a free optimization problem, where there are no constraints and infeasible solutions are penalized by modifying the fitness function [22].
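A minimal sketch of this penalty mapping, with an illustrative penalty weight W and a toy instance of our own:

```python
W = 10.0   # illustrative penalty weight; must outweigh feasible gains

def penalized_fitness(assignment, objective, constraints):
    """Free-optimization view of a constrained problem: the raw objective
    minus a fixed penalty for every violated constraint."""
    violations = sum(0 if c(assignment) else 1 for c in constraints)
    return objective(assignment) - W * violations

# Toy usage: maximize x + y subject to x != y.
obj = lambda a: a["x"] + a["y"]
cons = [lambda a: a["x"] != a["y"]]
print(penalized_fitness({"x": 2, "y": 2}, obj, cons))  # -> -6.0
```

Any unconstrained local search can then explore infeasible states while still being driven back toward feasibility.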

It is worth underlining that, given a fitness function, the choice of an operator determines the properties of the landscape. This is the “one operator, one landscape” concept introduced in [42], [43]. The algorithm performance is strongly affected by the model chosen and, in general, no best choice exists which leads to the best performance with every algorithm/problem combination. This empirical conjecture is theoretically supported by the no free lunch theorem [75].

C. Examples

In this section, we present some examples of the previously introduced concepts.

1 M(S) denotes the (infinite) set of multisets whose elements are drawn from S.

Fig. 3. Example of 2-exchange move. The tour can be reconstructed in only one way.

Fig. 4. Example of 3-exchange move. After the removal of three arcs, there are six possible ways to reconstruct the tour. In this figure, just one of them is shown.

1) Traveling Salesman Problem: The traveling salesman problem (TSP) is defined as follows: Given an undirected graph (supposed fully connected), with n nodes and each arc associated with a positive value, find the Hamiltonian tour with the minimum total cost. From the problem definition, several models are possible. We will discuss two among the most common ones.

Model 1
A solution (i.e., a Hamiltonian tour) can be represented as a sequence of arcs (vi, vj), where vi and vj are nodes of the graph. The solution cost is given by the sum of the costs of the arcs in the tour.

Two very successful neighborhood structures that can be introduced in the above-given representation are called 2-exchange and 3-exchange, which lead to the 2-opt and 3-opt improvement algorithms [41]. Both algorithms need to start with a feasible solution. The 2-exchange operator deletes two arcs from the feasible tour and reconnects the two paths into a tour in the other way (see Fig. 3). In Fig. 4, the 3-exchange move is sketched: Three arcs are removed and the feasible tour is then reconstructed by using different arcs (in this case, there are six possible ways to reconstruct the cycle).

The FLs generated by the two moves are different, as the defined neighborhoods differ in size and connectivity properties. The 2-exchange neighborhood has a cardinality proportional to n^2, while for 3-exchange it is proportional to n^3. Observe that the solutions defined in this model are feasible for the original problem as well, because moves preserve the Hamiltonian tour structure.
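A possible Python sketch of the 2-exchange move, with the tour stored as a node sequence and the segment between the two removed arcs reversed; the index conventions (and the omission of the closing arc of the tour) are our own simplification:

```python
def two_exchange(tour, i, j):
    """2-exchange: remove arcs (tour[i], tour[i+1]) and (tour[j], tour[j+1]),
    then reconnect by reversing the inner segment, which is the only way
    to rebuild a single tour (cf. Fig. 3)."""
    assert 0 <= i < j < len(tour) - 1
    return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]

def two_exchange_neighborhood(tour):
    """All proper 2-exchange neighbors (j >= i + 2, so the move actually
    changes the tour); their number grows proportionally to n^2."""
    n = len(tour)
    return [two_exchange(tour, i, j)
            for i in range(n - 2) for j in range(i + 2, n - 1)]

t = [0, 1, 2, 3, 4]
print(two_exchange(t, 0, 2))   # -> [0, 2, 1, 3, 4]
```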

Model 2
A solution (i.e., a Hamiltonian tour) can be alternatively represented as a sequence of values (v1, ..., vn), where v1, ..., vn are nodes of the graph and the solution represents the sequence of nodes in the tour. For example, such a sequence represents the



tour that visits the nodes in the listed order and closes by returning to the first one. The solution cost is given by the sum of the costs of the arcs in the tour.

A simple move operator may be the exchange of any pair of nodes in the sequence. If the initial solution is a feasible tour, the result of a move is guaranteed to be feasible. Similarly, it is possible to define moves that pick up k nodes and exchange them, leading to the definition of neighborhoods Nk. Observe that these operators define neighborhoods of cardinality proportional to n^k.
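The pairwise-exchange neighborhood of Model 2 can be sketched as follows (illustrative code; the n(n-1)/2 count is the quadratic growth for pair exchanges):

```python
from itertools import combinations

def swap_neighborhood(seq):
    """Model 2 neighborhood: exchange any pair of positions in the node
    sequence. A feasible tour stays feasible, and there are n(n-1)/2
    neighbors, i.e., cardinality proportional to n^2."""
    out = []
    for i, j in combinations(range(len(seq)), 2):
        t = list(seq)
        t[i], t[j] = t[j], t[i]
        out.append(t)
    return out

print(len(swap_neighborhood([1, 2, 3, 4])))  # -> 6
```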

It is worth noting that, if the chosen representation were instead interpreted as assigning each node its position in the tour, we should introduce an additional constraint to avoid subtours.

2) Satisfiability Problem: In this example, we discuss a possible representation of the satisfiability problem (SAT) when solved via local search.

SAT is defined as follows: Given a set of clauses, each of which is the logical disjunction of literals (a literal is a variable or its negation), the goal is to find an assignment to the variables that satisfies all the clauses (a clause is satisfied by any assignment that makes at least one of its literals true).

We model the problem by defining the variable assignments as states. In this example, we define the move operator as the flip of one variable. This move operator induces a neighborhood structure such that the neighbors of a state s are all the states at Hamming distance equal to 1 from s (this is the definition of neighborhood most often adopted in SAT problems). The SAT problem does not involve any optimization criterion, since any solution satisfying all the constraints is accepted. However, solving SAT with local search, we need to introduce in the model an objective function which evaluates the assignments with respect to their “closeness” to the satisfying ones. A typical objective function is the number of satisfied clauses, but other choices are possible.2 Thus, feasible solutions for the SAT problem are mapped onto optimal solutions in the model.
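The SAT model above (states = assignments, moves = single flips, objective = number of satisfied clauses) can be sketched as follows; the integer-literal encoding and the tiny instance are our own illustrative choices:

```python
def num_satisfied(clauses, assignment):
    """Objective from the text: count satisfied clauses. Encoding (ours):
    a clause is a list of integer literals, +v for variable v, -v for its
    negation; `assignment` maps variables to booleans."""
    return sum(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

def flip_neighbors(assignment):
    """All states at Hamming distance 1 from `assignment`: one flip each."""
    out = []
    for v in assignment:
        a = dict(assignment)
        a[v] = not a[v]
        out.append(a)
    return out

clauses = [[1, -2], [-1, 2], [2, 3]]
a = {1: True, 2: False, 3: False}
print(num_satisfied(clauses, a))  # -> 1
```

A GSAT-style improver would then repeatedly move to the flip neighbor with the highest count until all clauses are satisfied or a step limit is reached.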

V. MAGMA: MULTIAGENT METAHEURISTIC ARCHITECTURE

The weak notion of agent [24], [59], [74], [76] states that an agent is a (software) system that enjoys the following properties: autonomy, social ability, reactivity, and pro-activeness. In this context, we just adopt the metaphor of agent, referring to a system able to build a solution, move over a landscape, communicate with other agents, be active (goal-oriented), and, possibly, be adaptive. Agents in our architecture are reactive, since they must act as soon as their input is provided. Agents are autonomous, since they may need to have deliberative capabilities, as in the most general case they incorporate complex search strategies. The social ability is required only for communication; we do not take into account agent society formation and behavior. Our agents are proactive, since their goal is to apply the strategy to the model.

We devise different kinds of agents, with different functionalities, perspectives, and goals. These agents act in a multilevel

2 See, for example, nonoblivious functions for MAXSAT problems in [6].

Fig. 5. Multilevel architecture for metaheuristic algorithms. Note that only numbered arrows are indeed used by existing metaheuristics, while non-numbered arrows can be used for extending existing algorithms.

architecture (MAGMA, depicted in Fig. 5), composed of some levels, each of which corresponds to a different level of abstraction.

At each level there are one or more specialized agents, each implementing an algorithm. LEVEL-0 provides a feasible solution (or a set of solutions) for the upper level; it can be considered the solution level. LEVEL-1 deals with solution improvement, and agents perform a trajectory in the fitness landscape until a termination condition is verified; this can be defined as the level which deals with the neighborhood structure. LEVEL-2 agents have a global view of the space or, at least, their task is to guide the search toward promising regions and provide mechanisms for escaping from local optima. Therefore, this can be defined as the landscape level.

Classical metaheuristic algorithms can be easily described via these three levels. In fact, we will show in Section VII that using only three levels we are able to describe simple metaheuristics. By simple, we mean nonhybrid algorithms. When various search strategies (complete or incomplete) are combined, we have a cooperative search framework, where metaheuristics and other search methods coexist and exchange results. In this case, an additional level (LEVEL-3) coordinating different search strategies should be devised. Therefore, this can be defined as the coordination level, and it deals with different landscapes and strategies.
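As a structural illustration only (the class names and the toy bitstring problem are ours, not an API defined by the paper), the three lower levels can be sketched as cooperating components:

```python
import random

class Level0Builder:
    """Solution level: build a starting solution (random initialization)."""
    def build(self, n):
        return [random.randint(0, 1) for _ in range(n)]

class Level1Improver:
    """Neighborhood level: follow an improving trajectory of 1-bit flips."""
    def improve(self, s, f):
        improved = True
        while improved:
            improved = False
            for i in range(len(s)):
                t = s[:]
                t[i] = 1 - t[i]
                if f(t) > f(s):
                    s, improved = t, True
        return s

class Level2Strategy:
    """Landscape level: restart from fresh solutions and keep the best."""
    def run(self, builder, improver, f, n, restarts=5):
        best = None
        for _ in range(restarts):
            s = improver.improve(builder.build(n), f)
            if best is None or f(s) > f(best):
                best = s
        return best

f = sum                                   # toy objective: count of 1-bits
best = Level2Strategy().run(Level0Builder(), Level1Improver(), f, n=8)
print(sum(best))                          # -> 8 (all-ones is always reached)
```

Swapping any one class (e.g., a greedy builder for the random one) changes the algorithm without touching the other levels, which is the composability argument made in Section II.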

Coordination among agents involves both communication and synchronization. A formal and general coordination model will be provided in Section VI. Communications between any two levels are possible; therefore, an algorithm can be described as the result of the interaction of some agents (algorithmic components), each specialized for a specific task. Communication among agents can be implemented through any kind of mechanism and protocol. The designer is free to choose the most suitable communication scheme. For example, agents can use a blackboard or a message passing mechanism.

Agents can be formally defined with tuples, whose elements describe computational capabilities, knowledge, and goals. This definition encompasses all the current metaheuristics. Each tuple is composed of two main components: the model M and the strategy S. In the following, we formalize the tuples for each level, whereas in Section VII, we will specialize the tuples on specific algorithms.
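As an illustration of this two-component structure, the ⟨model, strategy⟩ tuple could be rendered in code as follows (a hypothetical Python sketch of ours; the class and field names are not part of the architecture):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Agent:
    """A MAGMA agent as a <model, strategy> tuple (illustrative sketch)."""
    model: Any                            # level-specific knowledge
    strategy: Callable[[Any, Any], Any]   # algorithm applied to model + input

    def execute(self, message: Any = None) -> Any:
        """Run the strategy on the model and an incoming message."""
        return self.strategy(self.model, message)

# A trivial LEVEL-0 agent: its model is a list of components and its
# strategy builds a "solution" by copying them, ignoring the message.
l0 = Agent(model=[1, 2, 3], strategy=lambda m, _msg: list(m))
```

Higher-level agents would differ only in what they store as model (solutions, histories, other agents' tuples) and in the algorithm bound as strategy.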


A. Definition of the Multilevel Architecture

LEVEL-0 To this level belong solution builder agents, whose task is to produce starting solutions. LEVEL-0 agents, hereinafter referred to as L0-agents, deal with problem entities such as variables, domain values, constraints, and partial solutions, depending on the representation chosen. L0-agents' goal is to construct an initial solution for the upper level. L0-agents may exploit strategies like random initialization, greedy construction, probabilistic construction (e.g., pheromone based), etc. Their computational capabilities are usually quite limited, but can be very complex, as we will show describing LNS. They are generally reactive agents, whose behavior is based upon a heuristic criterion (which may be dynamically changed by upper-level agents).

Agents of LEVEL-0 are defined by the tuple ⟨M0, S0⟩, where M0 = ⟨C, D, Γ, f⟩: C are the components, the domains D define the values that components can assume, Γ are the constraints, and finally, f is the objective function. S0 is the strategy, which will be specialized depending on the metaheuristic algorithm we are describing. We will provide some examples of strategies in Section VII. The strategy is the actual constructive algorithm; for instance, it can be a random initialization, a greedy or a probabilistic construction, etc. Components can be variables or partial solutions, or any element which is used as a building block for solutions. Constraints are needed for the agent to construct solutions which are feasible with respect to the model (indeed, it is possible that they are infeasible for the original problem, as shown in Section IV-C2, where an example of LS on SAT is discussed).

It is worth noting that L0-agents are constructive agents that are able to build a feasible solution for the model M0. This solution is computed depending on the strategy S0, which can be an approximate algorithm, such as a greedy strategy, that simply returns the first solution found. We will show that L0-agents can also implement a complete algorithm, like branch and bound, which computes the optimal solution with respect to the objective function. The fact that a single agent can compute the optimal solution has two advantages: first, our architecture can be used to describe not only metaheuristics but also complete methods; second, MAGMA can describe hybrid algorithms, such as cooperative search and LNS, where complete solvers are integrated into metaheuristics [15], [26], [55] and L0-agents then optimally solve subproblems.

As an output, LEVEL-0 provides one or more solutions to the upper levels. In addition, LEVEL-0 can be triggered each time a restart is performed.

LEVEL-1 This level contains solution improver agents. Each agent searches in a fitness landscape (FL) with a local search, trying to improve the solution it has received from another agent. LEVEL-1 agents (L1-agents) implement various search algorithms, and they can constitute either single independent search agents or cooperating agents. L1-agents deal with solutions and neighborhood structures. The usual concept of short-term memory belongs to this level: agents can use their recent past to intensify the search or to escape from local optima. Therefore, L1-agents are usually not just reactive, but they may have deliberative capabilities, even if rather limited.

L1-agents are defined by the tuple ⟨M1, S1⟩, where M1 = ⟨Sol, N, H, f⟩. S1 is the strategy used to improve the solution, and again, it will be specialized on the specific metaheuristic. Let us consider now the model M1: Sol is the initial solution, N is the neighborhood structure, H is the search history, and f is the fitness function.

The solution provided by LEVEL-0 (arrow 1 in Fig. 5) or any other agent (see for example arrow 4) is the starting point of the search, guided by the strategy, which, in general, makes use of the search history and the fitness function. A very important element is the neighborhood structure, which defines the portion of search space visible from each state. The neighborhood structure can also be dynamic (as for tabu search), or the agent can dynamically change neighborhood during the search process, if suggested by upper level agents (arrow 4). For instance, the combination of different neighborhoods has proven very effective in variable neighborhood search [36] and in a new metaheuristic called multineighborhood search [18]. In general, every agent can receive not only simple pieces of information (such as solutions), but also more complex data such as neighborhoods, statistics, or parts of the search history.
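A minimal solution improver of this kind can be sketched as follows (an illustrative hill climber of our own; the neighborhood and fitness functions are toy examples, not from the paper):

```python
def hill_climb(start, neighbors, fitness, max_steps=1000):
    """Minimal L1-agent sketch: follow improving moves in the
    neighborhood until a local optimum is reached."""
    current = start
    for _ in range(max_steps):
        best = max(neighbors(current), key=fitness, default=None)
        if best is None or fitness(best) <= fitness(current):
            return current          # local optimum: no improving neighbor
        current = best
    return current

# Toy landscape: maximize -(x - 3)^2 over the integers,
# with neighborhood N(x) = {x - 1, x + 1}.
result = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
```

Here the neighborhood function plays the role of N in the tuple, and the trajectory stops exactly at the state the text describes as the output of LEVEL-1: a local optimum.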

The output of LEVEL-1 is the improved solution, which usually represents a local optimum or a state corresponding to search stagnation.

It is worth stressing that the more complex an agent is, the more difficult the separation of its basic algorithmic components. Therefore, instead of having LEVEL-1 agents with very sophisticated dynamic neighborhood structures, this architecture suggests designing simpler agents coordinated by an upper level agent. Thus, for example, metaheuristics with dynamic neighborhoods can be described as a system composed of LEVEL-1 agents, each with only one neighborhood, which alternate their search process under the coordination of a strategic agent of LEVEL-3.

LEVEL-2 LEVEL-2 agents (L2-agents) are strategic agents, since their main role is to balance intensification and diversification. Today's most effective metaheuristics usually have a nontrivial strategy to dynamically balance intensification and diversification, achieved by the use of the search history (descriptions of such advanced strategies can be found in [33]). Moreover, L2-agents can sample or abstract the search space looking for promising regions to explore. With respect to the general architecture depicted in Fig. 5, they can perform activities like the following.

• Store the best solutions found among those provided by LEVEL-1 (arrow 3).

• Suggest what building blocks (e.g., partial solutions) have the highest probability of being part of the optimal solution (arrow 7).

• Suggest what regions of a (single) landscape are the most promising (arrow 4).

• Dynamically bias the intensification/diversification balance (arrows 4 and 7).

• Switch between two landscapes (arrow 4).

L2-agents deal with landscapes. Long-term memory is implemented and exploited at this level, where data concerning the whole history of the search process of lower level agents are stored. One of the main uses of long-term memory is the implementation of diversification strategies.

L2-agents are described by the tuple ⟨M2, S2⟩, where M2 = ⟨Sols, H, f⟩, and S2 is the strategy used for guiding lower level agents. The model contains Sols, i.e., the set of solutions of L1-agents, the history H, and the fitness function f. The set of solutions provided by L1-agents can be used in population-based methods, in which recombination and mutation operators are applied to the population. The history is the structured ensemble of data collected during the overall search process. It includes information on every agent's search process, for instance, the best solution found. This information is exploited by the strategy, which guides lower level agents.

Using levels 0, 1, and 2, we are able to describe almost all simple metaheuristic algorithms, i.e., algorithms that do not combine more than one strategy in each level. However, more complex metaheuristics can be devised, namely those integrating more than one (complete or incomplete) strategy. Suppose, for example, that we have to describe a cooperative search mechanism that combines two metaheuristics and exchanges solutions. In this case, while the two metaheuristics can be settled in the three-level architecture, something is missing, i.e., the coordination of the two metaheuristics. We therefore need an additional level that coordinates the two algorithms, identifies which parts of the solutions should be exchanged, if any, and so on. In general, we need some deliberation capability that organizes lower level agents. For this purpose, we introduce an additional level, which will be used to describe cooperative algorithms.

LEVEL-3 Agents belonging to this level (called L3-agents) coordinate lower level agents' behavior. These agents know the model and strategy of all lower level agents and coordinate their behavior and communications. For example, in a cooperative search framework, where more than one metaheuristic algorithm is involved, the information the algorithms communicate to each other is decided by a level-3 agent. If solutions are exchanged, the level-3 agent should, for instance, have a notion of distance between solutions and exchange solutions that are different enough from each other. This capability could also be applied to select, adapt, and improve the lower level agents in population-based metaheuristics (those actions involve communications expressed by arrows 5 and 6). As a final example, which will be explained in detail in Section VII-F, we mention LNS, one of the most successful techniques for merging complete and incomplete methods. The idea is to start with a solution, fix some parts of it, and optimally solve the rest. In this case, level-3 agents decide which parts should be fixed and which subproblems should be solved in order to eventually improve the solution.
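The fix-and-reoptimize idea behind LNS can be sketched as follows (our own toy on bitstrings, where the "complete solver" is plain enumeration of the freed variables; all names are illustrative):

```python
import itertools
import random

def lns(fitness, n_bits, destroy_size=3, iters=50, seed=0):
    """Toy LNS sketch: repeatedly fix most of the incumbent solution and
    solve the freed part exactly by enumeration (the role the text
    assigns to a complete L0-agent)."""
    rng = random.Random(seed)
    sol = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(iters):
        # the L3-style decision: which part of the solution to relax
        free = rng.sample(range(n_bits), destroy_size)
        best_fill, best_f = None, float("-inf")
        # complete search over the freed variables only
        for bits in itertools.product((0, 1), repeat=destroy_size):
            cand = sol[:]
            for pos, b in zip(free, bits):
                cand[pos] = b
            f = fitness(cand)
            if f > best_f:
                best_fill, best_f = cand, f
        # the incumbent assignment is among the enumerated candidates,
        # so this acceptance never worsens the solution
        sol = best_fill
    return sol
```

On a separable toy objective such as OneMax (`fitness=sum`), every freed block is set optimally, so the incumbent converges to the all-ones string once each position has been relaxed at least once.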

Thus, L3-agents perform meta-reasoning on the behavior of agents at lower levels. The computational capabilities of L3-agents can be the highest among all the levels.

L3-agents are described by the tuple ⟨M3, S3⟩, where S3 is the strategy used for guiding lower level agents. The model M3 contains the complete description of the lower level agents, and the strategy defines the specific coordination scheme.

It is worth noting that each level has the same structure, composed of a model and a strategy. Observe that the higher the level, the higher the abstraction. In fact, M0 is strongly problem dependent: components represent problem entities and constraints are relations among them. On the contrary, M1 manipulates solutions independently from their semantics, and M2 further enlarges the gap between the problem and its representation, since it focuses on intensification and diversification strategies. Finally, M3 can be considered as a meta-model containing the tuples of the lower level agents.

An interesting observation is that levels 2 and 3 have a similar characteristic: they guide underlying levels. However, while level 2 coordinates trajectory methods, level 3 coordinates metaheuristic algorithms; both have deliberative capabilities.

In MAGMA, the definition of the environment is very simple. What agents perceive as "external" is just the set of agents with whom they communicate. Therefore, for any agent A, the environment is defined as the set of agents that send information to A and the set of agents receiving information from A.

VI. COORDINATION

On the most general level of abstraction, we can describe agent coordination as the composition of basic states and actions. Agents wait for an input, then execute the algorithm specified by the strategy and send the result to the destination (i.e., the specified receiver agent).

A particularly suitable formalism to describe this kind of coordination is that of labeled transition systems (LTSs) [31]. LTSs are mainly used to formally define the coordination among processes, but they are also applied in the context of multiagent systems [73].

Formally, an LTS is a pair ⟨P, →⟩, where P is a set of processes and → is an infix predicate. A is a set of possible actions that can be performed by the processes. Actions are defined with respect to a chosen level of abstraction; therefore, they can be general actions (e.g., execute the construction algorithm) or specific actions (e.g., perform the greedy solution construction as construction algorithm). Usually, the following notation is used: p –a→ p′, meaning that the process p can perform the action a, and after its completion, it reaches a state where p′ is the remaining part of the process. Actions of A can be composed of computational activities (in a finite number for a finite time interval) and can also be the synchronization of a process with the environment or the receipt of a signal sent by the environment.

LTSs define the result of the execution of all the allowed actions for every process of the system. In case of alternatives (i.e., a process is in a state such that it is allowed to perform more than one action), LTSs do not specify the choice among the possible cases.

For the sake of readability, we will indicate state transitions and not process transitions, i.e., s –a→ s′ means that the agent in state s can perform action a, and after its completion, it will be in state s′.

We formalize the coordination among agents by means of LTSs, assuming the most general level of abstraction, which corresponds to the level of abstract classes in object-oriented terminology. On this level of abstraction, the actions agents can


perform are three: send, execute, and receive. The actions send and receive refer to the communication between agents (e.g., send solution to L1-agent, receive solution from L0-agent). send can also have as arguments a list of agents, along with the respective messages to be sent. The action execute encapsulates the overall agent computation, i.e., we adopt a black-box description of the computational part of the agents. The action execute is an abstraction of the execution of the particular algorithm specified by the strategy of the agent tuple.

On the same level of abstraction, at each time stamp agents can assume one out of three states: idle, ready, and done. The state idle means that the agent is waiting for an input to start the execution of the algorithm. Agents are in state ready when they have received the needed input and they can start their execution. Finally, done corresponds to the state reached after the termination of the execution (i.e., when the agent is ready to send its output to the receiver agent).

It is also possible that an agent needs more than one input before it can start the execution of the strategy. Therefore, we also add a fourth state that represents the condition in which an agent is waiting for the completion of the collection of inputs. We call this state collect.

The corresponding labeled transition system, which describes all the agents of the multilevel architecture, is composed of four possible transitions:

• ⟨idle⟩ –receive→ ⟨ready⟩
• ⟨collect⟩ –receive→ ⟨ready⟩
• ⟨ready⟩ –execute→ ⟨done⟩
• ⟨done⟩ –send→ ⟨idle⟩

Every specialization of metaheuristics in MAGMA can be defined by instantiating this general LTS.
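One possible encoding of this general agent cycle (the transition table below uses our own labels for the four states and three actions described above, and is an illustrative assumption, not the paper's notation):

```python
# States: idle, collect, ready, done.  Actions: receive, execute, send.
TRANSITIONS = {
    ("idle", "receive"): "ready",     # single-input agent gets its input
    ("collect", "receive"): "ready",  # multi-input agent completes its inputs
    ("ready", "execute"): "done",     # run the strategy
    ("done", "send"): "idle",         # deliver the result and wait again
}

def step(state, action):
    """Return the successor state, or raise if the action is not allowed."""
    if (state, action) not in TRANSITIONS:
        raise ValueError(f"action {action!r} not allowed in state {state!r}")
    return TRANSITIONS[(state, action)]

# One full cycle of a single-input agent:
trace = ["idle"]
for action in ("receive", "execute", "send"):
    trace.append(step(trace[-1], action))
```

A specialization then amounts to attaching concrete messages and algorithms to receive, execute, and send, as done for each metaheuristic in Section VII.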

Finally, we note again that the notion of environment used in MAGMA is fairly simple, since it is defined, for every agent, as the set of senders and receivers.

VII. SPECIALIZING MAGMA

MAGMA can be exploited either as a conceptual and implementation framework for metaheuristic algorithms or as a metaphor for designing multiagent algorithms for optimization problems. In this section, we concentrate on the first topic, and we show how some of the most effective metaheuristic algorithms can be described in terms of MAGMA. We will describe some specializations of the architecture, showing that both trajectory and population-based algorithms can be easily defined in our framework.

In the following subsections, we will specialize the tuples describing the agents involved in each metaheuristic and the agent coordination through an LTS. Obviously, the models depend on the particular problem to be solved, thus here they are skipped (some examples have been described in Section IV-C). Thus, for each tuple, we will describe only the strategy.

There are metaheuristics that can be described by L0-agents and L1-agents alone. For example, basic tabu search can be realized with a single L0-agent, which randomly constructs an initial solution, and a single L1-agent that performs the search with one or more tabu lists. Of course, tabu search metaheuristics often make use of long-term memory, and therefore they need a further agent at LEVEL-2, which balances intensification and diversification, for instance by enlarging and restricting the tabu list. Typical examples of dynamic tabu length tuning can be found in [33] (for instance, a strategy called strategic oscillation) and in [5] and [6], where an adaptive mechanism is proposed. The issue involving the dynamic control of intensification and diversification is one among the most relevant in metaheuristics research [5], [6], [8], [10], [13].

Fig. 6. Multilevel architecture version of GRASP.

The detailed description and discussion of the metaheuristic algorithms is beyond the scope of this paper, and they are presented just in their simplest and most general form. It is worth noting that variants of these algorithms can be easily obtained by changing the algorithms implemented at any level. Moreover, it is always possible to introduce more than one agent per level. Communications between levels are very important as well, and in principle they involve all possible combinations of links between levels, even if the metaheuristic algorithms designed so far use just some of all the possible communication links.

A. GRASP

We start our series of examples with a very simple algorithm: GRASP [23]. This algorithm is composed of a greedy construction procedure and a local search procedure (see the sketched Algorithm 2). The construction procedure builds a solution by applying a greedy function which, at each step, adds a randomly chosen component among the ones which have the highest greedy function value. Then, a local search phase starts; the choice of the local search algorithm depends on the problem to be solved. GRASP is easily described in MAGMA: The first level generates a solution by means of a greedy algorithm, a single L1-agent performs local search, and finally an L2-agent keeps track of the best solution found. Communications are unidirectional, and information flows from LEVEL-0 to LEVEL-1 and from LEVEL-1 to LEVEL-2. In Fig. 6, the MAGMA version of GRASP is depicted.

Algorithm 2 GRASP
while stopping criterion not satisfied do
  Construct a solution s with a greedy randomized procedure
  Apply local search to s
  Memorize the best solution found
end while
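The loop above can be sketched on a toy selection problem (pick the k largest numbers), with a restricted-candidate-list construction and a swap-based local search. This is our own illustrative instantiation, not the paper's implementation:

```python
import random

def construct(items, k, rcl_size, rng):
    """Greedy randomized construction: repeatedly pick at random among
    the rcl_size best remaining items (the restricted candidate list)."""
    pool, sol = sorted(items, reverse=True), []
    for _ in range(k):
        choice = rng.choice(pool[:rcl_size])
        pool.remove(choice)
        sol.append(choice)
    return sol

def local_search(sol, items):
    """Hill climbing: swap a chosen item for a larger unchosen one."""
    improved = True
    while improved:
        improved = False
        for i, out in enumerate(sol):
            for cand in items:
                if cand not in sol and cand > out:
                    sol[i] = cand
                    improved = True
                    break
    return sol

def grasp(items, k, iters=10, rcl_size=3, seed=0):
    """GRASP loop: construct, improve, memorize the best (L2-agent role)."""
    rng, best = random.Random(seed), None
    for _ in range(iters):
        sol = local_search(construct(items, k, rcl_size, rng), items)
        if best is None or sum(sol) > sum(best):
            best = sol
    return sorted(best, reverse=True)
```

The three functions line up with the three agents: `construct` is the L0-agent, `local_search` the L1-agent, and the best-so-far bookkeeping in `grasp` the L2-agent.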

The instantiation of the strategy in each tuple for GRASP is the following:


• S0 is a greedy random constructive strategy;
• S1 is a simple local search strategy implementing hill climbing;
• S2 stores the current best solution.

Coordination among agents composing GRASP can be defined by specializing the LTS given in Section VI. We always suppose that agents' models are instantiated by an initialization process that we call Init, except when explicitly stated that (a part of) the model is instantiated by an L3-agent. Init sends a message inst containing all the information needed by the agents.

The coordination model for the L0-agent is the following:

• ⟨idle⟩ –receive inst from Init→ ⟨ready⟩
• ⟨ready⟩ –execute S0→ ⟨done⟩
• ⟨done⟩ –send Sol to L1-agent→ ⟨idle⟩

The coordination model for the L1-agent is the following:

• ⟨idle⟩ –receive Sol from L0-agent→ ⟨ready⟩
• ⟨ready⟩ –execute S1→ ⟨done⟩
• ⟨done⟩ –send Sol′ to L2-agent→ ⟨idle⟩

Finally, for the L2-agent, the coordination model is

• ⟨idle⟩ –receive Sol′ from L1-agent→ ⟨ready⟩
• ⟨ready⟩ –execute S2→ ⟨done⟩
• ⟨done⟩ –send→ ⟨idle⟩

Algorithm 3 ACO
while stopping criterion not satisfied do
  Construct a set of solutions S by using pheromone trails and heuristic information
  Apply local search to every solution in S
  Update pheromone trails
end while

B. ACO

ACO is a well-known population-based metaheuristic for designing algorithms for COPs [19]–[21]. The inspiring idea behind ACO is the foraging behavior of real ants and the way they communicate. Real ants deposit a pheromone trail while they are walking and select their direction in a stochastic way. The higher the pheromone value on a path, the higher the probability that an ant chooses that path. Artificial ants are simple agents that construct a solution by choosing the next component in a probabilistic way (a function of the pheromone). In ACO, pheromone is a means to communicate good solution components. Several implementations of ACO have been proposed, often with different choices for the construction rules and the way to update pheromone trails [9], [21], [30], [58], [66], [67].
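The pheromone-biased construction and update cycle can be illustrated with a deliberately simplified sketch on bitstrings (our own toy variant; real ACO algorithms differ in their construction and update rules):

```python
import random

def aco_binary(fitness, n_bits, ants=10, iters=30, rho=0.1, seed=0):
    """Toy ACO sketch: each ant builds a bitstring bit by bit, where the
    pheromone value tau[i] is the probability of setting bit i to 1;
    the best solution found so far reinforces its trail."""
    rng = random.Random(seed)
    tau = [0.5] * n_bits                  # uniform initial pheromone
    best, best_f = None, float("-inf")
    for _ in range(iters):
        for _ in range(ants):
            sol = [1 if rng.random() < t else 0 for t in tau]
            f = fitness(sol)
            if f > best_f:
                best, best_f = sol, f
        # evaporation plus reinforcement toward the best-so-far solution
        tau = [(1 - rho) * t + rho * b for t, b in zip(tau, best)]
    return best

# OneMax toy objective: maximize the number of ones.
solution = aco_binary(sum, 8)
```

The feedback loop the text highlights is visible here: the pheromone updated by the L2-agent role (the last line of the loop) biases the next round of L0-agent constructions.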

A generic ACO algorithm is sketched in Algorithm 3. In Fig. 7, the MAGMA specialization of ACO is depicted: We have L0-agents (the ants) providing initial solutions by using a constructive procedure biased by pheromone trails and heuristic information. For brevity, we will indicate with τ the matrix of pheromone values. L1-agents perform local search, and one L2-agent updates τ. The last piece of information (the new pheromone values) is then used again by L0-agents to construct

Fig. 7. Multilevel architecture version of ACO with local search.

new solutions. We observe that in this case there is also communication from the third level to the first, which means that the process is self-adapting, since a feedback loop involves the initial and the final steps.

The instantiations of the strategy in each tuple for ACO are the following.

• S0 is a probabilistic constructive procedure.
• S1 can be any trajectory method.
• S2: the L2-agent stores the current best solution and updates the pheromone trails.

The coordination model for the L0-agent is the following.

• ⟨idle⟩ –receive inst from Init→ ⟨ready⟩ (The first iteration is performed on the basis of the initial values for the pheromone.)
• ⟨idle⟩ –receive τ from L2-agent→ ⟨ready⟩ (After the first iteration, the τ values are given by the L2-agent. Observe that it is not possible that both transitions from state idle are active at the same time.)
• ⟨ready⟩ –execute S0→ ⟨done⟩
• ⟨done⟩ –send Sol to L1-agent→ ⟨idle⟩

The coordination model for the L1-agent is the following:

• ⟨idle⟩ –receive Sol from L0-agent→ ⟨ready⟩
• ⟨ready⟩ –execute S1→ ⟨done⟩
• ⟨done⟩ –send Sol′ to L2-agent→ ⟨idle⟩

Finally, for the L2-agent, the coordination model is

• ⟨collect⟩ –receive Sol′ from L1-agents→ ⟨ready⟩
• ⟨ready⟩ –execute S2→ ⟨done⟩
• ⟨done⟩ –send τ to L0-agents→ ⟨idle⟩

C. ILS

ILS is a simple but powerful metaheuristic algorithm [64], [65]. It applies local search (or a more general trajectory method) to an initial solution until it finds a local optimum; then, it perturbs the solution and restarts local search from the perturbed state. The importance of the perturbation (the change in the solution) is obvious: a very small change in the solution might not allow the search to escape from the basin of attraction of the local optimum; on the other hand, a too strong modification can be comparable to a simple random restart. To meet these requirements, several criteria have been adopted, most of which use the history of the search. A very simple version of ILS is described in Algorithm 4.


The design of ILS algorithms has several degrees of freedom in the choice of the initial solution construction procedure, the perturbation, and the acceptance criteria.

In the MAGMA framework, ILS can be described as follows (see Fig. 8): One L0-agent provides an initial solution (either randomly or heuristically generated), then it stops its activity. At LEVEL-1, one solution improver agent applies a local search algorithm to the solution and, when it reaches a local optimum, it stops, waiting for a new solution to improve. At LEVEL-2, the agent keeps track of the search process of the local search agent and, when the latter stops, it provides the new starting solution as a modification of the current one. Observe that, in this case, there is a continuous communication between LEVEL-1 and LEVEL-2, while LEVEL-0 participates just in the initialization, or in a random restart3 (not considered in Fig. 8). It can be observed that the structure of ILS is very similar to the structure of GRASP. However, the level-2 agent in ILS is much more intelligent than that of GRASP. In fact, it should guide the perturbation of the level-1 agent toward unexplored (and promising) areas.

Algorithm 4 ILS
Generate initial solution s0
Execute LS from s0 until a local optimum s is found
while stopping criterion not satisfied do
  Perturb s and obtain s′
  Execute LS from s′ until it finds a local optimum s″
  On the basis of an acceptance criterion, decide whether to set s := s″
end while
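A compact executable rendering of this scheme (our own sketch, with a toy landscape of ours that has a deceptive local optimum) could look like:

```python
import random

def iterated_local_search(start, local_search, perturb, fitness,
                          iters=50, seed=0):
    """Minimal ILS sketch: improve, perturb, re-improve, accept if not worse."""
    rng = random.Random(seed)
    current = local_search(start)
    best = current
    for _ in range(iters):
        candidate = local_search(perturb(current, rng))
        if fitness(candidate) >= fitness(current):   # acceptance criterion
            current = candidate
        if fitness(current) > fitness(best):
            best = current
    return best

def hill_climb(x, f):
    """Best-improvement local search over the neighborhood {x-1, x+1}."""
    while True:
        nb = max((x - 1, x + 1), key=f)
        if f(nb) <= f(x):
            return x
        x = nb

def f(x):
    # toy landscape: deceptive local optimum at x = 5, global optimum at x = 20
    return -abs(x - 20) + (5 if x == 5 else 0)

best = iterated_local_search(
    0,
    local_search=lambda s: hill_climb(s, f),
    perturb=lambda s, rng: s + rng.randint(-5, 5),
    fitness=f,
)
```

Plain local search from 0 stops at the local optimum 5; the perturbation (here a random jump of up to ±5) is what lets the trajectory leave that basin of attraction and reach 20, which is exactly the division of labor between the L1-agent and the L2-agent described above.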

The instantiations of the strategy in each tuple for ILS are the following.

• S0 is a random or heuristic-based constructive procedure.
• S1 is any trajectory method.
• S2 is such that the LEVEL-2 agent stores the current best solution, perturbs the solution returned by the L1-agent, and decides whether to accept the new returned solution.

The coordination model for the L0-agent is the following:

• ⟨idle⟩ –receive inst from Init→ ⟨ready⟩
• ⟨ready⟩ –execute S0→ ⟨done⟩
• ⟨done⟩ –send Sol to L1-agent→ ⟨idle⟩

The coordination model for the L1-agent is the following:

• ⟨idle⟩ –receive Sol from L0-agent or from L2-agent→ ⟨ready⟩
• ⟨ready⟩ –execute S1→ ⟨done⟩
• ⟨done⟩ –send Sol′ to L2-agent→ ⟨idle⟩

Finally, for the L2-agent, the coordination model is

• ⟨idle⟩ –receive Sol′ from L1-agent→ ⟨ready⟩
• ⟨ready⟩ –execute S2→ ⟨done⟩
• ⟨done⟩ –send perturbed solution to L1-agent→ ⟨idle⟩

3Random restart can, of course, be added to every approximate algorithm.

Fig. 8. Multilevel architecture version of ILS.

Algorithm 5 MA
Generate initial population P
while stopping criterion not satisfied do
  Apply local search to every individual of P
  Apply cooperation operators to P and obtain P′
  Apply competition operators to P′ and obtain P″
end while

D. MAs

In this section, we propose the application of the MAGMA framework to MAs [53]. We include in this category genetic algorithms which also apply local search, in a way similar to ACO plus local search. In an MA (see Algorithm 5), an initial population of solutions is generated; then every solution is improved by applying local search. The resulting new population cooperates and/or competes to produce a new population. Cooperation is a way to exchange information among individuals, and it can be implemented with genetic operators like mating and crossover; competition can be implemented with the selection genetic operator or with more elaborate mechanisms.
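The improve/cooperate/compete cycle can be sketched as follows (an illustrative toy of ours on bitstrings, with one-point crossover as cooperation and truncation selection as competition; all names are assumptions):

```python
import random

def memetic(fitness, n_bits, pop_size=8, gens=10, seed=0):
    """Toy memetic algorithm sketch: local search on every individual,
    then one-point crossover (cooperation) and truncation selection
    (competition)."""
    rng = random.Random(seed)

    def improve(ind):
        # single-bit-flip hill climbing until no flip improves
        better = True
        while better:
            better = False
            for i in range(n_bits):
                flipped = ind[:i] + [1 - ind[i]] + ind[i + 1:]
                if fitness(flipped) > fitness(ind):
                    ind, better = flipped, True
        return ind

    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop = [improve(ind) for ind in pop]          # solution improvement
        offspring = []
        for _ in range(pop_size):                    # cooperation
            a, b = rng.sample(pop, 2)
            cut = rng.randrange(1, n_bits)
            offspring.append(a[:cut] + b[cut:])
        # competition: keep the pop_size fittest of parents + offspring
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
    return pop[0]
```

In MAGMA terms, `improve` is the L1-agents' role, while the crossover and selection steps belong to the L2-agent that receives the whole population.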

An MA in MAGMA is described as follows: Many L0-agents generate the initial population of solutions. Each solution can be generated either randomly or by means of a constructive procedure. L1-agents take the solutions from the lower level and improve them by using local search (either some steps of a local search algorithm, or until they reach a local optimum). At LEVEL-2, we have one agent whose task is to generate new solutions by applying recombination and mutation operators. Then, the new population is produced by applying cooperation or competition operators. The new population is directly given as input to LEVEL-1. Fig. 9 shows the multilevel architecture for a generic MA. Note that since the L2-agent iteratively receives the solutions from all the L1-agents, applies cooperation and competition operators, and sends the new solutions to the L1-agents (one for each agent), we represented this one-to-many communication by including the L1-agents in one single component.

The instantiation of the strategy in each tuple for MA has some similarities (at least for the first two levels) with ILS, with more than one agent per level (this appears also from Figs. 8 and 9):


Fig. 9. Multilevel architecture version of MAs.

• S0: constructive procedure (e.g., random, heuristic, etc.), implemented by a population of L0-agents;
• S1: trajectory method, implemented by a population of L1-agents;
• S2: the L2-agent evaluates the population and applies recombination, mutation, and selection/competition operators.

The coordination model for the ith L0-agent (i = 1, …, n, where n is the population cardinality) is the following:4

• ⟨idle⟩ –receive inst from Init→ ⟨ready⟩
• ⟨ready⟩ –execute S0→ ⟨done⟩
• ⟨done⟩ –send Ii to L1-agent→ ⟨idle⟩

The coordination model for the L1-agent is the following:

• ⟨idle⟩ –receive Ii from L0-agent or from L2-agent→ ⟨ready⟩
• ⟨ready⟩ –execute S1→ ⟨done⟩
• ⟨done⟩ –send Ii′ to L2-agent→ ⟨idle⟩

Finally, for the L2-agent, the coordination model is

• ⟨collect⟩ –receive Ii′ from the L1-agents→ ⟨ready⟩
• ⟨ready⟩ –execute S2→ ⟨done⟩
• ⟨done⟩ –send new population to the L1-agents→ ⟨idle⟩

E. New Algorithm: Guided Restart ILS

After the description of well-known metaheuristics in MAGMA, we briefly show how MAGMA can be used to design a very simple new algorithm. In the following example, we do not claim to describe a brand new algorithm, but we rather aim at showing how to combine available algorithmic components and strategies.

If we consider ILS in the MAGMA framework, we observe three communication arrows: one from the L0-agent, which provides the initial solution to the L1-agent; the remaining two arrows show the communication between the L1-agent and the L2-agent (the local search and perturbation cycle). We note the absence of an arrow from LEVEL-2 to LEVEL-0, which characterizes ACO and other algorithms that exploit the search history to bias the initial solution construction. Therefore, we can add an arrow from the L2-agent to the L0-agent and enrich the strategy of the L2-agent with a diversification mechanism based on the search history. For instance, the L2-agent may accumulate statistics on

4I denotes the ith individual (solution).

the frequency of a list of recent solutions and bias the restarting process.5
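Such a frequency-biased restart could be sketched as follows (a hypothetical mechanism of ours in the spirit of the text, not the paper's exact one; names and numbers are illustrative):

```python
import random

def biased_restart(freq, n_bits, rng):
    """Hypothetical guided-restart sketch: build a new starting bitstring
    that prefers, at each position, the value under-represented in the
    recent solutions summarized by freq."""
    # freq[i] = fraction of recent solutions having bit i set to 1
    return [1 if rng.random() > freq[i] else 0 for i in range(n_bits)]

rng = random.Random(0)
# Bit 0 was almost always 1 recently, bit 1 almost always 0: restarts
# tend to flip them, steering the search away from frequently
# visited regions.
starts = [biased_restart([0.95, 0.05, 0.5], 3, rng) for _ in range(1000)]
share_bit0 = sum(s[0] for s in starts) / 1000
share_bit1 = sum(s[1] for s in starts) / 1000
```

The L2-agent would maintain `freq` from the solutions returned by the L1-agent and pass it down the new LEVEL-2-to-LEVEL-0 arrow.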

We implemented this diversification mechanism and compared its performance with simple random restart ILS. On large MAXSAT instances, the guided restart enables the algorithm to reach the best known solution with higher frequency than ILS with random restart. The description of the algorithm and the experimental results are reported in the Appendix.

We believe that the use of MAGMA can clarify the interaction of algorithmic components, enabling the designer to build new combinations starting from basic blocks and communication schemes.

F. Cooperative Search

In this section, we show and discuss the definition of cooperative search algorithms in MAGMA. We will note that level-3 agents are particularly important for the integration and cooperation of different problem solving strategies.

Cooperative search [7], [16], [38], [39], [62], [69], [70] consists of a search performed by agents that exchange information about states, models, entire subproblems, solutions, or other search space characteristics. Agents can be either homogeneous (implementing the same algorithm) or heterogeneous (implementing different algorithms).

We refer to [38] for the definition of cooperative search: "Cooperation involves a collection of agents that interact by communicating information […]." Therefore, the main characteristic of cooperative search is the information exchange among agents during the search process. Cooperative search goes beyond the combination of metaheuristics, as it encompasses the integration of complete techniques (e.g., in multiobjective optimization) and of mixed techniques (e.g., local search in constraint programming [15], [26], [55]). Moreover, in the field of distributed problem solving, the subdivision of a problem into subproblems solved by cooperating agents is a typical issue. In the following, we will limit our discussion to cooperative search achieved by the combination of different metaheuristics. We also give an example of the integration of metaheuristics with complete algorithms, to show that MAGMA can also describe hybrid algorithms.

We can distinguish some kinds of cooperative search depending on the model of the problem used by the agents and the type of information they exchange. Cooperating agents can either have the same model of the problem or different models. In the latter case, some agents may also have models representing parts of the problem at hand (subproblems). Concerning information exchange, agents can exchange complete or partial solutions. Complete solutions represent a point in the solution space, whereas partial solutions represent structured areas of the search space.

In addition, LEVEL-3 agents can identify promising solution building blocks used to feed other metaheuristic algorithms. Moreover, a notion of distance among solutions is essential in many cooperative search algorithms, and the level-3 agents are devoted to computing this distance.

5. Interesting diversification strategies developed for ACO, useful also for the other metaheuristics, can be found in [10].


Moreover, information can be positive or negative. In the former case, (partial or complete) solutions are exchanged if they are estimated to be of good quality, or if they are considered hints for promising areas of the search space. In the latter case, information is used to avoid visiting states or areas of the search space which will not lead to optimal (or good) solutions (the most prominent example of negative information is given by nogoods [60]).
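As a small illustration of negative information, the following sketch (ours, not from the paper) keeps shared nogoods as partial assignments and lets an agent discard any candidate state that contains one:

```python
# Illustrative sketch (not from the paper): negative information kept as a
# store of nogoods, i.e., partial assignments known not to lead to good
# solutions. An agent can prune any candidate state containing one.

def violates_nogood(assignment, nogoods):
    """Return True if `assignment` contains some shared nogood."""
    return any(
        all(assignment.get(var) == val for var, val in ng.items())
        for ng in nogoods
    )

# Nogood learned (and shared) by another agent: x1=1 together with x3=0.
nogoods = [{"x1": 1, "x3": 0}]
assert violates_nogood({"x1": 1, "x2": 0, "x3": 0}, nogoods)
assert not violates_nogood({"x1": 0, "x3": 0}, nogoods)
```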

Typically, cooperative search algorithms are given by the parallel or interleaved execution of search algorithms. The algorithms can be different, or they can be instances of the same algorithm working on different models or running with different parameter settings. What characterizes the cooperation is the type of information exchanged and how it is exploited by the agents.

The usefulness of the multiagent metaphor emerges when considering design issues such as synchronization, communication, information filtering, and implementation. A meaningful example of cooperative search in a multiagent system is reported in [16]: in addition to search agents, there are referee agents whose job is to filter information. This architecture makes it possible to separately develop search agents (focusing on search algorithms), agents that collect and filter (evaluate) information, communication mechanisms, and policies.

It is important to mention that cooperative search algorithms (systems) show their effectiveness and efficiency even when they are implemented sequentially. This means that improvements arise from information exchange and not only from parallel implementation [45], [49].

Cooperative search algorithms can be described in MAGMA by appropriately defining one or more LEVEL-3 agents which rule and supervise the information exchange. Since at each level the agents describing the algorithms have a well-defined interface, solutions and other pieces of information can be easily exchanged under the coordination of a fourth level agent. Moreover, the definition of new cooperative search algorithms is quite straightforward: since most metaheuristic algorithms can be defined in MAGMA with three levels, the cooperation of two or more metaheuristics is achieved by adding a LEVEL-3 agent.
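The coordination pattern can be sketched in code. The following toy Python sketch is ours (names such as `SearchAgent` and `level3_coordinate` are illustrative, not part of the MAGMA formalism): two search agents expose a common interface, and a LEVEL-3 agent runs them and periodically broadcasts the best solution found.

```python
import random

# Hedged sketch of a LEVEL-3 agent coordinating two search agents through a
# common interface. SearchAgent is a toy stand-in for a full three-level
# metaheuristic; all names here are illustrative, not taken from the paper.

class SearchAgent:
    def __init__(self, seed, n=8):
        self.rng = random.Random(seed)
        self.n = n
        self.best = None  # (value, solution), maximizing a toy objective

    def step(self):
        """One search iteration: sample a 0/1 solution, keep it if better."""
        sol = [self.rng.randint(0, 1) for _ in range(self.n)]
        self._update((sum(sol), sol))

    def receive(self, solution):
        """Accept a solution sent by the LEVEL-3 agent."""
        self._update((sum(solution), solution))

    def _update(self, cand):
        if self.best is None or cand[0] > self.best[0]:
            self.best = cand

def level3_coordinate(agents, rounds=30, exchange_every=5):
    """LEVEL-3 agent: run the agents and periodically broadcast the best."""
    for r in range(1, rounds + 1):
        for a in agents:
            a.step()
        if r % exchange_every == 0:            # information exchange phase
            value, solution = max(a.best for a in agents)
            for a in agents:
                a.receive(solution)
    return max(a.best for a in agents)

value, solution = level3_coordinate([SearchAgent(1), SearchAgent(2)])
assert value == sum(solution)
```

Note that the two searchers never talk to each other directly; all exchange goes through the coordinating agent, mirroring the role of the fourth level.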

1) Combining Metaheuristics on the Same Model: When different metaheuristics tackle the same problem they usually search on different landscapes, or they explore the same landscape with a different strategy. Empirical results (see, for instance, [28]) show that some algorithms (agents) perform better than others on particular kinds of problems, while they are outperformed on other problems. It is conjectured that this depends upon the agent's ability to exploit the fitness landscape characteristics. Among such properties are ruggedness, number of local optima, distribution of local optima, and topology of the basins of attraction [40], [50], [57], [63]. The choice of fitness function and neighborhood operator defines such characteristics, and it is reasonable that the combination of searches on different fitness landscapes (derived from the same original problem) can smooth the problem hardness (or, at least, effectively face it).

As an example of a cooperative search algorithm, we describe the combination of a population-based algorithm (memetic) with GRASP.

Suppose we have a system composed of an MA collaborating with GRASP. Both metaheuristics have already been described in MAGMA, in Sections VII-A and VII-D, respectively. Since the algorithms have different strategies to explore the state space, their combination may result in a more effective algorithm. The aim of cooperative search is to let the two metaheuristics communicate and exchange results. In particular, we suppose that at every new generation of the MA, or whenever the MA needs diversification, we insert the best solutions found by GRASP. On the other hand, suggestions for building the restricted candidate list of GRASP are derived from the best individuals of the MA. For instance, with a certain frequency (a parameter of the algorithm), GRASP constructs the candidate list with an evaluation function that depends both on the solution component frequency in the best individuals found by the MA and on the heuristic.

The L0-, L1-, and L2-agents are those described previously for the MA and GRASP (Sections VII-A and VII-D). Observe that the strategy of the L2-agent of GRASP has been modified in order to enable it to maintain the best found solutions, instead of only the best one. The tuples associated with these agents represent the model of an L3-agent which supervises and coordinates the information exchange. Its strategy defines the frequency of the solution exchange and builds the new population for the MA and the restricted candidate list for GRASP (see Fig. 10). Its coordination model is defined as follows:

[The transition rules of this coordination model are not recoverable from the extracted text.]

The previous algorithm is an example of the cooperation of two different algorithms working on the same model, but searching with different strategies.
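The frequency-biased candidate list described above can be sketched as follows. This is a hedged illustration of ours (names such as `component_frequencies`, `biased_rcl`, and the mixing weight `alpha` are our assumptions, not the paper's notation): the best MA individuals bias GRASP's restricted candidate list by mixing component frequency with the greedy value.

```python
from collections import Counter

# Hedged sketch of the MA-to-GRASP exchange: the best MA individuals bias
# GRASP's restricted candidate list via solution-component frequencies
# combined with the greedy heuristic. All names here are illustrative.

def component_frequencies(individuals):
    """Fraction of the best MA individuals containing each component."""
    counts = Counter(c for ind in individuals for c in ind)
    return {c: counts[c] / len(individuals) for c in counts}

def biased_rcl(candidates, heuristic, freqs, alpha=0.5, size=3):
    """Rank candidates by a convex mix of greedy value and MA frequency."""
    def score(c):
        return alpha * heuristic(c) + (1 - alpha) * freqs.get(c, 0.0)
    return sorted(candidates, key=score, reverse=True)[:size]

best_individuals = [{"a", "b", "c"}, {"a", "c", "d"}, {"a", "b", "e"}]
freqs = component_frequencies(best_individuals)
rcl = biased_rcl(["a", "b", "f"], heuristic=lambda c: 0.1, freqs=freqs)
assert rcl[0] == "a"  # "a" appears in every best individual, so it ranks first
```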

In the following example we show how MAGMA can also describe cooperative search algorithms composed of complete and approximate algorithms, working on different problem models.

2) LNS: It is generally recognized that in local search algorithms the larger the neighborhood to explore, the higher the probability of finding better solutions, but also the higher the computational effort needed to explore the neighborhood. Indeed, large neighborhoods are frequently just randomly sampled rather than exhaustively explored. Complete techniques can be effectively integrated into local search for the exploration of large neighborhoods, resulting in a technique usually referred to as LNS [61].6

A full description of LNS is beyond the scope of this paper; we just sketch the basic steps.

1) Construct an initial solution through any incomplete algorithm.

2) Define a neighborhood by selecting a subset of the variables composing the solution.

3) Exhaustively explore the neighborhood and find a solution locally optimal with respect to it (i.e., find the optimal solution of the induced subproblem).

4) Go to step 2.
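The four steps above can be sketched on a toy, unconstrained 0/1 maximization problem. This skeleton is ours and is deliberately simplified: in a real LNS, step 3 would invoke a complete solver (e.g., branch-and-bound or constraint programming) on the subproblem induced by the free variables, not a brute-force enumeration.

```python
import itertools
import random

# Hedged skeleton of the four LNS steps on a toy 0/1 maximization problem
# (maximize the sum of weights of the selected items, no constraints).
# In a real LNS, step 3 would call a complete solver on the subproblem.

def value(sol, weights):
    return sum(w for x, w in zip(sol, weights) if x)

def lns(weights, free_size=3, iters=40, seed=0):
    rng = random.Random(seed)
    n = len(weights)
    sol = [rng.randint(0, 1) for _ in range(n)]     # 1) initial solution
    for _ in range(iters):
        free = rng.sample(range(n), free_size)      # 2) select free variables
        best = sol
        for combo in itertools.product([0, 1], repeat=free_size):
            cand = sol[:]                           # 3) exhaustive exploration
            for idx, bit in zip(free, combo):
                cand[idx] = bit
            if value(cand, weights) > value(best, weights):
                best = cand
        sol = best                                  # 4) iterate from step 2
    return sol

weights = [3, -1, 4, -2, 5, 1]
solution = lns(weights)
assert value(solution, weights) <= 13  # 13 = sum of the positive weights
```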

6. Similar approaches can be found in [2], [4], [12].


Fig. 10. Multilevel architecture version of cooperative search.

Fig. 11. MAGMA version of an instance of LNS.

Suppose that we want to solve a vehicle routing problem where we have trucks, each starting and ending its tour in a depot. The trucks have to visit customers within a given time window at a minimum cost; a cost is associated with each arc in the corresponding graph. Clearly, the problem can be complicated by many side constraints such as truck capacity, union contract regulations, maximum length of each path, etc. However, starting from this basic problem, we can intuitively describe an LNS framework. Suppose we have found a solution by using a local search or a metaheuristic algorithm. We fix all routes but one (or two) and we optimally solve, on the free variables, a travelling salesman problem with time windows (TSPTW). Clearly, the TSPTW is NP-hard, but if we consider small instances, there are very efficient methods that can be used to find the optimal solution [27].

LNS can be defined in MAGMA as the cooperation of a complete solver (working on a subproblem) and a local search-based metaheuristic, for example ILS. In Fig. 11, a very simple LNS is represented in terms of MAGMA levels. An L3-agent coordinates two algorithms: the one solving the subproblem (see the left side of Fig. 11) and the one providing the first solution (see the right side of Fig. 11). We suppose that ILS (implemented as described in Section VII-C with three agents) provides the first solution to be improved by means of LNS. The

L3-agent receives the first solution from the level-2 agent of ILS (L2-ILS) and defines the neighborhood by fixing a subset of the variables. The L3-agent communicates with agent A-sub (the subproblem solver) at level 0 (arrow 2) by instantiating for it the subproblem model in its tuple. Finally, the solution provided by A-sub is returned to the coordinating agent, which again decides which variables to fix. This process can be iterated until a termination condition is reached. The L2-ILS agent can also be involved, from time to time, to perturb the solution provided by the agent A-coord (arrow 4) and to activate a further ILS phase.

We did not specify how A-coord defines the neighborhood, since many choices are possible. For instance, the neighborhood can be selected by choosing critical variables (e.g., bottleneck resources in scheduling problems).

Note that the ILS agents and the A-sub agent need not be aware of being part of a complex search strategy; they perform their usual tasks. Thus, the L3-agent is needed to enable the information exchange in a way that is transparent to the lower level solvers.

The LTS corresponding to LNS specifies the transition rules of the L0-, L1-, and L2-agents of ILS, of A-sub, and of A-coord. [The individual transition rules are not recoverable from the extracted text.]

A plethora of variants of the proposed algorithm is possible, and they can be described in MAGMA by changing the agents at each level and by introducing other agents into the architecture.

VIII. RELATED WORK

A multiagent perspective on metaheuristics has relations both with object-oriented frameworks for algorithm implementation and with distributed artificial intelligence (DAI). In this section we briefly discuss these issues.

Although MAGMA has been presented as a conceptual framework for metaheuristics, it can also be used as a framework for algorithm implementation. In the literature, some implementation frameworks for local search-based metaheuristics have been introduced [3], [15], [17], [25], [47], [51]. They all share the same aims: on the one hand, fast prototyping and implementation, and on the other hand, systematic comparison of algorithms. There are different choices for the implementation, but most of them provide a basic structure of two levels:

construction of the initial solution and further improvement.7 In [17] an interesting hierarchical structure of object-oriented classes goes in the direction of MAGMA (even though restricted to local search-based metaheuristics). The authors define four levels:

1) basic components (I/O, state, moves);
2) local search problem elements (state manager, neighborhood explorer);
3) local search-based metaheuristics (iterative improvement, tabu search, simulated annealing, etc.);
4) local search solving strategy (simple solver, token-ring solver, etc.).

When the fourth level of this architecture composes different algorithms, it can be interpreted as our LEVEL-3.

7. As it can be expressed in our framework terminology.

Note that the model components are here placed all together in the first level, while in MAGMA we allow different models/components at different levels.

Research on DAI and multiagent systems covers several theoretical and applicative topics. The relations with the multiagent approach to metaheuristics emerge mainly in distributed search, where agents cooperate to solve a search problem [48], [56], [74]. The basic approach consists of subdividing the search space, assigning to each agent a subspace, and letting each agent "locally" search in its own subspace. Protocols and algorithms have been developed to maintain consistency and to combine the solutions provided by the agents into a complete solution. Several cooperative techniques have also been proposed and studied, besides different synchronization methods. Even if distributed search deals with the distributed implementation of classical complete search algorithms, concepts and research on the formalization of communication and cooperation among (heterogeneous) agents may be effectively included in the specializations and implementations of MAGMA.

IX. DISCUSSION AND CONCLUSION

In this paper, we have discussed a multiagent approach to metaheuristics. We have described the multilevel architecture MAGMA, where agents are characterized by different levels of abstraction and objectives. Agents are described through a uniform view of tuples containing a model and a strategy. Furthermore, we have provided a coordination model to formally define synchronization and communication among agents.

We have shown that MAGMA can describe existing metaheuristics by specializing the general scheme of the architecture. In addition, cooperative search, LNS, and in general any combination of metaheuristics and complete algorithms can be described in MAGMA.

The basic characteristics of MAGMA are its multilevel structure and its organization as a multiagent system. We believe that these two properties could enable the decomposition of algorithms into basic components and be useful in bridging the gap between ad hoc algorithm design and general metaheuristic software libraries/packages.

There are some relevant issues that should be mentioned. In MAGMA, all direct communications between levels are allowed. In the previous examples, only some of the possible communications have been exploited, and many possibilities to combine and design new algorithms remain to be investigated. The explicit presence of levels permits clearly identifying information flows. For example, algorithms that have a cycle LEVEL-0 → LEVEL-1 → LEVEL-2 → LEVEL-0 exhibit a feedback loop. Feedback loops characterize self-adapting systems and emerging phenomena, e.g., ACO dynamics.

The communication patterns in MAGMA involve agents belonging to different levels. However, communication between agents at the same level is possible as well by means of L3-agents. In fact, L3-agents can also provide simple communication links between agents at any level.

An interesting possibility is to use LEVEL-3 to perform resource management for the lower levels. For instance, an L3-agent can decide the priorities among L2-agents and allot more or less CPU time to agents depending on the requirements. This strategy could be useful in the case of anytime algorithms, where a solution has to be provided as soon as it is required. L3-agents can also select lower level agents as a function of the quality of the solutions provided.

TABLE I: Random restart ILS versus guided restart ILS. The best solution value found and the percentage of runs returning that value are reported.

Finally, we think MAGMA can be useful as a teaching tool, because it is structured in interacting software components and levels of abstraction. Indeed, recent successful approaches to teaching artificial intelligence are based on the multiagent description of algorithms (see, for instance, [59]).

APPENDIX

Guided restart ILS (GRILS) is based on basic ILS plus a restarting mechanism that makes use of the search history of previous restarts and the best solution found. The L2-agent keeps a list of solutions composed of the most recent initial solutions and the best solution found since the beginning of the algorithm. The algorithm has been applied to tackle large MAXSAT problems, where solutions are represented by assignments to binary variables.

The restart list is used to compute a vector of probabilities which biases the construction of the initial solution for the next restart. A simple random restart would use a probability vector with all the entries fixed to 0.5, i.e., every variable has probability 0.5 of being assigned 1 in the new solution. Our aim is to bias the solution construction in favor of solutions which are likely not to belong to already explored regions (diversification). For each variable, the frequency of assignments to 1 is computed and used to set the probability of the corresponding assignment in the new initial solution: the higher the frequency of ones in the last restart solutions, the lower the probability of assigning 1 to that variable in the new initial solution. This diversification mechanism is counterbalanced by considering also the best solution found

when computing the probability vector: the frequency of each variable is incremented or decremented according to the assignment of that variable in the best solution. Therefore, the probability vector is slightly moved toward the current most promising region (in the spirit of path relinking [32]).
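The restart bias can be sketched as follows. This is a hedged illustration of ours; the function name, the increment `step`, and the clamping are assumptions, since the paper's symbols were lost in extraction. The probability of assigning 1 is low where recent restarts assigned 1 often, then nudged toward the best solution found so far.

```python
# Hedged sketch of the GRILS restart bias (our names and parameters, not the
# paper's): probability of assigning 1 to each variable is low where recent
# restarts often assigned 1 (diversification), then nudged toward the best
# solution found so far (intensification, path-relinking flavor).

def restart_probabilities(restart_list, best, step=0.1):
    probs = []
    for i in range(len(best)):
        freq_one = sum(sol[i] for sol in restart_list) / len(restart_list)
        p = 1.0 - freq_one                      # diversification term
        p += step if best[i] == 1 else -step    # pull toward best solution
        probs.append(min(1.0, max(0.0, p)))     # clamp to [0, 1]
    return probs

recent = [[1, 1, 0], [1, 0, 0], [1, 1, 1]]  # most recent initial solutions
best = [0, 1, 1]
probs = restart_probabilities(recent, best)
assert probs[0] == 0.0       # variable 0: always 1 recently, and 0 in best
assert probs[2] > probs[1]   # variable 2: rarely 1 recently, and 1 in best
```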

Preliminary results on large random MAXSAT instances show that GRILS finds the best known solution with higher frequency than ILS with a simple random restart. The length of the list of the most recently visited solutions has been set to 10. The GRILS parameters have not been tuned by means of an adaptive procedure; therefore, the performance of GRILS can be further improved. In Table I, results concerning MAXSAT instances with 1000 variables and 10 000 clauses are reported.8

For every instance, we report the best solution value found by the algorithms (which is also, to our knowledge, the best solution known) in terms of number of unsatisfied clauses. For each algorithm, we show the percentage of runs that returned the best solution value. As we can observe, GRILS returns the best known value with higher frequency than RRILS on all but one instance; on one instance GRILS was also able to find a better solution than RRILS. The algorithms were run 50 times on each instance and were stopped after 60 s.9

ACKNOWLEDGMENT

The authors would like to thank P. Torroni, A. Omicini, C. Blum, and the anonymous reviewers for useful comments and suggestions on this work. The information provided is the sole responsibility of the authors and does not reflect the Community's opinion. The Community is not responsible for any use that might be made of data appearing in this publication.

REFERENCES

[1] E. H. L. Aarts and J. K. Lenstra, Eds., Local Search in Combinatorial Optimization. New York: Wiley-Interscience, 1997.

[2] R. K. Ahuja, Ö. Ergun, J. B. Orlin, and A. P. Punnen, "A survey of very large-scale neighborhood search techniques," Discrete Applied Math., vol. 123, pp. 75–102, 2002.

[3] A. A. Andreatta, S. E. R. Carvalho, and C. C. Ribeiro, "An object-oriented framework for local search heuristics," in Proc. 26th Conf. Technol. Object-Oriented Languages Syst., Piscataway, NJ, 1998, pp. 33–45.

[4] D. Applegate and W. Cook, "A computational study of the job-shop scheduling problem," ORSA J. Comput., vol. 3, pp. 149–156, 1991.

[5] R. Battiti and M. Protasi, "Reactive search, a history-based heuristic for MAX-SAT," ACM J. Experimental Algorithmics, 1997.

[6] ——, "Solving MAX-SAT with nonoblivious functions and history-based heuristics," in Satisfiability Problem: Theory and Applications, ser. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, no. 35, D. Du, J. Gu, and P. M. Pardalos, Eds. New York: Amer. Math. Soc., Association Comput. Machinery, 1997, pp. 649–667.

[7] V. Bachelet and E. G. Talbi, "Cosearch: A co-evolutionary metaheuristics," in Proc. Congr. Evolutionary Comput., 2000, pp. 1550–1557.

[8] C. Blum and A. Roli, "Metaheuristics in combinatorial optimization: Overview and conceptual comparison," ACM Comput. Surveys, vol. 35, no. 3, 2003.

[9] C. Blum, A. Roli, and M. Dorigo, "HC-ACO: The hyper-cube framework for ant colony optimization," in Proc. Meta-Heuristics Int. Conf., vol. 2, Porto, Portugal, 2001, pp. 399–403.

[10] C. Blum, "ACO applied to group shop scheduling: A case study on intensification and diversification," in Proc. Ant Algorithms Third Int. Workshop, vol. 2463, M. Dorigo, G. Di Caro, and M. Sampels, Eds., 2002, pp. 14–27.

[11] P. Calegari, G. Coray, A. Hertz, D. Kobler, and P. Kuonen, "A taxonomy of evolutionary algorithms in combinatorial optimization," J. Heuristics, vol. 5, pp. 145–158, 1999.

[12] Y. Caseau and F. Laburthe, "Effective forget-and-extend heuristics for scheduling problems," in Proc. CP-AI-OR'99 Workshop on Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems, 1999.

[13] I. Charon and O. Hudry, "The noising methods: A generalization of some metaheuristics," Eur. J. Operational Res., vol. 135, pp. 86–101, 2001.

8. Instances can be retrieved from http://rtm.science.unitn.it/inter-tools/sat/benchmark.html

9. The algorithms ran on a Pentium III at 500 MHz, with 256 MB of RAM and 512 KB of cache memory.


[14] D. Corne, M. Dorigo, and F. Glover, Eds., New Ideas in Optimization, ser. Advanced Topics in Computer Science Series. New York: McGraw-Hill, 1999.

[15] B. De Backer, V. Furnon, and P. Shaw, "An object model for metaheuristic search in constraint programming," in Proc. CP-AI-OR'99 Workshop on Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems, 1999.

[16] J. Denzinger and T. Offerman, "On cooperation between evolutionary algorithms and other search paradigms," in Proc. Congr. Evolutionary Comput., 1999, pp. 2317–2324.

[17] L. Di Gaspero and A. Schaerf, "EASYLOCAL++: An object-oriented framework for the design of local search algorithms and metaheuristics," in Proc. Meta-Heuristics Int. Conf., vol. 1, Porto, Portugal, 2001, pp. 287–292.

[18] ——, "Multi-neighborhood local search for the course timetabling problem," in Proc. 4th Int. Conf. Practice Theory of Automated Timetabling, Gent, Belgium, Aug. 2002, pp. 21–23.

[19] M. Dorigo and G. Di Caro, "The ant colony optimization meta-heuristic," in New Ideas in Optimization, D. Corne, M. Dorigo, and F. Glover, Eds. New York: McGraw-Hill, 1999, pp. 11–32.

[20] M. Dorigo, G. Di Caro, and L. M. Gambardella, "Ant algorithms for discrete optimization," Artificial Life, vol. 5, no. 2, pp. 137–172, 1999.

[21] M. Dorigo and L. M. Gambardella, "Ant colony system: A cooperative learning approach to the travelling salesman problem," IEEE Trans. Evolutionary Comput., vol. 1, pp. 53–66, Apr. 1997.

[22] A. E. Eiben and Z. Ruttkay, "Constraint satisfaction problems," in Handbook of Evolutionary Computation, T. Bäck, D. Fogel, and M. Michalewicz, Eds. Oxford, U.K.: IOP, Oxford Univ. Press, 1997.

[23] T. A. Feo and M. G. C. Resende, "Greedy randomized adaptive search procedures," J. Global Optimiz., vol. 6, pp. 109–133, 1995.

[24] J. Ferber, Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence. Reading, MA: Addison-Wesley, 1999.

[25] A. Fink and S. Voss, "Reusable metaheuristic software components and their application via software generators," in Proc. Meta-Heuristics Int. Conf., vol. 2, Porto, Portugal, 2001, pp. 637–641.

[26] F. Focacci, F. Laburthe, and A. Lodi, "Local search and constraint programming," in Handbook of Metaheuristics, F. Glover and G. Kochenberger, Eds. Norwell, MA: Kluwer, 2002.

[27] F. Focacci, A. Lodi, and M. Milano, "A hybrid exact algorithm for the TSPTW," INFORMS J. Comput., vol. 14, no. 4, pp. 403–417, 2002.

[28] C. Fonlupt, D. Robilliard, P. Preux, and E. G. Talbi, "Fitness landscapes and performance of meta-heuristics," in Meta-Heuristics: Advances and Trends in Local Search Paradigms for Optimization, S. Voss, S. Martello, I. Osman, and C. Roucairol, Eds. Boston, MA: Kluwer, 1999.

[29] E. Freuder and R. Wallace, "Partial constraint satisfaction," Artificial Intell., vol. 58, pp. 21–70, 1992.

[30] L. M. Gambardella, È. D. Taillard, and M. Dorigo, "Ant colonies for the quadratic assignment problem," J. Operational Res. Soc., vol. 50, pp. 167–176, 1999.

[31] R. J. van Glabbeek, "The linear time—branching time spectrum. The semantics of concrete, sequential processes," in Handbook of Process Algebra, J. A. Bergstra, A. Ponse, and S. A. Smolka, Eds. Amsterdam, The Netherlands: North-Holland, 2001.

[32] F. Glover, "Scatter search and path relinking," in New Ideas in Optimization, ser. Advanced Topics in Computer Science Series, D. Corne, M. Dorigo, and F. Glover, Eds. New York: McGraw-Hill, 1999.

[33] F. Glover and M. Laguna, Tabu Search. Boston, MA: Kluwer, 1997.

[34] M. Hansen, "Tabu search for multiobjective optimization: MOTS," in Proc. 13th Int. Conf. Multiple Criteria Decision Making, 1997.

[35] P. Hansen and N. Mladenovic, "An introduction to variable neighborhood search," in Meta-Heuristics: Advances and Trends in Local Search Paradigms for Optimization, S. Voss, S. Martello, I. Osman, and C. Roucairol, Eds. Boston, MA: Kluwer, 1999, ch. 30, pp. 433–458.

[36] ——, "Variable neighborhood search: Principles and applications," Eur. J. Operational Res., vol. 130, pp. 449–467, 2001.

[37] A. Hertz and D. Kobler, "A framework for the description of evolutionary algorithms," Eur. J. Operational Res., vol. 126, pp. 1–12, 2000.

[38] T. Hogg and A. Huberman, "Better than the best: The power of cooperation," in SFI 1992 Lectures in Complex Systems. Reading, MA: Addison-Wesley, 1993, pp. 163–184.

[39] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proc. AAAI-93, 1993, pp. 213–235.

[40] W. Hordijk, "A measure of landscapes," Evolutionary Comput., vol. 4, no. 4, 1996.

[41] D. S. Johnson and L. A. McGeoch, "The traveling salesman problem: A case study," in Local Search in Combinatorial Optimization, E. H. Aarts and J. K. Lenstra, Eds. New York: Wiley, 1997, pp. 215–310.

[42] T. Jones, "Evolutionary algorithms, fitness landscapes and search," Ph.D. dissertation, Univ. New Mexico, Albuquerque, NM, 1995.

[43] ——, "One operator, one landscape," Santa Fe Inst., Santa Fe, NM, Tech. Rep. 95-02-025, 1995.

[44] L. P. Kaelbling, M. L. Littman, and A. W. Moore, "Reinforcement learning: A survey," J. Artif. Intell. Res., vol. 4, pp. 237–285, 1996.

[45] S. A. Kauffman and W. Macready, "Technological evolution and adaptive organizations," Complexity, vol. 26, no. 2, pp. 26–43, Mar. 1995.

[46] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671–680, May 1983.

[47] F. Laburthe and Y. Caseau, "SALSA: A language for search algorithms," in Proc. Principles and Practice of Constraint Programming, vol. 1520, Lecture Notes in Computer Science, M. Maher and J. F. Puget, Eds., New York, 1998, pp. 310–324.

[48] V. R. Lesser, "An overview of DAI: Viewing distributed AI as distributed search," J. Jpn. Soc. Artif. Intell., Special Issue on Distributed Artificial Intelligence, vol. 5, no. 4, pp. 392–400, 1990.

[49] W. G. Macready, S. A. Kauffman, and A. G. Siapas, "Criticality and parallelism in combinatorial optimization," Science, vol. 271, pp. 56–59, Jan. 1996.

[50] P. Merz and B. Freisleben, "Fitness landscapes and memetic algorithm design," in New Ideas in Optimization, ser. Advanced Topics in Computer Science Series, D. Corne, M. Dorigo, and F. Glover, Eds. New York: McGraw-Hill, 1999.

[51] L. Michel and P. Van Hentenryck, "Localizer: A modeling language for local search," INFORMS J. Comput., vol. 11, pp. 1–14, 1999.

[52] P. Mills and E. Tsang, "Guided local search for solving SAT and weighted MAX-SAT problems," in Proc. SAT, I. Gent, H. van Maaren, and T. Walsh, Eds., 2000, pp. 89–106.

[53] P. Moscato, "Memetic algorithms: A short introduction," in New Ideas in Optimization, F. Glover, D. Corne, and M. Dorigo, Eds. New York: McGraw-Hill, 1999.

[54] E. Nowicki and C. Smutnicki, "A fast taboo search algorithm for the job-shop problem," Manag. Sci., vol. 42, no. 2, pp. 797–813, 1996.

[55] G. Pesant and M. Gendreau, "A view of local search in constraint programming," in Principles and Practice of Constraint Programming, E. C. Freuder, Ed. Berlin, Heidelberg: Springer-Verlag, 1996, vol. LNCS 1118, pp. 353–366.

[56] N. V. N. Prasad, S. E. Lander, and V. R. Lesser, "Cooperative learning over composite search spaces: Experiences with a multi-agent design system," in Proc. 13th National Conf. Artif. Intell., vol. 1, 1996, pp. 68–73.

[57] C. Reeves, "Landscapes, operators and heuristic search," Ann. Operations Res., vol. 86, pp. 473–490, 1999.

[58] A. Roli, C. Blum, and M. Dorigo, "ACO for maximal constraint satisfaction problems," in Proc. Meta-Heuristics Int. Conf., vol. 1, Porto, Portugal, 2001, pp. 187–191.

[59] S. J. Russell and P. Norvig, Artificial Intelligence: A Modern Approach. New York: Simon & Schuster, 1995.

[60] T. Schiex and G. Verfaillie, "Nogood recording for static and dynamic constraint satisfaction problems," Int. J. Artif. Intell. Tools, vol. 3, no. 2, pp. 187–207, 1994.

[61] P. Shaw, "Using constraint programming and local search methods to solve vehicle routing problems," in Principles and Practice of Constraint Programming, M. Maher and J. F. Puget, Eds. New York: Springer, 1998, vol. 1520, Lecture Notes in Computer Science.

[62] L. Sondergeld and S. Voss, "Cooperative intelligent search using adaptive memory techniques," in Meta-Heuristics: Advances and Trends in Local Search Paradigms for Optimization, S. Voss, S. Martello, I. Osman, and C. Roucairol, Eds. Boston, MA: Kluwer, 1999, ch. 21, pp. 297–312.

[63] P. F. Stadler, "Landscapes and their correlation functions," J. Math. Chem., vol. 20, pp. 1–45, 1996.

[64] T. Stützle, "Iterated local search for the quadratic assignment problem," FG Intellektik, TU Darmstadt, Tech. Rep. AIDA-99-03, 1999.

[65] ——, "Local search algorithms for combinatorial problems—Analysis, algorithms, and new applications," DISKI—Dissertationen zur Künstlichen Intelligenz, Infix, 1999.

[66] T. Stützle and M. Dorigo, "ACO algorithms for the quadratic assignment problem," in New Ideas in Optimization, D. Corne, M. Dorigo, and F. Glover, Eds. New York: McGraw-Hill, 1999, pp. 33–50.

[67] T. Stützle and H. H. Hoos, "MAX–MIN ant system," Future Generation Comput. Syst., vol. 16, no. 8, pp. 889–914, 2000.

[68] E. D. Taillard, L. M. Gambardella, M. Gendreau, and J. Potvin, "Adaptive memory programming: A unified view of meta-heuristics," Eur. J. Operational Res., vol. 135, pp. 1–16, 2001.


[69] M. Toulouse, T. G. Crainic, and B. Sansò, "An experimental study of the systemic behavior of cooperative search algorithms," in Meta-Heuristics: Advances and Trends in Local Search Paradigms for Optimization, S. Voss, S. Martello, I. Osman, and C. Roucairol, Eds. Boston, MA: Kluwer, 1999, ch. 26, pp. 373–392.

[70] M. Toulouse, K. Thulasiraman, and F. Glover, "Multi-level cooperative search: A new paradigm for combinatorial optimization and an application to graph partitioning," in Proc. Euro-Par, 1999.

[71] S. Voss, S. Martello, I. H. Osman, and C. Roucairol, Eds., Meta-Heuristics—Advances and Trends in Local Search Paradigms for Optimization. Boston, MA: Kluwer, 1999.

[72] L. N. Van Wassenhove and L. F. Gelders, "Solving a bicriterion scheduling problem," Eur. J. Operational Res., vol. 4, no. 1, pp. 42–48, 1980.

[73] M. Viroli and A. Omicini, "Modeling agents as observable sources," J. Universal Comput. Sci., vol. 8, no. 4, pp. 423–451, 2002.

[74] G. Weiss, Ed., Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. Cambridge, MA: MIT Press, 2000.

[75] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Trans. Evolutionary Comput., vol. 1, pp. 67–82, Feb. 1997.

[76] M. Wooldridge and N. Jennings, "Intelligent agents: Theory and practice," Knowl. Eng. Rev., vol. 10, no. 2, 1995.

Michela Milano received the Ph.D. degree in computer science and electronic engineering from the University of Bologna, Bologna, Italy, in 1998.

She has been an Associate Professor with the Department of Electronics, Computer Science, and Systems, University of Bologna, since 2001. Her research interests are in the areas of artificial intelligence and, in particular, search algorithms, constraint programming, and its integration with integer linear programming.

Andrea Roli received the Ph.D. degree in computer science and electronic engineering from the University of Bologna, Bologna, Italy, in 2003.

He has been a Research Associate at the University "G. D'Annunzio," Chieti, Italy, since 2003. He is also a fellow at the Department of Electronics, Computer Science, and Systems, University of Bologna. His current research interests are in the fields of metaheuristics, constraint satisfaction problems, and complex systems.