Coalition as resolution for Space Occupation Problems

M. Sadgal, J. Boussaa and A. El-Fazziki

Computer sciences department, Faculty of Science Semlalia,

University Cadi Ayyad, Bd Prince My Abdellah BP 2390, Marrakech,

{sadgal,elfazziki}@ucam.ac.ma, jamilaboussaa.gmail.com

Abstract

Space occupation is a recurring problem in many fields, requiring constraint satisfaction as well as the optimization of certain factors recognized as essential for performance. Existing approaches tend to privilege one of the two aspects (optimization or satisfaction) without leading to a general solution. In spite of the success of a few methods for space occupation problems under constraints, it is worth considering new ways of resolution, in particular methods drawn from Artificial Intelligence (AI) techniques. The problem is NP-hard; one possibility for overcoming this complexity is to distribute it over several computing units and to adopt an adequate decisional form for "the best" solution. In order to build and evaluate possible solutions for this problem category, we propose in this paper a general architecture which can host several resolution approaches through agglomerates of specialized solvers. On this basis, a general model of a solver agent is provided. The competences and interactions of the agents are studied and classified according to the types of space occupation problems. One case is presented here: resolution by coalition.

Keywords: Space Occupation, Constraints, Satisfaction, Optimization, Coalition, Artificial Intelligence, Multi-agent, DSCSP.

Introduction

The Space Occupation Problem (SOP) consists of placing a set of predefined objects in a preset space while respecting a certain number of imposed (declared) constraints and others (not declared) which appear at installation time. Of course, such an occupation must be optimal. The mathematical models that have tried to deal with this problem often emphasize the optimization aspect (reduction of the occupied space, for example) but not directly the constraint satisfaction aspect. In these models, the enormous difficulty encountered in expressing the constraints is circumvented by the search for an evaluation function. The modeling of such problems cannot be reduced to an assignment model that aims at assigning objects to a set of places.

The space may be simplified into fixed zones (places); in this case, only the assignment cost of objects to the available places is considered. Many of these problems, tackled by traditional computing, struggle with a realistic representation of the constraints and objectives. Models such as linear programming, where the constraints must be translated into inequalities, statistical models which treat the problem as a classification task, or models based on physical phenomena (simulated annealing) (Barra et al. 1984; Sechen, 1988) which rely on energy minimization, all confirm this difficulty of expressing constraints. More recently, Petit et al. (Petit et al., 2011) study the propagation of side constraints to solve problems.

They provide a theoretical and experimental comparison of two main approaches for encoding over-constrained problems with side constraints. Even if their work is oriented towards constraint programming, the resolution remains problem-dependent. Certainly, the constraints are taken into account, but under an imposed numerical structure which neither allows a great number of them to be expressed nor makes several specialized procedures cooperate. Indeed, constraints like "the object A which uses water must be placed as far as possible from any electrical appliance", or "leave a passage between objects A and B", or "object A must be seen by object B", etc., which introduce inaccuracy and ambiguity, call for representation and reasoning techniques supported by Artificial Intelligence (AI) approaches (Haralick et al., 1980; Lhomme, 1993; Sapena, 2008). In this sense, the authors in (Xidonas, 2009) provide an expert system approach; the proposed methodology is employed for selecting attractive equities through the evaluation of the overall corporate performance of the corresponding firms.

The problem is NP-hard, and we cannot postulate that algorithmic performance alone, even in a "parallel" version, will overcome this complexity, despite efforts such as (Hamadi, 2009). Models based on the elementary behaviors of reactive agents (e.g. ACO and PSO (Pour et al., 2006)) have had successes on certain problems, whereas other, more complex models (cognitive agents), inspired by human behaviors such as negotiation, cooperation and game theory, still offer a range of approaches to overcome the problem's difficulty. These approaches (Andre, 1986; Rossi, 1986; Sadgal, 1989; Stephan, 2000) are promising for several reasons: their expressive power allows the integration of several types of constraints; they introduce a qualitative aspect into the solution decision; they support the cooperation of several types of knowledge; and modeling with the "Agents" paradigm is one way to obtain architectures better adapted to the resolution process. The current trend is to introduce negotiation and argumentation (Hsairi, 2009) to solve problems that cannot be expressed directly with evaluation functions.

Generally, the SOP is likened to a Constraint Satisfaction Problem (CSP), widened by methods that treat constraints on a scale of "severities", thus incorporating preferences.

In this article we present a multi-agent architecture allowing the requests of the user (demands and preferences) to be expressed easily and naturally and accommodating several communities of agents, each community using its own resolution approach. The objective is twofold: on the one hand, to solve CSPs with optimization (one approach is explored here); on the other hand, to develop the "infrastructure", according to this architecture, to integrate several other resolution approaches and models.

In the following section, we present the state of the art in this field. Section 3 gives a detailed description of the suggested approach. Section 4 presents negotiation and cooperation as a way for a community of agents to solve the problem through coalition. Section 5 provides an example to illustrate our point. Finally, we conclude with an optimistic note on work to come.

State of the art

Space Occupation Problem (SOP)

Often, the space occupation problem is expressed as a CSP or SCSP (Space CSP) (Baykan et al., 1991; Lhomme, 1993; Duchêne, 2004), where the objects are identified with multidimensional variables. A variable would be, for example, a vector of position, orientation and dimensions. In most approaches, the constraints are expressed using geometrical relations, but there naturally exist other constraint types of a topological and/or functional nature. According to several authors (Lhomme, 1993), the main difficulties in solving this problem come from several aspects, such as the simultaneous presence of constraints and objectives (satisfaction and optimality).

The difficulty of optimizing antagonistic criteria and of obtaining a discrete formulation of the problem leads to NP-hardness or worse.

Resolution Methods

- Traditional techniques

To try to solve this problem, several approaches have been used. They can be grouped into three main categories. The constructive approach is a top-down approach, as in Fletcher (Fletcher, 1965) and Fox (Baykan et al., 1991). The iterative approach tries to improve a space occupation starting from an earlier one, by moving an object or by permuting two objects (Shahookar, 1991). The hybrid approach couples, more or less tightly, the two preceding approaches (Watanabe et al., 1985; Donikian et al., 1991). The basic algorithm is the chronological backtrack (Golomb, 1965), but this mechanism poses a selectivity problem: in the event of failure, the last choice made is reconsidered, without checking whether that choice bears any responsibility for the current failure.

- The distribution aspect - Resolution by SMA/DSCSP

A DCSP uses the traditional definition of an SCSP, adding the assumption that the variables (or constraints) are managed by agents which seek to assign values so as to satisfy the constraints. Constraints can exist between variables of the same agent (intra-agent constraints) or between variables of different agents (inter-agent constraints). Solving the problem is usually seen as achieving the coherence, or consistency, of a multi-agent system (Bessiere et al., 2001).

Resolution in a distributed environment allows parallelism, and therefore offers a possibility to save time. But this requires using communication mechanisms efficiently in order to ensure the coherence of the system during the resolution.

In his work, Yokoo (Yokoo, 2000a) adapted several algorithms: DBA, ABT, AWCS and ERA. The agents are considered to be in charge of keeping the environment up to date. These approaches obtain partial solutions quickly, which can be interesting for dynamic problems requiring great reactivity.

Another approach, based on cooperation, was devised by (Mailler, 2006) in APO. The agents have priorities and cooperate during mediation meetings: when an agent cannot find a value consistent with the higher-priority agents, it launches a mediation meeting, or it changes its value and transmits it to its neighbors. The ADOPT method (Modi, 2003) has distributed constraint optimization as its principal application: each constraint is associated with a cost and each agent has to minimize the 'global objective' function (the total cost of the constraints).

Since the end product of the problem is a configuration (an assignment of the variables) satisfying the constraints, (Georgé, 2004) proposed reaching this solution by emergence through the agents' self-organization. The artificial system must then fulfill an adequate function; to change the function, one simply changes the organization of the components of the system. These mechanisms are specified by rules governing the self-organization between the components, without depending on knowledge of the collective function (Picard, 2007). Several other authors propose resolutions of particular cases involving organization and coalition (Pour et al., 2006; Guerra-Hernández et al., 2004; Hirayama et al., 1995).

The approaches cited above improve the traditional algorithms by introducing distribution, often under the parallelism aspect, while admitting, according to Yokoo (Yokoo, 2000b), certain assumptions such as communication by messages. However, most of these systems encounter a communication problem: a great number of messages generated by the agents must be managed. The idea of the emergence of the solution is interesting, but finding the specific adequate functions is very difficult. With our approach, we therefore seek to solve the problem by a collective decision of all agents, using coalition and deliberation mechanisms.

Suggested approach

General architecture

We present a MAS architecture that allows the resolution to be adapted to the problems by using several types of agents, from a simple reactive ("reflex") agent to a more complex cognitive one. The recourse to a multi-community architecture (see figure 1) of contextual agents, depending on the CSP categories, is justified by several arguments:
1) the resolution is perceived as the effort of several units, each one contributing a partial or total solution;
2) the interpretation of the problem expressed by the user is part of the resolution; the presence of highly cognitive agents is useful for the clarification of user requests that are often too general;
3) the adaptation of the algorithm to the problem (or the reverse) often influences the solution quality;
4) the question of parallelism is reconsidered under Distributed AI; the MAS paradigm offers new possibilities to converge towards adapted solutions via competition, cooperation, negotiation, organization, …

Our long-term objective is to offer a system model able to receive several communities of agents (solvers (Boussaa, 2009)), each specialized in the resolution of a class of problems. Each community has its own mode of behavior and its own resolution methods. The Interface agent (Supervisor) deals with the interpretation of the initial problem and of the contexts for the specialized communities. The solutions suggested by a community can be retained by the supervisor according to some evaluation criteria; a final decision is agreed upon with the user (figure 1).

Figure 1. The general architecture of the SMA

a. Types of agents

- User Agent (UA)

The User agent represents the user. It initiates the problem in the form of descriptions concerning the occupation space, the objects to be placed and the demands (constraints and objectives). It communicates with the Supervisor agent through an interface. The dialog also covers intermediate and final decisions about solutions.

- Supervisor Agent (SA)

The Supervisor agent is a "mediator" between the communities of solvers and the User agent. It represents the descriptions and demands as a base of facts and rules in predicate logic.

Using transformation rules, the requests are converted into "severe" constraints, which must be respected obligatorily, and into preferences, which one hopes to fulfil as well as possible (see the example in section 5).

The SA has the heavy task of selecting the community "able" to solve the problem, using a set of problem categories and task announcements. For that, this agent has a schema, or even an ontology, enabling it to classify the problem (placement, cutting, routing, …) and to suggest the community of suitable solvers.

- The Community of agents

A community consists of homogeneous agents equipped with competences and specialized in the resolution of one or more classes of problems according to a resolution model (competition, cooperation, …; see section 4).

b. Environments

The choice of the environment and its properties is closely related to the CSP type and to the singularity of the community. In general, within the framework of a closed system (an environment with known limits and agents), it is possible to determine, for each agent, its neighbors. One way to define neighbors (agents linked by constraints, for example) is to fix them initially when the agents are created. The choices made during the construction of the neighborhoods can steer a model in any direction: Galam and Zucker (Galam, 2000), for example, obtain different results depending on the number of agents that interact together in their voting model of a group of individuals.

c. Interaction of the agents within a community

The inter-agent interactions, and the way they are organized, lead agents to coordinate themselves, to cooperate or to negotiate. Coordination is an essential point, especially with respect to a computer implementation of the multi-agent model. Indeed, determining who does what and when is a non-trivial problem that can have an infinity of solutions, and each of these solutions can appreciably modify the results obtained in simulations (Lawson, 2000). Figure 2 recapitulates what is to be integrated into the system.

Figure 2. Diagrams of interaction

Cooperation, negotiation and coordination are supported by certain theories which have proved reliable in several fields.

Decision theory, where the agent tries to maximize a criterion (called utility), is rather close to game theory. One supposes that agents make rational choices and pick, among the alternatives, the one with the greater utility. The difference between decision and game theories is that game theory takes into account not only the current situation but also the future choices of the agents. As a result, game theory tends to create equilibrium situations in which no agent has an interest in deviating (Querou, 2000). This type of model can work in rounds where the agents successively have the possibility to make a decision.

Coalition formation is another approach used in agent interactions. For a group of agents confronted with a request, it consists in making individual compromises in order to reach a consensus among all parties (in the ideal case). If more than two agents are involved, alliance mechanisms can be introduced (see section 4) to lead to that consensus more quickly. The difficulty is then to define the communication protocol in an adequate way (Vauvert, 2000).

The protocol must allow the agents to exchange their current choices and to modify them until a consensus is reached. There is no ideal solution, as the possible ways for the system to self-organize are numerous.

Basic concepts common to agent communities

In order to provide a general structure for the resolution of the SOP, we present here a description based on the following definitions:

A placement space of two or three dimensions, in which geometrical objects, possibly with functional and topological characteristics, will be placed. The installation is governed by a set of constraints on the characteristics of the objects and of the space.

The goal is to occupy the space with the whole set of objects while satisfying the constraints and fulfilling, as well as possible, the objectives expressed by the application.

In the distributed version of a CSP, one traditionally distributes the variables or the constraints over agents. Each agent is given the responsibility of locally solving its own problem while contributing to the global resolution.

We consider that each agent deals with one object to be placed in the occupation space (figure 3).

Figure 3. Occupation of space by agents (objects)

3.1.1. Notations and definitions

In the model that we propose, the physical world is abstracted in the form of States, Actions and State Transitions caused by the actions.

A State is the whole set of assignments of objects (agents) to places in the occupation space. A State transition relates to the passage from one state to another. Lastly, an action is the means by which a transition is carried out.

The resolution is seen as a series of transitions leading from an initial State to a final State. The final State is characterized by the satisfaction of all the constraints and the best possible achievement of the objectives.

Let us consider the following definitions:

E: the space, in 2D or 3D (e.g. a rectangle (x0, y0, xd, yd))

A = {A1, …, An}: set of agents (each agent corresponds to an object)

S = {S0, …, Sm}: set of States

C = {c1, …, cp}: set of constraints

B = {b1, …, bq}: set of objectives

AC = {a1, …, ar}: set of actions of the agents (displacement operators). An action aj is regarded as the joint action of all the agents, aj = (aj1, …, aji, …, ajn), where aji is the action of agent Ai

P = {p1, …, pt}: set of plans. A plan pi is a set of joint actions: pi = {a0, a1, …, ak}

- Actions

An individual action "a" of an agent is a change of its place in the space E. This change can be carried out using a combination of geometrical operators such as translation and rotation (in certain design cases, the action can also be a change of geometry).

For example, in the plane (figure 4): a = (tu, tv, rw) ∈ R3. If Pi is the place occupied by Ai in E, Pi = (xi, yi, ti), with the location (xi, yi) ∈ E and the orientation ti of the local reference frame of the object with respect to the absolute reference frame of the occupation space, then a(Pi) = P'i is the change of place and orientation of Ai by application of the action a = (tu, tv, rw):

P'i = (x'i, y'i, t'i) such that x'i = xi + tu, y'i = yi + tv and t'i = ti + rw.

a0 = (0, 0, 0) is the identity action: the agent does not move.
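To make the notation concrete, here is a minimal Python sketch of a place and of the application of an action a = (tu, tv, rw); the class and function names are illustrative and not part of the model.

```python
from dataclasses import dataclass

@dataclass
class Place:
    """Place Pi = (xi, yi, ti): location in E and orientation of the object."""
    x: float
    y: float
    t: float  # orientation (degrees here)

def apply_action(p: Place, a: tuple) -> Place:
    """Apply the action a = (tu, tv, rw) to a place: translation plus rotation."""
    tu, tv, rw = a
    return Place(p.x + tu, p.y + tv, p.t + rw)

# The identity action a0 = (0, 0, 0) leaves the place unchanged.
p1 = Place(10.0, 5.0, 0.0)
print(apply_action(p1, (15.0, -3.0, 0.0)))   # Place(x=25.0, y=2.0, t=0.0)
print(apply_action(p1, (0.0, 0.0, 0.0)))     # identity: unchanged
```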

- Place

A place for an agent is defined by a geometrical position Pi (coordinates, orientation) and by dimensions. We note this place Pli = (Pi, Dgi), where Dgi denotes the geometrical dimensions (e.g. length and width for a rectangular object in 2D).

Figure 4. An object in a reference frame related to the space E

- State

We define a State Sk as the triplet <PLk, Ck, Bk> where:
PLk = {Plk1, …, Plki, …, Plkn} is the set of places occupied by the agents in E (in state Sk, Plki is the place of agent Ai);
Ck is the subset of constraints satisfied in the State Sk;
Bk is the subset of objectives achieved in the State Sk.

- States' Transition

A transition between States is defined by the pair (Sk, Sl), noted Sk→Sl; the State Sl is regarded as a consequence of the action applied to Sk that causes the transition. We note (Sk→Sl)/aj to indicate the transition and its cause aj.

For example, the transition S1→S2 (figure 5) is carried out by the joint action a = (a1, a2, a3, a4) such that a2 = a4 = (0, 0, 0), a1 = (+15, -3, 0) and a3 = (0, 0, +90).

Figure 5. The displacement of A1 and the rotation of A3 change the situation from S1 to S2
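A State and a joint-action transition can be sketched as follows; this is a hypothetical Python illustration, and the `evaluate` callback that recomputes Ck and Bk is assumed to be supplied by the application.

```python
from dataclasses import dataclass

# A joint action holds one individual action (tu, tv, rw) per agent name.
JointAction = dict

@dataclass
class State:
    places: dict     # agent name -> (x, y, t)
    satisfied: set   # Ck: constraints satisfied in this state
    achieved: set    # Bk: objectives achieved in this state

def transition(s: State, a: JointAction, evaluate) -> State:
    """Compute Sl from Sk under the joint action a; `evaluate` recomputes Ck and Bk."""
    new_places = {}
    for name, (x, y, t) in s.places.items():
        tu, tv, rw = a.get(name, (0, 0, 0))   # missing entry = identity action a0
        new_places[name] = (x + tu, y + tv, t + rw)
    satisfied, achieved = evaluate(new_places)
    return State(new_places, satisfied, achieved)

# Example mirroring the text: A1 translated by (+15, -3), A3 rotated by +90 degrees.
# s2 = transition(s1, {"A1": (15, -3, 0), "A3": (0, 0, 90)}, evaluate)
```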

Admissibility and optimality

- Admissibility

C is the set of "severe" constraints of the SOP, with cardinality card(C) = |C| = p. A constraint is seen as a relation (the arcs in figure 3) between one or more agents.

A State S = <PLs, Cs, Bs> is said to be admissible if and only if Cs = C (i.e. all the constraints are satisfied). Note: the relation "having the same set of satisfied constraints" is an equivalence relation on the set of States S. Thereby, the admissible States constitute an equivalence class, characterized by having C as their set of satisfied constraints.

Let Ea be this class: Ea = {S ∈ S / S = <PLs, C, Bs>}.

Agent-level satisfaction: if CAi denotes the set of constraints of agent Ai, then ∪i CAi = C, Ai ∈ A. A State S = <PLs, Cs, Bs> satisfying the agent Ai is said to be Ai-admissible if and only if CAi ⊆ Cs. More generally, S = <PLs, Cs, Bs> satisfies a group of agents GA if CAi ⊆ Cs for every Ai ∈ GA.

Remark (based on the individual satisfactions of the agents): for any State S, if ∀i, CAi ⊆ Cs, then S is admissible.

- Optimality

The goal is to find an admissible State optimizing the objectives (preferences). These preferences can be regarded as less severe constraints, but they pose a difficult problem: the optimization itself. Most applications address optimization by seeking more adequate models; this is the case of the high integration of electronic components on a very limited surface (VLSI) (12). On our side, we suppose the existence of an evaluation function measuring the degree of realization of the objectives. Thus, our problem is to find an admissible State maximizing this evaluation function, based on the appreciation of each agent.

- Appreciation

An agent can evaluate any State S according to the satisfaction of its own constraints, of the remaining constraints and of the objectives. This evaluation is defined by the function ju: A × S → (0,1).

ju(Ai, S) expresses the degree of satisfaction (constraints and preferences) of agent Ai for the State S. This value is calculated by the agent Ai taking into account its position in PLs, the subset of satisfied constraints and the objectives achieved, locally and globally.

Elements of resolution

To solve the problem, the existence of Ea (the set of admissible solutions) is primordial. In that case, procedures have to determine the optimal solution, in the sense of the objectives, within Ea.

Given the problem complexity, we make two assumptions:

- Proving the existence of Ea is not necessary to begin the resolution.

- The search for the optimal solution is evaluated in terms of convergence on Ea (objectives achieved on the admissible States).

The search for an admissible State

- Search for an Ai-k-admissible State:

A State is said to be Ai-k-admissible if k constraints of the agent Ai are not satisfied (k is called the degree of non-satisfaction). Thus, an Ai-admissible State is an Ai-0-admissible State. We say that Ai and Aj are neighbors if there is at least one constraint between Ai and Aj. Let AGi be the set of Ai's neighbors. We present here a pseudo-algorithm to filter the Ai-k-admissible States with the smallest degree k:

/********************* for each agent ***********************/
Rechercher_Ai_k_admissibles(Ai, Sa) {
  kai : current degree of dissatisfaction of the agent Ai in Sa
  k = 0 : degree of dissatisfaction being explored
  ESi = empty : list of the possible Ai-k-admissible States
  Pi  = empty : list of the plans to reach the Ai-k-admissible States

  1- find Zik(Sa), the zone of possible places with k unsatisfied constraints of Ai
     If Zik(Sa) != empty then:
        For each possible place Plj for Ai in Zik(Sa) do:
           determine the plan pjk and the State Sjk such that (Sa -> Sjk)/pjk, with Sjk = <PLi, Ci, Bi>
           ESi = ESi U {Sjk} ; Pi = Pi U {pjk}
        End For
        If ESi != empty go to 2
     else
        k = k + 1
        if k >= kai go to 2 else go to 1
     end if

  2- if ESi = empty then ESi = {Sa}, k = kai
     else for each Sjk in ESi record:
        the degree of dissatisfaction k,
        the plan pjk to reach Sjk from Sa,
        the evaluation of Sjk by ju
     end if
  return k }

Notes:
- All agents run in parallel; the algorithm provides the possible places when the domain is discrete. To avoid intensive CPU time, the agent can save only the envelope of the zone Zik(Sa); the determination of the places and of the associated plans is then carried out at the convenient time.
- The answer Sik of Ai relates only to satisfaction; it is the evaluation ju which takes the objectives into account on the Ai-k-admissible States.
- The algorithm can be extended: every Si ∈ ESi for Ai is also validated by its neighbors (AGi).
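As an illustration only, the filtering loop above can be sketched in Python as follows; the helpers zone_of_places (computing Zik(Sa)), plan_to (building pjk and Sjk) and evaluate_ju are assumed to be supplied by the application.

```python
def search_k_admissible(agent, s_a, k_current, zone_of_places, plan_to, evaluate_ju):
    """Sketch of the filtering above: find the Ai-k-admissible states with the
    smallest degree k, together with the plans that reach them and their ju value."""
    k = 0
    while k < k_current:
        candidates = []
        # Zik(Sa): possible places of `agent` leaving exactly k constraints unsatisfied
        for place in zone_of_places(agent, s_a, k):
            plan, state = plan_to(agent, s_a, place)        # (Sa -> Sjk)/pjk
            candidates.append((state, plan, evaluate_ju(agent, state)))
        if candidates:                                       # some Ai-k-admissible states exist
            return k, candidates
        k += 1                                               # relax: allow one more violation
    # nothing better than the current situation: keep Sa with its current degree kai
    return k_current, [(s_a, None, evaluate_ju(agent, s_a))]
```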

The search for an optimal State for a group AG of neighboring agents

- Ai-Optimal State

The first task of the agent is the generation of plans leading, at least, to Ai-k-admissible solutions. But, given the presence of the objectives, the agent will seek to propose the "best plan" in the sense of the appreciation ju. The Ai-optimal State is Sopi such that:

Sopi = argmax { ju(Ai, Si) : Si Ai-k-admissible, Si ∈ ESi }

We also note uji = ju(Aj, Si), for every Si ∈ ESi, the evaluation of Si by the agent Aj.
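Continuing the same sketch, the Ai-optimal State is simply the candidate maximizing ju over the set built by the previous routine (illustrative code, assuming the candidate list from the earlier sketch).

```python
def ai_optimal_state(candidates):
    """Select Sopi = argmax of ju(Ai, Si) over the Ai-k-admissible states ESi.
    `candidates` is the list of (state, plan, ju_value) tuples built above."""
    return max(candidates, key=lambda item: item[2])

# Usage: best_state, best_plan, best_ju = ai_optimal_state(candidates)
```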

Resolution Techniques

- Traditional multi-criteria techniques

With a utility function, the problem can be solved in a centralized way, according to mathematical models, using multi-criteria techniques (Roy et al., 1993).

These techniques establish an order on the offered possibilities, and the (human) decision maker has to arbitrate. Thus, in our case, the agents replace the decision makers, and the problem becomes multi-criteria and multi-decision-maker. These techniques aggregate the utilities in order to extract "the best" plan (solution) through a good choice of weightings. However, the numerical measurement of an agent's utility is already a strong assumption compared to a simple ranking of the available choices, and comparing the utilities of two individuals is even more difficult: why should a plan appreciated at 0.8 by one agent and 0.5 by another be preferred to one appreciated at 0.4 and 0.9 respectively? When there are several decision makers, it is difficult to represent the importance of a decision maker by a weight while respecting the general structure of these techniques. The drawback is that the aggregation procedures, and the transformation of criteria into constraints, are delicate to carry out, especially when certain decision makers and interlocutors have no scientific background.

- Negotiation / Cooperation

In order to build a model of negotiation by a multi-agent system, some elements must be defined. In (Jennings, 2001), the authors identify three components as being most fundamental:

1- Negotiation Protocol: It is a set of rules managing the interaction.

2- Negotiation Object: It consists of the attributes on which the agents wish to find an agreement.

3- Decision Strategy or model of the agents: It is the reasoning process that agents use, in agreement with the negotiation protocol, to make decisions and achieve their goals. The types of decisions to be taken are influenced by the protocol and by the nature of the negotiation object.

In the literature, there exist three main approaches to negotiation in multi-agent systems (Rahwan, 2003), based on:

Game theory, which studies the behavior (real or justified a posteriori) of a "rational" agent confronted with one or many adversaries during a game, in order to find an optimal strategy maximizing its own utility. Several protocols have been studied (Rubinstein, 1982). Game theory also uses strong convergence concepts based on the Nash equilibrium or the Pareto optimum.

Heuristics: the lack of resources and time does not make it possible to elaborate the best policy by a game-theoretic analysis. In order to mitigate these limits, heuristic approaches try to reach acceptable approximations of the theoretical optimal results found by game theory (Faratin, 2002).

Argumentation: argumentation is an adequate model to represent the internal reasoning of an agent, and it is based on the construction of arguments. It models multi-agent interactions (like negotiation (Amgoud, 2007)) in the form of dialogs (Amgoud, 2002). For example, through an argument an agent can give the precise reasons for which it refuses an offer; this leads its adversary, consequently, to modify its next proposals.

A proposal: resolution by forming coalitions among agents

In our community of cooperative agents, an agent uses a coalition when it is unable to satisfy all its constraints, i.e. when it cannot find an Ai-0-admissible State. We present here an example of resolution based on negotiation, using a game-theoretic analysis, among the different models for negotiation (cited above) that will be integrated in the system. As it is difficult to aggregate the utilities of the agents, the agent seeks a plan (or plans) accepted by all its neighbors. This plan must produce a State which is "better" than, or at least equivalent to, the current State for each agent. The solution must then achieve a Pareto optimum (in the game-theoretic sense): no agent can increase its utility without decreasing the utility of at least one other agent. The negotiation protocol is based on this principle (Caillou, 2002).

The agent initiating the negotiation seeks the plans it prefers. It transfers them, grouped and ordered, to a neighboring agent of its choice. The agent receiving the plans filters those that are "better" than the current state, reorders them according to its own utility, and sends them, following the same procedure, to an agent of its choice. The last agent selects the most interesting plan (or plans), which constitutes the Pareto optimum.

Definitions

- Coalition: A coalition is a subset S ⊆ A = {A1, …, An}, where A is the set of all agents (2^n − 1 coalitions are possible). S = {Ai} is a coalition of only one agent (a singleton); S = A is the coalition of all the agents (the grand coalition). In the DCSP and SOP contexts, resolution by coalition has been applied to several problems such as task allocation, resource allocation, … But the CSP differs from those problems, which define the coalition structure a priori. Indeed, in a CSP there is a strong dependence: the agents' choices are not fixed and change permanently according to the others' choices. Each constraint involves all the agents that must satisfy it. Agents therefore form a coalition around one or several constraints to find a solution; when there is more than one solution, the agents choose one that optimizes the objectives. Several neighboring agents (having at least one joint constraint) form a coalition and provide concerted action plans to satisfy the joint constraints as well as possible.

- Set of coalitions: a set representing a solution to the coalition formation problem. The agents form coalitions to satisfy the constraints together with the objectives. In our case, it is the set of plans which provides a State solution. We then denote by Group a set of sets of coalitions.

- Context: the parameters taken into account in the problem (must be stable during the negotiation).

- Utility function: the utility function can be ordinal or cardinal. The cardinal one associates a utility with a set of coalitions and a given context; the ordinal one makes it possible to compare two sets in a given context. In the latter case, measuring the utility of a State means comparing it with a reference State (see section 4.4).

- Reference State: The agents must know whether they accept "solution States", so they must be able to compare a State with what they are able to obtain during the negotiation. This minimum is the reference State.

- Pareto optimum: A Pareto optimum is a situation where it is not possible to improve the situation of one agent without deteriorating the situation of at least one other.
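The Pareto-dominance test underlying this definition can be sketched as follows; utility vectors have one entry per agent, and the example values are taken from the example of section 5.

```python
def dominates(u_a, u_b):
    """u_a Pareto-dominates u_b if no agent is worse off and at least one is better.
    u_a and u_b are utility vectors, one entry per agent."""
    return all(x >= y for x, y in zip(u_a, u_b)) and any(x > y for x, y in zip(u_a, u_b))

def pareto_front(utility_vectors):
    """Keep the utility vectors not dominated by any other (the Pareto optima)."""
    return [u for u in utility_vectors
            if not any(dominates(v, u) for v in utility_vectors if v != u)]

# With the 4-agent vectors appearing in section 5:
print(pareto_front([(3, 3, 2, 2), (2, 3, 2, 3), (0, 0, 0, 0)]))
# -> [(3, 3, 2, 2), (2, 3, 2, 3)]
```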

Negotiation Algorithm

Each negotiation proceeds in three phases: initialization of the negotiation and transfer of constraints, negotiation, and transmission of the solution. Three behaviors are distinguished: the agent that initiates the negotiation, the intermediate agents, and the last agent to take part. Note that the order of the agents is not necessarily the same for all negotiations, but the order is stable during a given negotiation.

- Phase 1: Initialization of the negotiation and transfer of constraints. Any agent of the SMA may initiate the negotiation. The initiating agent informs all the others that it is beginning a new negotiation; any agent wanting to start another one has to wait for the end of the current negotiation. The initiating agent computes all the possible coalitions (possible plans), gathers them into a group of solution sets and sends it to itself (or to the agent which must begin the negotiation, if a fixed order exists) in order to start the negotiation.

- Phase 2: Negotiation. When an agent receives a group of sets, it ranks the received sets by order of preference (according to its utility) into homogeneous groups. It ranks only the sets that are at least equivalent to its reference State; the others, not ranked, are abandoned. The groups are then sent to the following agent in decreasing order. If all agents have already taken part in the negotiation (i.e. the agent is the last one) and at least one of the received sets is acceptable, it retains the best set (or group of sets): this group is a Pareto optimum. One set can then be selected by optimizing the objectives (or randomly) and will be suggested to the agents.

- Phase 3: Transmission of the solution

Once the last agent has identified a Pareto optimum, it transmits this set to all the agents, which accept it as the solution of the negotiation.
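A possible sequential simulation of the three phases is sketched below; it is an illustration under simplifying assumptions: the groups built by the initiator, the utilities and the reference values are passed in as plain data structures rather than exchanged by messages.

```python
def negotiate(agents, groups, utilities, reference):
    """Sketch of one negotiation. `groups` are the plan groups built by the initiator
    (agents[0]); `utilities[agent][plan]` is that agent's utility for the state reached
    by `plan`; `reference[agent]` is the agent's utility for the reference State."""
    for agent in agents[1:]:                      # phase 2: intermediate agents, then the last one
        kept = []
        for group in groups:
            # keep only the plans at least equivalent to the agent's reference State
            filtered = [p for p in group if utilities[agent][p] >= reference[agent]]
            if filtered:
                # reorder the group according to this agent's own utility
                kept.append(sorted(filtered, key=lambda p: utilities[agent][p], reverse=True))
        groups = kept
        if not groups:
            return None                           # no acceptable set: the negotiation fails
    # phase 3: the last agent keeps the best remaining group (a Pareto optimum)
    # and transmits it; here we simply return its first plan.
    return groups[0][0]
```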

Resolution Steps for SOP

Because of the problem complexity, each agent initially works to satisfy its own constraints (formation of a singleton coalition). For the constraints that remain unsatisfied, it forms a coalition with the agents involved in those same constraints. In other words, the coalition structure is not fixed a priori but is formed in deadlock cases. Moreover, our problem resembles a "repeated game": the solution can be obtained over several negotiation rounds.

1. A starting State S0 is given. Several heuristics can be used here, for example: S0 is a first occupation of the space ignoring the constraints; or, since there are dependences, one can proceed by sequential occupation: an order is established and each agent seeks its plan with respect to the agents already placed.

2. With the reference State (S0 at the beginning), the algorithm (section 3.3.1) is carried out. Any agent Ai which is not able to propose an Ai-0-admissible State will seek to form a coalition with its neighbors.

3. A coalition solves the problem by negotiation according to algorithm 4.2:

The plans provided in step 2 are considered and evaluated by each member of the coalition. The agent initiating the negotiation orders the plans into groups according to its utility and sends them to an agent of its choice; that agent retains only those providing "better" States (according to its own utility, with respect to the reference state), and so on, up to the last agent. The plans retained by the last agent constitute the solution. If the State obtained is Ai-0-admissible for each Ai, then the negotiation is finished. If not, the current State is taken as the new reference State and the process restarts from step 2.

4. If there is no improvement after a predefined number of negotiation rounds, an agent decides to stop the formation of coalitions.
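The four steps can be sketched as a loop; the helper callables (propose_plans, negotiate_fn, fully_admissible) are assumptions standing for the mechanisms described above.

```python
def solve_sop(agents, initial_state, propose_plans, negotiate_fn, fully_admissible, max_rounds=10):
    """Sketch of the resolution loop (steps 1-4): starting from S0, unsatisfied agents
    form a coalition and negotiate a new reference State until every agent is
    Ai-0-admissible or the round limit is reached."""
    reference = initial_state                                   # step 1
    for _ in range(max_rounds):                                 # step 4: bounded rounds
        if fully_admissible(reference, agents):                 # every Ai is Ai-0-admissible
            return reference
        coalition = [a for a in agents if not fully_admissible(reference, [a])]
        plans = propose_plans(coalition, reference)             # step 2: Ai-k-admissible plans
        chosen = negotiate_fn(coalition, plans, reference)      # step 3: Pareto-optimal choice
        if chosen is None:
            break                                               # no improvement possible
        reference = chosen                                      # new reference State
    return reference                                            # best State reached
```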

Determination of agent utility function (case of constraint satisfaction)

We specify here how the agent Aj calculates the utility value uij for a plan pi suggested by the agent Ai. Although it is difficult to model this evaluation with a quantitative function, certain indices must be taken into account: the satisfaction rate of the constraints relating to the agent, the potential utilization ratio, the total satisfaction rate and the neighborhood satisfaction rate.

a. Rates definition

Let: S the actual state, E the occupation space, C the set of constraints, CAj the set of constraints of agent Aj, ZS_Aj the satisfaction zone of Aj in S, Cj_Si the set of constraints of Aj satisfied in Si, and ai the action of Ai with ai(S) = Si (i.e. (S→Si)/ai).

We call:
Relative satisfaction rate in Si: rij = 1 - |Cj_Si| / |CAj|
Potential utilization ratio in Si: zij = |ZS_Aj| / |E|
Total satisfaction rate in Si: gi = 1 - |C_Si| / |C|
Neighborhood satisfaction rate: ni = 1 - |CAgj_Si| / |C|, where Agj is the set of Aj's neighbors

b. Utility

We can model the utility uij, which is Aj's judgment on the action of Ai, by a linear combination as follows:

uij = w1j*rij + w2j*zij + w3j*gi   (we use gi when all agents are linked by constraints, otherwise ni)

The term w1j*rij + w2j*zij expresses the personal interest of the agent Aj; the term w3j*gi expresses the global interest. Let:

up_ij = rij + zij
ug_ij = gi

If w1j = w2j = αj and w3j = βj, then uij = αj*up_ij + βj*ug_ij.

With βj = 1 - αj: the more one privileges the personal interest, the more the global interest is ignored, and vice versa (αj ∈ (0,1)). Finally:

uij = αj*up_ij + (1 - αj)*ug_ij

Each agent Aj adopts its own strategy (choice of αj) to calculate its preference (e.g. αj = 1/2 is a neutral strategy).
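A direct transcription of this utility in Python (illustrative only):

```python
def utility(r_ij, z_ij, g_i, alpha_j):
    """uij = alpha_j * (rij + zij) + (1 - alpha_j) * gi, as defined above.
    alpha_j in (0, 1) balances personal interest against global interest."""
    u_personal = r_ij + z_ij       # up_ij
    u_global = g_i                 # ug_ij (use ni instead of gi if agents are not all linked)
    return alpha_j * u_personal + (1 - alpha_j) * u_global

# A neutral strategy (alpha_j = 0.5) weighs both interests equally:
print(utility(r_ij=0.5, z_ij=0.2, g_i=0.8, alpha_j=0.5))   # 0.75
```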

Discussion

In our open architecture (see 3.1), we can use any form of cooperation. In future work, we will study the Supervisor Agent (with rules) to justify and adapt methods to problems. We therefore use a Generic Cooperation-based Method definition (Picard, 2006), which rests on the following points:
- Cooperation can be viewed as a generic concept manipulated by problem solvers.
- It transcends all the CSP methods.
- It takes inspiration from biological and socio-economic notions of cooperation.
- An agent is unable to find the global solution alone and consequently has to interact locally with its neighbors in order to find the current actions able to reach its individual goals and help its neighbors.

This produces several categories of cooperation-based algorithms, such as the population-based approaches inspired by evolution and by the behavior of insects, birds, … Their principles are:
- A population is a set of individuals (agents).
- Each agent is able to find a solution to the problem.
- An agent knows the whole set of variables that define the problem.
- Agents coordinate to find a solution.
The common problem of this class is how to coordinate several concurrent searches to efficiently find a good solution. Several methods are essentially used in optimization problems: evolutionary algorithms, genetic algorithms (GA) (Holland, 1993), Particle Swarm Optimization (PSO) (Kennedy, 1995) and Ant Colony Optimization (ACO) (Dorigo, 2004).

In ACO, the pheromone deposited by ants gives relevant information about a region of the search space and later modifies the behavior of the other ants. In PSO, particles are influenced by the velocity and position of the local and global bests: a cooperative information exchange allowing an efficient exploration phase. In GA, the fitness function determines, at each generation, the better individuals, which share their genes with other members of the population to produce new relevant offspring.

The essential difference with the coalition-based resolution is that the cooperation is based on negotiations using game theory. An agent has a pseudo-global vision (it must know its neighbors) and not only a local one. It offers, accepts or rejects solutions in concert with its neighbors, and the utility function takes into account both local and global aspects (see 4.4).

The population-based approaches define local functions (pheromones, velocity, fitness, …) and the designer has to implement the general resolution strategy.

Cooperation through negotiation can use several methods, such as argumentation, and can also solve purely CSP problems, as opposed to current population-based approaches, which are used primarily for the optimization aspect.

4.5. An illustrative example

The N-queens problem can be regarded as a space occupation problem: each queen has N possible places (squares) on its column. To simplify, we consider the 4-queens problem. There is no optimization of the space, only rigid constraints to satisfy, namely: "no shared line or diagonal with the occupied squares". We restrict the definition of the utility as follows: ui_p = number of satisfied constraints for the single agent Ai, and ui_g = number of satisfied constraints for all the agents.

Each agent Ai represents one queen i, and all agents are neighbors. With αi = 1/2, for the agent Ai we have: ui = 1/2 (ui_p + ui_g).

Let us consider, for example, the starting situation S0 = (line1, line2, line3, line4) = (1,1,1,1), the positions of the 4 agents (linei is the line occupied by agent Ai on its column). Note that between two agents the maximum number of non-satisfied constraints is 1; thus, for one agent, the maximum number of non-satisfied constraints is 3. A1 cannot satisfy all its constraints (whatever its position), so it forms a coalition with its neighbors (A2, A3 and A4). S0 is the reference State at the beginning.

The utilities of the agents are identical: U(S0) = (u1, u2, u3, u4) = (0,0,0,0).

The plans (and thus the States obtained by these plans) suggested at the first round are:

By A1:
S11 = (2,1,1,1), Up(S11) = (2,0,1,1), hence u1 = 1/2 (2 + (2+0+1+1)) = 3, u2 = 2, u3 = u4 = 5/2
S21 = (3,1,1,1), Up(S21) = (2,1,0,1), hence u1 = 3, u2 = 5/2, u3 = 2, u4 = 5/2
S31 = (4,1,1,1), Up(S31) = (2,1,1,0), hence u1 = 3, u2 = 5/2, u3 = 5/2, u4 = 2
By A2:
S12 = (1,2,1,1), Up(S12) = (0,1,0,1), hence u1 = 1, u2 = 3/2, u3 = 1, u4 = 3/2
S22 = (1,3,1,1), Up(S22) = (1,2,1,0), hence u1 = 5/2, u2 = 3, u3 = 5/2, u4 = 2
S32 = (1,4,1,1), Up(S32) = (1,3,1,1), hence u1 = 7/2, u2 = 9/2, u3 = 7/2, u4 = 7/2
By A3:
S13 = (1,1,2,1), Up(S13) = (1,0,1,0), hence u1 = 3/2, u2 = 1, u3 = 3/2, u4 = 1
S23 = (1,1,3,1), Up(S23) = (0,1,2,1), hence u1 = 2, u2 = 5/2, u3 = 3, u4 = 5/2
S33 = (1,1,4,1), Up(S33) = (1,1,3,1), hence u1 = 7/2, u2 = 7/2, u3 = 9/2, u4 = 7/2
By A4:
S14 = (1,1,1,2), Up(S14) = (1,1,0,2), hence u1 = 5/2, u2 = 5/2, u3 = 2, u4 = 3
S24 = (1,1,1,3), Up(S24) = (1,0,1,2), hence u1 = 5/2, u2 = 2, u3 = 5/2, u4 = 3
S34 = (1,1,1,4), Up(S34) = (0,1,1,2), hence u1 = 2, u2 = 5/2, u3 = 5/2, u4 = 3
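These values can be reproduced with a short Python sketch of the restricted utility used in this example (state tuples give the line of each queen on its column).

```python
def satisfied_counts(state):
    """For a 4-queens state (line1, ..., line4), return Up(state): the number of
    satisfied pairwise constraints (no shared line or diagonal) for each agent."""
    n = len(state)
    counts = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            ok = state[i] != state[j] and abs(state[i] - state[j]) != abs(i - j)
            counts[i] += int(ok)
            counts[j] += int(ok)
    return counts

def utilities(state):
    """ui = 1/2 (ui_p + ui_g), with ui_g the total number of satisfied constraints."""
    up = satisfied_counts(state)
    ug = sum(up)
    return [0.5 * (p + ug) for p in up]

print(satisfied_counts((2, 1, 1, 1)))  # [2, 0, 1, 1]  -> Up(S11)
print(utilities((2, 1, 1, 1)))         # [3.0, 2.0, 2.5, 2.5]
```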

All these Sij solutions can be retained by the agents Aj because uj(Sij) > uj(S0). A1, which initiates the negotiation, then forms 6 groups of plans ordered (in decreasing order) according to its utility u1.

Groups in decreasing order: G1 = (S32, S33); G2 = (S11, S21, S31); G3 = (S22, S14, S24); G4 = (S23, S34); G5 = (S13) and G6 = (S12). These groups are sent in this order to A2.

Figure 6. Graph representing the utilities of the agents A1 and A2

G1 is sent first to agent A2 (chosen by A1; heuristics can be used here). A2 evaluates G1 according to its utility: for G1 = (S32, S33), u2(G1) = (9/2, 7/2), better than u1(G1) = (7/2, 7/2). A2 therefore keeps G1 in its entirety and sends it to A3 (its own choice), which keeps G1 because u3(G1) = (7/2, 9/2), and finally sends it to A4; u4(G1) = (7/2, 7/2), which A4 keeps because it is better than u4(S0) = 0. The full-satisfaction solution Sf with U(Sf) = (3,3,3,3) is not yet reached, so the agents decide on S32 or S33; suppose S32 is chosen (there are no objectives to discriminate between them). The reference State is then changed: S0 ← S32, with u1(S0) = 7/2, u2(S0) = 9/2, u3(S0) = 7/2 and u4(S0) = 7/2.

The agents proceed to a second round of negotiation, which yields two solutions:
S2_11 = (2,4,1,1), U(S2_11) = (3,3,2,2),
S2_24 = (1,4,1,3), U(S2_24) = (2,3,2,3).

S2_11 is chosen, following the same negotiation process; S0 ← S2_11, with U(S0) = (3,3,2,2). At the last (third) round, the State S3_14 = (2,4,1,3), proposed by A4, is accepted by all the agents: U(S3_14) = (3,3,3,3), which is a solution (satisfaction of all the constraints), and the negotiation ends. The final board S3_14 places the queens on lines 2, 4, 1 and 3 of columns 1 to 4.

This problem has been used as a test case by many CSP algorithms, for example:
- The nogoods (conflicting configurations) and potential solutions communicated by agents to their neighborhood in ABT or AWCS help the agents to cooperatively solve a DisCSP.
- The min-conflict heuristic used in AWCS or ERA is a means of representing the fact that agents act cooperatively by minimizing the negative impact of their actions.
- Population-based approaches (ACO, PSO, GA, …).

In this example, we only want to present the utility function and the Pareto-optimal mechanism, not to solve a real space occupation problem. We want to show that a CSP solution can be obtained under game theory as a Pareto optimum or a Nash equilibrium. For comparison, we quote here the ERA model (Environment, Reactive rules and Agents) (Liu, 2002), whose utility function reduces to the number of constraint violations. In solving a CSP with the ERA method, each agent represents a variable and its position corresponds to a value assignment for that variable. The environment of the whole multi-agent system contains all the possible domain values of the problem and, at the same time, records the violation numbers for all the positions. An agent can move within its row, which represents its domain. Three reactive behaviors (rules) were introduced: better-move, least-move and random-move. The move of an agent affects the violation numbers of the other rows in the environment.


Figure 7. (a) The representation of domain values for a 4-queen problem. (b) Four agents dispatched into the 4-queen environment. (c) Updated violation numbers corresponding to the positions of the four agents.


Figure 8. (a) Violation numbers at the initialization step. (b) Violation numbers updated having placed a1 at (3, 1).

At the initialization step, the domain values are recorded as e(i,j).value (see Figure 7(a)) and the violation numbers for all positions are set to zero (see Figure 8(a)). After that, the agents are randomly placed into different rows.

For instance, if agent a1 is placed at position (3,1), the violation numbers in the environment are updated accordingly, as shown in Figure 8(b).

ERA was tested on several applications, such as n-queens and coloring problems, and compared with earlier algorithms. Despite its success, this approach suffers from a lack of explicit communication (agents communicate only through the environment) and of cooperation mechanisms; using these concepts can accelerate convergence.

Conclusion

We have presented a community-based SMA architecture able to receive several types of agent societies. The objective is to develop resolution models for constraint satisfaction and optimization problems. Two communities were studied. The first, not described here, relates to the implementation of a deliberation process based on the principle of influence and change of the agents' convictions. The other implements a resolution using coalitions.

The coalition approach allows all agents to treat the proposals and to participate in the decision, which guarantees the selection of an admissible solution that is "best" in the sense of the Pareto optimum. The negotiation provides an environment for building solutions by coalition while avoiding the full problem complexity.

An implementation of our approach based on BDI agents (23) made it possible to check certain assumptions and to adapt certain resolutions. BDI agents constitute a favorable environment for widely expressing constraints and objectives.

The system is open and able to integrate several types of communities. Moreover, it is possible to compare the solutions using predicates representing the objectives.

Future work will relate to the development of the supervisor's role in the interpretation and the backtracking of certain solutions. We will rely especially on other interaction modes between the agents for resolution, such as emergence or argumentation.

References

Andre JM (1986). Towards an intelligent assistance system for space installation. CADOO, CIIAM86-Artificial Intelligence, pp. 31-47.

Barra JR, Becker M, Kouka EFM, Tricot M (1987) Application of Data Analysis Methods and of Simulated Annealing for the Automatic Layout of Circuits. Comput. Syst. Sci. Eng. 2(1): 3-15.

Baykan C, Fox M (1991). Constraint satisfaction techniques for space planning. In Intelligent CAD Systems III - Practical Experiment and Evaluation, pp. 187-204.

Donikian S, Hégron G (1991). Towards a declarative method for 3D scene sketch modeling. Technical report, IRISA, Rennes.

Fletcher J (1965). A program to solve the pentomino problem by the recursive use of macros. Comm. ACM, 8 (10): 621-623.

Golomb S, Baumert L (1965). Backtrack programming. J. ACM, 12:516 - 524.

Haralick R, Elliott G (1980). Increasing tree search efficiency for constraint satisfaction problems. Artificial Intelligence. 14:263-313.

Lhomme O (1993). Consistency techniques for numeric CSPs. In Proceedings of the 13th International Joint Conference On Artificial Intelligence (IJCAI93), pp. 232-238.

Rossi G (1986). Use of PROLOG in implementation of Expert Systems, New Generation Computing. 4: 321-329.

Sadgal M (1989). Contribution to the problems of placement and routing. Thesis of Doctorate, Claude-Bernard University, Lyon 1.

Sechen C (1988). Chip-planning, placement, and global routing of macro/custom cell integrated circuits using simulated annealing. In 25th Design Automation Conference, pp. 73-80.

Shahookar K, Mazumder P (1991). VLSI cell placement techniques. ACM Computing Surveys, 23(2).

Watanabe T, Nagai Y, Yasunobu C, Luzika Y, Sasaka K (1985). An expert system for computer room facility layout. Proceedings of the Seventh International Conference on Expert Systems and their Applications.

Duchêne C (2004). Cartographic Generalization by communicating agents: the model CartACom. PhD thesis, university Pierre and Marie Curie Paris VI, COGIT laboratory.

Stephan S, Olivier L, Veronique G, Herve L (2000). Resolution of a problem of space installation using a genetic algorithm. AFIG' 00, Grenoble. IMAG Grenoble, pp. 113-122.

Bessiere C, Maestre A, Messeguer P (2001). Distributed Dynamic Backtracking. Proc.Workshop on Distributed Constraint of IJCAI01.

Yokoo M (2000a). Algorithms for distributed constraint satisfaction problems: A review. Autonomous Agents and Multi-Agent Systems. 3:198-212.

Yokoo M (2000b). Distributed Constraint Satisfaction Problems. Springer Verlag.

Picard G, Gleizes MP, Glize P (2007). Distributed Frequency Assignment Using Cooperative Self-Organization. First IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO'07), Boston, MA, USA, July 9-11.

Mailler R, Lesser VR (2006). Asynchronous Partial Overlay: A New Algorithm for Solving Distributed Constraint Satisfaction Problems. Journal of Artificial Intelligence Research, 25:529-576.

Modi PJ, Shen W, Tambe M, Yokoo M (2003). An Asynchronous Complete Method for Distributed Constraint Optimization. Proc. Autonomous Agents and Multi-Agent Systems, Melbourne, Australia, pp. 161-168.

Georgé JP, Edmonds B, Glize P (2004). Making Self-Organizing Adaptive Multi-Agent Systems Work - Towards the Engineering of Emergent Multi-Agent Systems. In Methodologies and Software Engineering for Agent Systems, F. Bergenti, M-P. Gleizes and F. Zambonelli, editors, Kluwer Publishing.

Boussaa J, Sadgal M (2009). A cognitive Agent for solving problems of occupation of space. Proceedings of the 3rd International Conference on Communications and information technology, December 29-31, Vouliagmeni, Athens, Greece, pp. 146-152.

Guerra-Hernández A, El Fallah-Seghrouchni A, Soldano H (2004). Distributed Learning in Intentional BDI Systems Multi-Agent, in Proc. ENC, pp. 225-232.

Roy B, Bouyssou D (1993). Multicriteria Decision Aid: Methods and Cases. Editions Economica.

Hirayama K, Toyoda J (1995). Forming coalitions for Breaking Deadlocks, ICMAS 95, pp. 155-162

Pour HD, Nostary M (2006). Solving the facility layout and location problem by ant-colony optimization meta-heuristic. International Journal of Production Research. 44(23): 5187-5196.

Galam S, Zucker JD (2000). From individual choice to group decision-making. Physica A, 287(3-4): 644-659.

Lawson B, Park S (2000). Asynchronous Time Evolution in an Artificial Society. Journal of Artificial Societies and Social Simulation. 3(1).

Querou N, Tidball M, Jean-Marie A (2000). Conjectural equilibria and reaction functions in static and dynamic games. International Workshop: Modelling Agents' Interactions in Natural Resources and Environment Management, Montpellier.

Vauvert G (2000). Coalition formation for rational agents. Proceedings of the JLIPN, 8th Days of the L.I.P.N. - Multi-Agent Systems & Formal Specifications and Software Technologies, Villetaneuse, France.

Amgoud L, Dimopoulos Y, Moraitis P (2007). A unified and general framework for argumentation-based negotiation. Proc. 6th Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS'07), Hawaii.

Amgoud L, Cayrol C (2002). Inferring from inconsistency in preference-based argumentation frameworks. Journal of Automated Reasoning. 29:125-169.

Faratin P, Sierra C, Jennings NR (2002). Using similarity criteria to make trade-offs in automated negotiations. Artificial Intelligence, 142 (2): 205-237.

Jennings NR, Faratin P, Lomuscio AR, Parsons S, Wooldridge M, Sierra C (2001). Automated negotiation: Prospects, methods and challenges. International Journal of Group Decision and Negotiation, 10(2): 199-215.

Rahwan I, Ramchurn SD, Jennings NR, Macburney P, Parsons S, Sonenberg L (2003). Argumentation-based negotiation. The Knowledge Engineering Review. 18(4): 343-375.

Rubinstein A (1982). Perfect equilibrium in a bargaining model. Econometrica. 50:97-109.

Caillou P, Aknine, S, Pinson S (2002). How to Form and Restructure Multi-agent Coalitions. National Conference on Artificial Intelligence (AAAI 02) Workshop on Coalition Formation, Edmonton, Canada, AAAI Press, pp. 32-37.

Petit T, Poder E (2011). Global propagation of side constraints for solving over-constrained problems, Annals of Operations Research. 184(1): 295-314, DOI:10.1007/s10479-010-0683-4.

Xidonas P, Ergazakis E, Ergazakis K, Metaxiotis K, Askounis D, Mavrotas G, Psarras J, (2009). On the selection of equity securities: An expert systems methodology and an application on the Athens Stock Exchange. Expert Systems with Applications. 36 (9): 11966-11980.

Hamadi Y, Jabbour S, Sais L (2009). ManySAT a parallel SAT solver. In Journal on Satisfiability, Boolean Modeling and

Computation, JSAT, IOS Press, 6(Spec. Issue on Parallel SAT): 245-262.

Sapena O, Onaindia E, Garrido A, Arangu M (2008). A distributed CSP approach for collaborative planning systems. Original Research Article, Engineering Applications of Artificial Intelligence, 21(5): 698-709

Hsairi L, Ghedira k, Alimi AM, BenAbdellhafid A (2009). Argumentation Based Negotiation Framework for MAIS-E2 model. Chapter VI in book : Open Information Management: Applications of Interconnectivity and Collaboration, Publisher: Information Science Reference , edited by S. Niiranen, J. Yli-Hietanen and A. Lugmayr, Tampere Univ, ISBN: 978-1-60566-246-.

Picard G, Glize P (2006). Model and Analysis of Local Decision Based on Cooperative Self-Organization for Problem Solving. Multiagent and Grid Systems (MAGS), 2(3): 253-265.

Holland JH (1993). Adaptation in Natural and Artificial Systems. MIT Press.

Kennedy J Eberhart RC (1995). Particle swarm optimization. In Proceedings of IEEE International Conference on Neural Networks, Piscataway, NJ, pp. 1942-1948.

Dorigo M, Stützle T (2004). Ant Colony Optimization. MIT Press.

Liu J, Jing H, Tang YY (2002). Multi-agent Oriented Constraint Satisfaction. Artificial Intelligence, 136(1):101-144.