
258 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART A: SYSTEMS AND HUMANS, VOL. 38, NO. 2, MARCH 2008

Interactive Particle Swarm: A Pareto-Adaptive Metaheuristic to Multiobjective Optimization

Shubham Agrawal, Yogesh Dashora, Manoj Kumar Tiwari, and Young-Jun Son, Member, IEEE

Abstract—This paper proposes an interactive particle-swarm metaheuristic for multiobjective optimization (MOO) that seeks to encapsulate the positive aspects of the widely used approaches, namely, Pareto dominance and interactive decision making, in its solution mechanism. Pareto dominance is adopted as the criterion to evaluate the particles found along the search process. Nondominated particles are stored in an external repository which updates continuously through the proposed adaptive-grid mechanism. The approach is further strengthened by the incorporation of a self-adaptive mutation operator. A decision maker (DM) is provided with the knowledge of an approximate Pareto optimal front, and his/her preference articulations are used to derive a utility function intended to calculate the utility of the existing and upcoming solutions. The incubation of the particle-swarm mechanism for the MOO by incorporating an adaptive-grid mechanism, a self-adaptive mutation operator, and a novel decision-making strategy makes it a novel and efficient approach. Simulation results on various test functions indicate that the proposed metaheuristic not only identifies the best preferred solution with greater accuracy but also presents a uniformly diverse, high-utility Pareto front without putting excessive cognitive load on the DM. The practical relevance of the proposed strategy is very high in cases that involve the simultaneous use of decision making and the availability of highly favored alternatives.

Index Terms—Metaheuristic, multiobjective optimization (MOO), Pareto dominance, particle-swarm optimization (PSO).

I. INTRODUCTION

MOST real-world problems involve the optimization of several objectives which are often conflicting in nature. Owing to the computational complexity involved, the past decade has witnessed a significant increase in the use of evolutionary search strategies, such as the genetic algorithm (GA), simulated annealing, tabu search, differential evolution (DE), particle-swarm optimization (PSO), and ant colony optimization, to optimize problems involving multiple objectives. The majority of research in this field is concentrated toward generating a nondominated, or Pareto optimal, set of solutions [1], [2].

Manuscript received February 15, 2006; revised September 27, 2006. This paper was recommended by Associate Editor J. Lambert.

S. Agrawal and Y. Dashora are with the Operations Research and Industrial Engineering Group, Department of Mechanical Engineering, Cockrell School of Engineering, University of Texas, Austin, TX 78705 USA (e-mail: [email protected]; [email protected]).

M. K. Tiwari is with the Department of Industrial Engineering and Management, Indian Institute of Technology, Kharagpur-721302, India (e-mail: [email protected]).

Y.-J. Son is with the Department of Systems and Industrial Engineering, University of Arizona, Tucson, AZ 85721-0020 USA (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TSMCA.2007.914767

These solutions are all optimal, as no solution can be obtained that gives a better performance in terms of any one objective without a simultaneous decrease in the performance related to the other objectives. In dynamic situations which lack preference information, all these solutions can be considered equally efficient. However, none of the aforementioned approaches considers an effective mechanism to choose a satisfactory solution from the list of all efficient solutions. In order to alleviate such inefficiency, Hwang and Masud [3] established the necessity of decision making and classified the existing decision-making approaches to optimization into three categories: a priori, a posteriori, and progressive articulation of preferences.

Interactive approaches, which come under the broad category of progressive methods, progressively utilize the preference information provided by the decision maker (DM) during the search process and result in a solution that is much in congruence with his/her preference [4]. Although these approaches exhibit advantages over other existing decision-making approaches, the major hurdles in their implementation are the computational time involved and the excessive cognitive load on the DM. Another major drawback of these methods is that they require a lot of problem information from the DM without providing him/her any idea of the shape and nature of the resulting Pareto optimal front (POF) [5], thereby decreasing the reliability of the obtained solutions. None of the existing papers (to the best of the authors' knowledge) has promisingly addressed this aspect of decision making. Thus, with the aim of palliating the deficiencies in the existing approaches, this paper proposes an interactive metaheuristic for multiobjective optimization (MOO). The major aims of this paper are as follows:

1) to obtain an efficient set of solutions covering the entire Pareto front, modified to an extent based on the DM's articulations;

2) to facilitate the process of decision making by providing knowledge of the Pareto front to the DM and, thus, increase the reliability of the decisions taken by him/her;

3) to obtain a compromise solution in consonance with the DM's preferences;

4) to reduce the time involved in decision making to a minimum;

5) to decrease the cognitive load on the DM.

Owing to the high speed of convergence offered by the PSO [6], the proposed metaheuristic utilizes the PSO as its search mechanism. The approach is strengthened by the incorporation of a novel self-adaptive mutation operator that is specially designed to suit the search process. It imparts high exploratory capabilities to the particles at various stages of the search by varying the probability of mutation with the requirements of the search. In order to store the nondominated solutions found along the search process, the proposed method uses an extended memory, referred to here as the repository. The decision-making module is used to assign utilities, resembling the DM's preferences, to the nondominated particles stored in the repository. The utilities are assigned with the purpose of finding the compromise solution sought by the DM as well as modifying the Pareto front. The decision-making module is invoked when the repository gets filled, i.e., when the knowledge of an approximate front is obtained. Preference articulations of the DM are then used to derive a utility function, which has been formulated as the solution to a linear program (LP) solved using the GA. Based on the derived utility function, the preferability of the new and existing individuals is estimated. The adaptive grid [7], in combination with the utility function, is used to modify the Pareto front, thus enabling the metaheuristic to find preferred particles on diverse regions of the Pareto front. Particles close to the compromise solution are progressively generated via their cognitive component, which is modified based on the utility of the particles obtained, whereas the social component is responsible for guiding the search toward the diverse regions. The algorithm finally provides a compromise solution and a Pareto set of solutions having higher utilities in their corresponding grids.

1083-4427/$25.00 © 2008 IEEE

The efficiency and the applicability of the proposed interactive PSO (IPSO) are established using several standard test functions available in the literature. The results are compared in terms of the quality of the Pareto front, the scaled deviation of the compromise solution from the optimal solution, and the cognitive load on the DM as well. The remainder of this paper is organized as follows. Section II focuses on various MOO methods. It is followed by a detailed discussion of the proposed algorithm in Section III. Section IV provides the computational results for the undertaken test problems. Finally, Section V concludes this paper.

II. MOO: A SURVEY

MOO closely resembles practical situations, as [8]: "they are often characterized by several noncommensurable and often competing measures of performance or objectives." A MOO problem, in its most generic sense, can be described as the simultaneous minimization of several objectives fk, k = 1, . . . , n, subject to a number of constraints, i.e.,

Min f(x) = (f1(x), . . . , fn(x)) (1)

satisfying p inequality constraints

ui(x) ≥ 0, i = 1, 2, . . . , p (2)

and q equality constraints

vi(x) = 0, i = 1, 2, . . . , q. (3)
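As a concrete illustration of this generic formulation (not part of the original paper), the sketch below instantiates (1)–(3) with the classic single-variable, two-objective Schaffer problem and placeholder constraints; the problem choice, function names, and constraint values are assumptions of ours:

```python
def objectives(x):
    """f(x) = (f1(x), f2(x)) to be minimized simultaneously,
    here the classic Schaffer test problem."""
    f1 = x ** 2
    f2 = (x - 2) ** 2
    return (f1, f2)

def feasible(x):
    """Check the p inequality constraints u_i(x) >= 0 and the
    q equality constraints v_i(x) = 0 (placeholder values)."""
    inequalities = [x + 10]  # u1(x) = x + 10 >= 0, so p = 1
    equalities = []          # no equality constraints here, so q = 0
    return all(u >= 0 for u in inequalities) and all(v == 0 for v in equalities)
```

Note the conflict that motivates MOO: f1 is minimized at x = 0 and f2 at x = 2, so no single x minimizes both.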

In the absence of any preference information, a set of solutions is obtained, where each solution is equally significant. The obtained set of solutions is called the nondominated or Pareto optimal set of solutions, which is represented here as S. Any solution x = (x1, . . . , xn) dominates y = (y1, . . . , yn) if and only if x is partially less than y (x <p y), i.e.,

fi(x) ≤ fi(y) ∀ i ∈ {1, . . . , n} ∧ ∃ i ∈ {1, . . . , n} : fi(x) < fi(y). (4)
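The dominance relation in (4) translates directly into code; the following minimal sketch (the function name is ours; minimization is assumed, as in the formulation above) compares two objective vectors:

```python
def dominates(fx, fy):
    """Return True if objective vector fx dominates fy per (4):
    fx is no worse in every objective and strictly better in at
    least one (minimization assumed)."""
    no_worse = all(a <= b for a, b in zip(fx, fy))
    strictly_better = any(a < b for a, b in zip(fx, fy))
    return no_worse and strictly_better
```

For instance, (1, 2) dominates (2, 2), while (1, 3) and (3, 1) are mutually nondominated, which is exactly why a whole Pareto optimal set arises.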

The front obtained by mapping the Pareto optimal set (S) into the objective space is called the POF:

POF = {f = (f1(x), . . . , fn(x)) | x ∈ S} . (5)

The determination of a complete POF is a very difficult task, owing to the computational complexity involved in its computation due to the presence of a large number of suboptimal Pareto fronts. Considering the existing memory constraints, the determination of the complete Pareto front becomes infeasible and, thus, requires the solutions to be diverse, covering its maximum possible regions. The nondominated sorting GA II (NSGA II) [9], MICROGA [10], [11], SPEA II [12], PAES [7], multiobjective PSO (MOPSO) [13], [14], Pareto DE (PDE) [15], [16], multiobjective DE (MODE) [17], etc., are some of the highly competitive evolutionary MOO algorithms that aim to obtain a set of all efficient or noninferior solutions.

The aforementioned methods are widely used due to their ability to generate the POF in a comparatively small time and, thus, provide a set of equally efficient alternatives in case no preference information is available. However, these approaches are marked by their inability to provide a compromise solution that is best suited to the needs of the situation. In order to obtain a compromise solution, decision making is introduced into the search mechanism. Based on the strategic inclusion of decision making, three basic approaches exist in the literature as follows:

1) a priori articulation of preferences, which involves the aggregation of objective functions using weights and, thereafter, the determination of the optimal solution;

2) a posteriori articulation of preferences, in which decision making is applied after an efficient set of solutions is obtained, and the best solution among the obtained solutions is chosen;

3) progressive articulation of preferences, which involves the process of decision making along with the search process to guide the search effort toward the regions of the Pareto front where the probability of getting the compromise solution is maximum.

Some of the interactive extensions [18], [19] ask the DM to specify unacceptable or goal levels and restrict the solution space to be searched. However, a number of interactive methods [4], [20], [21] make pairwise comparisons of solutions in accordance with the preferences of the DM so as to converge toward a highly preferred solution. One of the marked irregularities in the use of interactive methods is their high dependence on the DM for an efficient performance. Thus, the interactive methods are generally not preferable due to the time requirements and the inability to provide the complete set of nondominated solutions. In addition, the aforesaid interactive approaches do not detail the complete knowledge of the POF to the DM; thus, their reliability is questionable. Since decision making is a subjective task, it is only when a sufficient knowledge of the optimal solutions is available to the DM that proper and reliable decision making can be achieved [5]. The proposed algorithm tries to alleviate this deficiency and, thereby, augment the reliability of decision making.

The particle swarm [6] is a recently proposed metaheuristic, inspired by the behavior of a bird flock. In the proposed approach, the PSO has been utilized as a search mechanism due to its high speed of convergence. The movement of each particle of the swarm depends upon its cognitive and social components [22]. The cognitive component motivates the particle to attain the best position found by it so far, whereas the social component encourages it to move toward the global optimum. The combined effect of these two components ensures a better balance between the speed of convergence and the search-space exploration, thus making the process particularly suitable for the MOO [13], [14].

Several extensions to the PSO for handling multiple objectives have been proposed. The algorithm proposed in [23], the swarm metaphor [24], the algorithm of Parsopoulos and Vrahatis [25], the dynamic neighborhood PSO [26], and the MOPSO [14] form a representative set of this research. An improved version of MOPSO, which was proposed by Coello et al. [13], is a revision of the original MOPSO algorithm, with enhanced operators such as mutation to improve its search potential. The algorithm uses the PSO as its search mechanism and nondominance as its evaluation criterion. The nondominated particles found along the search process are stored externally in a secondary repository. The proposed IPSO adopts a search mechanism similar to the aforesaid approach, with various modifications in order to make it compatible with the interactive environment.

III. IPSO: A DETAILED INSIGHT

As already mentioned, the proposed algorithm iteratively interacts with the DM to find a complete front by continually updating its members as per the preference information provided by him/her. The proposed development aggregates various attributes of MOO and decision-making strategy and, thus, attempts the optimization from the DM's point of view. The PSO, particularly one designed to handle multiple objectives, has been used to explore the search space. The nondominated particles found along the search process are stored in an external repository which continuously updates itself through the proposed adaptive mechanism. The mutation operator aids the search by imparting high exploration potential to the particles at the beginning of the search. The mutation operator is initialized with a high probability of occurrence, which decreases exponentially with the addition of particles to the repository until half of the repository fills up. Then, the mutation probability increases, as the search again needs to be exploratory. Furthermore, at the later stages, the mutation probability adapts itself to smaller values so as to make the search convergent yet exploratory. Thus, the proposed mutation operator is inherently self-adaptive and orients itself with the dynamic requirements of the search. Here, it is important to mention that the basis of nondominance always provides more preferable solutions, as suggested by the following lemma.

Lemma 1: A nondominated solution (u) is always more preferable (having a higher utility) than any solution (v) in the set of solutions it dominates, i.e., if u <p v, then UDM(u) > UDM(v).

Proof: Provided in Appendix I. □

Lemma 1 thus implies that applying the decision making after the discovery of an optimal set of solutions does not lead to the loss of any potential solution. After the discovery of an approximate Pareto front, a set of solutions lying on distinct regions of the efficient frontier is sent to the DM. The process tends to generate a clear picture of the Pareto front in the mind of the DM and helps him/her take decisions which are more reliable. Pairwise comparisons by the DM are then used to derive a utility function (through an LP) which closely resembles the DM's own utility function. A GA is then applied to solve the aforementioned LP and determine the weights, using which an estimated utility function is formulated. Despite the fact that the use of the GA provides suboptimal rather than optimal solutions, it is applied to reduce the time requirement (due to the exponential time complexity of exact methods [27]) and gives a competitive performance. The fitness of newly generated and existing solutions is calculated based on the utility function so derived. Interactions with the DM are then actuated by the particles having a utility higher than that of the preferred solution. The cognitive load on the DM is reduced by storing a list of unsuccessful candidates. Any particle that tends to strike a comparison with the DM is first compared with the unsuccessful challengers present in the list. The search progresses with utility as a parameter, and continuous modifications of the Pareto front are made, thus obtaining particles with higher fitness in their corresponding grids. In order to partially divert the search toward finding the compromise solution, the cognitive components of the particles are inclined to search the regions preferable to the DM, whereas their social components are used to explore the Pareto front. The mechanism offers the desired compromise solution (or, in fact, a solution that is very close to it) and a fully distributed set of equally efficient solutions, where, of course, the time requirements are taken care of. The fact that the proposed strategy not only provides the Pareto front but also identifies the favorable compromise solution makes the contribution significant and distinct among its peers. The generic flowchart of the proposed IPSO is shown in Fig. 1, and the algorithm is further elaborated in the following discussion.
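The exact functional form of the self-adaptive mutation schedule is not specified in the text; the following is one plausible realization of the behavior described above (exponential decay toward the half-full point, a restart of exploration at half capacity, and decay again thereafter), with the constants `p_max` and `decay` chosen purely for illustration:

```python
import math

def mutation_probability(fill_ratio, p_max=0.5, decay=6.0):
    """Illustrative self-adaptive mutation schedule.

    fill_ratio is |repository| / capacity in [0, 1]. The probability
    decays exponentially as the repository fills toward half capacity,
    is raised again at the half-full point to restore exploration,
    and then decays once more at the later stages of the search.
    """
    if fill_ratio <= 0.5:
        # first phase: decay from p_max as the repository fills
        return p_max * math.exp(-decay * fill_ratio)
    # second phase: restart the decay from p_max at half capacity
    return p_max * math.exp(-decay * (fill_ratio - 0.5))
```

Any schedule with this qualitative shape would match the description; the constants here are assumptions, not values taken from the paper.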

Fig. 1. Flowchart of the algorithm.

First, the particles are initialized with random positions and velocities. In the PSO, the values of the decision variables for a particular problem are represented through the particles' positions, whereas the velocities depict the tendency of a particle to move from one position to another. The objective values are evaluated utilizing the particle's position, which thereby forms the basis for calculating its nondominance. After the calculation of the objective values for each particle, the particles are evaluated for nondominance, and those which are nondominated against every other one are stored in the repository. The nondominated particles are then located, using their objective-function values as coordinates, in the explored objective-function space, which itself is divided into small hypercubes. Each particle located in the space is thus assigned to a particular unique hypercube. The hypercubes are intended to measure the amount of crowding in the obtained Pareto front and reduce it to the greatest possible extent using the modified adaptive-grid procedure. The particle's velocity and position are directed using swarm properties. This entails a modification in the velocity and, thereby, the position based on its deviation from the globally best as well as its previous best position, which is given as

vel[i] = W ⊗ vel[i] ⊕ Rand(0, 1) ⊗ (pmem[i] − par[i]) ⊕ Rand(0, 1) ⊗ (rep[h] − par[i]).

The particle's position is thereby updated using the following expression:

par[i] = par[i] + vel[i] (6)
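As a sketch, the velocity and position updates above can be written as follows, reading ⊗ and ⊕ as ordinary per-dimension multiplication and addition, as in standard PSO; the inertia weight value `w=0.4` is an assumption for illustration, not a value taken from the paper:

```python
import random

def update_particle(par, vel, pmem, rep_h, w=0.4):
    """One velocity/position update per the expressions above:
    an inertia term plus stochastically weighted cognitive (pmem)
    and social (rep_h) attractions, applied dimension by dimension."""
    new_vel = []
    new_par = []
    for i in range(len(par)):
        v = (w * vel[i]
             + random.random() * (pmem[i] - par[i])
             + random.random() * (rep_h[i] - par[i]))
        new_vel.append(v)
        new_par.append(par[i] + v)
    return new_par, new_vel
```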

where vel[i] and par[i] denote the velocity and the position of the ith particle, respectively, pmem[i] is the best position stored in the particle's memory, which acts as the basis of its cognitive component, and rep[h] denotes the selected global best from the repository of solutions. In the proposed approach, the selection of the global best from a number of comparable solutions is done through a mechanism incorporating fitness sharing [14]. For this purpose, each hypercube containing particles is assigned a fixed resource. Those hypercubes containing more particles confront the division of the allocated resource into smaller fractions, which thereby decreases their fitness. The hypercubes are selected using roulette wheel selection with a probability that is directly proportional to the fitness of the concerned hypercube. A particle is then randomly chosen from the selected hypercube. The particle's memory is initialized with its current position and is updated accordingly, in case the particle attains a better position during the iterative search process. The particles are, thereafter, again evaluated and checked for nondominance with respect to the other particles in the population as well as with respect to all the particles present in the repository. The nondominated particles found are then granted access to the repository, provided that it is not yet full. However, in case the repository is filled, a decision-making module is invoked, which is utilized with the adaptive-grid procedure to prune the Pareto set and, thereby, induce the required diversity. Thereafter, cognitive memory updating followed by mutation takes place. The aforementioned processes of velocity updating, objective evaluation, nondominance checking, decision making, memory updating, and mutation are carried out iteratively until the completion of a specified number of fitness evaluations, which is initially decided as the termination criterion.
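A minimal sketch of the fitness-sharing selection just described may help; the resource value and the data layout (a mapping from grid-cell id to its list of particles) are assumptions for illustration, not from the paper:

```python
import random

def select_global_best(hypercubes, resource=10.0):
    """Pick rep[h] via fitness sharing: each occupied hypercube gets a
    fixed resource divided by its occupancy (more crowded = less fit),
    a hypercube is drawn by roulette wheel on that fitness, and a
    particle is drawn uniformly at random from the chosen hypercube."""
    cells = [c for c in hypercubes if hypercubes[c]]
    fitness = [resource / len(hypercubes[c]) for c in cells]
    # roulette wheel over hypercube fitness values
    r = random.uniform(0.0, sum(fitness))
    acc = 0.0
    for cell, f in zip(cells, fitness):
        acc += f
        if r <= acc:
            return random.choice(hypercubes[cell])
    return random.choice(hypercubes[cells[-1]])  # numeric-edge fallback
```

With two cells {a: [p1], b: [p2, p3]}, cell a has fitness 10 and cell b has fitness 5, so the lone particle p1 is the most likely pick, which is the crowding-reduction behavior described above.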

Thus, during the iterative search process, the particles are guided toward the global front by exploiting their past experiences as well as the position of the globally best particle. The following discussion explicitly details the various submodules used in the proposed algorithm.

A. External Repository

The external repository is used to store the nondominated particles found along the search process. If a nondominated particle dominates any member among the nondominated members stored in the repository, the corresponding dominated members are deleted from the repository. On the other hand, if a nondominated solution found along the search process is dominated by any particle present in the repository, then it is denied access to the repository. Considering the memory constraints and the computational time involved in the process, the size of the repository is limited, as the computational time is directly proportional to the number of comparisons made against the members of the repository. When the repository has reached its maximum allowable capacity, the decision-making module is invoked. The utility function based on the DM's preferences is derived and coupled with the modified adaptive-grid procedure to reduce crowding and ensure the inclusion of potential population members in the repository. The procedure to formulate and update the utility function is described in the following section.
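The admission and pruning rules above can be sketched as follows; the function name and the injected `dominates` predicate (the relation in (4)) are illustrative conventions of ours, not from the paper:

```python
def update_repository(repository, candidate, dominates):
    """Apply the repository rules described above: a candidate is
    rejected if any stored member dominates it; otherwise it is added,
    and every stored member it dominates is removed. `dominates` is a
    predicate on objective vectors, e.g. the relation in (4)."""
    for member in repository:
        if dominates(member, candidate):
            return repository  # candidate denied access
    # candidate admitted: purge any members it dominates
    repository = [m for m in repository if not dominates(candidate, m)]
    repository.append(candidate)
    return repository
```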

B. Formulation and Updating of Utility Function

The utility function is calculated based on the preference information received from the DM. It is used as a tool to reduce the cognitive load on the DM by approximating solutions agreeable to the DM. The utility values for new and existing solutions partially (pertaining to the partial availability of information on the DM's preferences) determine their suitability for the DM. The advantage of incorporating such a utility-based mechanism can be directly seen as a reduced interaction of only the potential candidate solutions with the DM, which implicitly reduces the time required in decision making and makes the proposed approach quite competitive compared with the other existing approaches involving interactive heuristics. The utility function is formulated once the repository gets filled, thus providing an approximate knowledge of the Pareto front to the DM. A set of solutions lying on different regions of the efficient frontier is sent to the DM in order to obtain preference information based on the comparisons performed by him/her. The feedback of the DM is then used to construct the utility function as described next.

Consider that the DM prefers particle x over y, i.e., x >DM y. Based on the preference information obtained from the DM, the middlemost weight technique [4] can be utilized to determine the weight wi for the ith objective in the utility function. The technique finds the weights for the multiple objectives by solving an LP that seeks to maximize the utility of the candidate solution declared by the DM to be the best one. The LP is formulated as

Max ∆ (7)

subject to the constraints

∆ ≤ Σ_{i=1}^{n} wi (fi(y) − fi*)^m / (fi^pf(max) − fi^pf(min))^m − Σ_{i=1}^{n} wi (fi(x) − fi*)^m / (fi^pf(max) − fi^pf(min))^m   ∀ x >DM y (8)

Σ_{i=1}^{n} wi = 1 (9)

wi ≥ ∆ ∀ i = 1, . . . , n (10)

where fi* denotes the ideal point for objective i, and fi^pf(max) and fi^pf(min) denote the points on the Pareto front having the maximum and minimum values of objective i, respectively. The value of m is chosen based on the form of the utility function considered. In this paper, a linear utility function has been considered, and thereby, the value of m is taken to be one. The aforementioned LP has been solved using a GA [28] in order to determine the estimated weights wi. The GA is inspired by the process of evolution of living organisms [29]. It basically maintains a population of chromosomes. Each chromosome is an encoding (genotype) of a solution (phenotype) of the problem being solved. The fitness values of the chromosomes decide their worth and are often given by the objective-function value at the point of the search space defined by the decoded chromosome. The GA emerged as a pioneering optimization technique due to three main reasons [28] as follows:

1) its quality to couple itself with heuristics specific to the given problem;

2) the power to perform a global search in the solution space rather than a local one through genetic operations on the chromosomes;

3) the adaptability to different kinds of constraints, namely, linear or nonlinear, and any type of cost function, namely, continuous, discrete, single criterion, or multiple objectives.
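For the linear utility case (m = 1), the weight-estimation LP in (7)–(10) can also be solved with an off-the-shelf solver. The following is a minimal sketch using `scipy.optimize.linprog`; the function name `estimate_weights` and the assumption that the normalized objective terms are precomputed are illustrative, not the paper's implementation (which uses a GA, as described next).

```python
import numpy as np
from scipy.optimize import linprog

def estimate_weights(pref_pairs):
    """Estimate utility weights from DM preference pairs (m = 1).

    pref_pairs: list of (gx, gy) tuples, where gx[i] and gy[i] are the
    normalized terms (f_i(.) - f_i*) / (f_i^pf(max) - f_i^pf(min)) for
    the preferred solution x and the rejected solution y.
    Solves: max Delta  s.t.  Delta <= U(x) - U(y) for every pair,
    sum(w) = 1, and w_i >= Delta, with U(z) = -sum_i w_i * g_i(z).
    """
    n = len(pref_pairs[0][0])
    c = np.zeros(n + 1)
    c[-1] = -1.0                      # variables [w_1..w_n, Delta]; maximize Delta
    A_ub, b_ub = [], []
    for gx, gy in pref_pairs:
        # Delta <= U(x) - U(y)  <=>  Delta + sum_i w_i (gx_i - gy_i) <= 0
        A_ub.append(np.append(np.asarray(gx) - np.asarray(gy), 1.0))
        b_ub.append(0.0)
    for i in range(n):
        row = np.zeros(n + 1)         # w_i >= Delta  <=>  Delta - w_i <= 0
        row[i], row[-1] = -1.0, 1.0
        A_ub.append(row)
        b_ub.append(0.0)
    A_eq = [np.append(np.ones(n), 0.0)]   # sum_i w_i = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * n + [(None, None)])
    return res.x[:n], res.x[-1]       # estimated weights, Delta
```

A positive Δ at the optimum indicates that a weight vector consistent with all the DM's stated preferences exists; a nonpositive Δ signals the infeasibility handled by the penalty scheme discussed below.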

Fig. 2. Pseudocode for updating utility function.

The purpose of using the GA is to lessen the time required in finding the estimated weights as compared with the cases when the LP is solved using exact methods. Weights are coded in binary representation as chromosomes. The value of Δ is taken as the fitness, and the equation to compute Δ is the fitness function. A uniform binary crossover and a bit-flip mutation are applied to form new offspring, the fitness of which is intended to be higher than that of the parents. After obtaining the weights, the fitness of new and existing solutions is evaluated using the utility function as follows:

U^{est}(x) = -\sum_{i=1}^{n} w_i^{est} \, \frac{\left(f_i(x) - f_i^\ast\right)^m}{\left(f_i^{pf}(\max) - f_i^{pf}(\min)\right)^m}. \quad (11)

The LP formulated in (7)–(10) may not be feasible every time. Thus, to obtain the utility function, constraint violations are countered by adding penalties proportional to the extent of violation. The sorting operation (pseudocode in Fig. 2) performed in the later part of the process guarantees that no preference information is lost. The various symbols used in the pseudocode are listed in Appendix II for quick reference. The utility of each member of the repository is calculated, and the particles are sorted in decreasing order of utility. If any particle in the sorted population has a utility higher than the currently preferred one (pref), the DM is asked to compare it with the preferred individual. The feedback sent by the DM is used to update the preferred solution and/or the utility function. The aforementioned procedure is utilized iteratively until the solution with the highest utility eventually comes out to be the DM's favorite. The interaction with the DM, which is a time-consuming process and whose magnitude, in practical situations, depends upon the availability of the DM and his/her consent with a particular solution, must be reduced as much as possible without degrading the quality of the final solution obtained. In the proposed work, to decrease the number of comparisons made by the DM, a list of unsuccessful competitors is maintained. Any potential solution that tends to strike a comparison with the DM is first compared with the elements of this list. Only those solutions that differ from the members of the list by a minimum threshold value, often decided by the DM, are compared by the DM. Such a procedure ensures that comparisons are actuated by only those high achievers having the required potential.
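The update loop described above can be sketched as follows. This is an interpretation of the scheme rather than the paper's pseudocode (Fig. 2): the names `interactive_update`, `dm_prefers`, and the Euclidean closeness test against the loser list are assumptions.

```python
def dist(a, b):
    """Euclidean distance between two objective vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def interactive_update(repository, utility, dm_prefers, pref, threshold=0.05):
    """Sketch of the preferred-solution update loop.

    repository: candidate objective vectors; utility: estimated utility
    function; dm_prefers(a, b): DM feedback, True if a is preferred.
    Candidates within `threshold` of a past unsuccessful competitor are
    skipped, limiting the cognitive load on the DM.
    Returns the preferred solution and the number of DM comparisons.
    """
    losers, comparisons = [], 0
    ranked = sorted(repository, key=utility, reverse=True)
    for cand in ranked:
        if utility(cand) <= utility(pref):
            break                      # remaining candidates rank lower
        if any(dist(cand, lost) < threshold for lost in losers):
            continue                   # too close to a past loser: skip DM
        comparisons += 1
        if dm_prefers(cand, pref):
            pref = cand                # update the preferred solution
        else:
            losers.append(cand)        # remember the unsuccessful competitor
    return pref, comparisons
```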

C. Adaptive Grid

To produce a well-distributed Pareto front, this paper proposes an adaptive-grid mechanism, the pseudocode of which is shown in Fig. 3. Its basis lies in the adaptive grid proposed in [13], with modifications to make it more efficient and adaptive to the DM's articulations. The mechanism proceeds by dividing the search space into hypercubes whose coordinates are defined by the function space. The particles are located in the hypercubes using their objective-function values. If any particle that is found eligible to enter the repository has coordinates lying outside the current bounds of the grid, the grid is recalculated and relocated accordingly. For effective functioning of the grid, the number of divisions of the grid as well as the repository size should be prespecified by the DM. Once the repository gets filled with the given mechanism, subsequent eligible entries are first checked for their location. If a solution lies in a less crowded region (a hypercube with fewer particles), it is entered into the repository, and the lowest utility particle lying in the hypercube containing the maximum number of particles is removed.
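The insertion-and-eviction step above can be sketched as follows. The helper names (`grid_index`, `try_insert`) are illustrative, and the sketch omits the grid-relocation step for out-of-bounds particles; it is not the paper's pseudocode (Fig. 3).

```python
from collections import Counter

def grid_index(obj, lower, upper, ndiv):
    """Locate a particle's hypercube from its objective values."""
    idx = []
    for f, lo, hi in zip(obj, lower, upper):
        k = int((f - lo) / (hi - lo) * ndiv) if hi > lo else 0
        idx.append(min(max(k, 0), ndiv - 1))
    return tuple(idx)

def try_insert(repo, cand, utility, lower, upper, ndiv, repsize):
    """Adaptive-grid insertion sketch for a nondominated candidate.

    If the repository is full, cand enters only when its hypercube is
    less crowded than the most crowded one; the lowest-utility member
    of the most crowded cube is then evicted.
    """
    counts = Counter(grid_index(p, lower, upper, ndiv) for p in repo)
    if len(repo) < repsize:
        repo.append(cand)
        return True
    cand_cube = grid_index(cand, lower, upper, ndiv)
    worst_cube, worst_count = counts.most_common(1)[0]
    if counts.get(cand_cube, 0) < worst_count:
        members = [p for p in repo
                   if grid_index(p, lower, upper, ndiv) == worst_cube]
        repo.remove(min(members, key=utility))   # evict lowest utility
        repo.append(cand)
        return True
    return False
```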

D. Maintenance of Variables in Search Space

To restore feasibility and direct the search effort toward potential regions, variables are maintained in the search space in case they go beyond their prescribed bounds (the pseudocode of the procedure is shown in Fig. 4). This is achieved in two ways.

1) By limiting the maximum value of velocity that can be attained. If the velocity crosses this threshold value, then the variable is assigned a velocity that is equal to the threshold.

2) By limiting the particle at the boundary of the search space in case it tries to cross the limit; it is then given a random velocity in the opposite direction.
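The two rules above can be sketched as one bounded position update. This is an interpretation, not the paper's pseudocode (Fig. 4); the function name `clamp_particle` and the folding of the position update into the maintenance step are assumptions.

```python
import random

def clamp_particle(x, v, lo, hi, vmax):
    """Apply the two maintenance rules during a position update.

    x, v: current position and velocity; lo, hi: variable bounds;
    vmax: per-dimension velocity thresholds.
    """
    new_x, new_v = [], []
    for xi, vi, l, h, vm in zip(x, v, lo, hi, vmax):
        vi = max(-vm, min(vm, vi))            # rule 1: cap velocity magnitude
        xi = xi + vi
        if xi < l:                             # rule 2: clamp at the boundary
            xi, vi = l, random.uniform(0, vm)  # and push back with a random
        elif xi > h:                           # velocity in the opposite
            xi, vi = h, -random.uniform(0, vm) # direction
        new_x.append(xi)
        new_v.append(vi)
    return new_x, new_v
```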

E. Self-Adaptive Mutation Operator

Fig. 3. Pseudocode for adaptive grid.

Fig. 4. Pseudocode for maintaining in search space.

The high convergence speed offered by the PSO justifies the use of a mutation operator that delays convergence and enables the particles to explore the whole search space [13]. In the context of MOO, the significance of such an operator increases manifold, owing to its ability to prevent the particles from converging to a local Pareto front (equivalent to local optima in the case of single-objective optimization). This paper employs a mutation operator that is well suited to the proposed metaheuristic. The operator is self-adaptive with varying probabilities of mutation. Fig. 5 shows the variation in the probability of mutation with the number of particles in the repository for an ideal case in which the repository fills at a very uniform rate. However, under simulation conditions, the probability of mutation cannot be predetermined. The equation that describes the variation in the probability of mutation (P(m)) with the number of particles in the repository (n_rep) is as follows:

P(m) = \begin{cases} e^{-\theta}, & \theta = \dfrac{n_{rep}}{c}, & n_{rep} < \dfrac{repsize}{2} \\ e^{\theta}, & \theta = \dfrac{n_{rep} - repsize}{c}, & n_{rep} \ge \dfrac{repsize}{2} \\ \dfrac{nocycle - iter}{nocycle - iter_{rpfl}}, & & n_{rep} = repsize \end{cases} \quad (12)

where repsize denotes the size of the repository, iter is the current iteration number, iter_rpfl is the iteration number at the time when the repository becomes full, and nocycle symbolizes the total number of iterations before termination, prespecified by the DM. Initially, when the repository is empty, the mutation probability is very high, approaching unity. This high probability of mutation at the initial stage imparts high exploratory capabilities to the particles and enables them to explore a vast region of the feasible search space. It also reduces the chances of convergence to a local Pareto front and, thus, motivates the particles to search for the global optimum. The probability then decreases exponentially with an increasing number of particles in the grid and reduces to a very low value (approximately zero) when the repository is filled to half its capacity. This provides the particles enough time to discover new regions based on their own knowledge and the social behavior of the swarm. The probability is then again increased and made to approach unity when the repository is filled (i.e., when the decision making is performed). The idea of making such a phase change at the time when half of the repository gets filled is taken intuitively in order to provide better chances for both exploration and exploitation. However, the point of transition may be a factor for future research. The motive behind such a variation of probability is to test the global behavior of the Pareto front obtained by the process and to motivate the particles once again to search for new solutions compatible with the DM. Solutions obtained with higher fitness in each grid lead to modification of the resulting Pareto front. The modifications in the Pareto front, in combination with the memory-updating mechanism, discover a better compromise solution, as validated by the results provided in the following section.

Fig. 5. Self-adaptive mutation operator.
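The schedule in (12) is a direct computation. A minimal sketch follows; the scaling constant `c` is not specified in the text, so its default here is an assumption.

```python
import math

def mutation_probability(nrep, repsize, iteration, iter_rpfl, nocycle, c=10.0):
    """Self-adaptive mutation probability, eq. (12).

    nrep: current repository occupancy; repsize: repository capacity;
    iteration: current iteration; iter_rpfl: iteration at which the
    repository became full; nocycle: total iterations before termination.
    """
    if nrep == repsize:
        # repository full: decay linearly over the remaining iterations
        return (nocycle - iteration) / (nocycle - iter_rpfl)
    if nrep < repsize / 2:
        theta = nrep / c          # first half: exponential decrease from 1
        return math.exp(-theta)
    theta = (nrep - repsize) / c  # second half: exponential increase back to 1
    return math.exp(theta)
```

At n_rep = 0 the probability is exactly 1, near n_rep = repsize/2 it is close to 0, and it climbs back toward 1 as the repository fills, matching the narrative above.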

F. Particle’s Memory

The memory of a particle is the basis of its cognitive search component. The best position ever attained by a particle is stored in its memory. The cognitive component directs the search effort of a particle toward the positions stored in its memory and is used in combination with the social component (which searches toward the global optima) to provide very fast convergence. In order to find solutions acceptable to the DM, the proposed algorithm uses the inherent potential of this component to drive the search toward the regions preferable to him/her. As soon as the utility function is formed and a utility is assigned to each particle, the utility becomes an additional criterion for memory updating besides Pareto dominance, which remains the primary one. Further continuation of the process ensures highly preferable positions in the memory and increases the probability of finding a solution that is much more satisfactory to the DM.
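One way to read the two-criterion memory update is: dominance decides first, and the estimated utility breaks mutual nondominance once the utility function exists. The sketch below (for minimization) is an interpretation; the name `update_memory` is illustrative.

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_memory(pbest, candidate, utility):
    """Personal-best update: Pareto dominance is the primary criterion;
    estimated utility breaks ties between mutually nondominated points."""
    if dominates(candidate, pbest):
        return candidate
    if dominates(pbest, candidate):
        return pbest
    # neither dominates: keep the position with the higher utility
    return candidate if utility(candidate) > utility(pbest) else pbest
```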

The proposed interactive particle-swarm methodology thus possesses two important properties: 1) a positive variation kernel and 2) a strong elite preservation strategy, which are considered essential for attaining convergence to the optimal front. The positive variation kernel is ensured by the presence of the self-adaptive mutation operator with a probability of mutation P(m) ∈ (0, 1). The selection of the global best (to guide the search) from the external repository through roulette wheel selection leads to an effective generation of new solutions and further adds positiveness to the variation kernel. Additionally, the approach incorporates an external repository to store the nondominated solutions, which is a very strong elite preservation strategy and, thereby, ensures elite conservation during the search. Due to the presence of these two properties, theoretically, the algorithm will converge to the optimal Pareto front in a finite number of iterations, as proved by Rudolph [30]. To further validate the convergence characteristics of the proposed strategy, experiments have been conducted, the results of which are detailed in the next section.

IV. COMPUTATIONAL RESULTS

This section contains the computational results obtained over a few benchmark test functions taken from the literature. In order to facilitate quantitative assessment, the analysis of results is performed considering two criteria, i.e., the quality of the obtained Pareto front and the satisfaction of the DM with the compromise solution. For a better appraisal of the algorithm's performance, this section has been divided into several subsections. Section IV-A contains the criteria for evaluating the Pareto front, followed by Section IV-B, which contains the adopted standards of measuring the DM's contentment with the compromise solution. Thereafter, Section IV-C contains a brief description of the algorithms utilized for comparison with the proposed approach. Section IV-D contains the parameter settings used for conducting simulations. Later sections contain the chosen unconstrained and constrained functions and the obtained results.

Fig. 6. Convergence metric γ.

A. Assessment of Pareto Front

The knowledge of the Pareto front of a problem provides an alternative for selection from a list of efficient solutions. It thus helps in taking decisions, and the knowledge gained can also be used in situations where the requirements are continually changing. The Pareto front is judged taking into account the extent to which it represents the possible solution space of the problem concerned. The two possible criteria for the evaluation of a Pareto front comprise the measurement of its closeness to the optimal Pareto front and the avoidance of crowding, thereby maintaining a uniform diversity. For an in-depth analysis of the Pareto front, certain criteria are considered, and suitable metrics are adopted.

1) Extent of Convergence: In order to measure the extent of convergence of the obtained set of solutions, the convergence metric γ (Fig. 6) is adopted [9]. The calculation of this metric is possible in case the knowledge of the optimal front is available. This paper has used standard test functions which have a known set of Pareto optimal solutions, and therefore, this metric has been utilized. Initially, a set S for comparative analysis, whose dimension is approximately five to six times the repository size, is created. It contains uniformly spaced solutions lying on the Pareto front. The distance metric γ calculates the average distance of each solution to the nearest member of the set S. The smaller the value of γ, the better the convergence toward the POF.
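The metric is a nearest-neighbor average and can be sketched directly; the name `convergence_metric` is illustrative.

```python
def convergence_metric(front, reference):
    """Convergence metric gamma: the average Euclidean distance from each
    obtained solution to its nearest neighbor in the reference set S of
    uniformly spaced Pareto optimal points."""
    def d(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return sum(min(d(p, s) for s in reference) for p in front) / len(front)
```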


Fig. 7. Diversity metric δ.

2) Diversity Metric: It is used to measure the spread of solutions lying on the POF. One of the aims of the proposed algorithm is to find a set of solutions that covers the entire Pareto front. The Euclidean distance d_i between consecutive solutions on the Pareto front is calculated, and the average of all such distances is taken. Extreme solutions on the Pareto front are calculated by separately optimizing each objective. Then, the following metric [9] is used for the calculation of diversity:

\delta = \frac{d_f + d_l + \sum_{i=1}^{N-1} \left| d_i - \bar{d} \right|}{d_f + d_l + (N-1)\,\bar{d}} \quad (13)

where d_f and d_l are the Euclidean distances between the extreme and the boundary solutions of the obtained nondominated set (Fig. 7), d_i is the distance between consecutive points, i = 1, 2, …, (N − 1), and d̄ is the average of all distances d_i.
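Equation (13) can be computed directly from a front sorted along one objective; the name `diversity_metric` is illustrative.

```python
def diversity_metric(front, extreme_first, extreme_last):
    """Diversity metric delta, eq. (13).

    front: nondominated points sorted along the front; extreme_first and
    extreme_last: the true extreme solutions obtained by optimizing each
    objective separately. A perfectly uniform front that reaches both
    extremes gives delta = 0.
    """
    def d(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    di = [d(front[i], front[i + 1]) for i in range(len(front) - 1)]
    dbar = sum(di) / len(di)
    df = d(extreme_first, front[0])   # gap to the first extreme
    dl = d(extreme_last, front[-1])   # gap to the last extreme
    return (df + dl + sum(abs(x - dbar) for x in di)) / \
           (df + dl + (len(front) - 1) * dbar)
```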

Competitiveness of the obtained Pareto front is measured against three state-of-the-art multiobjective evolutionary algorithms: NSGA II, MOPSO, and MODE.

3) Percentage Increase in the Utility of the Entire Pareto Front: The objectives of the proposed approach also include finding a nondominated front containing particles with higher utilities as compared with the ordinary Pareto front, along with improving the level of satisfaction of the DM with the obtained compromise solution. In order to validate that the Pareto front obtained through the proposed approach has a utility that is significantly better as compared with the approach without decision making, the percentage increase in the aggregate utility of the obtained front is taken as the criterion. Simulation runs of the algorithm are performed, taking the same initial population, with and without the decision-making module. The mechanisms that use decision making or the utility function for their efficient performance are either eliminated or changed (using random selection rather than utility-based selection). The equation that describes the criterion is

\Theta_{PF} = \frac{\sum_{i=1}^{n} U(x_{DM}) - \sum_{i=1}^{n} U(x_{NDM})}{\sum_{i=1}^{n} U(x_{DM})} \times 100 \quad (14)

where U(x_DM) and U(x_NDM) refer to the utilities of the Pareto front when decision making is and is not involved, respectively. Larger values of Θ_PF indicate an increased utility of the obtained front. Due to the continuous increase in utility during the iterations, particles are more inclined toward obtaining positions having higher utilities (as the search process is guided by the past experiences of the particles), which implicitly drives the generation of solutions consistent with the DM's preference.
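The criterion in (14) is a one-line aggregate comparison; a sketch (the name `utility_gain` is illustrative, and the example values are arbitrary):

```python
def utility_gain(u_dm, u_ndm):
    """Percentage increase in aggregate front utility, eq. (14).

    u_dm: per-particle utilities of the front obtained with decision
    making; u_ndm: utilities of the front obtained without it.
    """
    return (sum(u_dm) - sum(u_ndm)) / sum(u_dm) * 100
```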

B. Analysis of the Compromise Solution

This part of the analysis quantitatively measures the DM's consent with the final solution obtained. Two performance criteria have been taken to assess the performance of the algorithm for the compromise solution: the DM's satisfaction with the compromise solution (i.e., the quality of the obtained preferable solution) and the cognitive load on the DM. For the measurement of the first criterion, i.e., solution quality, the scaled deviation percentage [4] of the obtained solution from the optimal solution is taken as the measure. It is described by the following equation:

\epsilon = \frac{U(x_{OPT}) - U\left(x^{pf}_{Best}\right)}{U(x_{OPT}) - U\left(x^{pf}_{Bad}\right)} \quad (15)

where x^pf_Best and x^pf_Bad represent the solutions having the best and the least utilities in the obtained Pareto front, respectively, and x_OPT is the optimal solution obtained by separately optimizing the DM's actual utility function (described in Section IV-D), i.e.,

\max_{x \in X} U(x). \quad (16)

The second criterion is simply measured by the number of comparisons made by the DM during a single run of the algorithm. The algorithm is considered successful in generating an efficient compromise solution if a low value of ε can be obtained in relatively few interactions.
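Equation (15) is likewise a direct ratio; a sketch (the name `scaled_deviation` is illustrative, and the result is multiplied by 100 for the percentage form quoted in the results):

```python
def scaled_deviation(u_opt, u_best, u_bad):
    """Scaled deviation epsilon of the best-found solution from the
    optimum, eq. (15). u_opt: U(x_OPT); u_best, u_bad: the best and
    least utilities on the obtained Pareto front. Returns a fraction."""
    return (u_opt - u_best) / (u_opt - u_bad)
```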

C. Compared Algorithms

1) Nondominated Sorting GA II (NSGA II): It was proposed by Deb et al. [9]. This algorithm is a revised and enhanced version of the NSGA proposed by Srinivas and Deb [31]. The NSGA was mainly criticized for its high computational complexity (O(nN^3)) [32], where n is the number of objectives, and N is the population size. Other drawbacks associated with the NSGA are its nonelitist approach and the need for specifying a sharing parameter.

However, the NSGA II incorporates a fast nondominated sorting approach which requires only O(nN^2) comparisons. It also replaced the use of the sharing function with the new crowded-comparison approach, thereby eliminating the need for any user-defined parameter for maintaining diversity among the population members. Initially, a population double in size (2N) is randomly generated and then sorted on the basis of nondominance. Thereafter, the selection of the best N members from the population is carried out. Crossover and mutation are performed in order to produce better offspring. The offspring are then combined with the parents to form the initial population for the next generation, and the aforementioned procedure is repeated until the adopted termination criterion is achieved.

2) Multiobjective PSO (MOPSO): It was first proposed by Coello and Lechuga [14] and later revised and enhanced by Coello et al. [13]. The algorithm exploits the potential of PSO, extensively designed to handle the needs of MOO. In MOPSO, particles are initialized with random positions and velocities. By utilizing their cognitive and social search components, the particles move across the search space to look for better positions. Pareto dominance is adopted as the evaluation criterion, and the nondominated positions found along the search process are stored in an external repository, thus ensuring elitism. An adaptive-grid mechanism is utilized for pruning the nondominated set and, thereby, generates a diverse nondominated set which can be considered as a representative of the optimal Pareto front. The MOPSO also incorporates a mutation operator to avoid premature convergence at the initial stages of the search, thereby ensuring an extensive exploration of the search space for potential regions.

3) Multiobjective DE (MODE): It utilizes DE, an evolutionary optimization technique developed by Storn and Price [33]. Several multiobjective extensions of DE have been proposed in the literature: PDE [15], self-adaptive PDE (SPDE) [16], MODE [17], etc. The Pareto-frontier DE (PDE) algorithm solved multiobjective problems by incorporating Pareto dominance into DE. The SPDE extended the PDE with self-adaptive crossover and mutation. Babu et al. [17] later proposed the MODE, which was shown to perform better than the NSGA. In MODE, a population is initially generated at random. All the particles in the initial population are checked for nondominance, and the dominated particles are removed, keeping only the nondominated ones to participate in recombination. Recombination is performed to create offspring until the population size is full, after which the population is again evaluated for nondominance. The procedure is repeated iteratively until the specified termination criteria are met.

D. Parameter Settings

Simulation runs were performed on several test problems taken from the MOO literature. The results are based on the DM's articulation for the actual utility function given as

Normalized Utility Function:

U(x) = -\sum_{i=1}^{n} w_i^{act} \, \frac{f_i(x) - f_i^\ast}{f_i^{PF}(\max) - f_i^{PF}(\min)}. \quad (17)

Here, w_i^act ∀ i ∈ {1, 2, …, n} denotes the actual weights for the DM's function. Their values were randomly generated in the interval [0, 1] and then normalized to sum to one. For a single run of the algorithm, the value of w_i^act is kept constant. In the presented approach, the aforementioned utility function is used to perform the pairwise comparisons of the solutions.

Several combinations of values for the parameters used in the IPSO were tested, and the results are reported for the parameter values that were found best. Tested values were all combinations of population size ∈ {30, 50, 100, 200} particles, repository size ∈ {50, 100, 200}, and adaptive-grid divisions ∈ {15, 25, 30, 40}. For IPSO, a swarm size of 100, a rep_size of 100, and 30 divisions of the adaptive grid have been used. To derive the estimated utility function, the GA run uses binary-coded strings, a population size of 100, a crossover rate of 0.9, and a mutation rate of 0.01, and the number of fitness-function evaluations is limited to 10 000. Simulation runs for MOPSO were performed using the parameters depicted as best for its functioning [13], i.e., a population of 100 particles, a mutation rate of 0.5, 30 divisions for the adaptive grid, and a real-number representation. NSGA II [9] was run using a population size of 100, a crossover rate of 0.8, and a mutation rate of 0.01. For MODE, the parameter settings utilized are a population size of 100, an external archive size of 100 (maximum), and a crossover constant of 0.9; the scaling factor is randomly generated in accordance with the specifications given in [17]. The Pareto fronts obtained after 25 000 fitness-function evaluations of each approach have been considered for analysis. All the algorithms were coded in C++ and implemented on a 2.40-GHz Pentium IV processor with 256 MB of RAM. In further simulations, independent runs were performed to evaluate the Pareto fronts and to determine the deviation from the preferred solution. The following section describes the first set of test problems (i.e., problems without constraints) used in the simulation runs.

E. Test Problems Without Constraints

For the sake of clarity and ease of presentation, the test problems considered in this paper have been divided into two categories: test problems without constraints and test problems with constraints. This section contains simulation results of the proposed IPSO and its comparisons with the other peer strategies found in the literature. The unconstrained problems used in this paper are given in Table I along with the number of variables, variable bounds, Pareto optimal solutions, and the nature of the POF. These problems are characterized by two objectives, each with variable bounds but no constraints. Four standard test problems have been considered for the experimental purpose, as utilized by Veldhuizen [34] and previously used. These are Schaffer's function [35], Fonseca and Fleming's function [36], Poloni's function [37], and Kursawe's function [38].

TABLE I: TEST PROBLEMS

TABLE II: MEAN AND DEVIATION RESULTS FOR CONVERGENCE METRIC

TABLE III: SIMULATION RESULTS FOR DIVERSITY METRIC

Results for the convergence metric obtained using the four algorithms, namely, IPSO, MOPSO, NSGA II, and MODE, are given in Table II. From the results, it is evident that the proposed IPSO converges better on most of the test functions. In addition, the best and worst values over the experiments have mostly been the smallest for the IPSO. From the results, the IPSO emerges as the best strategy in terms of the convergence metric; however, the MODE with external archive and MOPSO also demonstrated competitive results and nearly outperformed the NSGA II. MOPSO is found to perform exceptionally well on Kursawe's function, on which IPSO also demonstrated a comparable performance. NSGA II can be figured to be the least effective of all four strategies on this metric, although it performed reasonably well.

Next, Table III contains the results for the diversity metric δ. As claimed, the performance of IPSO was found better in most of the cases, thus justifying its potential. For this metric, the performance of MOPSO was found comparatively inferior to IPSO while comparable with NSGA II and MODE. However, no strategy ever dominated IPSO in general.

Fig. 8. Nondominated solution with (from left) IPSO, MOPSO, NSGA II, and MODE on Schaffer's function.

Further comparisons were made on the basis of the Pareto fronts obtained. In order to clearly visualize the quality of the solutions obtained, competitive figures have been plotted for the obtained Pareto fronts against the optimal fronts. First, the fronts for Schaffer's function have been plotted (Fig. 8). Although the performances of all four strategies have been competitively good over this function, the front obtained from IPSO is found to be more uniformly distributed, supporting the results of Table III. In addition, the more difficult regions of the search have been thoroughly explored by IPSO, giving it a qualitative edge. The high extent of coverage and uniform diversity obtained through IPSO can be attributed to the novel mutation operator, which helps in the exploratory search.

The Pareto fronts for Fonseca and Fleming's function [36] are shown in Fig. 9. This function has a nonconvex POF. In all the test runs, the IPSO was found to converge to the optimal Pareto front, whereas the NSGA II converged to the Pareto front in only 14 out of 20 test runs; in addition to providing more diverse results than NSGA II, MOPSO converged to the front in 18 runs. MODE showed results comparable with MOPSO and produced a diverse Pareto front for the same number of runs as MOPSO.

The problem by Poloni [37] is characterized by a nonconvex, disconnected POF. The Pareto front obtained using the IPSO [Fig. 10 (left)] is far better as compared with those obtained through the other approaches. Both MOPSO and NSGA II were never able to find the complete Pareto front, as shown in Fig. 10. However, MODE demonstrated better results and identified several solutions in the disconnected region that was not explored by MOPSO and NSGA II.

Finally, Fig. 11 shows the Pareto fronts for Kursawe's function [38] obtained by the four algorithms considered. As evident from the figure, the IPSO explored parts of the Pareto front that the other strategies failed to explore. In terms of quality and number of converged particles, MOPSO was found better, while NSGA II and MODE with external archive also produced competitive results.

Fig. 9. Nondominated solution with (from left) IPSO, MOPSO, NSGA II, and MODE on Fonseca's function.

The better quality of the Pareto fronts obtained by the proposed IPSO metaheuristic as compared with the other peer strategies is attributed to the specific combination of established as well as novel multiobjective tools utilized. The use of the particle-swarm heuristic in the proposed IPSO approach provided high exploratory potential to the search as compared with the NSGA II and the MODE, which utilize GA and DE, respectively. In addition, the coupling of the search with a novel self-adaptive mutation operator maintained a stringent balance between the exploration and exploitation capabilities and, thereby, led to a high solution quality. The adaptive-grid procedure, as utilized in the proposed approach, is responsible for maintaining the diversity of the Pareto front, and it emerged to be more effective as compared with the other Pareto-diversity-maintaining strategies.

Furthermore, to gain deeper insight into the solution quality, the results concerning the percentage increase in utility are presented in Table IV. It varies between 4% and 6% with a standard deviation of around 1%. This can be considered a significant increase in utility which, if reinterpreted along with the diversity obtained, becomes more pronounced.

The results regarding the scaled deviation of the preferred solution from the optimal solution for the test functions illustrated in Table I are presented in Table V. For each test problem, 20 simulation runs were performed. The average ε value remains below 1% for all the test functions, and the number of comparisons required is, on average, less than five for all the problems, with significantly low values of standard deviation. This high rate of resemblance with the preferred solution is primarily due to a partial inclination of the search toward preferred regions, owing to the effective incorporation of decision making. In the last stages of the search, this effect predominates, and thus, a solution that is very close to that with the highest utility is obtained. The low value of deviation also indicates that the use of the GA gives a utility function having properties similar to the actual utility function used.

F. Handling Constraints

This section presents the comparative results of the proposed strategy for the constrained functions. The standard mechanism for handling constraints in MOO has been adopted from [9]. When two individuals are evaluated for nondominance, their constraints are checked. The constraints are treated as hard objectives which have to be satisfied prior to the optimization of the remaining soft objectives (objective functions). If both individuals satisfy their constraints, nondominance is applied to decide the winner. If either of them violates any of the constraints, the other one is declared the winner. In case both do not satisfy the constraints, the solution which violates the higher priority constraint (if some preference among the constraints is defined by the DM) is treated as the loser. Otherwise, the extent of constraint violation is used as the criterion to decide the winner.
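The pairwise decision rules above (without the optional constraint-priority case) can be sketched as follows; the function names are illustrative, and violations are summarized here as a single nonnegative total per individual.

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def constrained_winner(a, b, viol_a, viol_b):
    """Constraint-domination tournament following the rules above.

    viol_a, viol_b: total constraint violation (0 means feasible).
    Returns the winning objective vector, or None when both are
    feasible and mutually nondominated.
    """
    if viol_a == 0 and viol_b == 0:
        if dominates(a, b):
            return a
        if dominates(b, a):
            return b
        return None                   # mutually nondominated
    if viol_a == 0:
        return a                      # feasible beats infeasible
    if viol_b == 0:
        return b
    return a if viol_a < viol_b else b  # lesser violation wins
```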


Fig. 10. Nondominated solution with (from left) IPSO, MOPSO, NSGA II, and MODE on Poloni’s function.

The authors have chosen three test problems to test the constraint-handling strategy adopted for the proposed algorithm (Table VI). The first problem taken is the CONSTR function [9]. In this case, the Pareto front obtained is a concatenation of the POF of the objectives and the constraint boundary. The second problem under consideration is the SRN function [9]. For this function, the Pareto front for the constrained problem lies inside the front obtained in the absence of constraints. Finally, the third problem, the TNK function, has a discontinuous Pareto front [39]. The Pareto front for this problem lies entirely on the regions inside the objective space at the first constraint boundary.

For all the problems, ten simulation runs were performed with the same parameter settings as previously mentioned. The quality of the results obtained has been compared on the basis of the Pareto fronts. The Pareto fronts obtained with IPSO, MOPSO, NSGA II, and MODE on the CONSTR function are shown in Fig. 12. The search for the optimal Pareto front is a difficult task, as maintaining the diversity of the solutions obtained at the constraint boundary requires stringent modification of the particles through a constraint function. However, the IPSO found the optimal Pareto front in nine out of ten simulation runs performed. MODE also performed very well and converged to the optimal front in nine runs, which is equivalent to IPSO. Results obtained using MOPSO were optimal for six runs, and NSGA II converged to the optimal front only thrice.

For the SRN function, the IPSO was able to obtain the Pareto optimal front quite easily (Fig. 13). For this test function, the other algorithms, MOPSO, NSGA II, and MODE, also yielded good results; thus, no claim of superiority can be made, although an equivalent, competitive performance can be established.

Fig. 14 shows the Pareto front obtained for the TNK test function. In this case, a number of infeasible solutions were found for NSGA II even after the termination criteria were met. Here also, MODE and MOPSO depicted a better performance than NSGA II and found diverse optimal solutions over the optimal Pareto front. However, the performance registered with IPSO is much better as compared with both MOPSO and MODE. These examples illustrate that the adopted constraint-handling strategy suits the proposed algorithm well for the determination of the Pareto front.


Fig. 11. Nondominated solution with (from left) IPSO, MOPSO, NSGA II, and MODE on Kursawe’s function.

TABLE IV
SIMULATION RESULTS FOR PERCENTAGE INCREASE IN THE UTILITY OF THE PARETO FRONT

For all the test problems covered in this section, the value of ε, i.e., the scaled deviation of the obtained solution from the preferred one, is in close agreement with that obtained in the previous section. The average value of ε lies below 1% for all the test problems, and the average number of comparisons made by the DM is below five for the normalized linear utility function (Table VII).

The time taken by the proposed algorithm on all the test functions considered in this paper is provided in Table VIII. As is evident from the results, for all the test problems considered, the IPSO converged to the optimal solution in very little time (< 5 s), thus validating its applicability to solving complex problems in the stipulated time. From the results, it can be seen that the average time taken by the proposed approach is directly proportional to the number of interactions with the DM. Since the goal of the proposed IPSO metaheuristic is the identification of both the compromise solution preferable to the DM and the complete Pareto front of the problem concerned, which is much more diverse as compared with the other existing approaches, the time results are not directly comparable with methods that seek only to enumerate the efficient set of solutions. In fact, in a practical scenario, the time taken by the proposed metaheuristic may vary depending on the decisional complexity of the problem as well as the availability of the DM. However, equipped with tools such as the decision-making module and the particle swarm, the proposed strategy appreciably reduces the time taken in decision making and produces an efficient set of solutions in minimum time. Thus, in summary, the IPSO can well be claimed to be a competent strategy for MOO.

V. CONCLUSION AND INSIGHTS

This paper proposed an interactive particle-swarm metaheuristic for MOO. Multiobjective problems are often encountered in practice and require the simultaneous knowledge of alternatives and the cooperation of the DM to determine the solution


TABLE V
SIMULATION RESULTS FOR THE SCALED DEVIATION (ε) FROM THE PREFERRED SOLUTION

TABLE VI
TEST PROBLEMS WITH CONSTRAINTS

Fig. 12. Nondominated solution with (from left) IPSO, MOPSO, NSGA II, and MODE on the CONSTR function.


Fig. 13. Nondominated solution with (from left) IPSO, MOPSO, NSGA II, and MODE on the SRN function.

in close agreement with a person's preference. The proposed IPSO intends to find a compromise solution along with a set of Pareto optimal solutions having a higher utility.

One of the major advantages of the approach is its adaptivity (with slight modifications) to practical situations with a known preference or a set of preferences depending upon the DM. In general, manufacturing organizations and enterprises often encounter situations with conflicting objectives pertaining to cost minimization, time minimization, better quality, etc. Thus, the proposed strategy is able to provide the best set of solutions preferable to the DMs and could be of assistance in making critical decisions.

To generate an initial base for the strategy, the proposed algorithm has been tested on seven test functions considered to be hard and generally targeted for testing in the literature. The performance of the algorithm was found to be better in most of the situations, and it provided a satisfactory solution that is very close to the best one. The algorithm fully utilizes the partial preference information available from the DM to derive the utility function and estimate the potentially best solution. Simulation results indicate that it is capable of exploring the feasible space in much less time and requires very few interactions to assist the DM in locating the satisfactory solution. Other established strategies, although competitive, were not able to dominate the IPSO in general.

With properties like fast convergence, storage and continuous modification of potential solutions in an external repository, a self-adaptive mutation operator, an effective decision-making strategy, and a simple yet efficient constraint-handling methodology, IPSO can evolve as an alternative and preferable tool for modern-day DMs. Future research should be directed toward exploring the applications and performance of IPSO in practical industry settings. The utilization of peer decision-making strategies and more effective adaptations of the algorithm could also be a focus of research.

APPENDIX I
PROOF OF LEMMA 1

Consider $U_{\mathrm{DM}}$ to be the utility function of the DM, based on which decisions are taken. Without loss of generality, it can be assumed to be equal to the aggregate of the normalized deviations from the ideal point, i.e.,

$$U_{\mathrm{DM}}(y) = -\sum_{i=1}^{n} w_i^{\mathrm{est}} \, \frac{\left(f_i(y) - f_i^{\star}\right)^m}{\left(f_i^{\mathrm{pf}}(\max) - f_i^{\mathrm{pf}}(\min)\right)^m} \tag{18}$$
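The estimated utility of (18) translates directly into code. The function below is an illustrative sketch (the argument names are ours, not from the paper): weights, ideal point, and front extrema are supplied per objective, and a larger (less negative) value means a preferred solution.

```python
def utility(f, w_est, f_star, f_pf_max, f_pf_min, m=1):
    """U_DM(y) of (18): negative weighted sum of normalized
    deviations of the objective vector f from the ideal point f_star,
    each scaled by the observed Pareto-front range and raised to m."""
    total = 0.0
    for fi, wi, fs, fmax, fmin in zip(f, w_est, f_star, f_pf_max, f_pf_min):
        total += wi * (fi - fs) ** m / (fmax - fmin) ** m
    return -total
```

At the ideal point itself the deviation vanishes, so the utility is 0, its maximum; any other point yields a negative utility.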


Fig. 14. Nondominated solution with (from left) IPSO, MOPSO, NSGA II, and MODE on Tanaka’s function.

TABLE VII
SIMULATION RESULTS FOR THE SCALED DEVIATION (ε) FROM THE PREFERRED SOLUTION

TABLE VIII
COMPUTATIONAL TIME OF IPSO ON TEST FUNCTIONS

If $u <_p v$ (i.e., $u$ is partially less than $v$), then

$$f_i(u) \le f_i(v) \quad \forall i \in \{1, \ldots, n\} \tag{19}$$

$$f_i(u) < f_i(v) \quad \text{for at least one } i \in \{1, \ldots, n\}. \tag{20}$$

From (18)

$$U_{\mathrm{DM}}(u) = -\sum_{i=1}^{n} w_i^{\mathrm{est}} \, \frac{\left(f_i(u) - f_i^{\star}\right)^m}{\left(f_i^{\mathrm{pf}}(\max) - f_i^{\mathrm{pf}}(\min)\right)^m} \tag{21}$$

$$U_{\mathrm{DM}}(v) = -\sum_{i=1}^{n} w_i^{\mathrm{est}} \, \frac{\left(f_i(v) - f_i^{\star}\right)^m}{\left(f_i^{\mathrm{pf}}(\max) - f_i^{\mathrm{pf}}(\min)\right)^m} \tag{22}$$

$$U_{\mathrm{DM}}(u) - U_{\mathrm{DM}}(v) = -\sum_{i=1}^{n} w_i^{\mathrm{est}} \left[ \frac{\left(f_i(u) - f_i^{\star}\right)^m}{\left(f_i^{\mathrm{pf}}(\max) - f_i^{\mathrm{pf}}(\min)\right)^m} - \frac{\left(f_i(v) - f_i^{\star}\right)^m}{\left(f_i^{\mathrm{pf}}(\max) - f_i^{\mathrm{pf}}(\min)\right)^m} \right]. \tag{23}$$


As stated in the lemma, the utility of $u$ is always greater than the utility of $v$

$$\Rightarrow U_{\mathrm{DM}}(u) - U_{\mathrm{DM}}(v) \ge 0. \tag{24}$$

The proof of (24) proceeds by mathematical induction.

For $m = 1$,

$$U_{\mathrm{DM}}(u) - U_{\mathrm{DM}}(v) = -\sum_{i=1}^{n} w_i^{\mathrm{est}} \, \frac{f_i(u) - f_i^{\star} - f_i(v) + f_i^{\star}}{f_i^{\mathrm{pf}}(\max) - f_i^{\mathrm{pf}}(\min)} \tag{25}$$

$$= -\sum_{i=1}^{n} w_i^{\mathrm{est}} \, \frac{f_i(u) - f_i(v)}{f_i^{\mathrm{pf}}(\max) - f_i^{\mathrm{pf}}(\min)} \tag{26}$$

$$= -\sum_{i=1}^{n} w_i^{\mathrm{est}} \cdot (-\mathrm{ve\ value}) \qquad [\text{from (19) and (20)}] \tag{27}$$

$$> 0. \tag{28}$$

Thus, (24) is true for $m = 1$, as (28) shows. Let us assume that it is true for $m = k$. Then, for $m = k + 1$,

$$U_{\mathrm{DM}}(u) - U_{\mathrm{DM}}(v) = -\sum_{i=1}^{n} w_i^{\mathrm{est}} \left[ \frac{\left(f_i(u) - f_i^{\star}\right)^{k+1}}{\left(f_i^{\mathrm{pf}}(\max) - f_i^{\mathrm{pf}}(\min)\right)^{k+1}} - \frac{\left(f_i(v) - f_i^{\star}\right)^{k+1}}{\left(f_i^{\mathrm{pf}}(\max) - f_i^{\mathrm{pf}}(\min)\right)^{k+1}} \right] \tag{29}$$

$$= -\sum_{i=1}^{n} w_i^{\mathrm{est}} \left[ \frac{\left(f_i(u) - f_i^{\star}\right)^{k} \left(f_i(u) - f_i^{\star}\right)}{\left(f_i^{\mathrm{pf}}(\max) - f_i^{\mathrm{pf}}(\min)\right)^{k+1}} - \frac{\left(f_i(v) - f_i^{\star}\right)^{k} \left(f_i(v) - f_i^{\star}\right)}{\left(f_i^{\mathrm{pf}}(\max) - f_i^{\mathrm{pf}}(\min)\right)^{k+1}} \right]. \tag{30}$$

Since $f_i(u) \ge f_i^{\star}$ and $f_i(v) \ge f_i^{\star}$ for all $i \in \{1, \ldots, n\}$,

$$U_{\mathrm{DM}}(u) - U_{\mathrm{DM}}(v) = -\sum_{i=1}^{n} w_i^{\mathrm{est}} \left[ \frac{\left(f_i(u) - f_i^{\star}\right)^{k} (+\mathrm{ve})}{\left(f_i^{\mathrm{pf}}(\max) - f_i^{\mathrm{pf}}(\min)\right)^{k+1}} - \frac{\left(f_i(v) - f_i^{\star}\right)^{k} (+\mathrm{ve})}{\left(f_i^{\mathrm{pf}}(\max) - f_i^{\mathrm{pf}}(\min)\right)^{k+1}} \right] \tag{31}$$

$$\Rightarrow U_{\mathrm{DM}}(u) - U_{\mathrm{DM}}(v) > 0. \tag{32}$$

Thus, (24) holds true for all values of $m$, as proved by mathematical induction. This validates the lemma.
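The lemma can also be spot-checked numerically. The sketch below uses an assumed setup (unit objective ranges, ideal point at the origin, equal estimated weights; all names are illustrative): it draws random pairs in which $u$ partially dominates $v$ and confirms that the utility of (18) never prefers $v$, for several exponents $m$.

```python
import random

def u_dm(f, w, f_star, f_max, f_min, m):
    """Estimated utility of (18): negative weighted sum of
    normalized deviations from the ideal point."""
    return -sum(wi * (fi - fs) ** m / (fmax - fmin) ** m
                for fi, wi, fs, fmax, fmin in zip(f, w, f_star, f_max, f_min))

rng = random.Random(0)                 # fixed seed for repeatability
n = 3
w = [1.0 / n] * n                      # arbitrary positive weights
f_star = [0.0] * n                     # ideal point at the origin
f_max, f_min = [1.0] * n, [0.0] * n    # unit Pareto-front ranges

for m in (1, 2, 3):
    for _ in range(1000):
        v = [rng.random() for _ in range(n)]
        # Shrink every objective so u strictly dominates v (u <_p v).
        u = [vi * rng.uniform(0.0, 0.99) for vi in v]
        assert u_dm(u, w, f_star, f_max, f_min, m) >= u_dm(v, w, f_star, f_max, f_min, m)
```

Every trial satisfies the inequality of (24), consistent with the induction argument above.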

TABLE IX
SYMBOLS USED IN PSEUDOCODE

APPENDIX II
SYMBOLS

The various symbols used in the pseudocode are listed, along with their meanings, in Table IX for quick reference.

REFERENCES

[1] E. L. Ulungu and J. Teghem, “Multiobjective combinatorial optimization problems: A survey,” J. Multi-Criteria Decis. Anal., vol. 3, pp. 83–104, 1994.

[2] M. Ehrgott and X. Gandibleux, “An annotated bibliography of multiobjective combinatorial optimization,” OR Spectr., vol. 22, pp. 425–460, 2000.

[3] C. L. Hwang and A. S. M. Masud, “Multiple objective decision making—Methods and applications,” in Lecture Notes in Economics and Mathematical Systems. Berlin, Germany: Springer-Verlag, 1979.

[4] S. Phelps and M. Koksalan, “An interactive evolutionary metaheuristic for multiobjective combinatorial optimization,” Manage. Sci., vol. 49, no. 12, pp. 1726–1738, Dec. 2003.

[5] K. Deb and S. Chaudhuri, “I-EMO: An interactive evolutionary multiobjective optimization tool,” in Proc. 1st Int. Conf. PReMI, Lecture Notes in Computer Science, vol. 3776, pp. 690–695, 2005.

[6] J. Kennedy and R. C. Eberhart, Swarm Intelligence. San Mateo, CA: Morgan Kaufmann, 2001.

[7] J. D. Knowles and D. W. Corne, “Approximating the non-dominated front using the Pareto archived evolution strategy,” Evol. Comput., vol. 8, no. 2, pp. 149–172, 2000.

[8] C. M. Fonseca and P. J. Fleming, “Multiobjective optimization and multiple constraint handling with evolutionary algorithms—Part I: A unified formulation,” IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 28, no. 1, pp. 26–37, Jan. 1998.

[9] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Trans. Evol. Comput., vol. 6, no. 2, pp. 182–197, Apr. 2002.

[10] C. A. Coello Coello and G. T. Pulido, “A micro-genetic algorithm for multiobjective optimization,” in Proc. 1st Int. Conf. Evol. Multi-Criterion Optim., vol. 1993, pp. 126–140, 2001.

[11] C. A. Coello Coello and G. T. Pulido, “Multiobjective optimization using a micro-genetic algorithm,” in Proc. GECCO, San Francisco, CA, 2001, vol. 8, pp. 274–282.

[12] E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: Improving the strength Pareto evolutionary algorithm,” in Proc. EUROGEN, Evol. Methods Des., Optim. Control With Appl. Ind. Problems, Athens, Greece, 2001.

[13] C. A. Coello Coello, G. Pulido, and M. Lechuga, “Handling multiple objectives with particle swarm optimization,” IEEE Trans. Evol. Comput., vol. 8, no. 3, pp. 256–279, Jun. 2004.

[14] C. A. Coello Coello and M. S. Lechuga, “MOPSO: A proposal for multiple objective particle swarm optimization,” in Proc. CEC, Honolulu, HI, 2002, vol. 1, pp. 1051–1056.

[15] H. A. Abbass, R. Sarker, and C. Newton, “PDE: A Pareto-frontier differential evolution approach for multi-objective optimization problems,” in Proc. Congr. Evol. Comput., 2001, vol. 2, pp. 971–978.

[16] H. A. Abbass, “The self-adaptive Pareto differential evolution algorithm,” in Proc. CEC, 2002, vol. 1, pp. 831–836.

[17] B. V. Babu, P. G. Chakole, and J. H. S. Mubeen, “Multiobjective differential evolution (MODE) for optimization of adiabatic styrene reactor,” Chem. Eng. Sci., vol. 60, no. 17, pp. 4822–4837, 2005.

[18] M. J. Alves and J. Climaco, “An interactive method for 0–1 multiobjective problems using simulated annealing and tabu search,” J. Heuristics, vol. 6, no. 3, pp. 385–403, Aug. 2000.


[19] E. L. Ulungu, J. Teghem, and C. Ost, “Efficiency of interactive multi-objective simulated annealing through a case study,” J. Oper. Res. Soc., vol. 49, no. 10, pp. 1044–1050, Oct. 1998.

[20] M. M. Koksalan and P. N. S. Sagala, “Interactive approaches for discrete alternative multiple criteria decision making with monotone utility functions,” Manage. Sci., vol. 41, no. 7, pp. 1158–1171, Jul. 1995.

[21] R. E. Steuer, Multiple Criteria Optimization. New York: Wiley, 1986.

[22] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, “Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients,” IEEE Trans. Evol. Comput., vol. 8, no. 3, pp. 240–255, Jun. 2004.

[23] J. Moore and R. Chapman, Application of Particle Swarm to Multiobjective Optimization. Auburn, AL: Dept. Comput. Sci. Softw. Eng., Auburn Univ., 1999.

[24] T. Ray and K. M. Liew, “A swarm metaphor for multiobjective design optimization,” Eng. Optim., vol. 34, no. 2, pp. 141–153, Mar. 2002.

[25] K. E. Parsopoulos and M. N. Vrahatis, “Particle swarm optimization method in multiobjective problems,” in Proc. ACM Symp. Appl. Comput., Madrid, Spain, 2002, pp. 603–607.

[26] X. Hu and R. Eberhart, “Multiobjective optimization using dynamic neighborhood particle swarm optimizer,” in Proc. Congr. Evol. Comput., 2002, vol. 2, pp. 1677–1681.

[27] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms. Cambridge, MA: MIT Press, 2004.

[28] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison-Wesley, 1989.

[29] J. H. Holland, Adaptation in Natural and Artificial Systems. Ann Arbor, MI: Univ. Michigan Press, 1975.

[30] G. Rudolph, “Evolutionary search under partially ordered sets,” Dept. Comput. Sci./LS11, Univ. Dortmund, Dortmund, Germany, Tech. Rep. CI-67/99, 1999.

[31] N. Srinivas and K. Deb, “Multiobjective function optimization using nondominated sorting genetic algorithms,” Evol. Comput., vol. 2, no. 3, pp. 221–248, Fall 1995.

[32] K. Deb, Multiobjective Optimization Using Evolutionary Algorithms. Chichester, U.K.: Wiley, 2001.

[33] R. Storn and K. Price, “Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces,” J. Glob. Optim., vol. 11, no. 4, pp. 341–359, Dec. 1997.

[34] D. V. Veldhuizen, “Multiobjective evolutionary algorithms: Classification, analyses, and new innovations,” Air Force Inst. Technol., Dayton, OH, Tech. Rep. AFIT/DS/ENG/99-01, 1999.

[35] J. D. Schaffer, “Multiple objective optimization with vector evaluated genetic algorithm,” in Proc. 1st Int. Conf. Genetic Algorithms, J. J. Grefenstette, Ed., 1987, pp. 93–100.

[36] C. M. Fonseca and P. J. Fleming, “Multiobjective optimization and multiple constraint handling with evolutionary algorithms—Part II: Application example,” IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 28, no. 1, pp. 38–47, Jan. 1998.

[37] C. Poloni, “Hybrid GA for multiobjective aerodynamic shape optimization,” in Genetic Algorithms in Engineering and Computer Science, G. Winter, J. Periaux, M. Galan, and P. Cuesta, Eds. New York: Wiley, 1997, pp. 397–414.

[38] F. Kursawe, “A variant of evolution strategies for vector optimization,” in Parallel Problem Solving From Nature, H. P. Schwefel and R. Manner, Eds. Berlin, Germany: Springer-Verlag, 1990, pp. 193–197.

[39] M. Tanaka, “GA-based decision support system for multicriteria optimization,” in Proc. IEEE Int. Conf. Syst., Man, Cybern., 1995, pp. 1556–1561.

Shubham Agrawal was born in Varanasi, India, on April 29, 1985. He received the B.S. degree in manufacturing engineering from the National Institute of Foundry and Forge Technology, Ranchi, India. He is currently working toward the degree with the Operations Research and Industrial Engineering Group, Department of Mechanical Engineering, Cockrell School of Engineering, University of Texas at Austin.

He is currently with the Graduate Program in Operations Research and Industrial Engineering, University of Texas at Austin. His current research interests include evolutionary computation and its applications to various fields of optimization, particularly power systems, line balancing, and supply chain management.

Yogesh Dashora received the B.Tech. degree from the National Institute of Foundry and Forge Technology, Ranchi, India. He is currently working toward the M.S. and Ph.D. degrees with the Operations Research and Industrial Engineering Group, Department of Mechanical Engineering, University of Texas at Austin, Austin, TX.

His current research interests include evolutionary computation and its application to various fields of manufacturing and optimization.

Manoj Kumar Tiwari received the B.E. degree from the Visvesvaraya Regional College of Engineering, Nagpur, India, the M.E. degree from the Motilal Nehru Regional Engineering College, Allahabad, India, and the Ph.D. degree from Jadavpur University, Kolkata, India.

Currently, he is with the Department of Industrial Engineering and Management, Indian Institute of Technology, Kharagpur, India. He is also the Chairman of the Research Promotion Cell, NIFFT. He has been engaged in collaborative research with faculty members at several universities, such as Loughborough University, Leicestershire, U.K.; University of Wisconsin-Madison, Madison, WI; University of Missouri-Rolla, Rolla, MO; Rutgers University, Camden, NJ; Kansas State University, Manhattan; Bath University, Bath, U.K.; Caledonian Business School, Scotland, U.K.; University of Windsor, ON, Canada; and University of Hong Kong, Kowloon, Hong Kong. He is well known for training undergraduate students in several research areas and involving them in meaningful research. Some of his students are pursuing doctoral and postdoctoral work at at least ten reputed universities. He has published around 227 articles, among them 103 articles in international journals such as the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART A, the International Journal of Production Research (IJPR), and the European Journal of Operational Research, and 50 articles in international conferences of repute. He is currently serving as an editorial board member for several international journals, including IJPR, Robotics and CIM, and the International Journal of Advanced Manufacturing Technology.

Young-Jun Son (M’00) received the B.S. degree (with honors) in industrial engineering from Pohang University of Science and Technology (POSTECH), Pohang, Korea, in 1996 and the M.S. and Ph.D. degrees in industrial and manufacturing engineering from Pennsylvania State University in 1998 and 2000, respectively.

He is an Associate Professor with the Department of Systems and Industrial Engineering, The University of Arizona, Tucson. He has authored or coauthored over 60 publications in refereed journals and conferences. His research interests include modeling and control of complex enterprises, multiparadigm simulations, evolutionary computation, computer-integrated control, shop floor control, etc.

Dr. Son is an Associate Editor of the International Journal of Modeling and Simulation and the International Journal of Simulation and Process Modeling. He was the recipient of the Society of Manufacturing Engineers 2004 M. Eugene Merchant Outstanding Young Manufacturing Engineer Award and the Institute of Industrial Engineers 2005 Outstanding Young Industrial Engineer Award.