Adaptive acceleration coefficients for a new search diversification strategy in particle swarm optimization algorithms

Guido Ardizzon, Giovanna Cavazzini*, Giorgio Pavesi
Dep. of Industrial Engineering, University of Padova, Via Venezia 1, 35131 Padova, Italy

Information Sciences 299 (2015) 337–378. http://dx.doi.org/10.1016/j.ins.2014.12.024

Article history: Received 6 May 2014; Received in revised form 3 December 2014; Accepted 9 December 2014; Available online 18 December 2014.

Keywords: Particle swarm optimization; Swarm intelligence; Optimization algorithms

Abstract

The paper presents a novel paradigm of the original particle swarm concept, based on the idea of having two types of agents in the swarm: the "explorers" and the "settlers", which can dynamically exchange their roles in the search process. The explorers' task is to continuously explore the search domain, while the settlers refine the search in the promising region currently found by the swarm. To obtain this particle task differentiation, the numerical coefficients of the cognitive and social components of the stochastic acceleration, as well as the inertia weight, were related to the distance of each particle from the best position found so far by the swarm, each with a proper distribution over the swarm. This particle task differentiation enhances the local search ability of the particles closer to gbest and improves the exploration ability of the particles as the distance from gbest increases.

The originality of this approach lies in the particle task differentiation and in the dynamical adjustment of the particle velocities at each time step on the basis of the current distance of each particle from the best position discovered so far by the swarm.

To ascertain the effectiveness of the proposed variant of the PSO algorithm, several benchmark test functions, both unimodal and multimodal, have been considered and, thanks to its task differentiation concept and adaptive behavior, the algorithm has demonstrated a surprising effectiveness and accuracy in identifying the optimal solution.

© 2014 Elsevier Inc. All rights reserved.

* Corresponding author. Tel.: +39 049 8276800. E-mail addresses: [email protected] (G. Ardizzon), [email protected] (G. Cavazzini), [email protected] (G. Pavesi).
1. Introduction
Particle swarm optimization (PSO) is a stochastic computation technique originally developed by Kennedy and Eberhart [27,14]. Although the original intent was to graphically simulate the unpredictable choreography of a bird flock [27], it was quickly realized [14,27,41,42] that the underlying social behavior concept driving each individual or particle in a swarm could be a powerful population-based, iterative optimization algorithm.

Similar to other evolutionary computation techniques such as genetic algorithms (GAs), in PSO the system is initialized with a population of random individuals (potential solutions, also called particles). However, unlike GAs, individuals are not manipulated according to the rule of survival of the fittest through genetic operators such as selection, crossover and
mutation, but each particle is assigned a randomized velocity so that the potential solutions are flown through the search space [14,16,17,27,41–43]. Furthermore, each particle has memory [14], allowing it to keep track of its own best position found so far, as well as of the global best position found so far by the swarm. Therefore, the optimal solution may be pursued by adjusting the trajectory of each particle toward its own best position (pbest) and toward the current global best position (gbest) discovered by the entire population [14,16,17,27,41–43]. These two acceleration contributions, linked with pbest and gbest, represent the cognitive part and the social part, respectively, of the social cooperation concept underlying the PSO. Nevertheless, the closeness between the crossover operation in genetic algorithms and these two accelerating factors was argued by Eberhart and Shi [15], whereas the similarity between the mutation operator used in evolutionary optimization and these two factors was inferred by Angeline [3].
Particle swarm optimization includes a very simple concept of social cooperation that is easy to implement. Even though early testing found the implementation to be effective in solving several kinds of real-world optimization problems [14,16,17,27,41–43], the standard PSO algorithm frequently suffers from premature convergence toward a local optimum, especially when complex multimodal problems are considered [30,53].
Lack of diversity in the population was recognized early [27] as the main factor behind premature convergence of the swarm toward a local optimum, so increasing diversity was mostly considered a useful expedient for escaping from local optima [27,30,53]. Unfortunately, increasing the diversity of the swarm is detrimental to fast convergence toward the optimal solution [27,30,53]. The issue is well known, since Wolpert and Macready [48] demonstrated that no algorithm can outperform all the others on every kind of problem. Therefore, research efforts to improve the performance of an optimization algorithm should not be intended as the search for a general function optimizer [33,48], but rather as the search for a general problem-solver able to perform well on many well-balanced real-world benchmark problems [22,46].
Avoiding premature convergence on a local optimum without compromising the fast convergence feature of the original PSO formulation is the main reason why several PSO variants have been proposed until now [13]. These approaches comprise fine-tuning of the PSO parameters to control the particle velocity updating [1,4,8,9,39,49,51,53], different variants of the PSO local formulation that take into account the best solution within a local topological neighborhood of particles instead of the entire population [14,28,29,33,45], and hybridization of PSO with other heuristic algorithms [10,19,20,23–25,39,40,50]. The algorithm structure of the modified PSO formulations can become rather elaborate at times, and the revised concepts of PSO may also lose the simplicity of the original one.
In this paper, a novel paradigm of the original particle swarm concept has been formulated by drawing inspiration from an early idea suggested by Kennedy and Eberhart [27], even if, initially, it was discarded because it did not show signs of improvement over the well-known PSO version. The idea is that of using two types of agents in the swarm, regarded as "explorers" and "settlers" by Kennedy and Eberhart. The explorers had the task of searching outside the known region of the problem domain, while the settlers had the task of exploring in detail regions that had previously been found to be good. Kennedy and Eberhart implemented this idea through a different way of adjusting their own velocity [27].
It was also observed in [27] that a high value of the cognitive component of the stochastic acceleration, related to pbest, resulted in excessive wandering of isolated individuals through the search space, while a relatively high value of the social component of the stochastic acceleration, related to gbest, brought about premature convergence toward a local optimum. As a result, it was thought that a satisfactory balance between such opposite tendencies could be obtained by multiplying the stochastic cognitive and social components of the acceleration by proper constant coefficients, both of them set at 2 so that the stochastic weights have a mean of 1. However, high values of the cognitive component of the stochastic acceleration could allow peripheral particles, above all, to escape from local optima and explore new regions of the search space. On the other hand, a high numerical coefficient of the social acceleration, together with a progressive reduction in the numerical coefficient of the cognitive acceleration as particles get closer to gbest, could significantly increase the swarm attraction toward gbest, thus improving the convergence rate.
The novel paradigm that implements the particle swarm concept is derived from the previous remarks and can be summarized as follows. The numerical coefficient of the cognitive component of the stochastic acceleration was not set at 2 over the whole search space, but was progressively increased from zero, when the location of the particle coincided with gbest, to a proper maximum value when the distance of the particle from gbest reached a given value. It was then kept constant for particles even more distant from gbest. The numerical coefficient of the social component of the stochastic acceleration was also related to the distance of each particle from gbest, but a different type of distribution over the swarm was adopted. The value of such a coefficient was kept constant and equal to its own maximum value as long as the distance of the particle was less than a given value; then it was progressively decreased according to the distance from gbest. The inertia weight is also dynamically adjusted at each time step by taking into account the distance between particles and gbest: the greater the distance from gbest, the greater the numerical value of the inertia weight, and vice versa.
As a result, the local search ability of a particle will be enhanced as it gets closer to gbest, while the exploration ability of a particle will be improved as the distance from gbest increases. The particles on the outer periphery of gbest therefore acquire a role similar to that of "explorers", while the particles close to gbest acquire a role similar to that of "settlers", as they are able to perform a better exploitation of the search space around local minima.
Proper values of all numerical coefficients have to be assessed by means of benchmark test functions in order to provide the PSO algorithm with adapting rules. Other adaptive PSO algorithms have recently been suggested [1,4,49,51,53], but none of them assign different tasks to the particles of the swarm, unlike the current paper.
The novel approach outlined above has demonstrated surprising effectiveness and accuracy in searching for the optimal solution in numerous benchmark test functions, especially when its performance is compared with that of other PSO variants. Strategies for jumping out of a local minimum were also not essential to obtaining good results in many of the test functions.
The paper is arranged as follows: Section 2 briefly summarizes the basic concepts of the current PSO formulations that will be compared with the novel adaptive PSO algorithm here proposed; Section 3 describes the implementation of the novel search diversification strategy; Section 4 compares the novel algorithm with the standard PSO in terms of accuracy, exploration ability and capability to jump out of local minima; Section 5 describes the evolution of the swarm during the search process in comparison with the standard PSO; Sections 6 and 7 compare the novel version of PSO with other representative PSO variants, as well as other well-known meta-heuristic algorithms, on several benchmark functions (CEC2005 [46]) to better assess the performance of the proposed PSO algorithm; finally, Section 8 provides conclusions and additional remarks.
2. PSO basic concepts and a review of current formulations
In a swarm of N particles, each particle may be viewed as a potential solution in a D-dimensional search space. The position X_i^j and velocity V_i^j of the ith particle (i = 1, ..., N) on the jth dimension (j = 1, ..., D) are usually adjusted at each time step as follows [15,17,18,27,43–45]:

V_i^j(t+1) = ω · V_i^j(t) + c1 · rand_1i^j · (pbest_i^j(t) − X_i^j(t)) + c2 · rand_2i^j · (gbest^j(t) − X_i^j(t))    (1)

X_i^j(t+1) = X_i^j(t) + V_i^j(t+1)    (2)
where pbest_i^j(t) is the best position of the ith particle on the jth dimension at time t, gbest^j(t) is the best position found so far by the entire swarm, c1 and c2 are two constant coefficients balancing the relative importance of the stochastic acceleration terms related to pbest_i and gbest, ω is the inertia weight, and rand_1i^j and rand_2i^j are two random numbers generated uniformly in the range [0,1].

Differently from c1 and c2, whose values are usually constant over the whole search space, the random numbers rand_1i^j and rand_2i^j are different in each dimension. Other approaches have, however, been proposed [30,51] where each particle shared the same random value in each dimension, either to reduce the wideness of the search space for obtaining better performance on unrotated problems [30], or to reduce the swarm diversity caused by an inertia weight variable with the particle and the time step, for improving the convergence rate of the algorithm [51].
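As an illustration, the update rules of Eqs. (1)–(2) can be sketched as a minimal NumPy routine. The default parameter values here are the commonly used constricted settings discussed later in this section, not something Eqs. (1)–(2) prescribe:

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445, rng=None):
    """One velocity/position update of the global-best PSO, Eqs. (1)-(2).

    X, V, pbest: arrays of shape (N, D); gbest: array of shape (D,).
    rand_1i^j and rand_2i^j are drawn independently for every particle
    and every dimension, as in the standard formulation.
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(X.shape)   # rand_1i^j ~ U[0, 1]
    r2 = rng.random(X.shape)   # rand_2i^j ~ U[0, 1]
    V_new = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # Eq. (1)
    X_new = X + V_new                                              # Eq. (2)
    return X_new, V_new
```

Note that broadcasting makes gbest (shape `(D,)`) apply to every particle row without an explicit loop.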
Eq. (1) represents the global version of the particle swarm optimizer (GPSO). Each particle keeps track of its best position pbest and of the overall best position gbest of the swarm, simultaneously. However, the social attraction toward gbest often determines a premature convergence of the swarm toward a local optimum if the current gbest is far from the global optimum.
Over the last two decades, several variants of the standard GPSO version have been proposed and significant performance improvements have been attained. A brief overview is given of the current PSO formulations which will be compared later with the novel PSO version here proposed.
To improve the GPSO performance, a local version of the optimizer has been suggested [14,28] where each particle learns from the best solution lbest related to a local topological neighborhood of particles, instead of the entire swarm. Different topological structures were considered [28], such as the ring topology (RPSO) and the Von Neumann topology (VPSO), and all may be described by the following equation [11]:
V_i^j(t+1) = ω · V_i^j(t) + c1 · rand_1i^j · (pbest_i^j(t) − X_i^j(t)) + c2 · rand_2i^j · (lbest_i^j(t) − X_i^j(t))    (3)

where lbest_i^j is the best position found so far by the particle i along the jth dimension in a specified local topological neighborhood of particles. Eq. (2) remains unchanged.

Further extensions of the local version of the PSO were also developed, in which each particle could learn from the best position found by each particle within a specified topological neighborhood of particles (Fully Informed Particle Swarm Optimizer, FIPS) [33,29], or could learn from another particle randomly selected from the swarm with a frequency related to the population size (Comprehensive Learning Particle Swarm Optimizer, CLPSO) [30]. The following equations:
V_i^j(t+1) = χ · ( V_i^j(t) + Σ_{n=1}^{N_i} U(0, φ) · (pbest_nbr(n)^j(t) − X_i^j(t)) / N_i )    (4)

V_i^j(t+1) = ω · V_i^j(t) + c · rand_i^j · (pbest_{f_i(j)}^j(t) − X_i^j(t))    (5)

were proposed to update the velocity V_i^j of the particle i in the jth dimension when FIPS and CLPSO are chosen, respectively. In Eq. (4), χ is the constriction coefficient, N_i the number of neighbors of the particle i, φ the acceleration constant set to 4.1, U(0, φ) a random number generated uniformly in the range [0, φ], and nbr(n) the particle i's nth neighbor [29]. In Eq. (5), however, c is the acceleration constant set to 1.49445, while f_i(j) defines which particle's pbest the particle i should follow in the jth dimension [30]. As a result, pbest_{f_i(j)}^j might be the corresponding dimension of any particle's pbest in the swarm, including the best position pbest_i^j of the particle i itself, according to a learning probability Pc_i given by Liang et al. [30]:
Pc_i = 0.05 + 0.45 · ( exp(10(i−1)/(N−1)) − 1 ) / ( exp(10) − 1 )    (6)
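The learning probability of Eq. (6) depends only on the particle index i and the swarm size N, so it can be tabulated once at initialization; a small sketch:

```python
import math

def learning_probability(i, N):
    """Learning probability Pc_i of Eq. (6) (Liang et al., CLPSO).

    i ranges from 1 to N; particles with a higher index get a larger
    chance of learning from other particles' pbest in each dimension.
    """
    return 0.05 + 0.45 * (math.exp(10.0 * (i - 1) / (N - 1)) - 1.0) / (math.exp(10.0) - 1.0)
```

By construction Pc_1 = 0.05 and Pc_N = 0.5, so the probabilities span [0.05, 0.5] across the swarm.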
The constriction coefficient χ was first suggested by Clerc [11,12] to improve the ability of the swarm to find the optimal solution by limiting the particle velocity, given by Eq. (1), as follows:

χ = 2 / | 2 − φ − sqrt(φ² − 4φ) |,  with φ = c1 + c2 > 4

Usually, φ is set at 4.1, so that χ is equal to 0.729. As a result, the PSO algorithm with the constriction factor can be considered equivalent to Eq. (1) with ω = 0.729 and c1 = c2 = 1.49445 [17]; indeed, c1 and c2 are both set at 2 in the standard PSO algorithm to balance the opposite tendencies of the stochastic acceleration components related to pbest and gbest [27].
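As a quick numerical check, Clerc's closed-form expression for the constriction coefficient, χ = 2/|2 − φ − sqrt(φ² − 4φ)| for φ > 4 (standard in the constriction analysis of [11,12]), reproduces the quoted value:

```python
import math

def constriction(phi=4.1):
    """Clerc's constriction coefficient chi for phi = c1 + c2 > 4."""
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

With φ = 4.1 this gives χ ≈ 0.7298, matching the often-quoted ω = 0.729 and c1 = c2 = χ·φ/2 ≈ 1.494.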
All the above strategies increase the swarm diversity and enhance its ability to prevent premature convergence on local optima when solving multimodal problems, even though a larger number of function evaluations could be required to achieve an accuracy similar to that of the standard PSO in solving unimodal functions. Several research studies have recently emphasized the advantages, both in terms of accuracy and convergence rate, deriving from an adaptive control of the inertia weight and the acceleration coefficients c1 and c2 [1,4,8,9,39,45,49,51,53]. A brief overview of current adaptive PSO strategies has been included to highlight and better appreciate the features of the novel adaptive approach that will be described later.
2.1. Adaptive control of the inertia weight in the literature
The inertia weight ω was first introduced by Shi and Eberhart [41–43] and plays an important role in the search strategy of the swarm. A large inertia weight ω at the beginning of the run increases the exploration aptitude of the swarm, while a small value of ω enhances the local search near the end of the run, allowing the swarm to achieve a better approximation of the global optimum. The inertia weight is often reduced linearly from 0.9 to 0.4 during a run [16,43].
Over the past few years, several inertia weight strategies have been advanced [1,4,8,9,45,49,51,53] to improve the PSO performance. For instance, in [53] the inertia weight is related to the value of an evolutionary factor f and given by:

ω(f) = 1 / (1 + 1.5 · exp(−2.6 f))    (10)

where the evolutionary factor f is defined as:

f = (d_g − d_min) / (d_max − d_min)    (11)

Here d_g is the mean distance of the best particle from the other particles of the swarm, and d_max and d_min represent the maximum and minimum, respectively, of the mean distance of a particle from all the others, that is:

d_i = (1/(N−1)) · Σ_{l=1, l≠i}^{N} sqrt( Σ_{k=1}^{D} (x_i^k − x_l^k)² )    (12)
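A sketch of this distance-based adaptation, assuming the mean-distance and sigmoid formulas of the APSO strategy of Zhan et al. [53] (the swarm must not be fully collapsed, i.e. d_max > d_min):

```python
import numpy as np

def evolutionary_factor(X, best_idx):
    """Evolutionary factor f: mean Euclidean distance of each particle from
    all the others, normalized with the value for the globally best particle."""
    N = X.shape[0]
    d = np.array([np.linalg.norm(X[i] - X, axis=1).sum() / (N - 1)
                  for i in range(N)])
    d_g, d_min, d_max = d[best_idx], d.min(), d.max()
    return (d_g - d_min) / (d_max - d_min)   # assumes d_max > d_min

def inertia_weight(f):
    """Sigmoid mapping omega(f) = 1 / (1 + 1.5 e^{-2.6 f}), range (0.4, 0.9)."""
    return 1.0 / (1.0 + 1.5 * np.exp(-2.6 * f))
```

During exploration the best particle sits far from the crowd, f is close to 1 and ω close to 0.9; during convergence f drops toward 0 and ω toward 0.4, matching the behavior described below.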
On the other hand, in other research works the inertia weight is based on the ratio between the fitness value of the global best solution F(gbest) and the mean fitness value of the personal best solutions found so far by all the particles, (F(pbest_i))_average, and given by Arumugam et al. [4]:

ω = 1.1 − F(gbest) / (F(pbest_i))_average    (14)
or it is based on more elaborate schemes such as [49]:

ω_i^t = ω_ini − a · (1 − h_i^t) + b · s    (15)

where h_i^t, the so-called evolutionary speed factor of the particle i at time t, is given by
h_i^t = | min(F(pbest_i^{t−1}), F(pbest_i^t)) / max(F(pbest_i^{t−1}), F(pbest_i^t)) |    (16)

while s, the so-called aggregation degree factor, is given by

s = | min(Ftbest, Ft) / max(Ftbest, Ft) |    (17)
The strategy summarized by Eqs. (10)–(13) involves a relatively large value of the inertia weight ω in the exploration state, when the particles are scattered around the search space and the value of the evolutionary factor f is close to 1, and a moderately small value of ω during the convergence state, when all the particles are near gbest and f is close to zero.
Similarly, following the approach described by Eq. (14), the exploration aptitude of the swarm is encouraged by increasing the inertia weight ω when the personal best fitnesses F(pbest_i) of the particles differ noticeably from each other, namely, when the swarm is far from the optimum solution. Conversely, when they are close to each other and, on average, differ little from F(gbest), the local search ability of the swarm is enhanced by decreasing the inertia weight. Unfortunately, the risk that the swarm gets stuck in local optima is not effectively impeded in either of the previous approaches.
The strategy summarized by Eqs. (15)–(17) assigns a proper inertia weight to each particle according to the value of the factor h_i^t, which is related to the aptitude of a particle to improve its own performance from iteration to iteration. A small value of h_i^t denotes that a particle is able to arrive closer to the optimal solution, so that a reduction in the inertia weight value is proposed to enhance the search ability of the swarm in the current search region. However, when the mean fitness Ft of all particles differs little from the optimal value Ftbest found by the swarm, the aggregation factor s becomes larger, and the consequent increase in the inertia weight value gives the swarm a fair chance of jumping out of a local optimum.
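Putting Eqs. (15)–(17) together in a short sketch (the weighting constants a and b are user-chosen in [49] and are left here as parameters; the reconstruction of Eq. (15) as ω_ini − a(1 − h) + b·s follows the qualitative behavior described above):

```python
def evolutionary_speed(f_pbest_prev, f_pbest_curr):
    """Evolutionary speed factor h of Eq. (16): equal to 1 when the personal
    best has stopped improving, smaller when it is still improving quickly."""
    return abs(min(f_pbest_prev, f_pbest_curr) / max(f_pbest_prev, f_pbest_curr))

def aggregation_degree(f_best, f_mean):
    """Aggregation degree s of Eq. (17): close to 1 when the mean swarm
    fitness differs little from the best value found so far."""
    return abs(min(f_best, f_mean) / max(f_best, f_mean))

def adaptive_inertia(w_ini, a, b, h, s):
    """Per-particle inertia update of Eq. (15): omega = omega_ini - a(1-h) + b*s."""
    return w_ini - a * (1.0 - h) + b * s
```

A fast-improving particle (small h) gets a smaller ω for local refinement, while a highly aggregated swarm (large s) gets a larger ω to help jump out of local optima.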
2.2. Adaptive control of the acceleration coefficients in literature
Several strategies were also advanced to dynamically update the acceleration coefficients c1 and c2 from iteration to iteration [4,39,45,49,53]. A linear decrease of both acceleration coefficients with time was first presented by Suganthan [45], even though better results were obtained when c1 and c2 were viewed as constants, but not necessarily set at 2. A linear variation of both acceleration coefficients with time was also suggested by Ratnaweera et al. [39], but with a different strategy. This approach aimed at enhancing the exploration ability of the swarm at the beginning of the search process, and at improving the convergence to the global optimum toward the end of the optimization process. As the exploration and exploitation aptitudes of the swarm are related to the cognitive and social components of the stochastic acceleration, respectively, the coefficients c1 and c2 were changed with time as follows [39]:
c1 = (c1f − c1i) · t/Maxiter + c1i    (18)

c2 = (c2f − c2i) · t/Maxiter + c2i    (19)

where t denotes the current iteration and Maxiter the maximum number of admissible iterations. The values of c1i and c1f at the start and at the end of the search process were set at 2.5 and 0.5 respectively, while the corresponding values of c2i and c2f were set at 0.5 and 2.5 respectively.

A more sophisticated adaptive particle swarm optimization (APSO) strategy was formulated by Zhan et al. [53] to update the value of the acceleration coefficients c1 and c2 with time. Four different evolutionary states, namely exploration, exploitation, convergence and jumping-out, were defined in terms of an evolutionary factor f given by Eq. (11). Careful observation of the evolution of the factor f during several test cases allowed Zhan et al. [53] to propose an adequate correlation between the numerical value of f and the aforementioned evolutionary states. A fuzzy classification of the evolutionary states was also recommended to manage the transitions among them. Overall, it emerged that the exploration state is characterized by a medium to large value of the evolutionary factor f, while the exploitation state and the convergence state are marked by a shrinking and a minimal value of f, respectively. The largest values of the factor f describe the jumping-out evolutionary state.

As regards the acceleration coefficients c1 and c2, the following strategy was proposed in [53]: increasing c1 and decreasing c2 in the exploration state, increasing c1 slightly and decreasing c2 slightly in the exploitation state, increasing both c1 and c2 slightly in the convergence state, and decreasing c1 and increasing c2 in the jumping-out state. The adjustment of the acceleration coefficients according to the evolutionary state was carried out gradually by limiting their maximum increment or decrement between two successive iterations to the interval [0.05, 0.1]. The sum of c1 and c2 was also restricted to the interval [3.0, 4.0]. Finally, a jumping-out mechanism based on an elitist learning strategy was proposed to enhance the ability of the swarm to escape from local minima. Such a strategy was applied to the best particle when the swarm was classified in the evolutionary state of convergence, and the following formulation was suggested:
X^d = X^d + (X^d_max − X^d_min) · Gaussian(μ, σ²)    (18)

where X^d denotes the randomly selected dimension of gbest which is subjected to a Gaussian perturbation, [X^d_min, X^d_max] the lower and upper bounds of the problem, and Gaussian(μ, σ²) a random number drawn from a normal distribution with zero mean μ and standard deviation σ, the latter decreasing linearly with iterations in the interval [0.1, 1.0].
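A sketch of this elitist learning step, assuming gbest is stored as a vector and that the perturbed position is clipped back into the bounds (the clipping is an assumption of this sketch; the text only specifies the perturbation itself):

```python
import numpy as np

def elitist_learning(gbest, lb, ub, iteration, max_iter, rng=None):
    """Gaussian perturbation of one randomly chosen dimension of gbest.

    sigma decreases linearly from 1.0 to 0.1 over the run, as in the
    APSO jumping-out mechanism; lb and ub are the problem bounds.
    """
    rng = rng or np.random.default_rng()
    d = rng.integers(len(gbest))              # dimension to perturb
    sigma = 1.0 - 0.9 * iteration / max_iter  # linear 1.0 -> 0.1
    candidate = gbest.copy()
    candidate[d] += (ub[d] - lb[d]) * rng.normal(0.0, sigma)
    return np.clip(candidate, lb, ub)         # keep the candidate feasible
```

The candidate replaces gbest only if its fitness is better, which is the usual elitist acceptance rule.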
It is worth remarking that all the aforementioned adaptive strategies modify the value of the acceleration coefficients c1 and c2 from iteration to iteration, but at any iteration all the particles share the same values of c1 and c2. A different strategy was adopted by Yamaguchi and Yasuda [49] to update the acceleration coefficients with time, which may be summarized as follows:
c_{1i}^{t+1} = c_{1i}^t + a_i^t · (cbest_1^t − c_{1i}^t),  i = 1, 2, ..., N    (19)

c_{2i}^{t+1} = c_{2i}^t + a_i^t · (cbest_2^t − c_{2i}^t),  i = 1, 2, ..., N    (20)

where t denotes the current iteration, cbest_1 and cbest_2 are the acceleration coefficients c1 and c2 of the particle that was able to update gbest in a previous iteration, and a_i^t is set at 2/Maxiter if there is a particle whose fitness is better than f(gbest) + 1, otherwise a_i^t is set at zero. Unlike the previous updating approaches, particles may differ from one another owing to different values of the acceleration coefficients c1 and c2.
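Eqs. (19)–(20) reduce to a single per-particle relaxation toward the coefficients of the last gbest-improving particle; a minimal sketch (the `improved` flag stands for the fitness-threshold test described above):

```python
def update_coefficient(c_i, cbest, improved, max_iter):
    """Per-particle coefficient update of Eqs. (19)-(20).

    Each particle's c1 (or c2) drifts toward cbest, the coefficient of the
    particle that last improved gbest, with step a = 2/Maxiter; when no
    particle passed the improvement threshold, a = 0 and c_i is unchanged.
    """
    a = 2.0 / max_iter if improved else 0.0
    return c_i + a * (cbest - c_i)
```

Because the step is proportional to (cbest − c_i), coefficients converge geometrically toward cbest while the swarm keeps improving.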
3. The proposed adaptive PSO strategy
The adaptive PSO strategy proposed in this paper is based on a different concept of cooperation among the particles of the swarm. In the standard PSO algorithm, the weight of the cognitive experience of a particle is the same as that of the social experience acquired by the swarm, to balance the attraction toward gbest with the wandering aptitude of each particle in the search space. Such a compromise has generally proved useful in avoiding premature convergence to a local optimum, even though it is very hard to prevent early convergence when solving multimodal problems.
Unlike other adaptive PSO strategies for tuning both the inertia weight ω and the acceleration coefficients c1 and c2, the model here proposed is based on the following concept of particle cooperation. The particles on the external periphery of the swarm should be mainly involved in enhancing the exploration ability of the swarm, while the particles closer to the global best solution discovered so far should be mainly involved in the local refinement of the current best solution, that is, in the exploitation of the current promising search space. These objectives can be achieved by varying both the inertia weight ω and the acceleration coefficients c1 and c2 in each spatial dimension, as described in Figs. 1 and 2.
To be more precise, in the jth dimension c1 was progressively increased from zero, when the location of the particle coincided with gbest, to a proper maximum value (c_1max^j) when the distance of the particle from gbest reached d_max^j/2 (where d_max^j = max(|gbest^j − x_i^j|, i = 1, ..., N) is the maximum distance of the swarm from gbest in the jth dimension), and then it was kept constant for particles even more distant from gbest (Fig. 1a). On the other hand, in the same jth dimension, c2 was kept constant and equal to its own maximum value as long as the distance of the particle was less than d_max^j/3; then it was progressively decreased with the distance from gbest (Fig. 1b). The inertia weight was also dynamically adjusted at each time step by taking into account the distance between particles and gbest: the greater the distance from gbest, the greater the numerical value of the inertia weight, and vice versa (Fig. 2).

As a particle comes closer to gbest, the progressive reduction in c1 and the concurrent increase in c2 make it more and more attracted to gbest, while the concurrent reduction of the inertia weight ω allows the particle itself to enhance its own local search ability. On the other hand, the high value of c1 for distances from gbest greater than or approximately equal to d_max^j/2, together with concomitant low values of c2 and high values of ω, allows the particles on the external periphery of the swarm to enhance wanderings around their pbest_i^j, thus improving their exploration aptitude and making the swarm less inclined to be trapped in a local optimum.
Fig. 1. Distribution of the acceleration coefficients for the ith particle as a function of the particle distance d_i^j from the global best solution: (a) c1; (b) c2. d_max^j is the maximum distance reached by the swarm in the jth dimension.

Fig. 2. Distribution of the inertia coefficient for the ith particle as a function of the particle distance d_i^j from the global best solution. d_max^j is the maximum distance reached by the swarm in the jth dimension.
The discontinuity in the trend of the acceleration coefficients c1 and c2 shown in Fig. 1 was not considered to adversely affect the effectiveness of the algorithm, since the values of c1 and c2 are multiplied by a random number; rather, it is a simple and effective expedient for achieving a strong diversification in terms of particle tasks.
The velocity of each particle of the swarm will be updated at each time step by means of the following modified equation:
V_i^j(t+1) = ω_i^j · V_i^j(t) + c_1i^j · rand_1i^j · (pbest_i^j(t) − X_i^j(t)) + c_2i^j · rand_2i^j · (gbest^j(t) − X_i^j(t))    (21)
where, unlike other adaptive PSO strategies, different values of the coefficients c1, c2 and ω are assigned to each particle in each dimension of the search space. The random numbers rand_1i^j and rand_2i^j were assumed to be different for each particle and each dimension. Furthermore, to allow the peripheral particles of the swarm to explore a wider region of the search space, rand_1i^j was generated uniformly in the interval [−1, 1] for particles with a distance greater than d_max^j/2. The following assignments were adopted:
rand_2i^j ∈ [0, 1]    (22)

rand_1i^j ∈ [0, 1]  if d_i^j ≤ d_max^j/2
rand_1i^j ∈ [−1, 1] if d_i^j > d_max^j/2    (23)
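A sketch of the modified update of Eqs. (21)–(23), assuming the coefficient arrays have already been assigned per particle and per dimension according to the schedules of Figs. 1 and 2:

```python
import numpy as np

rng = np.random.default_rng(0)

def update_velocity(V, X, pbest, gbest, c1, c2, w, d, d_max):
    """Eq. (21) with per-particle, per-dimension coefficients.
    V, X, pbest, c1, c2, w, d: (np_, nD) arrays; gbest, d_max: (nD,) arrays."""
    # Eq. (22): rand2 uniform in [0, 1] for every particle and dimension
    rand2 = rng.uniform(0.0, 1.0, size=X.shape)
    # Eq. (23): rand1 in [0, 1] normally, but in [-1, 1] for peripheral
    # particles whose distance from gbest exceeds d_max/2 in that dimension
    rand1 = rng.uniform(0.0, 1.0, size=X.shape)
    peripheral = d > d_max / 2.0
    rand1[peripheral] = rng.uniform(-1.0, 1.0, size=int(peripheral.sum()))
    return w * V + c1 * rand1 * (pbest - X) + c2 * rand2 * (gbest - X)
```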
The particle task differentiation carried out by means of the simple but pertinent trends of c1, c2 and ω over the search space, as shown in Fig. 1 and then condensed into Eq. (21), has yielded very good results in terms of solution accuracy and convergence rate. Furthermore, a remarkable ability of the swarm to jump out of local optima was also detected in many multi-modal benchmark functions, without resorting to dedicated jumping-out strategies. However, when a particle is essentially stuck in a local optimum, a condition computationally checked by:
( |f(X_i(t − 2)) − f(X_i(t − 1))| ≤ 10^−10 )  and  ( |X_i(t − 2) − X_i(t − 1)| ≤ 10^−5 )

the following action is triggered:
X_s^j(t+1) = lb_j + rand_s^j · (ub_j − lb_j),   j = 1, …, D    (24)
where s identifies the particle trapped in a local minimum, ub_j and lb_j are the upper and lower boundary limits in the jth dimension respectively, and rand_s^j is a random number uniformly distributed in the interval [0, 1], not shared by all the dimensions of the search space.
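The stagnation check and the re-initialization of Eq. (24) might be sketched as follows; the thresholds 10^−10 and 10^−5 are those given above, while the bookkeeping of the last two positions and fitness values per particle is an assumption:

```python
import random

STAG_F, STAG_X = 1e-10, 1e-5  # thresholds from the stagnation check above

def restart_if_trapped(x_prev2, x_prev1, f_prev2, f_prev1, lb, ub):
    """Re-initialize a particle uniformly in the search box (Eq. 24) when its
    fitness and position have been essentially frozen over the last two steps."""
    stalled = (abs(f_prev2 - f_prev1) <= STAG_F and
               all(abs(a - b) <= STAG_X for a, b in zip(x_prev2, x_prev1)))
    if stalled:
        # Eq. (24): an independent uniform draw in each dimension j = 1..D
        return [lb[j] + random.random() * (ub[j] - lb[j]) for j in range(len(lb))]
    return list(x_prev1)
```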
A further action has been introduced into the algorithm to make it much more efficient when a particle exhibits the best fitness found so far by the entire swarm. The velocity V_i^j(t+1) of the newest best particle is set to zero and then the particle is randomly moved in a small region around its own current location:
X_i^j(t) = X_i^j(t − 1) · rand^j[0.9, 1.1]
This action is aimed at enhancing the local exploitation aptitude of the swarm soon afterward; indeed, the newest best particle will move slowly toward gbest, the only acceleration component being that related to gbest, and the inertia weight ω being close to its own minimum value.
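A sketch of this perturbation, assuming the bracket denotes an independent multiplicative factor drawn uniformly in [0.9, 1.1] for each dimension:

```python
import random

def perturb_new_best(X_i, V_i):
    """Zero the velocity of the newest best particle, then move it slightly
    around its current location: X_i^j <- X_i^j * rand^j in [0.9, 1.1]."""
    for j in range(len(V_i)):
        V_i[j] = 0.0
    return [x * random.uniform(0.9, 1.1) for x in X_i]
```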
The new proposal of the adaptive PSO algorithm is conceptually simple and easy to implement. The flow-chart of the procedure is presented in Fig. 3.
Fig. 3. Flow-chart of the new proposal of adaptive PSO algorithm.
4. Comparison with standard PSO algorithms
The evolution of the swarm with the new adaptive PSO proposal was compared with two versions of the standard PSO algorithm, both having c1 = c2 = 2: the first one has the inertial coefficient ω linearly decreasing with the iterations from 0.9 to 0.4, whereas the second one has ω = 0.4. The parameters of the ASD-PSO algorithm (Adaptive Search Diversification in PSO) were set as follows: C_1max^j = 3, C_2max^j = 3, C_2min^j = 0.5, ω_min = 0.4 and ω_max = 0.9. Generally, improvements in the results
Table 1. Benchmark test functions.

Test function   nD   Search space   Global optimum f*   Accuracy ε   Runs (r)   Max FEs   Ref.
were observed when C_2max^j is chosen in the range between 2.5 and 3.5. To guarantee swarm stability, as suggested by Zhan et al. [53], the sum of C_1i^j and C_2i^j should not exceed 4.0. The population size was equal to 20.
Experiments on different types of benchmark test functions (unimodal and multimodal) were carried out to assess the PSO-based algorithms in terms of accuracy, exploration ability and capability to jump out of local optima without resorting to appropriate strategies. To compare the performance of the ASD-PSO with the standard PSOs, the experimental set-ups in terms of the number of runs, the maximum number of function evaluations, the search domain and the required accuracy were defined in agreement with the results available in the literature. The list of the considered test functions with the corresponding experimental set-ups is reported in Table 1. Detailed information about the function equations is reported in Appendix A.
Table 2 presents the results yielded by the ASD-PSO and by the standard PSOs (with and without the linear decreasing of the inertial coefficient) in terms of mean performance over a fixed number of runs (see Table 1). The performance was judged on the basis of two parameters: the mean number of function evaluations needed to reach the required accuracy ε, i.e. |f − f*| < ε, where f is the value of the solution found by the algorithm and f* is the known global optimum of the test function; and the success rate, namely the percentage of successful runs, i.e. runs in which the algorithm found a solution with accuracy ε within the maximum allowed number of function evaluations. Results of the standard PSO without the linearly decreasing inertial coefficient were only partially available in the literature.
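These two performance parameters can be illustrated with a short sketch; the bookkeeping (each run reporting the FE count at which it first satisfied |f − f*| < ε, or None if it never did) is an assumption:

```python
def performance(fe_per_run):
    """Mean FEs over successful runs and success rate (%) over all runs.
    fe_per_run: list with the FE count of each successful run, None otherwise."""
    successes = [fe for fe in fe_per_run if fe is not None]
    mean_fes = sum(successes) / len(successes) if successes else float("nan")
    success_rate = 100.0 * len(successes) / len(fe_per_run)
    return mean_fes, success_rate
```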
The performance of the ASD-PSO is good, since it achieves the best result in each test function in terms of both mean number of FEs and success rate. It is able to adjust itself well both to unimodal functions (such as the Rosenbrock function) and to multi-modal functions (such as the Rastrigin function) and, unlike the standard PSOs, an increased number of dimensions does not significantly affect the algorithm performance.
As regards the accuracy reached by the ASD-PSO and by the standard PSOs (with and without the linear decreasing of the inertial coefficient), Table 3 reports the mean and minimum errors yielded by the three algorithms after 2 × 10^5
Table 2. Performance results of the ASD-PSO and of the standard PSO with and without linear decreasing of the inertial coefficient: mean number of function evaluations (FEs) and percentage of successful runs. The best results for each function are printed in bold type.

Test function   nD   ASD-PSO   PSO lin ω = 0.9→0.4   PSO ω = 0.4
Table 3. Accuracy results of the ASD-PSO and of the standard PSO with and without linear decreasing of the inertial coefficient: mean and minimum error yielded by the algorithm and standard deviation of the errors; t-statistic and t-test results. The best results for each function are printed in bold type. The significance level of the two-sample t-test between ASD-PSO and the other algorithms is 0.05. A t-test result of "1" means that the results obtained by ASD-PSO are significantly better than those obtained by the other algorithm; "0" means that the two algorithms are statistically equivalent; "−1" means that the ASD-PSO performs significantly worse than the other algorithm.

                                 ASD-PSO     PSO lin ω = 0.9→0.4   PSO ω = 0.4
f7   Rastrigin        Mean       3.98E−14    25.24                 57.64
                      Min        0           14.92                 26.86
                      Std Dev    7.03E−14    5.21                  14.58
                      t-stat                 −5.00                 −5.00
                      t-test                 1                     1
f8   Sphere           Mean       1.30E−293   7.08E−53              2.65E−161
                      Min        8.26E−394   2.02E−56              1.57E−170
                      Std Dev    0           1.71E−52              2.38E−161
                      t-stat                 −5.00                 −4.87
                      t-test                 1                     1
f9   Rosenbrock       Mean       18.66       25.514                11.657
                      Min        15.22       1.44E−02              2.36E−04
                      Std Dev    1.99        25.867                15
                      t-stat                 −1.34                 2.96
                      t-test                 0                     −1
f10  Schwefel P2.22   Mean       1.66E−149   3.13E−34              1.20E−88
                      Min        1.84E−155   8.57E−38              2.13E−99
                      Std Dev    8.77E−149   6.41E−34              6.31E−88
                      t-stat                 −5.00                 −5.00
                      t-test                 1                     1
f11  Quadratic        Mean       3.59E−284   5.76E−02              7.95E−12
                      Min        5.70E−316   2.94E−03              4.47E−14
                      Std Dev    0           6.16E−02              1.64E−11
                      t-stat                 −5.00                 −5.00
                      t-test                 1                     1
f12  Step             Mean       0           0                     2.67E−01
                      Min        0           0                     0
                      Std Dev    0           0                     4.50E−01
                      t-stat                                       −5
                      t-test                 0                     1
f13  Schwefel         Mean       0.01        1326.51               3713.26
                      Min        0.01        829.07                2763.62
                      Std Dev    2.63E−12    332.67                520.384
                      t-stat                 −5.00                 −5.00
                      t-test                 1                     1
f14  NC Rastrigin     Mean       1.89E−14    10                    20.73
                      Min        0           1                     1
                      Std Dev    3.45E−14    15.411                7.08
                      t-stat                 −5.00                 −5.00
                      t-test                 1                     1
f15  Ackley           Mean       9.06E−15    1.10E−14              1.601
                      Min        7.99E−15    7.69E−15              7.69E−15
                      Std Dev    2.49E−15    2.27E−15              1.028
                      t-stat                 −0.86                 −5.00
                      t-test                 0                     1
f16  Griegwank        Mean       2.83E−03    1.65E−02              1.51E−02
                      Min        0           0                     0
                      Std Dev    5.27E−03    1.69E−02              1.75E−02
                      t-stat                 −3.95                 −3.84
                      t-test                 1                     1
Sig. better                                  7                     9
Sig. equivalent                              3                     0
Sig. worse                                   0                     1
function evaluations (FEs) and the standard deviations of the errors. The results yielded by the ASD-PSO were compared with those obtained by the standard PSOs using a two-sample t-test [35], that is, a hypothesis test on the difference in means between two normal distributions with unknown variances.
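Such a comparison can be sketched with Welch's form of the two-sample t-test (unknown, possibly unequal variances); the fixed critical value t_crit ≈ 2.0 used here to approximate the 5% two-tailed threshold, and the mapping to the 1/0/−1 score of Table 3, are assumptions consistent with the table legend:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic for two samples with unknown, unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

def compare_errors(errors_asd, errors_other, t_crit=2.0):
    """Map the t-statistic to the 1/0/-1 score of Table 3: 1 if ASD-PSO's
    errors are significantly smaller, -1 if larger, 0 if equivalent.
    t_crit ~ 2.0 approximates the 5% two-tailed threshold (an assumption;
    the exact value depends on the degrees of freedom)."""
    t = welch_t(errors_asd, errors_other)
    if abs(t) < t_crit:
        return 0
    return 1 if t < 0 else -1  # negative t: ASD-PSO's mean error is lower
```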
The ASD-PSO achieves significantly better results in 7 out of 10 test functions in comparison with the PSO with linear decreasing of the inertial coefficient, showing equivalent performance in the remaining 3 functions. As regards the comparison with the PSO with constant inertial coefficient, it achieves better results in 9 out of 10 test functions, the exception being the Rosenbrock function, in which the standard PSO variant shows a smaller mean and minimum error, even if with a much greater standard deviation. The very low values of the standard deviations in each test problem highlight a significant robustness of the algorithm in terms of repeatability of the results.
The robustness of the ASD-PSO and its fast convergence are further illustrated in Figs. 4 and 5, which report the mean convergence curve over the considered number of runs for each test problem, together with the two runs yielding the worst and the best fitness values. Here it is clear that in the unimodal functions the accuracy reached in each run is not only
Fig. 4. Convergence histories in different test problems: mean convergence history (solid line), best convergence history (the run yielding the best fitness value, dashed line) and worst convergence history (the run yielding the worst fitness value, dotted line): (a) f7 – Rastrigin, (b) f8 – Sphere, (c) f9 – Rosenbrock, (d) f10 – Schwefel P2.22, (e) f11 – Quadratic, (f) f12 – Step.
Fig. 5. Convergence histories in different test problems: mean convergence history (solid line), best convergence history (dashed line) and worst convergence history (dotted line): (a) f13 – Schwefel, (b) f14 – NC Rastrigin, (c) f15 – Ackley, (d) f16 – Griegwank.
very similar, as already highlighted by the small standard deviation values (Table 3), but the convergence history is also highly repeatable. In the multi-modal functions (Figs. 4a and 5b–d), the ability of the exploring particles to identify the global minimum is affected by the initial random generation of the swarm, thus determining a different convergence history in each run. However, this does not affect the reached accuracy, which is very similar in all the runs.
Fig. 6. Search behavior of the adaptive PSO: fitness evolution (solid line) and population standard deviation (dotted line).
Fig. 7. Search behavior of the standard PSO (linear decreasing of the inertial coefficient): fitness evolution (solid line) and population standard deviation (dotted line).
5. Evolution of the particle swarm during the search process
The particle task differentiation that characterizes the ASD-PSO can also be seen in the analysis of the different swarm evolutions during the search process in comparison with the standard PSOs (Figs. 6 and 7). To compare the different searching behaviors, the evolution of a parameter called the population standard deviation (PSD) was considered, which measures the dispersion of the particles around the barycentre x_b of all the particles, with np the population size and nD the number of dimensions.
Figs. 6 and 7 report the results obtained on the Rastrigin function (nD = 2) by the ASD-PSO and the standard PSO (with the
linear decreasing of the inertial coefficient). The PSD is plotted together with the corresponding fitness value to highlight the effectiveness of the search process during the swarm evolution (Figs. 6 and 7).
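A sketch of the PSD computation, assuming an RMS dispersion of the np particles around the barycentre x_b over the nD dimensions (the exact normalization is an assumption):

```python
from math import sqrt

def population_standard_deviation(X):
    """RMS dispersion of the swarm around its barycentre.
    X: list of np particles, each a list of nD coordinates."""
    n_p, n_d = len(X), len(X[0])
    barycentre = [sum(x[j] for x in X) / n_p for j in range(n_d)]
    sq = sum((x[j] - barycentre[j]) ** 2 for x in X for j in range(n_d))
    return sqrt(sq / (n_p * n_d))
```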
The significant difference in the number of FEs (400 for the ASD-PSO vs. 6780 for the standard PSO) required to reach an accuracy of 10^−2 seems to be strictly related to the different behavior of the swarm in the two algorithms. In the standard PSO, the swarm maintains a high value of the PSD (mean PSD = 3.012) without exhibiting a well-defined spatial structure during the entire search process (Fig. 8). The particle displacements, although affected by the swarm and particle best positions, are chaotic in the search domain, and the solution refinement is driven to a greater degree by randomness than by an organized refining process. This seems to be the reason for the high number of FEs needed by the standard PSO. On the other hand, in the ASD-PSO the swarm automatically adapts the population distribution during the search process, appropriately combining its refinement and exploration capabilities. Due to the particle task differentiation, an increasing number of particles refines the swarm's current best solution during the search process, as testified by the progressive reduction of the PSD with the fitness values (Fig. 6). However, during this refinement process the swarm maintains its exploration capability, which makes it possible to avoid early convergence to local minima.
The particle task differentiation is clearly highlighted by observing the swarm distribution during the search process (Fig. 9). Two main groups of particles become increasingly well-defined during the swarm evolution: one closer to the global best solution, involved in the local search refinement, and one on the external periphery, still exploring the search domain. The resulting adaptive behavior of the swarm determines the significant enhancement of the PSO performance in comparison with the standard PSO, in which no tasks are assigned to the particles (Fig. 8).
6. Comparison with PSO variants
To validate its effectiveness, the ASD-PSO has been compared with eleven of the most representative PSO variants: local PSO variants with a ring topology (RPSO) and a von Neumann topology (VPSO) [28], a fully informed particle swarm algorithm (FIPS) [33], a hierarchical PSO with time-varying acceleration coefficients (HPSO-TVAC) [39], a comprehensive learning PSO (CLPSO) [30], a dynamic multiswarm PSO (DMPSO) [31], an adaptive PSO with adaptive control of the inertia weight and acceleration coefficients (APSO) [53], the PSO with an aging leader and challengers (ALC-PSO) [10], the standard PSO incorporating the constriction factor (χ-PSO) [12], the barebones PSO [26] and the Unified PSO [36].
On the basis of the results found in the literature [10], a first comparison was carried out on 10 typical benchmark functions (f7–f16 in Table 1). To ensure a fair comparison, the experimental setup was carefully replicated: population size (20 particles), maximum number of function evaluations, number of independent runs, search domain and required accuracy. Details are reported in Table 1.
Fig. 8. Swarm distributions at four different instants of the search process of the standard PSO. The corresponding instants are highlighted in Fig. 7.
Table 4 presents the results yielded by the ASD-PSO and by the aforementioned PSO variants in terms of mean performance over a fixed number of runs. Also in this case, the comparison was based on two performance parameters: the mean number of function evaluations needed to reach the required accuracy (|f − f*| < ε) and the success rate. Results of the APSO were only partially available in the literature.
The ASD-PSO achieves the best result in 9 out of 10 test functions, showing fast convergence and a high success rate in both unimodal and multimodal functions. In the 10th test function (f16 – Griegwank) the ASD-PSO confirms its fast convergence with the lowest number of FEs. However, a few unsuccessful runs affect the algorithm's performance, thereby lowering its final rank (3rd out of 10). For some of these test functions (f11 – Quadratic, f12 – Step, f13 – Schwefel and f15 – Ackley), the adaptive search diversification strategy makes it possible to reduce the number of FEs by at least one order of magnitude and in some cases by two.
The algorithms have also been compared in terms of the accuracy reached. Table 5 reports the mean and minimum errors yielded by the algorithms after 2 × 10^5 function evaluations (FEs) and the standard deviations of the errors. To guarantee a fair comparison between the algorithms, the results were compared using two-sample t-tests [35].
The ASD-PSO achieves significantly better results in 5 out of 10 test functions and better or at least equivalent performance in two others (f12 and f16). As regards the remaining ones (f7, f9 and f15), the performance of the algorithm can be considered quite satisfactory. In the Rosenbrock function (f9) the algorithm presents a lower refinement capability (ranking 4th) than other PSO variants, even if a good repeatability is nonetheless confirmed by the lowest standard deviation value. In the Rastrigin and Ackley functions (f7 and f15) the performance is rather good, since in both cases the algorithm ranks third, with a mean accuracy lower than 10^−14 and with some runs that find the exact solution (minimum error = global optimum) or solutions almost equal to those of the best-performing algorithm.
Finally, to better ascertain the quality of the proposed PSO variant in comparison with the others, a further wide test was carried out on the well-balanced real-world benchmark function set CEC2005, on the basis of the results found in Epitropakis et al. [18]. This set comprises 25 benchmark functions that can be subdivided into four groups: five unimodal functions (F1–F5), seven basic multimodal functions (F6–F12), two expanded multimodal functions (F13–F14) and eleven hybrid composition functions (F15–F25). Detailed information about the functions, search domains and accuracy levels can be found in [46].
Table 4. Performance results of the ASD-PSO and the PSO variants: mean number of function evaluations (FEs) and percentage of successful runs (SR). The best results for each function are printed in bold type.
ASD-PSO RPSO VPSO FIPS HPSO-TVAC CLPSO DMPSO APSO ALC-PSO
Fig. 9. Swarm distributions at four different instants of the search process of the ASD-PSO. The corresponding instants are highlighted in Fig. 6.
Table 5. Accuracy results of the ASD-PSO in comparison with 8 PSO variants: mean and minimum error yielded by the algorithm and standard deviation of the errors; t-statistics and t-test results. The best results for each function are printed in bold type. The significance level of the two-sample t-test between ASD-PSO and the other algorithms is 0.05. A t-test result of "1" means that the results obtained by ASD-PSO are significantly better than those obtained by the other algorithm; "0" means that the two algorithms are statistically equivalent; "−1" means that the ASD-PSO performs significantly worse than the other algorithm.
ASD-PSO RPSO VPSO FIPS HPSO-TVAC CLPSO DMPSO APSO ALC-PSO
Tables 6 and 7 present the objective function errors (mean values and standard deviations) reached by the different PSO variants after nD × 10^4 function evaluations on the 25 test problems with nD = 30 and 50, respectively. Also in this case, the results were compared using a two-sample t-test [35].
Table 6. Objective function error values reached by the algorithms after 3 × 10^5 function evaluations (FEs) on the 25 test problems [46] in dimension nD = 30: mean value and standard deviation; t-test results. The significance level of the two-sample t-test between ASD-PSO and the other algorithms is 0.05. A t-test result of "1" means that the results obtained by ASD-PSO are significantly better than those obtained by the other algorithm; "0" means that the two algorithms are statistically equivalent; "−1" means that the ASD-PSO performs significantly worse than the other algorithm.
Table 7. Objective function error values reached by the algorithms after 5 × 10^5 function evaluations (FEs) on the 25 test problems [46] in dimension nD = 50: mean value and standard deviation; t-test results (significance level and legend as in Table 6).
As can be seen, the ASD-PSO ranks first in 15 out of 25 problems for nD = 30 and in 11 out of 25 problems for nD = 50, and in the remaining tests it is overcome on average by only one algorithm for both nD = 30 and nD = 50. None of the considered PSO variants is able to statistically overcome the ASD-PSO in more than 9 out of 25 test problems for nD = 30 and 8 out of 25 problems for nD = 50. Moreover, in some test problems (f2, f7 and f25 for nD = 30; f2, f3, f7 and f25 for nD = 50) the ASD-PSO is able to reach objective function errors significantly smaller than those of the other PSO variants.
7. Comparison with other meta-heuristics approaches
PSO-based methods are part of a rather large family of algorithms for continuous optimization with different characteristics and features, including other swarm-based algorithms, probability-learning methods, gradient-based algorithms, evolutionary algorithms, and others. To really appreciate the performance of the ASD-PSO, it is necessary to compare it not only with PSO variants (Section 6), but also with a proper selection of the best-performing optimization algorithms. This selection cannot be regarded as exhaustive because of the vastness of the optimization field applied to the continuous domain.
Thus, a wide test was carried out on the well-balanced real-world benchmark function set CEC2005 [46]. Several meta-heuristic approaches belonging to different groups (evolutionary algorithms, swarm-based algorithms, genetic algorithms, etc.) were taken into account (Table 8). Detailed information about the functions, search domains and accuracy levels can be found in [46].
Table 9 reports, for each function and for three problem dimensions (nD = 10, 30 and 50), the success rate (SR) and the success performance (SP):
SP = FE_succ_runs · r / r_succ    (25)

where FE_succ_runs is the mean number of FEs required by the successful runs, r is the total number of runs and r_succ is the number of successful runs.
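Eq. (25) translates directly into code; the handling of the zero-success case is an assumption:

```python
def success_performance(fe_successful_runs, total_runs):
    """Eq. (25): SP = mean FEs of successful runs * (total runs / successful runs).
    fe_successful_runs: FE counts of the successful runs only."""
    r_succ = len(fe_successful_runs)
    if r_succ == 0:
        return float("inf")  # no successful run: SP is unbounded (assumption)
    mean_fe = sum(fe_successful_runs) / r_succ
    return mean_fe * total_runs / r_succ
```

For instance, two successful runs at 1000 and 2000 FEs out of four total runs give SP = 1500 · 4/2 = 3000.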
As can be seen, none of the considered algorithms is able to solve all the proposed tests. It is nonetheless interesting to notice that, in comparison with the other algorithms, the ASD-PSO is characterized by successful runs in a greater number of problems (17), beaten only by two improved versions of the CMA-ES, namely G-CMA-ES (30) and L-CMA-ES (21). Moreover, in four problems (f9 – Shifted Rastrigin in nD = 30 and 50; hybrid f15 in nD = 30 and 50) the ASD-PSO showed the best performance, and in two of them (hybrid f15 for nD = 30 and 50) it is also the only algorithm able to reach the required accuracy level. As regards the remaining tests, the ASD-PSO is characterized by quite good performance, ranking on average between the third and the fourth position.
Tables 10–20 present the objective function errors (mean values and standard deviations) reached by the different optimization algorithms after 10^3, 10^4, 10^5 and nD × 10^4 function evaluations on the 25 test problems with nD = 10, 30 and 50. Also in this case, the results yielded by the ASD-PSO were compared with those obtained by the other algorithms using a two-sample t-test [35].
As regards the accuracy level reached by the algorithms, it is interesting to notice that only the improved versions of the CMA-ES are able to statistically overcome the ASD-PSO in more than half of the considered problems. In the other test cases, the performance of the ASD-PSO is significantly better than, or at least equivalent to, that of the other algorithms. Moreover, in four problems (f9 – Shifted Rastrigin in nD = 30 and 50; hybrid f15 in nD = 30 and 50) the ASD-PSO is not overcome by any algorithm, whereas in ten other cases (f13 and f21 in nD = 10; f25 in nD = 30; f4, f8, f10, f12, f14, f15 and f17 in nD = 50) it is beaten by only one algorithm.
As regards the convergence history, in dimensions nD = 30 and 50 the ASD-PSO is able to maintain the same performance over time, confirming respectively the fourth and third position after 10^3, 10^4, 10^5 and nD × 10^4 function evaluations. In ten dimensions, the ASD-PSO is able to improve its accuracy level between 10^3 (ranking 4) and 10^4 function evaluations (ranking
Table 8. Meta-heuristic algorithms.

Acronym      Algorithm                                                                          Ref.
G-CMA-ES     Restart CMA Evolution Strategy with increasing population size                     [5]
L-CMA-ES     Restart CMA Evolution Strategy with an advanced search evolutionary algorithm      [6]
EDA          Estimation of Distribution Algorithm based on a multivariate Gaussian Distribution [52]
DE           Differential Evolution Algorithm based on the DE/rand/1/bin scheme                 [47]
BLX-GL50     Real-Coded Genetic Algorithm with Female and Male Differentiation                  [21]
DMS-L-PSO    Dynamic Multi-Swarm Particle Optimizer with Local Search                           [32]
L-SaDE       Self-Adaptive Differential Evolution Strategy                                      [38]
SPC-PNX      Real-parameter Genetic Algorithm with the PNX crossover operator                   [7]
CoEVO        Evolutionary Algorithm based on a cooperative co-evolution mutation step           [37]
K-PCX        Population-based search algorithm with independently designed search principles    [44]
BLX-MA       Real-coded Memetic Algorithm with adaptive local search parameters                 [34]
ABC          Modified Artificial Bee Colony Algorithm                                           [2]
Table 9. Success Performance (SP) and Success Rate (SR), as defined in Eq. (25), for the 25 test problems [46]. For some functions none of the algorithms reaches the fixed accuracy (given in the 'Tol' row) and the corresponding columns are not reported. The best results for each function are printed in bold type.
Table 10Objective function error values reached by the algorithms after 103 function evaluations (FE) respectively on the 25 test problems [46] in dimension nD = 10: mean value and standard deviation; t-test results. Thesignificance level of the two-sample t-test between ASD-PSO and other algorithms is 0.05. A value of t-test result equal to: ‘‘1’’ means that the results obtained by ASD-PSO are significantly better than those obtained bythe other algorithm; ‘‘0’’ means that the two algorithms are statistically equivalent; ‘‘�1’’ means that the ASD-PSO performs significantly worse than the other algorithm
1000 FE F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13
Table 11Objective function error values reached by the algorithms after 104 function evaluations (FE) respectively on the 25 test problems [46] in dimension nD = 10: mean value and standard deviation; t-test results. Thesignificance level of the two-sample t-test between ASD-PSO and other algorithms is 0.05. A value of t-test result equal to: ‘‘1’’ means that the results obtained by ASD-PSO are significantly better than those obtainedby the other algorithm; ‘‘0’’ means that the two algorithms are statistically equivalent; ‘‘�1’’ means that the ASD-PSO performs significantly worse than the other algorithm
10,000 FE F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13
Table 12Objective function error values reached by the algorithms after 105 function evaluations (FE) respectively on the 25 test problems [46] in dimension nD = 10: mean value and standard deviation; t-test results. Thesignificance level of the two-sample t-test between ASD-PSO and other algorithms is 0.05. A value of t-test result equal to: ‘‘1’’ means that the results obtained by ASD-PSO are significantly better than those obtainedby the other algorithm; ‘‘0’’ means that the two algorithms are statistically equivalent; ‘‘�1’’ means that the ASD-PSO performs significantly worse than the other algorithm
100,000 FE F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13
Table 13Objective function error values reached by the algorithms after 103 function evaluations (FE) respectively on the 25 test problems [46] in dimension nD = 30: mean value and standard deviation; t-test results. Thesignificance level of the two-sample t-test between ASD-PSO and other algorithms is 0.05. A value of t-test result equal to: ‘‘1’’ means that the results obtained by ASD-PSO are significantly better than those obtainedby the other algorithm; ‘‘0’’ means that the two algorithms are statistically equivalent; ‘‘�1’’ means that the ASD-PSO performs significantly worse than the other algorithm
1,000 FE F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13
Table 14Objective function error values reached by the algorithms after 104 function evaluations (FE) respectively on the 25 test problems [46] in dimension nD = 30: mean value and standard deviation; t-test results. Thesignificance level of the two-sample t-test between ASD-PSO and other algorithms is 0.05. A value of t-test result equal to: ‘‘1’’ means that the results obtained by ASD-PSO are significantly better than those obtainedby the other algorithm; ‘‘0’’ means that the two algorithms are statistically equivalent; ‘‘�1’’ means that the ASD-PSO performs significantly worse than the other algorithm
10,000 FE F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13
Table 15Objective function error values reached by the algorithms after 105 function evaluations (FE) respectively on the 25 test problems [46] in dimension nD = 30: mean value and standard deviation; t-test results. Thesignificance level of the two-sample t-test between ASD-PSO and other algorithms is 0.05. A value of t-test result equal to: ‘‘1’’ means that the results obtained by ASD-PSO are significantly better than those obtained bythe other algorithm; ‘‘0’’ means that the two algorithms are statistically equivalent; ‘‘�1’’ means that the ASD-PSO performs significantly worse than the other algorithm
100,000 FE F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13
Table 16Objective function error values reached by the algorithms after 3 ⁄ 105 function evaluations (FE) respectively on the 25 test problems [46] in dimension nD = 30: mean value and standard deviation; t-test results. Thesignificance level of the two-sample t-test between ASD-PSO and other algorithms is 0.05. A value of t-test result equal to: ‘‘1’’ means that the results obtained by ASD-PSO are significantly better than those obtained bythe other algorithm; ‘‘0’’ means that the two algorithms are statistically equivalent; ‘‘�1’’ means that the ASD-PSO performs significantly worse than the other algorithm
300,000 FE F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13
Table 17Objective function error values reached by the algorithms after 103 function evaluations (FE) respectively on the 25 test problems [46] in dimension nD = 50: mean value and standard deviation; t-test results. Thesignificance level of the two-sample t-test between ASD-PSO and other algorithms is 0.05. A value of t-test result equal to: ‘‘1’’ means that the results obtained by ASD-PSO are significantly better than those obtained bythe other algorithm; ‘‘0’’ means that the two algorithms are statistically equivalent; ‘‘�1’’ means that the ASD-PSO performs significantly worse than the other algorithm
1,000 FE F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13
Table 18. Objective function error values reached by the algorithms after 10^4 function evaluations (FE) on the 25 test problems [46] in dimension nD = 50: mean value and standard deviation; t-test results. The significance level of the two-sample t-test between ASD-PSO and the other algorithms is 0.05. A t-test result of ‘‘1’’ means that the results obtained by ASD-PSO are significantly better than those obtained by the other algorithm; ‘‘0’’ means that the two algorithms are statistically equivalent; ‘‘−1’’ means that ASD-PSO performs significantly worse than the other algorithm.
Table 19. Objective function error values reached by the algorithms after 10^5 function evaluations (FE) on the 25 test problems [46] in dimension nD = 50: mean value and standard deviation; t-test results. The significance level of the two-sample t-test between ASD-PSO and the other algorithms is 0.05. A t-test result of ‘‘1’’ means that the results obtained by ASD-PSO are significantly better than those obtained by the other algorithm; ‘‘0’’ means that the two algorithms are statistically equivalent; ‘‘−1’’ means that ASD-PSO performs significantly worse than the other algorithm.
Table 20. Objective function error values reached by the algorithms after 5 × 10^5 function evaluations (FE) on the 25 test problems [46] in dimension nD = 50: mean value and standard deviation; t-test results. The significance level of the two-sample t-test between ASD-PSO and the other algorithms is 0.05. A t-test result of ‘‘1’’ means that the results obtained by ASD-PSO are significantly better than those obtained by the other algorithm; ‘‘0’’ means that the two algorithms are statistically equivalent; ‘‘−1’’ means that ASD-PSO performs significantly worse than the other algorithm.
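The 1/0/−1 decision rule used in the tables above can be sketched in code. This is a minimal illustration only: it assumes Welch's unequal-variance t statistic with a normal approximation to its distribution (adequate for the sample sizes typically used here), whereas the paper simply reports a two-sample t-test at the 0.05 level.

```python
from statistics import mean, variance, NormalDist

def ttest_decision(asd_errors, other_errors, alpha=0.05):
    """Return 1 if ASD-PSO errors are significantly lower, -1 if
    significantly higher, 0 if the two samples are statistically
    equivalent at level alpha."""
    n1, n2 = len(asd_errors), len(other_errors)
    m1, m2 = mean(asd_errors), mean(other_errors)
    v1, v2 = variance(asd_errors), variance(other_errors)
    se = (v1 / n1 + v2 / n2) ** 0.5   # standard error of the mean difference
    if se == 0.0:
        return 0                      # identical constant samples
    t = (m1 - m2) / se                # Welch's t statistic
    # Normal approximation to the t distribution for the two-sided p-value
    p = 2.0 * (1.0 - NormalDist().cdf(abs(t)))
    if p >= alpha:
        return 0                      # statistically equivalent
    return 1 if m1 < m2 else -1       # lower error = better
```

Feeding the per-run error samples of ASD-PSO and a competitor into `ttest_decision` reproduces the 1/0/−1 entries of the tables.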
3), not surpassed by any other algorithm on 14 of the 25 functions. However, in the last part of the convergence history of some test functions, it is not able to significantly refine the obtained objective function values, losing two positions in the final ranking.
In such a context, it can be concluded that the algorithm can efficiently solve real-world basic functions and produces promising results on hybrid functions compared with state-of-the-art algorithms.
8. Conclusions
This paper presents a novel paradigm of the original particle swarm concept, based on the idea of having two types of agents in the swarm: the ‘‘explorers’’ and the ‘‘settlers’’. The explorers have the task of continuously exploring the search domain, while the settlers have the task of refining the search in the known good region found by the swarm. This particle task differentiation was obtained through a different way of adjusting the particle velocities. The numerical coefficients of the cognitive and social components of the stochastic acceleration were related to the distance of each particle from the best position (gbest) found so far by the swarm, with different distributions over the swarm. The inertia weight was also dynamically adjusted at each time step by taking into account the distance between the particles and gbest. This particle task differentiation enhances the local search ability of the particles closer to gbest and improves the exploration ability of the particles as the distance from gbest increases.
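As a rough illustration of the mechanism summarized above, the velocity update can be sketched with coefficients interpolated between ‘‘settler’’ and ‘‘explorer’’ values according to the normalized distance from gbest. The coefficient ranges and the linear interpolation below are placeholders for illustration only; the paper defines its own distributions over the swarm.

```python
import random

def velocity_update(v, x, pbest, gbest, d_norm,
                    w_range=(0.4, 0.9), c1_range=(0.5, 2.5), c2_range=(2.5, 0.5)):
    """One PSO velocity update with distance-adaptive coefficients.

    d_norm in [0, 1] is the normalized distance of this particle from
    gbest: 0 = settler (local refinement), 1 = explorer (global search).
    The ranges and the linear mapping are hypothetical placeholders."""
    w  = w_range[0]  + d_norm * (w_range[1]  - w_range[0])   # more inertia far from gbest
    c1 = c1_range[0] + d_norm * (c1_range[1] - c1_range[0])  # cognitive pull grows with distance
    c2 = c2_range[0] + d_norm * (c2_range[1] - c2_range[0])  # social pull shrinks with distance
    return [w * vi
            + c1 * random.random() * (pi - xi)
            + c2 * random.random() * (gi - xi)
            for vi, xi, pi, gi in zip(v, x, pbest, gbest)]
```

With this mapping a particle sitting on gbest (d_norm = 0) behaves as a settler with low inertia and a strong social pull, while a distant particle (d_norm = 1) keeps exploring under high inertia and a strong cognitive pull.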
The particle task differentiation was also visualized for a multi-modal function and compared with the spatial distribution of the particles of the standard PSO during the search process. Unlike the standard PSO, the proposed variant exhibited a well-defined task differentiation among particles, which was reflected in the spatial distribution of the particles themselves.
The novel approach showed remarkable effectiveness and accuracy in identifying the optimal solution on numerous benchmark test functions, in comparison not only with the standard PSO, but also with other stochastic meta-heuristic algorithms. The particle task differentiation proved able to improve the performance of the standard PSO over a wide range of problems, achieving the best results in terms of convergence speed and accuracy. Moreover, the algorithm also showed good robustness, performing well on all the considered test problems and significantly reducing the premature convergence drawbacks of the standard PSO without losing its fast convergence characteristics.
The significance of the proposed algorithm lies not only in the quality of the performance exhibited, but also in the simplicity of its implementation, which goes back to the origins of the PSO, with the possibility of offering an innovative interpretation and implementation of the explorer-settler concept.
NC Rastrigin: $f(\mathbf{x}) = \sum_{i=1}^{n} \left[ y_i^2 - 10\cos(2\pi y_i) + 10 \right]$, where $y_i = x_i$ for $|x_i| < 0.5$ and $y_i = \operatorname{round}(2x_i)/2$ for $|x_i| \geq 0.5$

Quadratic: $f(\mathbf{x}) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$

Rastrigin: $f(\mathbf{x}) = \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]$

Rosenbrock: $f(\mathbf{x}) = \sum_{i=1}^{n-1} \left[ 100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2 \right]$

Schwefel: $f(\mathbf{x}) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{|x_i|}\right)$

Schwefel P2.22: $f(\mathbf{x}) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$

Shubert: $f(x_1, x_2) = \left( \sum_{i=1}^{5} i\cos\left((i+1)x_1 + i\right) \right) \left( \sum_{i=1}^{5} i\cos\left((i+1)x_2 + i\right) \right)$

Sphere: $f(\mathbf{x}) = \sum_{i=1}^{n} x_i^2$

Step: $f(\mathbf{x}) = \sum_{i=1}^{n} \left( \lfloor x_i + 0.5 \rfloor \right)^2$

Tablet: $f(\mathbf{x}) = 10^4 x_1^2 + \sum_{i=2}^{n} x_i^2$
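For reference, a few of the benchmark functions listed above translate directly into code. This is a plain Python sketch of the Sphere, Rastrigin, Rosenbrock, and Step functions as defined here (global minimum f = 0 at the origin for Sphere, Rastrigin, and Step, and at $x_i = 1$ for Rosenbrock).

```python
import math

def sphere(x):
    # f(x) = sum of x_i^2
    return sum(xi * xi for xi in x)

def rastrigin(x):
    # f(x) = sum of [x_i^2 - 10 cos(2 pi x_i) + 10]
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

def rosenbrock(x):
    # f(x) = sum over i of [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2]
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def step(x):
    # f(x) = sum of floor(x_i + 0.5)^2
    return sum(math.floor(xi + 0.5) ** 2 for xi in x)
```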
References
[1] A. Nickabadi, M.M. Ebadzadeh, R. Safabakhsh, A novel particle swarm optimization algorithm with adaptive inertia weight, Appl. Soft Comput. 11 (2011) 3658–3670.
[2] B. Akay, D. Karaboga, A modified artificial bee colony algorithm for real-parameter optimization, Inform. Sci. 192 (2012) 120–142.
[3] P.J. Angeline, Evolutionary optimization versus particle swarm optimization: philosophy and performance differences, in: Evolutionary Programming VII: Proceedings of the Seventh Annual Conference on Evolutionary Programming, San Diego, CA, 1998, pp. 600–610.
[4] M.S. Arumugam, A. Chandramohan, M.V.C. Rao, Competitive approaches to PSO algorithms via new acceleration co-efficient variant with mutation operators, in: Proceedings of the IEEE Conference on Computational Intelligence and Multimedia Applications, 2005, pp. 225–230.
[5] A. Auger, S. Kern, N. Hansen, A restart CMA evolution strategy with increasing population size, in: Proceedings of the CEC 2005 IEEE Congress on Evolutionary Computation, Edinburgh, Scotland, vol. 2, 2005, pp. 1769–1776.
[6] A. Auger, S. Kern, N. Hansen, Performance evaluation of an advanced local search evolutionary algorithm, in: Proceedings of the CEC 2005 IEEE Congress on Evolutionary Computation, Edinburgh, Scotland, vol. 2, 2005, pp. 1777–1784.
[7] P.J. Ballester, J. Stephenson, J.N. Carter, K. Gallagher, Real-parameter optimization performance study on the CEC-2005 benchmark with SPC-PNX, in: Proceedings of the CEC 2005 IEEE Congress on Evolutionary Computation, Edinburgh, Scotland, vol. 1, 2005, pp. 498–505.
[8] A. Chatterjee, P. Siarry, Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization, Comput. Oper. Res. 33 (2006) 859–871.
[9] D. Chen, W. Gaofeng, C. Zhenyi, The inertia weight self-adapting in PSO, in: Proceedings of the IEEE Congress on Intelligent Control and Automation, Chongqing, China, 2008, pp. 5313–5316.
[10] W.-N. Chen, J. Zhang, Y. Lin, N. Chen, Z.-H. Zhan, H.S.-H. Chung, Y. Li, Y.-H. Shi, Particle swarm optimization with an aging leader and challengers, IEEE Trans. Evolut. Comput. 17 (2013) 241–258.
[11] M. Clerc, The swarm and the queen: towards a deterministic and adaptive particle swarm optimization, in: Proceedings of the IEEE International Congress on Evolutionary Computation, Washington, DC, 1999, pp. 1951–1957.
[12] M. Clerc, J. Kennedy, The particle swarm – explosion, stability, and convergence in a multidimensional complex space, IEEE Trans. Evolut. Comput. 6 (2002) 58–73.
[13] Y. del Valle, G.K. Venayagamoorthy, S. Mohagheghi, J.-C. Hernandez, R.G. Harley, Particle swarm optimization: basic concepts, variants and applications in power systems, IEEE Trans. Evolut. Comput. 12 (2008) 171–195.
[14] R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of the IEEE International Symposium on Micro Machine and Human Science, Nagoya, Japan, 1995, pp. 39–43.
[15] R.C. Eberhart, Y. Shi, Comparison between genetic algorithms and particle swarm optimization, in: Evolutionary Programming VII: Proceedings of the Seventh Annual Conference on Evolutionary Programming, San Diego, CA, 1998, pp. 611–616.
[16] R.C. Eberhart, Y. Shi, Comparing inertia weights and constriction factors in particle swarm optimization, in: Proceedings of the IEEE International Congress on Evolutionary Computation, San Diego, CA, 2000, pp. 84–88.
[17] R.C. Eberhart, Y. Shi, Particle swarm optimization: development, applications and resources, in: Proceedings of the IEEE International Congress on Evolutionary Computation, Seoul, Korea, 2001, pp. 81–86.
[18] M.G. Epitropakis, V.P. Plagianakos, M.N. Vrahatis, Evolving cognitive and social experience in particle swarm optimization through differential evolution: a hybrid approach, Inform. Sci. 216 (2012) 50–92.
[19] S.-K.S. Fan, Y.-C. Liang, E. Zahara, A genetic algorithm and a particle swarm optimizer hybridized with Nelder–Mead simplex search, Comput. Ind. Eng. 50 (2006) 401–425.
[20] A. Gandelli, F. Grimaccia, M. Mussetta, P. Pirinoli, R.E. Zich, Development and validation of different hybridization strategies between GA and PSO, in: Proceedings of the IEEE International Congress on Evolutionary Computation, Singapore, 2007, pp. 2782–2787.
[21] C. Garcia-Martinez, M. Lozano, Hybrid real-coded genetic algorithms with female and male differentiation, in: Proceedings of the CEC 2005 IEEE Congress on Evolutionary Computation, Edinburgh, Scotland, vol. 1, 2005, pp. 896–903.
[22] C. Garcia-Martinez, F.J. Rodriguez, Arbitrary function optimisation with metaheuristics: no free lunch and real-world problems, Soft Comput. 16 (2012) 2115–2133.
[23] N. Higashi, H. Iba, Particle swarm optimization with Gaussian mutation, in: Proceedings of the IEEE Symposium on Swarm Intelligence, 2003, pp. 72–79.
[24] V. Kalivarapu, J.-L. Foo, E. Winer, Synchronous parallelization of particle swarm optimization with digital pheromones, Adv. Eng. Softw. 40 (2009) 975–985.
[25] V. Kalivarapu, E. Winer, Implementation of digital pheromones in particle swarm optimization for constrained optimization problems, in: 49th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Schaumburg, IL, 2008, AIAA 2008-1974.
[26] J. Kennedy, Bare bones particle swarms, in: IEEE Swarm Intelligence Symposium, Indianapolis, Indiana, USA, 2003, pp. 80–87.
[27] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 1995, pp. 1942–1948.
[28] J. Kennedy, R. Mendes, Population structure and particle swarm performance, in: Proceedings of the IEEE International Congress on Evolutionary Computation, Honolulu, Hawaii, USA, 2002, pp. 1671–1676.
[29] J. Kennedy, R. Mendes, Neighborhood topologies in fully-informed and best-of-neighborhood particle swarms, IEEE Trans. Syst. Man Cybern. C 36 (2006) 515–519.
[30] J.J. Liang, A.K. Qin, P.N. Suganthan, S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Trans. Evolut. Comput. 10 (2006) 281–295.
[31] J.J. Liang, P.N. Suganthan, Dynamic multi-swarm particle swarm optimizer, in: Proceedings of the IEEE Swarm Intelligence Symposium, Pasadena, CA, 2005, pp. 124–129.
[32] J.J. Liang, P.N. Suganthan, Dynamic multi-swarm particle swarm optimizer with local search, in: Proceedings of the CEC 2005 IEEE Congress on Evolutionary Computation, Edinburgh, Scotland, vol. 1, 2005, pp. 522–528.
[33] R. Mendes, J. Kennedy, J. Neves, The fully informed particle swarm: simpler, maybe better, IEEE Trans. Evolut. Comput. 8 (2004) 204–210.
[34] D. Molina, F. Herrera, M. Lozano, Adaptive local search parameters for real-coded memetic algorithms, in: Proceedings of the CEC 2005 IEEE Congress on Evolutionary Computation, Edinburgh, Scotland, vol. 1, 2005, pp. 888–895.
[35] D.C. Montgomery, G.C. Runger, Applied Statistics and Probability for Engineers, John Wiley & Sons, New York, NY, 2003.
[36] K.E. Parsopoulos, M.N. Vrahatis, Parameter selection and adaptation in unified particle swarm optimization, Math. Comput. Model. 46 (2007) 211–224.
[37] P. Posik, Real parameter optimization using mutation step co-evolution, in: Proceedings of the CEC 2005 IEEE Congress on Evolutionary Computation, Edinburgh, Scotland, vol. 1, 2005, pp. 872–879.
[38] A.K. Qin, P.N. Suganthan, Self-adaptive differential evolution algorithm for numerical optimization, in: Proceedings of the CEC 2005 IEEE Congress on Evolutionary Computation, Edinburgh, Scotland, vol. 2, 2005, pp. 1785–1791.
[39] A. Ratnaweera, S.K. Halgamuge, H.C. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Trans. Evolut. Comput. 8 (2004) 240–255.
[40] P.S. Shelokar, P. Siarry, V.K. Jayaraman, B.D. Kulkarni, Particle swarm and ant colony algorithms hybridized for improved continuous optimization, Appl. Math. Comput. 188 (2007) 129–142.
[41] Y. Shi, R.C. Eberhart, A modified particle swarm optimizer, in: Proceedings of the IEEE International Congress on Evolutionary Computation, Anchorage, AK, 1998, pp. 69–73.
[42] Y. Shi, R.C. Eberhart, Parameter selection in particle swarm optimization, in: Evolutionary Programming VII: Proceedings of the Seventh Annual Conference on Evolutionary Programming, San Diego, CA, 1998, pp. 591–600.
[43] Y. Shi, R.C. Eberhart, Empirical study of particle swarm optimization, in: Proceedings of the IEEE International Congress on Evolutionary Computation, Washington, DC, 1999, pp. 1945–1950.
[44] A. Sinha, S. Tiwari, K. Deb, A population-based, steady-state procedure for real-parameter optimization, in: Proceedings of the CEC 2005 IEEE Congress on Evolutionary Computation, Edinburgh, Scotland, vol. 1, 2005, pp. 514–521.
[45] P.N. Suganthan, Particle swarm optimiser with neighbourhood operator, in: Proceedings of the IEEE International Congress on Evolutionary Computation, Washington, DC, 1999, pp. 1958–1962.
[46] P.N. Suganthan, N. Hansen, J.J. Liang, K. Deb, Y.-P. Chen, A. Auger, S. Tiwari, Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization, Technical Report, Nanyang Technological University, Singapore, May 2005.
[47] J. Ronkkonen, S. Kukkonen, K.V. Price, Real-parameter optimization with differential evolution, in: Proceedings of the CEC 2005 IEEE Congress on Evolutionary Computation, Edinburgh, Scotland, vol. 1, 2005, pp. 506–513.
[48] D.H. Wolpert, W.G. Macready, No free lunch theorems for optimization, IEEE Trans. Evolut. Comput. 1 (1997) 67–82.
[49] T. Yamaguchi, K. Yasuda, Adaptive particle swarm optimization: self-coordinating mechanism with updating information, in: Proceedings of the IEEE International Conference on Systems, Man & Cybernetics, Taipei, Taiwan, 2006, pp. 2303–2308.
[50] B. Yang, Y. Chen, Z. Zhao, A hybrid evolutionary algorithm by combination of PSO and GA for unconstrained and constrained optimization problems, in: Proceedings of the IEEE Congress on Control and Automation, Guangzhou, China, 2007, pp. 166–170.
[51] X. Yang, J. Yuan, H. Mao, A modified particle swarm optimizer with dynamic adaptation, Appl. Math. Comput. 189 (2007) 1205–1213.
[52] B. Yuan, M. Gallagher, Experimental results for the special session on real-parameter optimization at CEC 2005: a simple, continuous EDA, in: Proceedings of the CEC 2005 IEEE Congress on Evolutionary Computation, Edinburgh, Scotland, vol. 2, 2005, pp. 1792–1799.
[53] Z.-H. Zhan, J. Zhang, Y. Li, H.S.-H. Chung, Adaptive particle swarm optimization, IEEE Trans. Syst. Man Cybern. B 39 (2009) 1362–1381.