
Int. J. Bio-Inspired Computation, Vol. 1, No. 4, 2009 259

Copyright © 2009 Inderscience Enterprises Ltd.

A new hybrid multi-agent–based particle swarm optimisation technique

Rajesh Kumar* and Devendra Sharma Department of Electrical Engineering, Malaviya National Institute of Technology, Jaipur, India E-mail: [email protected] E-mail: [email protected] *Corresponding author

Anupam Kumar Department of Electronics and Communication Engineering, Malaviya National Institute of Technology, Jaipur, India E-mail: [email protected]

Abstract: This paper presents a multi-agent–based hybrid particle swarm optimisation technique. The algorithm integrates a deterministic search, a multi-agent system (MAS) and the particle swarm optimisation (PSO) algorithm. An agent in hybrid multi-agent PSO (HMAPSO) represents both a particle in PSO and a candidate solution to the optimisation problem. All agents search in parallel over an equally distributed, lattice-like structure to save energy and computational time. The best solution is obtained through a bee decision-making process. Thus, by combining deterministic search, multi-agent interaction and bee-inspired decision making, HMAPSO realises the purpose of optimisation. The proposed algorithm has been tested on various optimisation problems. The experimental results show the robustness and accuracy of the proposed HMAPSO. It is also concluded that HMAPSO generates a unique, optimal solution, unlike the earlier reported approaches, and hence can be a better option for real-time online optimisation problems.

Keywords: particle swarm optimisation; PSO; multi-agent system; MAS; bee algorithm.

Reference to this paper should be made as follows: Kumar, R., Sharma, D. and Kumar, A. (2009) ‘A new hybrid multi-agent–based particle swarm optimisation technique’, Int. J. Bio-Inspired Computation, Vol. 1, No. 4, pp.259–269.

Biographical notes: Rajesh Kumar received his PhD in Intelligent Systems from the University of Rajasthan, India in 2005. He is currently a Reader at the Department of Electrical Engineering, Malaviya National Institute of Technology, Jaipur. He has authored more than 100 research papers in the areas of intelligent control, evolutionary algorithms, fuzzy and neural methodologies, power electronics, electrical machines and drives. He received the Career Award for Young Teachers in 2002. He is a member of the IEEE, IE (India) and IAENG, a Fellow of the IETE and a Life Member of the ISTE.

Devendra Sharma is a Research Student at the Department of Electrical Engineering, Malaviya National Institute of Technology, Jaipur. His areas of interest include robotics, electronics and software.

Anupam Kumar is a Research Student at the Department of Electronics and Communication Engineering, Malaviya National Institute of Technology, Jaipur. His areas of interest include robotics, human-computer interfacing, electronics and software.

1 Introduction

In recent decades, a vast number of algorithms have been developed for optimisation. Most of them mimic the behaviour of natural species. These group decision processes can be imitated and used to find solutions to various optimisation problems and their applications in various engineering fields (Deb, 2002). Ants, bees and birds are the best examples of such cases. The group decision-making processes used by bees and ants to search out the best food sources among the various possible options are robust examples of swarm-based decision methods (Yanfei and Passino, 2002).

Many classical methods have been developed and are used for optimisation problems. Golden section search, Fibonacci search, Newton's method and the secant method are one-dimensional search methods. Gradient methods, Newton's method, the conjugate direction method and neural networks are commonly used for unconstrained optimisation (Chong and Stanislaw, 2004). These methods are problem specific and use gradients. Consequently, they are applicable to a much smaller class of optimisation problems.

A genetic algorithm is a probabilistic search technique that has its roots in the principles of genetics. It emphasises natural selection of surviving species and the process of reproducing new offspring. The algorithm uses mutation and crossover to create a new population (Goldberg, 2005). Since its conception, the genetic algorithm (GA) has been widely used as a tool in computer programming, artificial intelligence and optimisation.

By mimicking the collective behaviour found in various swarms, a new form of intelligence has come into existence, known as swarm intelligence (SI). SI is artificial intelligence based on the collective behaviour of decentralised, self-organised systems, which mimics the natural behaviour of organisms. SI systems are typically made up of a population of simple agents interacting locally with one another and with their environment. The agents follow very simple rules and, although there is no centralised control structure dictating how individual agents should behave, local interactions between such agents lead to the emergence of complex global behaviour (Wooldridge, 2002). Natural examples of SI include ant colonies, bird flocking, animal herding, bacterial growth and fish schooling. Various algorithms derived from SI are ant colony optimisation (ACO), GA and particle swarm optimisation (PSO) (Dorigo et al., 2006; Goldberg, 2005; Eberhart and Kennedy, 1995).

The ACO algorithm emulates the behaviour of real ants. In the real world, while searching for food, ants wander randomly and, after finding food, return to their colony while laying down pheromone trails. If other ants find such a path, they start seeking the food through that trail rather than searching for a new one. This pheromone strongly guides the ants in their search. Mimicking this search method, the simulated 'ants' similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions; the pheromone helps in avoiding convergence to a locally optimal solution (Dorigo et al., 1996, 2002, 2006; Blum, 2007).

The PSO algorithm is based on the social behaviour of groups of organisms, such as the flocking of birds or the schooling of fish. It is a stochastic, population-based evolutionary algorithm for problem solving. It is a kind of SI that treats each individual solution as a 'particle' that evolves or changes its position with time. Each particle modifies its position in the search space in accordance with its own experience and that of neighbouring particles, by remembering the best position visited by itself and its neighbours and calculating local and global best positions. As these techniques are free from the use of gradients, they are applicable to a wider class of optimisation problems (Eberhart and Kennedy, 1995).

The Bees algorithm is an optimisation algorithm inspired by the natural foraging behaviour of honey bees to find the optimal solution for food as well as for next-site selection (Passino and Seeley, 2006; Passino et al., 2008). This algorithm performs a kind of neighbourhood search combined with random search and can be used for both combinatorial and functional optimisation. Bee colony optimisation (BCO) and bee system (BS) algorithms are examples of algorithms based on the waggle dance performed by scout bees to inform the other foraging bees about a nectar site (Teodorovic et al., 2006).

In this paper, we propose a new algorithm, a hybrid version of PSO, which takes its search behaviour from PSO and uses a modified Nelder-Mead method (Nelder and Mead, 1965) to find the optimal solution. The decision-making technique is mimicked from the bee decision-making process, which bees use to find a suitable place for establishing a new colony. The experimental results show the robustness and accuracy of the hybrid PSO over the genetic algorithm and PSO. Due to its hybrid nature, this algorithm provides only deterministic solutions. Making use of agent-agent interactions and the evolution mechanism of PSO in a lattice-like environment, the proposed method can reliably find high-quality solutions with faster convergence characteristics in a reasonably good computation time.

This paper is organised as follows. The hybrid algorithm comprises two parts: a search algorithm and a decision-making process. Section 2 details the standard PSO and the related issues of accuracy and convergence to optimal solutions. Section 3 describes the basic requirements of a MAS. The development and working of the hybrid PSO are elaborated in Section 4. The starting point and the number of agents are important issues when handling such algorithms, and experiments have been made to address them; the experimental results on the choice of the number of agents and the starting point of the search are presented and discussed. The decision-making process of honey bees makes them an interesting subject for swarm research; Section 4 also discusses the decision-making method used by the bees in the proposed algorithm. Section 5 discusses simulation and experimental results on some standard test functions. Finally, Section 6 concludes the paper.

2 Standard PSO and its analysis

PSO was proposed by Eberhart and Kennedy (1995). It is popularly used on complicated problems with non-linear, multi-peak objective functions. It is a population-based search algorithm that exploits a population of individuals to probe promising regions of the search space. The population is called a swarm and the individuals are called particles. PSO follows a stochastic optimisation method based on SI. The fundamental idea is that the optimal solution can be found through cooperation and information sharing among individuals in the swarm (Lei et al., 2005).

Each particle moves with a given random speed within the search space and retains in its memory the best position it has ever encountered.


The standard PSO can be described as follows. Let X_i = [x_i1, x_i2, ..., x_in] be an n-dimensional vector representing the current position of particle i in a search space S, X_i ∈ S. The current velocity of this particle is V_i = [V_i1, V_i2, ..., V_in] ∈ S. The best past position encountered by the ith particle is denoted P_i = [P_i1, P_i2, ..., P_in] ∈ S. Let g be the index of the particle that attains the best position among all particles, taken as the global best of the swarm. The modified velocity and position of each particle are then calculated as follows:

V_in(t + 1) = w V_in(t) + C1 r1() (p_in(t) − x_in(t)) + C2 r2() (p_gn(t) − x_in(t)) (1)

x_in(t + 1) = x_in(t) + V_in(t + 1) (2)

where C1 and C2 are acceleration constants, i = 1, 2, ..., Np is the particle index, n = 1, 2, ..., N is the dimension index, t = 1, 2, ... indicates the iteration number, w is the inertia weight and r1 and r2 are random numbers in [0, 1].
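As an illustration, the update of equations (1) and (2) can be sketched in Python. The function name and the default parameter values here are assumptions for the sketch, not code or settings from the paper.

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One iteration of the standard PSO update, equations (1) and (2).

    positions, velocities, pbest: lists of n-dimensional lists (one per
    particle); gbest: best position found by any particle so far.
    """
    for i in range(len(positions)):
        for n in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            # eq (1): inertia + cognitive pull + social pull
            velocities[i][n] = (w * velocities[i][n]
                                + c1 * r1 * (pbest[i][n] - positions[i][n])
                                + c2 * r2 * (gbest[n] - positions[i][n]))
            # eq (2): move the particle
            positions[i][n] += velocities[i][n]
    return positions, velocities
```

Setting c1 = c2 = 0 and w = 1 reduces the update to pure inertia, which is a convenient sanity check on the implementation.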

The inertia weight w plays the role of balancing local and global search. Tuning C1 and C2 results in improved performance; generalised models and techniques for tuning these parameters are analysed in Lei et al. (2005). Since PSO is a probabilistic optimiser, it cannot be guaranteed to converge to the globally optimal solution. Many methods have been introduced to address this problem: introducing immunity and heredity results in a loss of character, while adding diversity makes the algorithm more complex (Ho et al., 2008).

In this paper, we propose an entirely different approach to improve PSO performance, overcoming its probabilistic behaviour and the lack of uniqueness in its solutions.

3 Multi-agent system (MAS)

A MAS is a computational system in which several agents work, interact with each other and in unison take decisions to achieve goals. According to Jiao and Shi (1999), Zhong et al. (2004) and Shi et al. (2003), an agent must have the following properties:

• Agents live and act in a given environment.

• Agents are able to sense their local environment and to interact with other agents in their local environment.

• Agents attempt to achieve particular goals or perform particular tasks.

• Agents are able to respond to changes that occur in their environment.

The agents form a collaborative society to achieve their own individual goals as well as the common goal. This group decision-making process matches the basic nature of a particle in PSO, and hence a MAS provides an opportunity to compute and optimise complex problems.

Some issues, such as the environment of the agents, the method of interaction, the starting point of the search and the behavioural rules, must be addressed when a MAS is used for optimisation problems. These requirements and the related issues for HMAPSO are addressed in the coming sections with experimental results and justifications.

4 Hybrid multi-agent–based particle swarm optimisation approach (HMAPSO)

In the proposed algorithm, different agents are sent into the whole search area, which is divided into different fragments. The best solution in each fragment is searched by its respective agent through a modified Nelder-Mead method. For this purpose, the total range of the independent parameters is divided into smaller volumes, each of which determines the starting point of the exploration for one agent. The agent then finds its own optimised point using the modified Nelder-Mead optimisation technique. Each agent then passes on the information regarding its optimised point by the bee waggle dance. When all the information on the optimised points has been obtained, the best among them is chosen by the consensus method, as in honey bee swarms (Seeley et al., 1991, 2006).

4.1 Particle search methodology

To optimise the given objective function, we have modified a very popular optimisation technique known as the Nelder-Mead method (Nelder and Mead, 1965). The methodology used is a deterministic search methodology, but in a sense similar to a swarm local search. Let z = f(x, y) be the function that is to be minimised; for the agents this is the food function. To start, we assume that for a two-variable problem the agent considers three vertices of a triangle as food points: z1, z2 and z3. z1 = (x1, y1) represents the initial position of the agent; z2 = (x2, y2) and z3 = (x3, y3) are the positions of probable food points, i.e., local optimal points. The movement of the bee (agent) from its initial position towards the food position, i.e., the optimisation point, is as follows. Here we consider the problem of finding the minimum of a function zi = f(xi, yi). The function zi = f(xi, yi) for i = 1, 2, 3 is evaluated at each of these three points. The obtained values of zi are ordered so that z1 ≤ z2 ≤ z3, with the corresponding agent positions and food points ranked from best to worst. The construction process uses the midpoint z12 of the line segment joining the two best food positions z1 and z2, as shown in Figure 1(a).

The function value decreases as the bee moves along z3 to z1 or z3 to z2. Hence, it is feasible that f(x, y) takes a smaller value if the bee moves towards z12. For the further movement of the bee, a test point zt is chosen as the reflection of the worst food point z3, as shown in Figure 1(a). The vector formula for zt is

zt = 2 z12 − z3 (3)


If the function value at zt is smaller than the function value at z3, then the bee has moved in the correct direction towards the minimum. Perhaps the minimum is just a bit further than the point zt, so the line segment through z12 and zt is extended further to ze. The point ze is found by moving an additional distance d/2 along the line, as shown in Figure 1(b). If the function value at ze is less than the function value at zt, then the bee has found a better food point than zt.

ze = 2 zt − z12 (4)

If the function values at z12 and z3 are the same, another point must be tested. The bee considers two test points, one on each side of z12, at a distance d/2, as shown in Figure 1(c). The point with the smaller value forms a new triangle with the other two best points. If the function value at the two test points is not less than the value at z3, the points z2 and z3 must be shrunk towards z1, as shown in Figure 1(d): the point z2 is replaced with z12 and z3 is replaced with the midpoint of the line segment joining z1 and z3.

Figure 2 shows the path traced by the bees and the sequence of triangles {Tk} converging to the optimal point for the objective function

f(x, y) = x^2 − 4x + y^2 − y − xy (5)

Figure 1 Agents search movements with the proposed optimisation algorithm (a) starting of the motion in search of solution (b) extension in the direction of good optimal point (c) contraction of the movement in case optimal point quality is not good (d) shrinking of the space towards optimistic solution

Figure 2 Movement of the agents for a given problem

4.1.1 Choice of starting point of searching in a volume

The solution of the Nelder-Mead method depends upon the starting location of the search in any volume. Experiments have been made to find the effect on the optimal solution of changing the starting point of the agents' exploration in a volume. We have tested many standard functions and found the centre to be the best starting point. The change of starting point, in factors of 0.1 over the range 0 to 1, for the Ackley and Alpine functions is shown in Figure 3 for reference. The experimental results show that the optimal solution is constant after 0.5, i.e., the centre, as shown in Figure 3. So for all results we have chosen the centre of the volume as the starting point.

Figure 3 Variation in optimal point with respect to change in starting point of agents in a volume for (a) Ackley function and (b) Alpine function


4.1.2 Choice of the number of agents for searching

During the experiments it was found that a small number of agents gives fast and accurate results for simple problems with a lower number of parameters, whereas for a larger number of parameters more agents should go out for exploration, which in turn gives results with high accuracy but at the cost of time. Figure 4 shows the variation in the optimal solution of the Rosenbrock banana function as the number of bees increases. The solutions are also summarised in Table 1.

Figure 4 Variation in the optimum solution of the Rosenbrock banana function with respect to the number of agents sent for exploration

Notes: (a) The centre of the lattice is a good starting point to get a better optimal solution. (b) 30–50 agents are sufficient to generate the optimal solution.

4.2 Exploration

In a MAS, all agents live in an environment (Wooldridge, 2002). The environment is organised in a structure as shown in Figures 5–7. In the environment, each agent is fixed on a lattice point and each circle represents an agent; the data in the circle represent the position of the agent and the evaluated value of the function. The size and dimension of the lattice depend upon the variables and the PSO.

The value of the objective function depends on p independent parameters. Let the range of the jth parameter be f_j ∈ [W_ji, W_jf], where W_ji and W_jf represent the initial and final values of the parameter. Thus, the complete domain of the objective function can be represented by a set of p axes. Each axis lies in a different dimension and contains the total range of one parameter.

The next step is to divide each axis into smaller parts. Each of these parts is known as a step. Let the jth axis be divided into n_j steps, each of length S_j, where j = 1 to p. This length S_j is known as the step size for the jth parameter. The relationship between n_j and S_j is given by

n_j = (W_jf − W_ji) / S_j (6)

Hence, each axis is divided into its corresponding branches. If we take one branch from each axis, these p branches constitute a p-dimensional volume. The total number of such volumes is

N_v = ∏ n_j, j = 1 to p (7)

The number of volumes indicates the number of scout bees going out for exploration. One point inside each volume is chosen as the starting point for the optimisation, which in our approach is the midpoint of that volume, for the reasons discussed above. The midpoint of the total cluster can be calculated as follows:

[(W_1i + W_1f)/2, (W_2i + W_2f)/2, ..., (W_pi + W_pf)/2] (8)

For an objective function having one independent parameter, the complete domain is given by a single axis, represented as h1. Here each step gives one volume. Let us take the following values:

p = 1, W_1i = 1, W_1f = 6, S_1 = 1

Therefore n_1 = 5 and N_v = 5. Thus, five bees are sent for exploration. The starting point for each bee is the midpoint of each step, as shown in Figure 5.

Figure 5 Domain of the objective function with one independent parameter

For an objective function having two independent parameters, the complete domain is given by a set of two axes, represented as h1 and h2. Let us take the following values:

p = 2, W_1i = 1, W_1f = 5, S_1 = 1 and W_2i = 1, W_2f = 5, S_2 = 1

Therefore n_1 = 4, n_2 = 4 and N_v = 16. Thus, 16 bees are sent for exploration, as shown in Figure 6. The starting point of each bee is the midpoint of each volume, which is a two-dimensional rectangle in this case.

Figure 6 Domain of the objective function with two independent parameters

For an objective function with three independent parameters, the complete domain is given by a set of three axes, represented as h1, h2 and h3. Let us take the following values:

p = 3, W_1i = 1, W_1f = 5, S_1 = 1, W_2i = 1, W_2f = 4, S_2 = 1 and W_3i = 1, W_3f = 4, S_3 = 1

Therefore n_1 = 4, n_2 = 3, n_3 = 3 and N_v = 36. Thus, 36 bees are sent for exploration. The starting point for each bee is the midpoint of the corresponding volume, which is a three-dimensional cuboid in this case, as shown in Figure 7.

Figure 7 Domain of the objective function with three independent parameters

Objective functions with more than three independent parameters can be solved in a similar manner.
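Equations (6)–(8) amount to a short routine that returns one starting point per volume. The function name and data layout below are assumptions for illustration; the numeric examples reproduce the one-, two- and three-parameter cases in the text.

```python
from itertools import product

def lattice_start_points(ranges, steps):
    """Divide the search domain into volumes and return their midpoints.

    ranges: list of (W_ji, W_jf) pairs, one per parameter;
    steps:  list of step sizes S_j.
    Returns the midpoint of every volume, i.e., the starting point
    handed to each scout agent.
    """
    axes = []
    for (w_i, w_f), s in zip(ranges, steps):
        n_j = round((w_f - w_i) / s)                 # eq (6)
        # midpoints of the n_j steps along this axis
        axes.append([w_i + s * (k + 0.5) for k in range(n_j)])
    # Cartesian product over the p axes: N_v points in total, eq (7)
    return list(product(*axes))

# one-parameter example from the text: five volumes, midpoints 1.5 .. 5.5
starts = lattice_start_points([(1, 6)], [1])
```

The two- and three-parameter examples give 16 and 36 starting points respectively, matching N_v in the text.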

4.3 Bee swarms based decision process

Honey bee swarms have a highly distributed decision-making process, which they use to find their next hive or a new source of food. A few hundred bees out of thousands work as 'scout bees' and start the search for the next possible site. Upon finding a site, a scout informs the other bees by a 'waggle dance' (Seeley et al., 1991, 2006). Discovered nest sites of sufficient quality are reported on the cluster via the scouts' waggle dance. Depending on the waggle dance of the scout bees, quiescent bees are activated and decide to recruit for or explore a nest site. If an uncommitted bee is not satisfied with any of the scouted sites, she can go out and explore new sites. When a bee advertises a site more than once, on each subsequent turn she decreases the strength of her dance by about 15 dance circuits. Once the quorum threshold is reached for any one of the sites, the bees start 'piping signals' that elicit heating by the quiescent bees in preparation for flight. Bee swarms use two methods to decide on the best nest site: 'consensus' and 'quorum' (Seeley et al., 1991, 2006). In consensus, widespread agreement among the group is taken into account, whereas in quorum the decision for the best site happens when a site crosses the quorum (threshold) value. In the present paper, the consensus algorithm is used for finding the optimum solution, i.e., the best food site.

4.3.1 Waggle dance

A bee, after returning from its search, performs a waggle dance to inform the other bees about the quality of a site or food source. In the proposed algorithm, the agents, after finding their individual optimal solutions, pass them to a centralised system that chooses the preferable solution from those searched. For minimisation, it selects the best optimal solution, which can be stated mathematically as

W_d = min_i (f_i(X)) (9)

where f_i(X) represents the different search values obtained by the agents. Each of these points is recorded in a table known as the optimum vector table. X is a vector containing p elements, holding the values of the parameters at that point, so both the optimal solution value and the corresponding variable values are recorded. This record is analogous to the 'personal best', i.e., pbest, in PSO. The function used changes according to the requirement of the objective function: if the objective function is to be minimised the min function is used, and if it is to be maximised the algorithm switches over to the max function.

4.3.2 Consensus

As in bee swarms, the algorithm uses the consensus method to decide the best searched value. The authors mimic this behaviour by comparing the results obtained. Once the exploration and the waggle dance (transmission of data) are finished, the global optimised point is chosen by comparing the fitness values of all the optimised points in the optimum vector table, i.e., the 'global best', gbest, as in PSO. For minimisation problems, the point with the lowest fitness value is selected as the global optimised point. The global optimised point X_G is found by

f(X_G) = min [f(X_1), f(X_2), ..., f(X_N)] (10)
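The waggle-dance table and the consensus selection of equations (9) and (10) reduce to a single pass over the agents' reports. This is a minimal sketch; the data layout (a list of (position, fitness) pairs) is an assumption.

```python
def consensus_best(agent_optima, minimise=True):
    """Consensus step, equations (9) and (10).

    agent_optima: list of (X, f(X)) pairs reported by the agents
    via the 'waggle dance'. Returns the entry with the best fitness.
    """
    pick = min if minimise else max
    return pick(agent_optima, key=lambda entry: entry[1])
```

For a maximisation objective the same reduction is used with max in place of min, as described above.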

5 Simulation and experimental results

The proposed algorithm has been tested on a number of standard optimisation problems of different classes, and the results show the robustness and advantage of the proposed algorithm over other algorithms. The benchmark functions in this section provide a balance of multi-modal functions with many local minima and functions with only a few local minima, as well as easy and difficult functions. During the experiments, the optimal solution for each objective function was found using GA, common PSO, orthogonal PSO (OPSO) and HMAPSO, and the process was repeated ten times. During the PSO experiments, the velocity divisor was set to 2 and the number of particles sent hunting was taken as 25. In the case of the genetic algorithm, the population size was selected as 1,000 with a double-vector type. The creation function used is uniform, with a crossover fraction of 0.8. The stopping criterion for the GA is its population size; other factors such as the time limit, fitness limit, stall generations and stall time limit are set to infinity.

The uniqueness of the solution is the novel characteristic of the proposed algorithm. All other evolutionary algorithms generate different solutions on different runs, whereas HMAPSO generates a unique optimum solution, as the randomness has been removed from the algorithm. The same is demonstrated in the experimental results shown in Figures 8–15. The processing time taken by GA, common PSO, OPSO and HMAPSO to reach the best solutions is shown in Figure 16. All the functions have been tested on both unconstrained and constrained benchmark problems.

Figure 8 Comparisons of various algorithms using optimal value curves for Ackley function

Figure 9 Comparisons of various algorithms using optimal value curves for Alpine function

Figure 10 Comparisons of various algorithms using optimal value curves for Foxhole function

Figure 11 Comparisons of various algorithms using optimal value curves for function

Figure 12 Comparisons of various algorithms using optimal value curves for Rastrigin function

Figure 13 Comparisons of various algorithms using optimal value curves for Rosenbrock banana function

Figure 14 Comparisons of various algorithms using optimal value curves for Shekel function


Figure 15 Comparisons of various algorithms using optimal value curves for Tripod

Figure 16 Comparisons of the time taken to get the optimal solution by the various algorithms for functions f2, …, f8 (see online version for colours)

5.1 Unconstrained objective function

The algorithm is tested for minimisation of ‘Rosenbrock banana function’, which is given by equation (11)

f = 100 (x_2 − x_1^2)^2 + (1 − x_1)^2    (11)

The actual minimum of the banana function occurs at (1, 1) with value 0. The optimal value found using HMAPSO is 3.382027527323477e-023 with coordinate as (0.99999999999, 0.99999999999) as stated in Table 1.
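Equation (11) and its minimum at (1, 1) are easy to verify directly; a small check in Python:

```python
def rosenbrock(x1, x2):
    """Rosenbrock 'banana' function, equation (11):
    f = 100*(x2 - x1**2)**2 + (1 - x1)**2, minimum 0 at (1, 1)."""
    return 100.0 * (x2 - x1 ** 2) ** 2 + (1.0 - x1) ** 2

print(rosenbrock(1.0, 1.0))  # -> 0.0
print(rosenbrock(0.9999999999, 0.9999999999))  # tiny positive value near the minimum
```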

Table 1 Performance comparison of various algorithms

Algorithm   Worst value   Best value   Average of ten runs   Best optimal point (x1, x2)
GA          8.96e-03      9.06e-03     9.463e-03             (1, 1)
PSO         4.86e-07      6.34e-11     5.91e-08              (1, 1)
OPSO        5.42e-13      2.73e-19     9.21e-14              (1, 1)
HMAPSO      3.38e-23      3.38e-23     3.38e-23              (0.9999999999, 0.9999999999)

HMAPSO gives a more accurate result than the other approaches. Table 1 shows the experimental results obtained from the proposed algorithm; x1 and x2 refer to the optimum points. To demonstrate the performance and correctness of HMAPSO, it is additionally compared with the popular GA, PSO and OPSO. The average, best and worst performance of GA, PSO, OPSO and HMAPSO over ten independent runs are listed in Table 1. The experimental results reveal that HMAPSO performs well and is significantly better than the other algorithms on the unconstrained objective function in a limited amount of computation time.

5.2 Constrained objective function

We have used a number of benchmark test functions to evaluate the performance of the proposed optimisation algorithm. The benchmark functions used include Alpine, Ackley, Foxhole, Rastrigin, Tripod and others, as defined in Table 2. Each problem is solved using the PSO, GA and HMAPSO methods and the results are compared.
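Several of these benchmarks have standard textbook forms. As a hedged illustration (the paper's exact parameterisations and search ranges are those of Table 2, not reproduced here), the Ackley, Rastrigin and Alpine functions can be written as:

```python
import math

def ackley(x):
    """Standard Ackley function; global minimum 0 at the origin
    (often reported as about -8.88e-16 in floating point)."""
    n = len(x)
    s1 = sum(z * z for z in x) / n
    s2 = sum(math.cos(2 * math.pi * z) for z in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

def rastrigin(x):
    """Standard Rastrigin function; global minimum 0 at the origin."""
    return 10 * len(x) + sum(z * z - 10 * math.cos(2 * math.pi * z) for z in x)

def alpine(x):
    """Alpine No. 1 function; global minimum 0 at the origin."""
    return sum(abs(z * math.sin(z) + 0.1 * z) for z in x)
```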

All functions are multi-dimensional multimodal functions. For each experiment, the simulation records the mean value, worst value, best value and optimal point obtained for each parameter. Each experiment is run ten times. The objective for functions f7 and f8 is to find the global maximum in the given range of parameters, whereas the other functions are used to find the optimal minimum. Table 3 presents the comparison results for the different test functions. Depending on the range of the parameters, the number of agents and the step size are decided. The step size is set to 1 for the Ackley, Foxhole and Tripod functions. For functions such as Alpine, Rastrigin and Shekel, where the range of the variables is smaller, the step size is set to 0.5. It is worth mentioning that HMAPSO improves significantly in terms of uniqueness and optimality of the solution for most of the functions, especially the Ackley, Foxhole and Rastrigin functions. For the Alpine, Tripod and Shekel functions, the hybrid PSO provides a robust optimal solution that is better than those of the other optimisation methods.
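The deterministic, equally spaced lattice initialisation with a chosen step size can be sketched as follows; `lattice_points` and its arguments are illustrative names, and the paper's exact lattice construction may differ:

```python
def lattice_points(lo, hi, step):
    """Place agents on an equally spaced 2-D lattice over [lo, hi] x [lo, hi],
    as in the deterministic initialisation (sketch only)."""
    def axis(a, b, h):
        # march from a to b in steps of h; tolerance guards float drift
        pts, x = [], a
        while x <= b + 1e-12:
            pts.append(round(x, 10))
            x += h
        return pts
    xs = axis(lo, hi, step)
    return [(x, y) for x in xs for y in xs]

# Step size 1 over [-2, 2] gives a 5 x 5 lattice of 25 agents
grid = lattice_points(-2.0, 2.0, 1.0)
print(len(grid))  # -> 25
```

A smaller step size (e.g. 0.5, as used for the Alpine, Rastrigin and Shekel functions) simply produces a denser lattice over the same range.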

The processing times to get the optimal solution have been recorded and compared. The number of agents/population is kept the same for all four optimisation algorithms. It is seen that if the population is chosen too small in the case of GA, the algorithm is not able to generate good solutions. The experiments have been run for many iterations, and the best results are recorded and shown in Figure 16. It can be concluded that the time taken by HMAPSO is again less than that of its counterparts. The difference in processing time is not large, as seen in Figure 16, but GA, PSO and OPSO very rarely generate the best solution in the first iteration. So, following the current practice of taking ten solutions and reporting their average as the optimal value, the difference becomes substantial, as HMAPSO generates an almost unique solution in every iteration.


Table 2 Benchmark functions


Table 3 Results for the benchmark functions

       GA                                  PSO
       Worst      Best       Avg           Worst      Best       Avg
f2     1.03e-4    6.234e-4   4.53e-4       4.95e-15   0          2.964e-15
f3     1.77e-4    1.48e-03   7.03e-04      3.11e-14   –8.88e-16  7.70e-15
f4     0.0128     3.704e-6   0.0071        0.0074     3.41e-13   0.0025
f5     0.0016     6.584e-5   2.78e-4       0          0          0
f6     1.0540     0.0419     0.7171        2          4.90e-16   0.3
f7     10.654     11.0298    11.0221       10.097     10.99      10.895
f8     3.674      3.7005     3.69664       3.2691     3.6897     3.5292

       OPSO                                HMAPSO
       Worst      Best       Avg           Worst      Best       Avg
f2     2.08e-17   0          3.70e-18      7.22e-17   7.22e-17   7.22e-17
f3     –8.88e-16  –8.88e-16  –8.88e-16     –8.88e-16  –8.88e-16  –8.88e-16
f4     0.0074     0          0.001479      0          0          0
f5     0          0          0             0          0          0
f6     2          8.57e-21   0.7           0          0          0
f7     10.385     11.016     11.014        11.031     11.031     11.031
f8     3.0898     3.70047    3.5432        3.7005     3.7005     3.7005

6 Conclusions

The paper first analysed the standard PSO algorithm and discussed its known problems of inconsistency in the solution and premature convergence. A new optimisation algorithm, HMAPSO, has been proposed and described. The upshot of the proposed algorithm is that it generates better optimal solutions than its counterparts. The algorithm is based on a MAS combined with the natural group decision method used by bee swarms to choose the next nest site. The proposed algorithm performs well on different objective functions with any number of parameters, on the LPT equation, and also on unconstrained optimisation problems. Experimental results prove the robustness and accuracy of HMAPSO over the genetic algorithm and other PSO models. The results also show that HMAPSO removes the randomness from the algorithm and improves global optimisation performance significantly. The consistency of the HMAPSO solution makes it a better option for real-time and online optimisation problems.

References

Blum, C. (2007) ‘Ant colony optimization: introduction and hybridizations’, 7th International Conference on Hybrid Intelligent Systems, pp.24–29.

Chong, E.K.P. and Stanislaw, H.Z. (2004) An Introduction to Optimization, 2nd ed., Wiley-Interscience Publication.

Deb, K. (2002) Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley and Sons.

Dorigo, M., Birattari, M. and Stutzle, T. (2006) ‘Ant colony optimization’, Computational Intelligence Magazine, IEEE, Vol. 1, No. 4, pp.28–39.

Dorigo, M., Gambardella, L., Middendorf, M. and Stutzle, T. (2002) ‘Guest editorial: special section on ant colony optimization’, IEEE Transactions on Evolutionary Computation, Vol. 6, No. 4, pp.317–319.

Dorigo, M., Maniezzo, V. and Colorni, A. (1996) ‘Ant system: optimization by a colony of cooperating agents’, IEEE Transaction on Systems, Man and Cybernetics, Part B, Vol. 26, No. 1, pp.29–41.

Eberhart, R.C. and Kennedy, J. (1995) ‘Particles swarm optimization’, Proc. IEEE International Conference on Neural Network, Perth, Australia.

Goldberg, D.E. (2005) Genetic Algorithms in Search, Optimization and Machine Learning, 9th ed., Pearson Education.

Ho, S-Y., et al. (2008) ‘OPSO: orthogonal particle swarm optimization and its application to task assignment problems’, IEEE Trans. Syst. Man, Cybern. A, Vol. 38, No. 2, pp.288–298.

Jiao, W. and Shi, Z. (1999) ‘A dynamic architecture for multi-agent systems’, Technology of Object-oriented Languages and Systems, TOOLS, pp.253–260.

Lei, W., Qi, K., Hui, X. and Qidi, W. (2005) ‘A modified adaptive particle swarm optimization algorithm’, IEEE International Conference on Industrial Technology, ICIT, pp.209–214.

Nelder, J.A. and Mead, R. (1965) ‘A simplex method for function minimization’, Computer Journal, Vol. 7, pp.308–313.

Passino, K.M. and Seeley, T.D. (2006) ‘Modeling and analysis of nest-site selection by honey bee swarms: the speed and accuracy trade-off’, Behavioral Ecology and Sociobiology, Vol. 59, No. 3, pp.427–442.

Passino, K.M., Seeley, T.D. and Visscher, P.K. (2008) ‘Swarm cognition in honey bees’, Behavioral Ecology and Sociobiology, Vol. 62, No. 3.

Seeley, T., Camazine, S. and Sneyd, J. (1991) ‘Collective decision-making in honey bees: how colonies choose among nectar sources’, Behavioral Ecology and Sociobiology, Vol. 28, pp.277–290.


Seeley, T., Visscher, P.K. and Passino, K.M. (2006) ‘Group decision making in honey bee swarms’, American Scientist, Vol. 94, No. 3, pp.220–229.

Shi, Z., Zhang, H., Dong, M., Zhao, Z., Sheng, Q., Jiang, Y. and Cheng, Y. (2003) ‘MAGE: multi-agent environment’, International Conference on Computer Networks and Mobile Computing, ICCNMC, pp.181–188.

Teodorovic, D., Lucic, P., Markovic, G. and Dell Orco, M. (2006) ‘Bee colony optimization: principles and appliances’, 8th Seminar on Neural Network Applications in Electrical Engineering, pp.151–156.

Wooldridge, M. (2002) An Introduction to Multi-agent System, Wiley, New York.

Yanfei, L. and Passino, K.M. (2002) ‘Biomimicry of social foraging behavior for distributed optimization: models, principles and emergent behaviors’, Journal of Optimization Theory and Applications, Vol. 115, No. 3, pp.603–628.

Zhong, W.Z., et al. (2004) ‘A multiagent genetic algorithm for global numerical optimization’, IEEE Trans. Syst. Man, Cybern. B, Vol. 34, pp.1128–1141.