
Research Article

A New Initialization Approach in Particle Swarm Optimization for Global Optimization Problems

Waqas Haider Bangyal,1 Abdul Hameed,2 Wael Alosaimi,3 and Hashem Alyami4

1 Dept. of Computer Science, University of Gujrat, Gujrat, Pakistan
2 Dept. of Computer Science, Iqra University, Islamabad, Pakistan
3 Department of Information Technology, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
4 Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia

Correspondence should be addressed to Abdul Hameed; hameed@iqraisb.edu.pk

Received 22 October 2020; Revised 2 April 2021; Accepted 29 April 2021; Published 18 May 2021

Academic Editor: António Dourado

Copyright © 2021 Waqas Haider Bangyal et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Particle swarm optimization (PSO) is a population-based intelligent stochastic search technique inspired by the intrinsic manner in which swarms search for food. PSO is widely used to solve diverse optimization problems. Initialization of the population is a critical factor in the PSO algorithm, as it considerably influences diversity and convergence during the PSO process. Quasirandom sequences are more useful for initializing the population to improve diversity and convergence than a plain random distribution. In this paper, the performance of PSO is expanded to make it appropriate for the optimization problem by introducing a new initialization technique named WELL with the help of a low-discrepancy sequence. The proposed solution, termed WE-PSO, targets optimization problems in large-dimensional search spaces. The suggested solution has been verified on fifteen well-known unimodal and multimodal benchmark test problems extensively used in the literature. Moreover, the performance of WE-PSO is compared with the standard PSO and two other initialization approaches, Sobol-based PSO (SO-PSO) and Halton-based PSO (H-PSO). The findings indicate that WE-PSO is better than the standard multimodal problem-solving techniques, and the results validate the efficacy and effectiveness of our approach. In addition, the proposed approach is applied to artificial neural network (ANN) learning and contrasted with the standard backpropagation algorithm, standard PSO, H-PSO, and SO-PSO, respectively. Our technique achieves a higher accuracy score and outperforms traditional methods. Also, the outcome of our work gives insight into how strongly the proposed initialization technique affects the quality of the cost function, integration, and diversity aspects.

1. Introduction

Optimization has been considered a highly productive field of research for many decades. Advanced optimization algorithms are required as the problems of the real world evolve towards ever greater complexity. The key purpose is to obtain the fitness function's optimum value [1]. Classification is an attempt to identify groups of certain categories of data. Moreover, the training data have many features that play a significant role in segregating the knowledge according to the classes' prearranged categories. Globally, massive growth is recognized in various data classification applications, such as organic compound analysis, television audience share prediction, automatic abstraction, credit card fraud detection, financial projection, targeted marketing, and medical diagnosis [2]. In evolutionary computation, data classification builds its model based on the genetic process and natural evolution [3]. These techniques are adaptive and robust, performing global exploration over candidate solutions for the extraction of information from large datasets.

The fundamental domain of artificial intelligence is swarm intelligence (SI), which covers developmental methods that govern multiagent systems through a systemic architecture and are influenced by the behaviour of social insects such as ants, wasps, bees, and termites. They are also


encouraged by other social animal colonies, such as bird flocking or fish schooling [4]. In the research of cellular robotic systems, the term SI was first defined by Beni and Wang [5]. Researchers have been associated with social insect communities for decades, but for a long time they had not established the composition of their collective behaviour. Moreover, society's autonomous agent is regarded as a nonsophisticated individual, yet it can deal with complicated issues: complex tasks are accomplished effectively through the association of the single members of a society, as this strengthens the capacity to perform actions. In the field of optimization, different techniques of swarm intelligence are used.

Particle swarm optimization (PSO) is considered the most efficient population-based stochastic algorithm, suggested by Kennedy and Eberhart in 1995 [6] and employed to deal with global optimization problems. It has become the most successful technique for solving the optimization problems found in diversified domains of engineering, due to its simplicity and effectiveness. PSO maintains a population of candidate solutions, known as the swarm, which investigates new search spaces by imitating the movement of a "flock of birds" while seeking food. The individuals, known as particles, communicate information among each other, and every individual is lodged with the findings of the rest of the swarm. Each individual follows two essential rules in its search: return to its own old best point, and track the best location found by the swarm. With the advent of PSO, new methods were also encouraged to face global optimization problems, in terms of solutions for fuzzy systems, artificial neural network (ANN) design, and evolutionary computing. ANN design [7] and function minimization [8] are the most promising applications of evolutionary computing for solving complex optimization problems. PSO and evolutionary algorithms (EAs) have been efficiently used to tune the learning parameters, weight factors, and design of artificial neural networks [9, 10].

In the field of swarm evolutionary computing, the performance of PSO and other EAs is affected by the generation of random numbers during the initialization of the population into the multidimensional search space. PSO tends to achieve maximum performance when executed in a low-dimensional search space. Therefore, performance is expected to be low when the dimensionality of the problem is too high, which causes the particles to stick in a local solution [1, 11, 12]. Persistence of the aforesaid behaviour becomes intolerable for a variety of real-life applications that contain many local and global minima. This immature performance is explained by an inadequate population distribution of the swarm: it implies that optimum solutions are more difficult to find if the particles do not accurately cover the entire search space, which could omit the global optimum [13–15]. This issue can be resolved by introducing a well-organized random distribution to initialize the swarm. These distributions can vary in structural design depending upon their family; examples include pseudorandom sequences, probability sequences, and quasirandom sequences.

One of the classical ways of generating random numbers is via an inbuilt library (implemented in most programming languages, e.g., C or C++). The numbers are allocated uniformly by this inbuilt library. Research has proved that this technique is not useful for the uniform generation of random numbers and does not attain the lowest discrepancy [16]. Also, pseudorandom sequences of normal distributions have reported better results compared to randomly distributed sequences [17]. Depending on the design of the problem, the output of probability sequences, quasirandom sequences, and pseudorandom sequences varies. Due to the variance in the generation of random numbers, pseudorandom sequences are better than quasirandom sequences for globally optimal solutions.
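For illustration, the following is a minimal C++ sketch of this classical inbuilt-library approach, using the modern <random> header rather than the older rand(); the function name and the swarmSize and dim parameters are hypothetical, not taken from the paper.

#include <random>
#include <vector>

// Minimal sketch: classical uniform initialization of a swarm in [xMin, xMax].
std::vector<std::vector<double>> initUniform(int swarmSize, int dim,
                                             double xMin, double xMax) {
    std::mt19937 gen(std::random_device{}());              // library pseudorandom engine
    std::uniform_real_distribution<double> dist(xMin, xMax);
    std::vector<std::vector<double>> swarm(swarmSize, std::vector<double>(dim));
    for (auto& particle : swarm)
        for (auto& coordinate : particle)
            coordinate = dist(gen);                        // independent uniform draws
    return swarm;
}

Such draws are uniform in distribution but carry no low-discrepancy guarantee, which is exactly the weakness the cited studies point out.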

At this point, after a brief analysis of genetic algorithms, evolutionary algorithms, and PSO, we can infer that an insufficient amount of research has been performed on implementing pseudorandom sequences for population initialization. In light of this fact, to initialize the particles in the search space, we have proposed a novel pseudorandom initialization strategy called the WELL generator, whose name stands for Well Equidistributed Long-period Linear. We have compared the novel technique with the basic random distribution and with low-discrepancy sequence families, such as the Sobol and Halton sequences, on several complex unimodal and multimodal benchmark functions. The experimental findings have shown that WELL-based PSO initialization (WE-PSO) exceeds the traditional PSO, PSO with Sobol-based initialization (SO-PSO), and PSO with Halton-based initialization (H-PSO) algorithms. Moreover, we have conducted ANN training on real-world classification problems with quasirandom sequences. To compare the classifier's output, nine datasets were taken from the famous UCI repository. The results demonstrate that WE-PSO offered better results on real-world dynamic classification problems compared to PSO, SO-PSO, and H-PSO, respectively.

The remainder of the paper is structured as follows. In Section 2, related work is discussed. The standard PSO is described in Section 3, and the training of neural networks is covered in Section 4. Sections 5–8 review the random number generator and the Sobol, Halton, and WELL sequences. The proposed technique is described in Section 9. In Sections 10 and 11, the findings are explained. Discussion, conclusion, and potential future work are described in Section 12.

2. Related Work

2.1. Modified Initialization Approaches. Researchers have adopted various random number generators, i.e., pseudorandom, quasirandom, and probability sequences, to refine the efficiency of population-based evolutionary algorithms. The concept of using a random number generator to initialize a swarm in a multidimensional search space is not new. A comparison of low-discrepancy sequences with the simple uniform distribution was carried out by the authors in [18] to assign the initial positions to particles in the search region. The study in [18] covers only the role of benchmark minimization functions in verifying the performance of different low-discrepancy sequence versions. Similarly, Kimura and


Matsumura [19] optimized a genetic algorithm using an improved PSO variant that initializes the swarm based on the Halton sequence. The Halton series falls under the umbrella of low-discrepancy sequences. The authors of [20] generated a comprehensive comparison of the Faure, Sobol, and Halton sequences, and after evaluating the competitive outcomes, they declared the Sobol sequence the winner among them.

The van der Corput sequence, associated with the quasirandom family, was first employed in [21]. The van der Corput sequences were generated with the initial parameters d = 1 and b = 2, where d represents the problem dimensions and b is the base. The experimental results showed that, for difficult multidimensional optimization problems, the van der Corput sequence-based PSO outperforms the other quasirandom sequences, such as the Faure sequence, Sobol sequence, and Halton sequence, respectively, although Halton-based PSO and Faure-based PSO gave better performance when the optimization problem was low in dimensionality. Moreover, many researchers have used probability distributions to tune the different parameters of evolutionary algorithms. The family of probability sequences includes the Gaussian distribution, Cauchy distribution, beta distribution, and exponential distribution, respectively. The authors in [22] tuned the PSO parameters using random sequences that followed an exponential distribution. Also, a detailed comparison of probability distributions is present in [23]. The experimental results revealed that the PSO based on the exponential distribution performed well compared to the PSO based on the Gaussian distribution and the PSO based on the beta distribution.

Similarly, the researchers applied a torus distribution [24] to initialize the improved Bat algorithm (I-BA). Torus-based initialization enhanced the diversity of the swarm and showed better performance. In [2], the readers can find the source for applying several variations of the probabilistic, quasirandom, and uniform distributions in BA.

There are also other independent statistical methods to produce random numbers, apart from the probability distribution, pseudorandom distribution, and quasirandom distribution, used by various researchers to select the initial location of particles in a multidimensional search space. The nonlinear simplex method (NSM) is an initialization method proposed by Parsopoulos and Vrahatis in [25]. Initialization based on centroidal Voronoi tessellations (CVTs) was suggested by Richards and Ventura in [26]. The search region is divided into several blocks for the CVT process. In the first division of blocks, each particle gets a spot. The remaining particles, which have not been allocated a block yet, are further separated into subblocks. To allocate a block to a particle, the CVT generator uses different permutations every time. A distance function is determined to disperse particles into blocks, and the less distant particles first reserve the entire block in the swarm. The initialization approach based on the CVT method was compared with the simple random distribution, and the numerical results illustrated that PSO based on CVT was much better for the initialization of the population.

A new technique called opposition-based initialization (O-PSO), inspired by opposition-based learning of particles, was suggested by the authors in [27]. Certain particles took their positions in the opposite direction of the search space, and O-PSO contributed to increasing the probability of having a global optimum at the beginning. By discovering the search field in the opposite direction in parallel with the same direction, O-PSO enhanced the diversity of particles. Since good behaviour and poor behaviour are both experienced in the human world, it is not possible for entities to be entirely good and bad at the same time. This natural phenomenon governs how O-PSO chooses the initial positions of the particles in the opposite direction as well as in the same direction; within this theory, the entire swarm is symbolized by the same and opposite particles. The experimental results revealed that the proposed O-PSO performed better on several multidimensional complex benchmark functions than the simple PSO, which implements the uniform distribution for initializing the particles. Gutierrez et al. [28] conducted research on three distinct PSO initialization methods: opposition-based initialization, orthogonal array initialization, and chaotic initialization.

2.2. Artificial Neural Network Training Using PSO. Processing real-world problems with the various initialization strategies using the ANN classifier has a strong effect on the performance of evolutionary algorithms. The classifier with the prearranged initialization techniques was shown to have better precision than the one using the random distribution.

In [4, 5], optimization of the hidden layer in the neural network was performed. For the optimization process, the author manipulated the uniform distribution-based initialization of feedforward neural networks. Subasi in [29] classified EMG signals using the uniform random distribution-based PSO along with SVM to diagnose neuromuscular disorders. Similarly, the improved swarm optimized functional link artificial neural network (ISO-FLANN) was proposed by Dehuri in [30], using random number initialization following the uniform distribution. The Optimal Latin Hypercube Design (OLHD) initialization approach was proposed by the authors in [31] and evaluated on several data mining problems against the other quasirandom sequences, such as the Faure, Halton, and Sobol sequences. The proposed OLHD was better than the quasirandom sequences in terms of efficiency measures.

In [32], the authors introduced the training of a NN with particle swarm optimization (NN-PSO) for anticipating structural failure in reinforced concrete (RC) buildings. The weight vectors for the NN were calculated by incorporating PSO on the basis of the minimum root mean square error. The introduced NN-PSO classifier was sufficient to handle structural failure in RC buildings. Xue et al. [33] presented a new strategy for the feedforward neural network (FNN) classifier in which a self-adaptive parameter and strategy-based PSO (SPS-PSO) was integrated to reduce the dimensions of large-scale optimization problems. A new


algorithm using PSO was proposed in [34], termed psoCNN, which can spontaneously finalize the most appropriate architecture of deep convolutional neural networks (CNNs) for the classification of images. A novel NN-based training algorithm incorporating PSO, called LPSONS, is proposed in [35]. In the LPSONS algorithm, the velocity parameter of PSO was embedded with the Mantegna Lévy flight distribution for improved diversity. Additionally, the proposed algorithm is used to train feedforward multilayered perceptron ANNs. In [36], PSO was used for feature engineering of diabetic retinopathy, after which the NN classifier was applied for the classification of diabetic retinopathy disease.

After conducting a thorough literature review, we can infer that particle efficiency and convergence velocity are highly dependent on the swarm initialization process. If the particles cover the entire search space with a proper pattern, there are more chances that the global optimum will be found at an early stage of PSO.

3. Particle Swarm Optimization

PSO is a global optimization technique that plays an important role in the fields of applied technology and has been widely deployed in numerous engineering applications, such as the preparation of heating systems, data mining, power allocation in cooperative communication networks, pattern recognition, machine learning, route-selection optimization, and information security, to name a few. PSO works on a population of candidates. For an optimization problem, each candidate, designated as a particle, represents a potential solution. The current location of a particle is defined in the n-dimensional search space and is represented by the solution vector x. Each solution is evaluated in the form of a fitness score carried by its particle. In the n-dimensional search space at the kth iteration, the position vector x can be calculated by evaluating each particle p. The velocity vector v can be defined as the motion of the particle, giving the step size of the entire swarm in the search space, as distinct from the position vector.

PSO begins with a population consisting of n particles that fly through the d-dimensional search space at iteration k to look for the optimal solution. Swarm mutation can transform the objective feature into the desired candidate solution. For updating the position and velocity of the particles, the following two equations are used:

v_{z+1} = v_z + c_1 (pbest_z - x_z) + c_2 (gbest_z - x_z),   (1)

x_{z+1} = x_z + v_{z+1}.   (2)

In the above equations, x_z and v_z are the position vector and velocity vector, respectively; pbest_z shows the local best solution of the particle, acquired using its own previous experience, and gbest_z reflects the global best solution, acquired using the N-dimension experience of its neighbours. Here c_1 -> c_1 r_1 and c_2 -> c_2 r_2, where c_1 and c_2 are the acceleration factors that influence the acceleration weights, and r_1 and r_2 are two random numbers produced by the random number generator. x_{z+1} is the updated position vector that guides the particle to its new point at the kth iteration, and v_{z+1} is the newly updated velocity. It is possible to derive three different factors from equation (1). The momentum factor, v_z, represents the old velocity. The cognitive factor, c_1 (pbest_z - x_z), gives the local best fitness taken from all the previous fitnesses. The social factor, c_2 (gbest_z - x_z), provides the best global solution amplified by the intact neighbour particles. The pseudocode of the fundamental PSO is presented in Algorithm 1.
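As a concrete reading of equations (1) and (2), the C++ sketch below updates one particle; it is ours, not the paper's code, and the r_1, r_2 draws are written out explicitly since the text above folds them into c_1 and c_2.

#include <cstddef>
#include <random>
#include <vector>

// Sketch of equations (1) and (2) for one particle. r1 and r2 are drawn
// per dimension; c1 and c2 are the acceleration factors from the text.
void updateParticle(std::vector<double>& x, std::vector<double>& v,
                    const std::vector<double>& pBest,
                    const std::vector<double>& gBest,
                    double c1, double c2, std::mt19937& gen) {
    std::uniform_real_distribution<double> r(0.0, 1.0);
    for (std::size_t d = 0; d < x.size(); ++d) {
        double r1 = r(gen), r2 = r(gen);
        v[d] = v[d] + c1 * r1 * (pBest[d] - x[d])   // cognitive factor
                    + c2 * r2 * (gBest[d] - x[d]);  // social factor
        x[d] = x[d] + v[d];                         // equation (2)
    }
}

An inertia weight w, which the experiments later vary over [0.9, 0.4], would simply scale the old-velocity term v[d].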

4. Training of the Neural Networks

The artificial neural network (ANN) is perceived as the most effective approximation technique, used to approximate nonlinear functions and their relationships. The ANN model is capable of generalizing, learning, organizing, and adapting data. The ANN architecture is based on an interlinked series of synchronized neurons, where a multiprocessing layer is used to compute the encoding of information [37]. The ANN is a computational mathematical model that regulates the relationship between the input and output layers through different nonlinear functions [38]. In this study, we have used the feedforward neural network shown in Figure 1, which is the most frequently used and popular ANN architecture. The feedforward neural network is defined by three layers, i.e., the input layer, the sandwich (hidden) layer, and the output layer, respectively. The input layer serves as the NN gateway, where the information frame is inserted. The intermediate task of the sandwich layer is to process the data frame arriving from the input layer. The outcomes are derived from the output layer [39]. The units of both layers are connected with the serial layer nodes, and the links between the nodes are structured in the feedforward manner. A bias is a component of each unit and has a value of -1, as in [24].

For weight optimization of the NN, the position of each particle in the swarm represents a set of weights for the current epoch or iteration. The dimensionality of each particle is the number of weights associated with the network. The particle moves within the weight space, attempting to minimize the learning error (mean squared error (MSE) or sum of squared error (SSE)). In order to change the weights of the neural network, a change in position occurs that reduces the error in the current epoch. There is no backpropagation concept in PSO-NN; the feedforward NN produces the learning error (the particle fitness) based on its set of weights and biases (the PSO positions).
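To make the particle-to-weights mapping concrete, here is a hedged C++ sketch (ours, not the paper's code; the single-hidden-layer layout, trainable biases, and tanh activation are assumptions): a particle's position vector is decoded into the network's weights, and the MSE it produces is returned as the particle's fitness, with no backpropagation involved.

#include <cmath>
#include <cstddef>
#include <vector>

// Sketch: decode a particle position into the weights of a one-hidden-layer
// feedforward net and return the MSE over a dataset. Assumed layout:
// [input->hidden weights | hidden biases | hidden->output weights | output bias],
// so position.size() == nIn*nHidden + nHidden + nHidden + 1.
double mseFitness(const std::vector<double>& position,
                  const std::vector<std::vector<double>>& inputs,
                  const std::vector<double>& targets,
                  int nIn, int nHidden) {
    const double* w    = position.data();          // input->hidden weights
    const double* bH   = w + nIn * nHidden;        // hidden biases
    const double* wOut = bH + nHidden;             // hidden->output weights
    const double bOut  = wOut[nHidden];            // output bias
    double sse = 0.0;
    for (std::size_t p = 0; p < inputs.size(); ++p) {
        double out = bOut;
        for (int h = 0; h < nHidden; ++h) {
            double a = bH[h];
            for (int i = 0; i < nIn; ++i) a += w[h * nIn + i] * inputs[p][i];
            out += wOut[h] * std::tanh(a);         // nonlinear hidden activation
        }
        double e = targets[p] - out;
        sse += e * e;
    }
    return sse / inputs.size();                    // mean squared error = fitness
}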

The challenge of premature convergence is well known in the problem of weight optimization of ANNs [40, 41]. The primary objective of the ANN model is to achieve a set of optimum parameters and weights. The two major classification approaches used to segregate positive entities from negative entities are gradient descent and error correction, respectively. Gradient descent-based techniques perform poorly where the concerns are high-dimensional and the parameters depend exclusively on the structure; due to this fact, they get stuck in local minima. Backpropagation is one of the gradient descent techniques


and is most commonly used to train neural network models and solve complex multimodal problems in the real world, as mentioned in [24].

5. Random Number Generator

The built-in library function is used to construct a mesh of numbers placed randomly at uniform locations through Rand(x_min, x_max) in [42]. A continuous uniform distribution probability density function describes the effect of uniformity on any sequence. The probability density function can be characterized as given in the following equation:

f(t) = 1/(q - p)   for p < t < q,
f(t) = 0           for t < p or t > q.   (3)

where p and q represent the maximum likelihood parameters. Because the integral of f(t) dt is unaffected over any set of zero length, the value of f(t) is immaterial at the boundary points p and q. The calculation of the maximum likelihood parameters is determined by the estimated log-likelihood function, which is given in the following equation:

l(p, q | t) = -n log(q - p).   (4)
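As a reading of the step from equation (3) to equation (4), the likelihood of a sample $t_1, \dots, t_n$ drawn from this density (assuming all observations lie in $(p, q)$) factorizes as follows; this is the standard uniform-distribution calculation, not a formula quoted from the paper:

\[
L(p, q \mid t) = \prod_{i=1}^{n} \frac{1}{q - p} = (q - p)^{-n},
\qquad
\ell(p, q \mid t) = \log L(p, q \mid t) = -n \log(q - p),
\]

so the log-likelihood grows as the interval shrinks, and it is maximized at $\hat{p} = \min_i t_i$ and $\hat{q} = \max_i t_i$.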

PSO (object of particles)
(1) input: particles {p_z} with undefined locations
(2) output: particles {p_z} with best fitness score
(3) For each particle {p_1, p_2, p_3, p_4, p_5, ..., p_z}
(4)   For each dimension {d_1, d_2, d_3, d_4, d_5, ..., d_z}
      (a) Initialize x_z as x_z = Rand(x_min, x_max)
      (b) If x_z reaches a better fitness than pbest_z, replace pbest_z by x_z
      (c) Initialize v_z as v_z = Rand(x_min, x_max)
(5) Declare one global solution gbest_z from all the optimal pbest_z
(6) Repeat the process up to k_z iterations:
    (d) For each particle {p_1, p_2, p_3, p_4, p_5, ..., p_z}, update:
        Using equation (1), compute v_{z+1}
        Using equation (2), compute x_{z+1}
        If x_{z+1} > pbest_z: pbest_z = x_{z+1}
        If x_{z+1} > gbest_z: gbest_z = x_{z+1}
(7) Return particles {p_z} containing the global optimal solution

ALGORITHM 1: Standard PSO pseudocode.

Figure 1: Feedforward neural network. (The figure shows the PSO-based training loop: (1) initialize particle positions using QRS for the total number of particles; (2) train the NN using the initial particle positions; (3) compute the learning error, setting the overall best error as Gbest and each particle's best error as Pbest; (4) calculate velocities and update positions based on the Gbest and Pbest particles; (5) train the NN using the new particle positions; (6) repeat steps 4, 5, and 3 until the targeted learning error or the maximum number of iterations is met.)


6. The Sobol Sequence

The Sobol distribution was undertaken for the reconstruction of coordinates in [43]. A linear recurrence relation is included for each dimension d_z coordinate, and the binary expansion of a nonnegative instance a_z can be written as in the following equation:

a = a_1 2^0 + a_2 2^1 + a_3 2^2 + ... + a_z 2^{z-1}.   (5)

For dimension d_z, instance i can be generated using

x_i^D = i_1 v_1^D + i_2 v_2^D + ... + i_z v_z^D,   (6)

where v_1^D denotes the kth direction binary function of an instance v_i^D at dimension d_z, and v_i^D can be computed using

v_k^D = c_1 v_{k-1}^D + c_2 v_{k-2}^D + ... + c_z v_{z-1}^D + (v_{i-z}^D / 2^z),   (7)

where c_z describes the polynomial coefficients, with k > z.
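The construction in equations (5)-(7) is easiest to see in the first Sobol coordinate, where all direction numbers reduce to v_j = 2^{-j} and the sequence coincides with the base-2 radical inverse; the following C++ sketch illustrates only that special case and is not the paper's generator.

// Sketch: the first Sobol coordinate, i.e., the base-2 radical inverse.
// Each set bit j of the index i contributes the direction number v_j = 2^{-j};
// higher dimensions replace these with the recurrence of equation (7).
double sobolDim1(unsigned int i) {
    double point = 0.0, v = 0.5;          // v starts at v_1 = 2^{-1}
    for (; i != 0; i >>= 1, v *= 0.5)
        if (i & 1u) point += v;           // XOR of dyadic digits = plain sum here
    return point;                         // result lies in [0, 1)
}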

7. The Halton Sequence

In [44], the authors proposed the Halton sequence as an improved variant of the van der Corput sequence. For generating random points, Halton sequences use a coprime base. Algorithm 2 shows the pseudocode for generating the Halton sequences.

8. The WELL Sequence

Panneton et al. [45] suggested the Well Equidistributed Long-period Linear (WELL) sequence. It was initially conceived as a modified variant of the Mersenne Twister algorithm. The WELL distribution algorithm is given as Algorithm 3.

The algorithm mentioned above describes the general recurrence of the WELL distribution. Its terms are as follows: x and r are two integers with r > 0 and 0 < x < k, where k = r*w - x and w is the weight factor of the distribution; A0 to A7 are binary matrices of size r*w acting on r-bit blocks; m_x describes the bitmask that holds the first w - x bits; and t0 to t7 are temporary vector variables.

The random points drawn from the uniform, Sobol, Halton, and WELL distributions are represented in Figures 2–5 by bubble plots, in which the y-axis carries the random values and the x-axis shows the relevant index of the point concerned.

9. Methodology

The objective of this paper is to assess the quality of the proposed pseudorandom sequence. Pseudorandom sequences are much more random than quasirandom sequences. PSO is random in nature, so it does not have a specific pattern that guarantees the globally optimal solution.

Therefore, we have suggested the WELL distribution-based PSO (WE-PSO), taking advantage of the randomness in PSO. We have compared WE-PSO with the uniform distribution-based PSO and other quasirandom distribution-based PSO variants, i.e., the Sobol distribution (SO-PSO) and the Halton distribution (H-PSO), to ensure the integrity of the proposed approach. Moreover, by training on nine real-world NN problems, we have tested the proposed technique with NN classifiers. The experimental outcomes reflect an unusual improvement over standard PSO with the uniform distribution. The WE-PSO approach also outperforms the SO-PSO and H-PSO approaches, as evident in the results. Numerical results have shown that the use of the WELL distribution to initialize the swarm enhances the efficiency of population-based algorithms in evolutionary computing. The pseudocode for the proposed technique is presented in Algorithm 4.

10. Results and Discussion

The WELL-PSO (WE-PSO) technique was simulated in C++ and run on a computer with a 2.3 GHz Core 2 Duo CPU. A group of fifteen nonlinear benchmark test functions was used to compare WE-PSO with the standard PSO, SO-PSO, and H-PSO in measuring the execution of the WELL-based PSO (WE-PSO) algorithm. Normally, these functions are applied to investigate the performance of any technique; therefore, we used them to examine the optimization results of WE-PSO in our study. A list of these functions can be found in Table 1, where the dimensionality of the problem is given as D, S represents the interval of the variables, and f_min denotes the global optimum (minimum) value. For the simulation parameters, the inertia weight w is decreased over the interval [0.9, 0.4], c1 = c2 = 1.45, and the swarm size is 40. The function dimensions are D = 10, 20, and 30 for simulation, and the cumulative number of epochs is 3000. All techniques were run with the same parameters for a comparatively fair evaluation. To check the performance of each technique, all algorithms were tested over 30 runs.

10.1. Discussion. The purpose of this study is to observe the unique characteristics of the standard benchmark functions based on the dimensions of the experimental results. Three simulation tests were performed in the experiments, in which the following WE-PSO characteristics were observed:

(i) Effect of using different initializing PSO approaches
(ii) Effect of using different dimensions for problems
(iii) A comparative analysis

The objective of the first experiment was to find the most suitable initialization approach for PSO by comparing WE-PSO with other approaches such as SO-PSO, H-PSO, and standard PSO. The purpose of the second simulation is to determine the effect of the dimension on standard function optimization.


Finally, the simulation results of WE-PSO were compared with the standard PSO, SO-PSO, and H-PSO, respectively. The simulation effects are addressed in depth in the remainder of the article.

The graphical comparison of WE-PSO with PSO, H-PSO, and SO-PSO is shown in Figures 6 to 20. For WE-PSO, we can observe that the majority of the estimates have a better convergence curve. The dimensions 10,

Halton ()
input: size z and base b_cm with dimension d
output: population instances p

Fix the interval: max -> 1, min -> 0
For each iteration (k_1, k_2, k_3, ..., k_z) do
  For each particle (p_1, p_2, p_3, ..., p_z)
    max = max / b_cm
    min = min + max * (z mod b_cm)
    z = z / b_cm

ALGORITHM 2: Halton sequences.

(i) WELL ()
(ii) t0 = (m_x & v_{k,r-1}) + (m_x & v_{k,r-2})
(iii) t1 = (A0 v_{k,0}) + (A1 v_{k,m1})
(iv) t2 = (A2 v_{k,m2}) + (A3 v_{k,m3})
(v) t3 = t2 + t1
(vi) t4 = t0 A4 + t1 A5 + t2 A6 + t3 A7
(vii) v_{k+1,r-1} = v_{k,r-2} & m_x
(viii) for i = r - 2 down to 2 do v_{k+1,i} = v_{k,i-1}
(ix) v_{k+1,1} = t3
(x) v_{k+1,0} = t4
(xi) Return y_k = v_{k,0}

ALGORITHM 3: WELL sequences.
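The paper does not say which member of the WELL family was used. As a concrete stand-in, here is a C++ sketch of the small WELL512a variant, following the widely circulated public-domain implementation, together with a helper that maps a 32-bit draw into [x_min, x_max] for particle initialization; treat the choice of variant as our assumption.

#include <cstdint>

// Sketch of WELL512a (one member of the WELL family; an assumption, since
// the paper does not name its variant). State: 16 words of 32 bits.
struct Well512 {
    uint32_t state[16];   // must be seeded with nonzero random bits
    unsigned idx = 0;

    uint32_t next() {
        uint32_t a = state[idx];
        uint32_t c = state[(idx + 13) & 15];
        uint32_t b = a ^ c ^ (a << 16) ^ (c << 15);
        c = state[(idx + 9) & 15];
        c ^= (c >> 11);
        a = state[idx] = b ^ c;
        uint32_t d = a ^ ((a << 5) & 0xDA442D24u);
        idx = (idx + 15) & 15;
        a = state[idx];
        state[idx] = a ^ b ^ d ^ (a << 2) ^ (b << 18) ^ (c << 28);
        return state[idx];
    }
    // Map one draw into [xmin, xmax], e.g., for a particle coordinate.
    double uniform(double xmin, double xmax) {
        return xmin + (xmax - xmin) * (next() / 4294967296.0);
    }
};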

Figure 2: Population initialization using uniform distribution.
Figure 3: Population initialization using Sobol distribution.
Figure 4: Population initialization using Halton distribution.
Figure 5: Population initialization using WELL distribution.
(Each bubble plot shows 500 points, with the random value on the y-axis against the point index on the x-axis.)


20, and 30 of the problem are given on the x-axis, while the y-axis represents the mean best value for each dimension of the problem.

10.1.1. Effect of Using Different Initializing PSO Approaches. In this simulation, PSO is initialized with the WELL sequence (WE-PSO) instead of the uniform distribution. The variant WE-PSO is compared with the other initialization approaches, including the Sobol sequence (SO-PSO), the Halton sequence (H-PSO), and standard PSO. The experimental findings indicate that the advantage of WE-PSO is greater at higher dimensions.

10.1.2. Effect of Using Different Dimensions for Problems. The core objective of this simulation setup is to determine how the outcomes depend on the dimension of the optimization functions. Three dimensions were used for the benchmark functions in the experiments, namely, D = 10, D = 20, and D = 30. In Table 2, the simulation results are

Step 1: initialize the swarm.
Set the epoch count I = 0, population size N_z, problem dimension D_z, w_max, and w_min.
For each particle P_z:
  Step 1.1: initialize x_z as x_z = WELL(x_min, x_max)
  Step 1.2: initialize the particle velocity as v_z = Rand(x_min, x_max)
  Step 1.3: compute the fitness score f_z
  Step 1.4: set the global best position gbest_z as max(f_1, f_2, f_3, ..., f_z), where f_z is the globally optimal fitness
  Step 1.5: set the local best position pbest_z as max(f_1, f_2, f_3, ..., f_z), where f_z is the locally optimal fitness
Step 2: compare the current particle's fitness score x_z in the swarm with its old local best location pbest_z. If the current fitness score x_z is greater than pbest_z, then substitute pbest_z with x_z; else retain x_z unchanged.
Step 3: compare the current particle's fitness score x_z in the swarm with its old global best location gbest_z. If the current fitness score x_z is greater than gbest_z, then substitute gbest_z with x_z; else retain x_z unchanged.
Step 4: using equation (1), compute v_{z+1}, the updated velocity vector; using equation (2), compute x_{z+1}, the updated position vector.
Step 5: go to Step 2 if the stopping criterion is not met; else terminate.

ALGORITHM 4: Proposed PSO pseudocode.
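Step 1 of Algorithm 4 differs from standard PSO only in where the position draws come from. A minimal C++ sketch of that step, reusing the hypothetical Well512 helper from the earlier snippet, could be:

#include <vector>

// Sketch of Step 1 of Algorithm 4: positions come from the WELL generator,
// while velocities still come from an ordinary uniform generator.
std::vector<std::vector<double>> initWellSwarm(Well512& well, int swarmSize,
                                               int dim, double xMin, double xMax) {
    std::vector<std::vector<double>> positions(swarmSize, std::vector<double>(dim));
    for (auto& particle : positions)
        for (auto& coordinate : particle)
            coordinate = well.uniform(xMin, xMax);   // x_z = WELL(x_min, x_max)
    return positions;
}

The rest of the loop (fitness evaluation, pbest/gbest updates, and the equation (1)-(2) updates) is identical to the standard PSO sketched earlier.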

Table 1: Standard objective functions and their optimal values.

SR | Function name | Objective function | Search space | Optimal value
F1 | Sphere | min f(x) = Sum_{i=1}^{n} x_i^2 | -5.12 <= x_i <= 5.12 | 0
F2 | Rastrigin | min f(x) = Sum_{i=1}^{n} [x_i^2 - 10 cos(2*pi*x_i) + 10] | -5.12 <= x_i <= 5.12 | 0
F3 | Axis parallel hyper-ellipsoid | min f(x) = Sum_{i=1}^{n} i * x_i^2 | -5.12 <= x_i <= 5.12 | 0
F4 | Rotated hyper-ellipsoid | min f(x) = Sum_{i=1}^{n} (Sum_{j=1}^{i} x_j)^2 | -65.536 <= x_i <= 65.536 | 0
F5 | Moved axis parallel hyper-ellipsoid | min f(x) = Sum_{i=1}^{n} 5i * x_i^2 | -5.12 <= x_i <= 5.12 | 0
F6 | Sum of different powers | min f(x) = Sum_{i=1}^{n} |x_i|^{i+1} | -1 <= x_i <= 1 | 0
F7 | Chung Reynolds | min f(x) = (Sum_{i=1}^{n} x_i^2)^2 | -100 <= x_i <= 100 | 0
F8 | Csendes | min f(x) = Sum_{i=1}^{n} x_i^6 (2 + sin(1/x_i)) | -1 <= x_i <= 1 | 0
F9 | Schaffer | min f(x) = Sum_{i=1}^{n} [0.5 + (sin^2(sqrt(x_i^2 + x_{i+1}^2)) - 0.5) / (1 + 0.001(x_i^2 + x_{i+1}^2))^2] | -100 <= x_i <= 100 | 0
F10 | Schumer Steiglitz | min f(x) = Sum_{i=1}^{n} x_i^4 | -5.12 <= x_i <= 5.12 | 0
F11 | Schwefel | min f(x) = Sum_{i=1}^{n} x_i^alpha | -100 <= x_i <= 100 | 0
F12 | Schwefel 1.2 | min f(x) = Sum_{i=1}^{D} (Sum_{j=1}^{i} x_j)^2 | -100 <= x_i <= 100 | 0
F13 | Schwefel 2.21 | min f(x) = max_{1 < i < D} |x_i| | -100 <= x_i <= 100 | 0
F14 | Schwefel 2.22 | min f(x) = Sum_{i=1}^{D} |x_i| + Prod_{i=1}^{n} |x_i| | -100 <= x_i <= 100 | 0
F15 | Schwefel 2.23 | min f(x) = Sum_{i=1}^{n} x_i^10 | -10 <= x_i <= 10 | 0


presented. From these simulation results, it was observed that the optimization of higher-dimensional functions is more complex, which can be seen in Table 2 where the dimension size is D = 20 and D = 30.

10.1.3. A Comparative Analysis. WE-PSO is compared with the other approaches, namely, SO-PSO, H-PSO, and the standard PSO, where the true value of each technique on the same problem is provided for comparison purposes. Table 1 shows the standard benchmark functions and their parameter settings. Table 2 reveals that WE-PSO is better than the standard PSO, SO-PSO, and H-PSO at dimension D = 30 and outperforms them in convergence. The comparative analysis in Table 2 shows that the standard PSO performs well at the smaller dimension sizes (D = 10, 20), while the proposed WE-PSO performs considerably better in convergence as the dimension size increases; hence, WE-PSO is appropriate for higher dimensions. Simulation runs were carried out on an HP Compaq with an Intel Core i7-3200 configuration, with a speed of 3.8 GHz and 6 GB of RAM.

Figure 6: Mean value of function F1.
Figure 7: Mean value of function F2.
Figure 8: Mean value of function F3.
Figure 9: Mean value of function F4.
(Each plot shows the mean fitness of PSO, SO-PSO, H-PSO, and WE-PSO against the problem dimension DIM = 10, 20, 30.)


In contrast with the findings of SO-PSO, H-PSO, and traditional PSO, the experimental results in Table 2 reveal that WE-PSO surpasses the aforementioned variants of PSO. It can be observed that WE-PSO outperforms the other techniques on all functions, while the other approaches perform as follows: H-PSO performs better on functions F4, F1, and F2 for 20-D but gives overall poor results on higher dimensions, and SO-PSO gives slightly better results on functions F8, F9, and F15 at 10-D but gives the worst results on larger dimensions. Figures 6 to 20 depict that WE-PSO outperforms the other approaches in the simulation results for the dimension sizes D = 10, D = 20, and D = 30 on the standard benchmark test functions.

10.1.4. Statistical Test. To objectively verify the consistency of the findings, Student's T-test is performed. For the success of the competing algorithms, the T value is computed using the following equation:

t = (X1 - X2) / sqrt( SD1^2/(n1 - 1) + SD2^2/(n2 - 1) ).   (8)
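Equation (8) is straightforward to compute from the per-run statistics; a small C++ helper (ours, with X1/X2, SD1/SD2, and n1/n2 named after the symbols in the text) could be:

#include <cmath>

// t-value of equation (8); here a positive result favors the first sample
// (WE-PSO in the comparisons below). Assumes n1, n2 > 1.
double tValue(double X1, double X2, double SD1, double SD2, int n1, int n2) {
    return (X1 - X2) /
           std::sqrt(SD1 * SD1 / (n1 - 1) + SD2 * SD2 / (n2 - 1));
}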

Figure 10: Mean value of function F5.
Figure 11: Mean value of function F6.
Figure 12: Mean value of function F7.
Figure 13: Mean value of function F8.


The T value in the above equation can be positive or negative, where X1 and X2 reflect the mean values of the first and second samples, the sample sizes are referred to as n1 and n2, and the standard deviations of the two samples are SD1 and SD2. A positive value indicates that WE-PSO outperforms the competing approach, while a negative value indicates the opposite. Student's T-test results are presented in Table 3.

11. Experiments for Data Classification

A comparative analysis on real-world benchmark dataset problems is carried out for the training of neural networks to validate the efficiency of WE-PSO. We conducted experiments using nine benchmark datasets (Iris, Diabetes, Heart, Wine, Seed, Vertebral, Blood Tissue, Horse, and Mammography) from the world-famous UCI machine-learning repository. Training weights are initialized randomly within the interval [-50, 50]. Feedforward neural network accuracy is measured in the form of root mean squared error (RMSE). The features of the datasets used can be seen in Table 4.

11.1. Discussion. The multilayer feedforward neural network is trained with backpropagation and with the standard PSO, SO-PSO, H-PSO, and WE-PSO algorithms. The comparison of

Figure 14: Mean value of function F9.
Figure 15: Mean value of function F10.
Figure 16: Mean value of function F11.


these training approaches is tested on real classification datasets taken from the UCI repository. The cross-validation method is used to assess the efficiency of the various classification techniques. The k-fold cross-validation method is used in this paper for the training of neural networks with the standard PSO, SO-PSO, H-PSO, and the proposed algorithm WE-PSO. The k-fold method is used with the value k = 10 in the experiments. The dataset is fragmented into 10 chunks; each data chunk comprises the same proportion of each class of the dataset. One chunk is used for the testing phase, while nine chunks are used for the training phase. On the nine well-known real-world datasets taken from UCI, the experimental results of the algorithms standard PSO, SO-PSO, H-PSO, and WE-PSO are compared for

Figure 17: Mean value of function F12.
Figure 18: Mean value of function F13.
Figure 19: Mean value of function F14.
Figure 20: Mean value of function F15.


evaluating the performance. After the simulation, the results showed that the training of neural networks with the WE-PSO algorithm is better in terms of precision, and its efficiency is much higher than that of the traditional approaches.

The WE-PSO algorithm can also be used successfully in the future for data classification and statistical problems. The findings on classification accuracy are summarized in Table 5.
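For the 10-fold protocol described above, the split can be sketched in C++ as follows; the routine is illustrative (not the paper's code), assumes class labels 0..C-1, and deals the samples of each class round-robin so every chunk keeps the class proportions, as the text requires.

#include <algorithm>
#include <vector>

// Sketch of a stratified k-fold split: returns k index chunks; chunk f is
// tested while the remaining k-1 chunks train. For a randomized split,
// shuffle the sample indices per class before dealing.
std::vector<std::vector<int>> stratifiedFolds(const std::vector<int>& labels, int k) {
    int nClasses = 0;
    for (int y : labels) nClasses = std::max(nClasses, y + 1);
    std::vector<std::vector<int>> folds(k);
    std::vector<int> dealt(nClasses, 0);        // per-class round-robin counter
    for (int i = 0; i < static_cast<int>(labels.size()); ++i) {
        int& c = dealt[labels[i]];
        folds[c % k].push_back(i);              // sample i goes to fold c mod k
        ++c;
    }
    return folds;
}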

Table 2: Comparative results among the four PSO algorithms on the 15 benchmark test functions.

F | Iter | DIM | PSO Mean | PSO Std dev | SO-PSO Mean | SO-PSO Std dev | H-PSO Mean | H-PSO Std dev | WE-PSO Mean | WE-PSO Std dev
F1 | 1000 | 10 | 2.33E-74 | 7.36E-74 | 2.74E-76 | 8.66E-76 | 3.10E-77 | 9.79E-77 | 5.91E-78 | 1.87E-77
F1 | 2000 | 20 | 1.02E-84 | 3.22E-84 | 8.20E-88 | 2.59E-87 | 1.76E-90 | 5.58E-90 | 4.95E-90 | 1.48E-89
F1 | 3000 | 30 | 1.77E-26 | 5.32E-26 | 7.67E-20 | 2.30E-19 | 4.13E-32 | 1.24E-31 | 1.30E-42 | 3.90E-42
F2 | 1000 | 10 | 4.97E-01 | 1.49E+00 | 4.97E-01 | 1.49E+00 | 7.96E-01 | 2.39E+00 | 2.98E-01 | 8.95E-01
F2 | 2000 | 20 | 8.17E+00 | 2.29E+01 | 6.47E+00 | 1.91E+01 | 3.58E+00 | 9.79E+00 | 3.11E+00 | 1.10E+01
F2 | 3000 | 30 | 1.01E+01 | 2.95E+01 | 9.86E+00 | 2.76E+01 | 9.45E+00 | 27.6991 | 7.76E+00 | 2.20E+01
F3 | 1000 | 10 | 8.70E-80 | 2.61E-79 | 1.79E-79 | 5.37E-79 | 4.87E-79 | 1.46E-78 | 4.40E-81 | 1.32E-80
F3 | 2000 | 20 | 2.62144 | 7.86E+00 | 7.86432 | 2.36E+01 | 2.62144 | 7.86E+00 | 1.78E-89 | 5.33E-89
F3 | 3000 | 30 | 2.62E+01 | 7.86E+01 | 1.57E+01 | 4.72E+01 | 1.05E+01 | 31.4573 | 3.87E-57 | 1.16E-56
F4 | 1000 | 10 | 4.46E-147 | 1.34E-146 | 3.86E-147 | 1.16E-146 | 9.78E-145 | 2.93E-144 | 1.24E-150 | 3.73E-150
F4 | 2000 | 20 | 3.14E-155 | 9.41E-155 | 9.27E-154 | 2.78E-153 | 2.75E-159 | 8.24E-159 | 4.96E-159 | 1.49E-158
F4 | 3000 | 30 | 1.82E-133 | 5.45E-133 | 2.36E-135 | 7.09E-135 | 8.53E-130 | 2.56E-129 | 2.54E-136 | 7.62E-136
F5 | 1000 | 10 | 4.35E-79 | 1.30E-78 | 8.95E-79 | 2.69E-78 | 2.43E-78 | 7.30E-78 | 2.20E-80 | 6.61E-80
F5 | 2000 | 20 | 1.31E+01 | 3.93E+01 | 3.93E+01 | 1.18E+02 | 1.31E+01 | 3.93E+01 | 3.12E-89 | 9.36E-89
F5 | 3000 | 30 | 1.31E+02 | 3.93E+02 | 7.86E+01 | 2.36E+02 | 5.24E+01 | 1.57E+02 | 1.94E-56 | 5.81E-56
F6 | 1000 | 10 | 1.70E-61 | 5.11E-61 | 4.45E-64 | 1.33E-63 | 7.29E-66 | 2.19E-65 | 4.62E-66 | 1.39E-65
F6 | 2000 | 20 | 3.25E-112 | 9.74E-112 | 4.39E-112 | 1.32E-111 | 5.01E-109 | 1.50E-108 | 4.45E-113 | 1.34E-112
F6 | 3000 | 30 | 7.21E-135 | 2.16E-134 | 4.10E-124 | 1.23E-123 | 1.51E-134 | 4.54E-134 | 6.96E-135 | 2.09E-134
F7 | 1000 | 10 | 2.96E-157 | 8.87E-157 | 2.39E-157 | 7.18E-157 | 1.28E-157 | 3.84E-157 | 2.47E-163 | 0.00E+00
F7 | 2000 | 20 | 8.79E-177 | 0.00E+00 | 1.77E-184 | 0.00E+00 | 3.49E-183 | 0.00E+00 | 3.41E-186 | 0.00E+00
F7 | 3000 | 30 | 1.23E-82 | 3.68E-82 | 1.25E-116 | 3.74E-116 | 5.99E-130 | 5.99E-130 | 4.60E-134 | 1.38E-133
F8 | 1000 | 10 | 4.39E-200 | 0.00E+00 | 1.98E-194 | 0.00E+00 | 4.51E-197 | 0.00E+00 | 8.99E-201 | 0.00E+00
F8 | 2000 | 20 | 1.57E-20 | 4.70E-20 | 1.04E-93 | 3.13E-93 | 1.10E-148 | 3.30E-148 | 4.09E-151 | 1.23E-150
F8 | 3000 | 30 | 1.89E-09 | 5.68E-09 | 4.54E-10 | 1.36E-09 | 1.14E-08 | 3.43E-08 | 1.34E-09 | 4.03E-09
F9 | 1000 | 10 | 5.49E-01 | 6.72E-01 | 1.30E-01 | 2.02E-01 | 2.02E-01 | 5.73E-01 | 1.42E-01 | 1.42E-01
F9 | 2000 | 20 | 2.05E+00 | 1.31E+00 | 7.83E-01 | 1.43E+00 | 6.83E-01 | 1.29E+00 | 4.32E-01 | 1.08E+00
F9 | 3000 | 30 | 1.12E+00 | 2.39E+00 | 9.99E-01 | 2.30E+00 | 9.56E-01 | 2.52E+00 | 9.12E-01 | 2.23E+00
F10 | 1000 | 10 | 2.23E-138 | 2.23E-138 | 2.23E-138 | 3.15E-137 | 4.35E-137 | 1.31E-136 | 1.10E-139 | 3.31E-139
F10 | 2000 | 20 | 3.79E-148 | 1.14E-147 | 7.87E-149 | 2.36E-148 | 4.19E-147 | 1.26E-146 | 8.73E-153 | 2.62E-152
F10 | 3000 | 30 | 4.43E-126 | 1.33E-125 | 7.52E-133 | 2.26E-132 | 1.57E-128 | 4.71E-128 | 1.38E-133 | 4.14E-133
F11 | 1000 | 10 | 3.75E-187 | 0.00E+00 | 1.57E-192 | 0.00E+00 | 2.15E-191 | 0.00E+00 | 8.99E-198 | 0.00E+00
F11 | 2000 | 20 | 5.29E-193 | 0.00E+00 | 2.53E-195 | 0.00E+00 | 8.45E-195 | 0.00E+00 | 9.83E-197 | 0.00E+00
F11 | 3000 | 30 | 4.82E-154 | 1.44E-153 | 8.84E-159 | 2.65E-158 | 5.49E-168 | 0.00E+00 | 5.75E-173 | 0.00E+00
F12 | 1000 | 10 | 1.13E-01 | 3.40E-01 | 1.67E-02 | 5.02E-02 | 2.28E-02 | 6.85E-02 | 2.89E-03 | 8.66E-03
F12 | 2000 | 20 | 1.39E+01 | 4.12E+01 | 5.03E+00 | 1.50E+01 | 2.95E+00 | 8.84E+00 | 1.67E+00 | 5.01E+00
F12 | 3000 | 30 | 7.45E+00 | 2.23E+01 | 1.22E+01 | 3.66E+01 | 8.74E+00 | 2.60E+01 | 4.94E+00 | 1.48E+01
F13 | 1000 | 10 | 8.04E-26 | 2.41E-25 | 8.01E-27 | 2.40E-26 | 3.59E-27 | 1.08E-26 | 1.41E-27 | 1.02E-26
F13 | 2000 | 20 | 1.42E-08 | 4.26E-08 | 2.64E-11 | 7.93E-11 | 3.29E-10 | 9.86E-10 | 2.14E-12 | 6.43E-12
F13 | 3000 | 30 | 6.20E-03 | 1.86E-02 | 1.41E-03 | 4.23E-03 | 9.36E-03 | 2.81E-02 | 1.41E-03 | 3.83E-03
F14 | 1000 | 10 | 3.62E-38 | 1.09E-37 | 3.62E-38 | 1.09E-37 | 5.92E-36 | 1.77E-35 | 1.95E-38 | 5.86E-38
F14 | 2000 | 20 | 6.27E-10 | 1.88E-09 | 1.38E-09 | 4.14E-09 | 7.91E-13 | 2.37E-12 | 1.17E-13 | 3.51E-13
F14 | 3000 | 30 | 2.56E-06 | 7.67E-06 | 4.80E+01 | 1.44E+02 | 1.34E-06 | 4.03E-06 | 4.88E-09 | 1.46E-08
F15 | 1000 | 10 | 1.10E-294 | 0.00E+00 | 3.19E-301 | 0.00E+00 | 2.78E-307 | 0.00E+00 | 3.21E-308 | 0.00E+00
F15 | 2000 | 20 | 6.16E-271 | 0.00E+00 | 5.09E-276 | 0.00E+00 | 3.74E-270 | 0.00E+00 | 4.85E-268 | 0.00E+00
F15 | 3000 | 30 | 3.08E-207 | 0.00E+00 | 1.04E-200 | 0.00E+00 | 8.12E-209 | 0.00E+00 | 3.06E-212 | 0.00E+00

Note: "Mean" shows the mean value, and "Std dev" indicates the standard deviation. The best results among the four PSO algorithms are presented in bold in the original table.


Table 3: Results of Student's T-test for all techniques.

F | Iter | DIM | WE-PSO vs. PSO (T-value, Sig.) | WE-PSO vs. SO-PSO (T-value, Sig.) | WE-PSO vs. H-PSO (T-value, Sig.)
F1 | 1000 | 10 | +1.02, WE-PSO | +0.99, WE-PSO | +0.75, WE-PSO
F1 | 2000 | 20 | +1.00, WE-PSO | +0.48, WE-PSO | -0.83, H-PSO
F1 | 3000 | 30 | +1.00, WE-PSO | +1.00, WE-PSO | +1.00, WE-PSO
F2 | 1000 | 10 | +30.71, WE-PSO | +15.67, WE-PSO | +1.21, WE-PSO
F2 | 2000 | 20 | +8.82, WE-PSO | +107.56, WE-PSO | +11.13, WE-PSO
F2 | 3000 | 30 | +0.63, WE-PSO | +34.29, WE-PSO | +0.65, WE-PSO
F3 | 1000 | 10 | +0.99, WE-PSO | +1.00, WE-PSO | +0.83, WE-PSO
F3 | 2000 | 20 | +1.00, WE-PSO | +1.00, WE-PSO | +263.14, WE-PSO
F3 | 3000 | 30 | +1.00, WE-PSO | +525.29, WE-PSO | +0.93, WE-PSO
F4 | 1000 | 10 | +0.19, WE-PSO | +0.99, WE-PSO | +1.00, WE-PSO
F4 | 2000 | 20 | +0.99, WE-PSO | +0.84, WE-PSO | -0.98, H-PSO
F4 | 3000 | 30 | +0.86, WE-PSO | +0.26, WE-PSO | +0.97, WE-PSO
F5 | 1000 | 10 | +0.79, WE-PSO | +0.44, WE-PSO | +0.98, WE-PSO
F5 | 2000 | 20 | +0.29, WE-PSO | +0.57, WE-PSO | +263.14, WE-PSO
F5 | 3000 | 30 | +0.06, WE-PSO | +2622.44, WE-PSO | +0.96, WE-PSO
F6 | 1000 | 10 | +0.80, WE-PSO | +0.98, WE-PSO | +0.17, WE-PSO
F6 | 2000 | 20 | +0.86, WE-PSO | +0.96, WE-PSO | +0.96, WE-PSO
F6 | 3000 | 30 | +0.99, WE-PSO | +0.98, WE-PSO | +0.89, WE-PSO
F7 | 1000 | 10 | +0.90, WE-PSO | +0.95, WE-PSO | +1.00, WE-PSO
F7 | 2000 | 20 | +1.00, WE-PSO | +1.00, WE-PSO | +1.00, WE-PSO
F7 | 3000 | 30 | +1.00, WE-PSO | +1.00, WE-PSO | +1.00, WE-PSO
F8 | 1000 | 10 | +0.75, WE-PSO | +0.98, WE-PSO | +0.55, WE-PSO
F8 | 2000 | 20 | +483.97, WE-PSO | +1.00, WE-PSO | +0.91, WE-PSO
F8 | 3000 | 30 | +1.41, WE-PSO | -6.89, SO-PSO | +522.24, WE-PSO
F9 | 1000 | 10 | +53.67, WE-PSO | -3.00, SO-PSO | +82.30, WE-PSO
F9 | 2000 | 20 | +84.84, WE-PSO | +33.46, WE-PSO | +16.08, WE-PSO
F9 | 3000 | 30 | +470.01, WE-PSO | +390.54, WE-PSO | +416.26, WE-PSO
F10 | 1000 | 10 | +1.00, WE-PSO | +0.84, WE-PSO | +0.67, WE-PSO
F10 | 2000 | 20 | +1.00, WE-PSO | +0.81, WE-PSO | +0.89, WE-PSO
F10 | 3000 | 30 | +1.00, WE-PSO | +0.98, WE-PSO | +0.95, WE-PSO
F11 | 1000 | 10 | +0.97, WE-PSO | +1.92, WE-PSO | +1.00, WE-PSO
F11 | 2000 | 20 | +1.00, WE-PSO | +1.00, WE-PSO | +1.00, WE-PSO
F11 | 3000 | 30 | +0.87, WE-PSO | +0.98, WE-PSO | +1.00, WE-PSO
F12 | 1000 | 10 | +0.91, WE-PSO | +0.58, WE-PSO | +0.27, WE-PSO
F12 | 2000 | 20 | +2.26, WE-PSO | +1.08, WE-PSO | +0.27, WE-PSO
F12 | 3000 | 30 | +1.84, WE-PSO | +2.25, WE-PSO | +2.41, WE-PSO
F13 | 1000 | 10 | +0.98, WE-PSO | +0.48, WE-PSO | +0.84, WE-PSO
F13 | 2000 | 20 | +0.72, WE-PSO | +0.78, WE-PSO | +0.98, WE-PSO
F13 | 3000 | 30 | +0.11, WE-PSO | +0.39, WE-PSO | +0.86, WE-PSO
F14 | 1000 | 10 | +0.57, WE-PSO | +0.15, WE-PSO | +0.82, WE-PSO
F14 | 2000 | 20 | +1.51, WE-PSO | +1.49, WE-PSO | +1.50, WE-PSO
F14 | 3000 | 30 | +0.90, WE-PSO | +1.32, WE-PSO | +1.32, WE-PSO
F15 | 1000 | 10 | +1.00, WE-PSO | +1.00, WE-PSO | +1.00, WE-PSO
F15 | 2000 | 20 | +1.00, WE-PSO | -0.50, SO-PSO | +0.99, WE-PSO
F15 | 3000 | 30 | +0.83, WE-PSO | +1.00, WE-PSO | +1.00, WE-PSO

Table 4: Dataset description.

S. no. | Dataset | Number of total units | Disc. feature | Nature | No. of inputs | No. of classes
1 | Iris | 150 | — | Real | 4 | 3
2 | Diabetes | 768 | — | Real | 8 | 2
3 | Heart | 270 | — | Real | 13 | 2
4 | Wine | 178 | — | Real | 13 | 3
5 | Seed | 210 | — | Real | 7 | 3
6 | Vertebral | 310 | — | Real | 6 | 2
7 | Blood tissue | 748 | — | Real | 5 | 2
8 | Horse | 368 | — | Real | 27 | 2
9 | Mammography | 961 | — | Real | 6 | 2


12. Conclusion

The performance of PSO depends on the initialization of the population. In our work, we have initialized the particles of PSO by using a novel pseudorandom sequence called the WELL sequence, while the velocity and position vectors of the particles are modified in the usual random-sequence fashion. The importance of initializing the particles by using such a sequence is highlighted in this study. The experimental results explicitly state that the WELL sequence is optimal for population initialization due to its random nature. Moreover, the simulation results have shown that WE-PSO outperforms the PSO, SO-PSO, and H-PSO approaches. The techniques were also applied to neural network training and provide significantly better results than conventional training algorithms, including the standard PSO, SO-PSO, and H-PSO approaches, respectively. The solution provides higher diversity and increases the potential to search locally. The experimental results depict that our approach has excellent convergence accuracy and avoids local optima. Our technique is much better when compared to the traditional PSO and other initialization approaches for PSO, as evident in Figure 21. The use of mutation operators together with the initialization technique may be evaluated on large-scale search spaces in the future. The core objective of this research is universal and relevant to other stochastic-based metaheuristic algorithms, which will establish our future direction.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon reasonable request.

Disclosure

This work is part of the PhD thesis of the student.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Table 5: Classification accuracy results.

S. no. | Dataset | Type | BPA Tr/Ts acc. (%) | PSO Tr/Ts acc. (%) | SO-PSO Tr/Ts acc. (%) | H-PSO Tr/Ts acc. (%) | WE-PSO Tr/Ts acc. (%)
1 | Iris | 3-class | 98.2 / 95.7 | 99 / 96.6 | 98.8 / 97.3 | 98.9 / 96 | 99.2 / 98
2 | Diabetes | 2-class | 86.1 / 65.3 | 88.7 / 69.1 | 89.3 / 69.1 | 88.4 / 71.6 | 90.4 / 74.1
3 | Heart | 2-class | 78.5 / 68.3 | 99.5 / 72.5 | 99.13 / 67.5 | 99.13 / 72.5 | 100 / 77.5
4 | Wine | 3-class | 67.3 / 62.17 | 74.24 / 61.11 | 81.81 / 66.66 | 75.75 / 67.44 | 75.75 / 69.6
5 | Seed | 3-class | 84.2 / 70.56 | 97.57 / 77.77 | 87.27 / 84.44 | 98.18 / 77.77 | 98.18 / 91.11
6 | Vertebral | 2-class | 91.4 / 84.95 | 96.03 / 92.85 | 96.42 / 92.85 | 96.40 / 92.85 | 97.61 / 94.64
7 | Blood tissue | 2-class | 76.3 / 73.47 | 90.8 / 78.6 | 86.94 / 78.66 | 83.89 / 70 | 84.74 / 84
8 | Horse | 2-class | 64.4 / 57.87 | 69.02 / 50 | 74.19 / 52 | 72.90 / 56 | 79.35 / 58
9 | Mammography | 2-class | 77.36 / 71.26 | 80.82 / 76.66 | 68.94 / 63 | 88 / 85 | 97.71 / 96.66

Note: "Tr acc" is training accuracy and "Ts acc" is testing accuracy.

Figure 21: Classification testing accuracy results for BPA, PSO, SO-PSO, H-PSO, and WE-PSO across the nine datasets.


References

[1] K. Deb, "Multi-objective optimization," in Search Methodologies, pp. 403–449, Springer, Berlin, Germany, 2014.
[2] R. Vandenberghe, N. Nelissen, E. Salmon et al., "Binary classification of 18F-flutemetamol PET using machine learning: comparison with visual reads and structural MRI," NeuroImage, vol. 64, pp. 517–525, 2013.
[3] V. Ganganwar, "An overview of classification algorithms for imbalanced datasets," International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 4, pp. 42–47, 2012.
[4] J. Kennedy, "Swarm intelligence," in Handbook of Nature-Inspired and Innovative Computing, pp. 187–219, Springer, Berlin, Germany, 2006.
[5] G. Beni and J. Wang, "Swarm intelligence in cellular robotic systems," in Robots and Biological Systems: Towards a New Bionics, pp. 703–712, Springer, Berlin, Germany, 1993.
[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of ICNN'95 – International Conference on Neural Networks, pp. 1942–1948, 1995.
[7] J. Salerno, "Using the particle swarm optimization technique to train a recurrent neural model," in Proceedings of the Ninth IEEE International Conference on Tools with Artificial Intelligence, pp. 45–49, Newport Beach, CA, USA, November 1997.
[8] R. Storn and K. Price, "Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[9] P. P. Palmes, T. Hayasaka, and S. Usui, "Mutation-based genetic neural network," IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 587–600, 2005.
[10] W. H. Bangyal, J. Ahmad, and H. T. Rauf, "Optimization of neural network using improved bat algorithm for data classification," Journal of Medical Imaging and Health Informatics, vol. 9, no. 4, pp. 670–681, 2019.
[11] A. Cervantes, I. M. Galvan, and P. Isasi, "AMPSO: a new particle swarm method for nearest neighborhood classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 5, pp. 1082–1091, 2009.
[12] W. H. Bangyal, J. Ahmed, and H. T. Rauf, "A modified bat algorithm with torus walk for solving global optimisation problems," International Journal of Bio-Inspired Computation, vol. 15, no. 1, pp. 1–13, 2020.
[13] C. Grosan, A. Abraham, and M. Nicoara, "Search optimization using hybrid particle sub-swarms and evolutionary algorithms," International Journal of Simulation Systems, Science & Technology, vol. 6, no. 10, pp. 60–79, 2005.
[14] M. Junaid, W. H. Bangyal, and J. Ahmed, "A novel bat algorithm using Sobol sequence for the initialization of population," in IEEE 23rd International Multitopic Conference (INMIC), pp. 1–6, Bahawalpur, Pakistan, November 2020.
[15] W. H. Bangyal, J. Ahmed, H. T. Rauf, and S. Pervaiz, "An overview of mutation strategies in bat algorithm," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 9, pp. 523–534, 2018.
[16] D. E. Knuth, Fundamental Algorithms: The Art of Computer Programming, Addison-Wesley, Boston, MA, USA, 1973.
[17] J. E. Gentle, Random Number Generation and Monte Carlo Methods, Springer Science & Business Media, Berlin, Germany, 2006.
[18] N. Q. Uy, N. X. Hoai, R. I. McKay, and P. M. Tuan, "Initialising PSO with randomised low-discrepancy sequences: the comparative results," in Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2007, pp. 1985–1992, Singapore, September 2007.
[19] S. Kimura and K. Matsumura, "Genetic algorithms using low-discrepancy sequences," in Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, ACM, pp. 1341–1346, Washington, DC, USA, June 2005.
[20] R. Brits, A. P. Engelbrecht, and F. Van den Bergh, "A niching particle swarm optimizer," in Proceedings of the 4th Asia-Pacific Conference on Simulated Evolution and Learning, pp. 692–696, Orchid Country Club, Singapore, November 2002.
[21] J. van der Corput, "Verteilungsfunktionen I & II," Nederl. Akad. Wetensch. Proc., vol. 38, pp. 1058–1066, 1935.
[22] R. A. Krohling and L. dos Santos Coelho, "PSO-E: particle swarm with exponential distribution," in Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2006, pp. 1428–1433, Vancouver, Canada, July 2006.
[23] R. Thangaraj, M. Pant, and K. Deep, "Initializing PSO with probability distributions and low-discrepancy sequences: the comparative results," in Proceedings of the World Congress on Nature & Biologically Inspired Computing, NaBIC 2009, pp. 1121–1126, IEEE, Coimbatore, India, December 2009.
[24] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986.
[25] K. E. Parsopoulos and M. N. Vrahatis, "Initializing the particle swarm optimizer using the nonlinear simplex method," in Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation, World Scientific and Engineering Academy and Society Press, Stevens Point, WI, USA, 2002.
[26] M. Richards and D. Ventura, "Choosing a starting configuration for particle swarm optimization," in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 2309–2312, Budapest, Hungary, July 2004.
[27] H. Jabeen, Z. Jalil, and A. R. Baig, "Opposition based initialization in particle swarm optimization (O-PSO)," in Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, pp. 2047–2052, Montreal, Quebec, Canada, July 2009.
[28] A. L. Gutierrez, "Comparison of different PSO initialization techniques for high dimensional search space problems: a test with FSS and antenna arrays," in Proceedings of the 5th European Conference on Antennas and Propagation (EUCAP), pp. 965–969, IEEE, Rome, Italy, April 2011.
[29] A. Subasi, "Classification of EMG signals using PSO optimized SVM for diagnosis of neuromuscular disorders," Computers in Biology and Medicine, vol. 43, no. 5, pp. 576–586, 2013.
[30] S. Dehuri, R. Roy, S.-B. Cho, and A. Ghosh, "An improved swarm optimized functional link artificial neural network (ISO-FLANN) for classification," Journal of Systems and Software, vol. 85, no. 6, pp. 1333–1345, 2012.
[31] Z. Liu, P. Zhu, W. Chen, and R.-J. Yang, "Improved particle swarm optimization algorithm using design of experiment and data mining techniques," Structural and Multidisciplinary Optimization, vol. 52, no. 4, pp. 813–826, 2015.
[32] S. Chatterjee, S. Sarkar, S. Hore, N. Dey, A. S. Ashour, and V. E. Balas, "Particle swarm optimization trained neural network for structural failure prediction of multistoried RC buildings," Neural Computing and Applications, vol. 28, no. 8, pp. 2005–2016, 2016.
[33] Y. Xue, T. Tang, and A. X. Liu, "Large-scale feedforward neural network optimization by a self-adaptive strategy and parameter based particle swarm optimization," IEEE Access, vol. 7, pp. 52473–52483, 2019.
[34] F. E. F. Junior and G. G. Yen, "Particle swarm optimization of deep neural networks architectures for image classification," Swarm and Evolutionary Computation, vol. 49, pp. 62–74, 2019.
[35] O. Tarkhaneh and H. Shen, "Training of feedforward neural networks for data classification using hybrid particle swarm optimization, Mantegna Levy flight and neighbourhood search," Heliyon, vol. 5, no. 4, Article ID e01275, 2019.
[36] A. Herliana, T. Arifin, S. Susanti, and A. B. Hikmah, "Feature selection of diabetic retinopathy disease using particle swarm optimization and neural network," in Proceedings of the 2018 6th International Conference on Cyber and IT Service Management (CITSM), pp. 1–4, Parapat, Indonesia, August 2018.
[37] M. K. Sarkaleh and A. Shahbahrami, "Classification of ECG arrhythmias using discrete wavelet transform and neural networks," International Journal of Computer Science, Engineering and Applications, vol. 2, no. 1, pp. 1–13, 2012.
[38] R. J. Schalkoff, Artificial Neural Networks, McGraw-Hill, New York, NY, USA, 1997.
[39] G. P. Zhang, "Neural networks for classification: a survey," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 30, no. 4, pp. 451–462, 2000.
[40] M. Castellani, "Evolutionary generation of neural network classifiers – an empirical comparison," Neurocomputing, vol. 99, pp. 214–229, 2013.
[41] G. E. Hinton, J. L. McClelland, and D. E. Rumelhart, "Distributed representations," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations, MIT Press, Cambridge, MA, USA, 1986.
[42] M. Matsumoto and T. Nishimura, "Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator," ACM Transactions on Modeling and Computer Simulation (TOMACS), vol. 8, no. 1, pp. 3–30, 1998.
[43] I. M. Sobol', "On the distribution of points in a cube and the approximate evaluation of integrals," Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, vol. 7, no. 4, pp. 784–802, 1967.
[44] J. H. Halton, "Algorithm 247: radical-inverse quasi-random point sequence," Communications of the ACM, vol. 7, no. 12, pp. 701–702, 1964.
[45] F. Panneton, P. L'Ecuyer, and M. Matsumoto, "Improved long-period generators based on linear recurrences modulo 2," ACM Transactions on Mathematical Software (TOMS), vol. 32, no. 1, pp. 1–16, 2006.




[15] W H Bangyal J Ahmed H T Rauf and S Pervaiz ldquoAnoverview of mutation strategies in bat algorithmrdquo Interna-tional Journal of Advanced Computer Science and Applications(IJACSA) vol 9 pp 523ndash534 2018

[16] D E Knuth Fundamental Algorithms Be Art of ComputerProgramming Addison-Wesley Boston MA USA 1973

[17] J E Gentle Random Number Generation and Monte CarloMethods Springer Science amp Business Media Berlin Ger-many 2006

[18] N Q Uy N X Hoai R I McKay and P M Tuan ldquoIniti-alising PSO with randomised low-discrepancy sequences thecomparative resultsrdquo in Proceedings of the IEEE Congress on

Evolutionary Computation CEC 2007 pp 1985ndash1992 Sin-gapore September 2007

[19] S Kimura and K Matsumura ldquoGenetic algorithms using low-discrepancy sequencesrdquo in Proceedings of the 7th AnnualConference on Genetic and Evolutionary Computation ACMpp 1341ndash1346 Washington DC USA June 2005

[20] R Brits A P Engelbrecht and F Van den Bergh ldquoA nichingparticle swarm optimizerrdquo in Proceedings of the 4th Asia-PacificConference on Simulated Evolution and Learning pp 692ndash696Orchid Country Club Singapore November 2002

[21] J Ander Coput ldquoVerteilungsfunktionen I amp IIrdquoNederl AkadWetensch Procvol 38 pp 1058ndash1066 1935

[22] R A Krohling and L dos Santos Coelho ldquoPSO-E particleswarm with exponential distributionrdquo in Proceedings of theIEEE Congress on Evolutionary Computation CEC 2006pp 1428ndash1433 Vancouver Canada July 2006

[23] R )angaraj M Pant and K Deep ldquoInitializing pso withprobability distributions and low-discrepancy sequencesthe comparative resultsrdquo in Proceedings of the WorldCongress on Nature amp Biologically Inspired ComputingNaBIC 2009 pp 1121ndash1126 IEEE Coimbatore IndiaDecember 2009

[24] D E Rumelhart G E Hinton and R J Williams ldquoLearningrepresentations by back-propagating errorsrdquoNature vol 323no 6088 pp 533ndash536 1986

[25] K E Parsopoulos andM N Vrahatis ldquoInitializing the particleswarm optimizer using the nonlinear simplex methodrdquo Ad-vances in Intelligent Systems Fuzzy Systems EvolutionaryComputation World Scientific and Engineering Academy andSociety Press Stevens Point WI USA 2002

[26] M Richards and D Ventura ldquoChoosing a starting configu-ration for particle swarm optimizationrdquo in Proceedings of theIEEE International Joint Conference on Neural Networkspp 2309ndash2312 Budapest Hungary July 2004

[27] H Jabeen Z Jalil and A R Baig ldquoOpposition based ini-tialization in particle swarm optimization (O-PSO)rdquo inProceedings of the 11th Annual Conference Companion onGenetic and Evolutionary Computation Conference LateBreaking Papers pp 2047ndash2052 Montreal Quebec CanadaJuly 2009

[28] A L Gutierrez ldquoComparison of different pso initializationtechniques for high dimensional search space problems a testwith fss and antenna arraysrdquo in Proceedings of the 5th Eu-ropean Conference on Antennas and Propagation (EUCAP)pp 965ndash969 IEEE Rome Italy April 2011

[29] A Subasi ldquoClassification of EMG signals using PSO opti-mized SVM for diagnosis of neuromuscular disordersrdquoComputers in Biology and Medicine vol 43 no 5 pp 576ndash586 2013

[30] S Dehuri R Roy S-B Cho and A Ghosh ldquoAn improvedswarm optimized functional link artificial neural network(ISO-FLANN) for classificationrdquo Journal of Systems andSoftware vol 85 no 6 pp 1333ndash1345 2012

[31] Z Liu P Zhu W Chen and R-J Yang ldquoImproved particleswarm optimization algorithm using design of experimentand data mining techniquesrdquo Structural andMultidisciplinaryOptimization vol 52 no 4 pp 813ndash826 2015

[32] S Chatterjee S Sarkar S Hore N Dey A S Ashour andV E Balas ldquoParticle swarm optimization trained neuralnetwork for structural failure prediction of multistoried RCbuildingsrdquo Neural Computing and Applications vol 28 no 8pp 2005ndash2016 2016

[33] Y Xue T Tang and A X Liu ldquoLarge-scale feedforwardneural network optimization by a self-adaptive strategy and

16 Computational Intelligence and Neuroscience

parameter based particle swarm optimizationrdquo IEEE Accessvol 7 pp 52473ndash52483 2019

[34] F E F Junior and G G Yen ldquoParticle swarm optimization ofdeep neural networks architectures for image classificationrdquoSwarm and Evolutionary Computations vol 49 pp 62ndash742019

[35] O Tarkhaneh and H Shen ldquoTraining of feedforward neuralnetworks for data classification using hybrid particle swarmoptimization Mantegna Levy flight and neighbourhoodsearchrdquo Heliyon vol 5 no 4 Article ID e01275 2019

[36] A Herliana T Arifin S Susanti and A B Hikmah ldquoFeatureselection of diabetic retinopathy disease using particle swarmoptimization and neural networkrdquo in Proceedings of the 20186th International Conference on Cyber and IT Service Man-agement (CITSM) pp 1ndash4 Parapat Indonesia August 2018

[37] M K Sarkaleh and A Shahbahrami ldquoClassification of ECGarrhythmias using discrete wavelet transform and neuralnetworksrdquo International Journal of Computer Science Engi-neering and Applications vol 2 no 1 pp 1ndash13 2012

[38] R J Schalkoff Artificial Neural Networks McGraw-Hill NewYork NY USA 1997

[39] G P Zhang ldquoNeural networks for classification a surveyrdquoIEEE Transactions on Systems Man and Cybernetics Part C(Applications and Reviews) vol 30 no 4 pp 451ndash462 2000

[40] M Castellani ldquoEvolutionary generation of neural networkclassifiers-An empirical comparisonrdquo Neurocomputingvol 99 pp 214ndash229 2013

[41] G E Hinton J L Mcclelland and D Rumelhart ldquoDistributedrepresentationsrdquo Parallel Distributed Processingexplorationsin the Microstructure of Cognition Foundation MIT PressCambridge MA USA 1986

[42] M Matsumoto and T Nishimura ldquoMersenne twister a 623-dimensionally equidistributed uniform pseudo-randomnumber generatorrdquo ACM Transactions on Modeling andComputer Simulation (TOMACS) vol 8 no 1 pp 3ndash30 1995

[43] I Y M Sobolrsquo ldquoOn the distribution of points in a cube and theapproximate evaluation of integralsrdquo Zhurnal VychislitelrsquonoiMatematiki I Matematicheskoi Fiziki vol 7 no 4 pp 784ndash802 1967

[44] J H Halton ldquoAlgorithm 247 radical-inverse quasi-randompoint sequencerdquo Communications of the ACM vol 7 no 12pp 701-702 1964

[45] F Panneton P Lrsquoecuyer and M Matsumoto ldquoImprovedlong-period generators based on linear recurrences modulo2rdquo ACM Transactions on Mathematical Software (TOMS)vol 32 pp 11ndash16 2006

Computational Intelligence and Neuroscience 17


Kimura and Matsumura [19] optimized a genetic algorithm by initializing its population with the Halton sequence; the Halton series falls under the umbrella of low-discrepancy sequences. The authors of [20] performed a comprehensive comparison of the Faure, Sobol, and Halton sequences and, after evaluating the competitive outcomes, declared the Sobol sequence the winner among them.

The van der Corput sequence, a member of the quasirandom family, was first employed in [21]. The van der Corput sequences were generated with the initial parameters d = 1 and b = 2, where d represents the problem dimension and b is the base. The experimental results showed that, for difficult multidimensional optimization problems, van der Corput sequence-based PSO outperforms PSO based on the other quasirandom sequences, such as the Faure, Sobol, and Halton sequences, respectively, although Halton-based PSO and Faure-based PSO gave better performance when the optimization problem was low-dimensional. Moreover, many researchers have used probability distributions to tune the parameters of evolutionary algorithms; the family of probability sequences includes the Gaussian, Cauchy, beta, and exponential distributions. The authors in [22] tuned the PSO parameters using random sequences drawn from an exponential distribution, and a detailed comparison of probability distributions is presented in [23]. The experimental results revealed that PSO based on the exponential distribution performed well compared to PSO based on the Gaussian distribution and PSO based on the beta distribution.

Similarly, researchers applied a torus distribution [24] to initialize the improved bat algorithm (I-BA); torus-based initialization enhanced the diversity of the swarm and showed better performance. In [2], readers can find the source for applying several variations of probabilistic, quasirandom, and uniform distributions in BA.

There are also other independent statistical methods for producing random numbers, apart from the probability, pseudorandom, and quasirandom distributions, that various researchers have used to select the initial locations of particles in a multidimensional search space. The nonlinear simplex method (NSM) is an initialization method proposed by Parsopoulos and Vrahatis in [25]. Initialization based on centroidal Voronoi tessellations (CVTs) was suggested by Richards and Ventura in [26]. For the CVT process, the search region is divided into several blocks; in the first division, each particle gets a spot, and the remaining particles that have not yet been allocated a block are further separated into subblocks. To allocate a block to a particle, the CVT generator uses a different permutation each time; a distance function is determined to disperse particles into blocks, and the less distant particles reserve the entire block in the swarm first. The CVT-based initialization approach was compared with the simple random distribution, and the numerical results illustrated that PSO based on CVT was much better for population initialization.

A new technique called opposition-based initialization (O-PSO), inspired by opposition-based learning, was suggested by the authors in [27]. Certain particles take their positions in the opposite direction of the search space, and O-PSO thereby increases the probability of finding a global optimum early. By exploring the search field in the opposite direction in parallel with the original direction, O-PSO enhances the diversity of the particles. Since both good and poor behaviour are experienced in the human world, it is not possible for entities to be entirely good and bad at the same time; this natural phenomenon guides O-PSO to choose initial positions for the particles in the opposite direction as well as in the same direction, so that the entire swarm is represented by pairs of original and opposite particles. The experimental results revealed that the proposed O-PSO performed better than simple PSO, which uses the uniform distribution for initializing the particles, on several multidimensional complex benchmark functions. Gutierrez et al. [28] compared three distinct PSO initialization methods: opposition-based initialization, orthogonal array initialization, and chaotic initialization.

2.2. Artificial Neural Network Training Using PSO. The processing of real-world problems with various initialization strategies in the ANN classifier has a strong effect on the performance of evolutionary algorithms. A classifier with a prearranged initialization technique was shown to be more precise than one using a random distribution.

In [4, 5], optimization of the hidden layer of the neural network was performed; for the optimization process, the authors employed uniform distribution-based initialization of feedforward neural networks. Subasi [29] classified EMG signals using uniform random distribution-based PSO along with an SVM to diagnose neuromuscular disorders. Similarly, the improved swarm-optimized functional link artificial neural network (ISO-FLANN) was proposed by Dehuri et al. [30] using random number initialization following a uniform distribution. An Optimal Latin Hypercube Design (OLHD) initialization approach was proposed by the authors in [31] and evaluated on several data mining problems against other quasirandom sequences such as the Faure, Halton, and Sobol sequences; the proposed OLHD was better than the quasirandom sequences in terms of efficiency measures.

In [32], the authors introduced the training of an NN with particle swarm optimization (NN-PSO) for anticipating structural failure in reinforced concrete (RC) buildings. The weight vectors for the NN were calculated by incorporating PSO on the basis of minimum root mean squared error, and the introduced NN-PSO classifier proved sufficient to handle structural failure prediction in RC buildings. Xue et al. [33] presented a new strategy for the feedforward neural network (FNN) classifier in which a self-adaptive parameter and strategy-based PSO (SPS-PSO) was integrated to reduce the dimensions of large-scale optimization problems. A new algorithm using PSO, termed psoCNN, was proposed in [34]; it can spontaneously determine the most appropriate architecture of deep convolutional neural networks (CNNs) for image classification. A novel NN-based training algorithm incorporating PSO, called LPSONS, is proposed in [35]; in the LPSONS algorithm, the velocity parameter of PSO is embedded with the Mantegna Levy flight distribution for improved diversity, and the algorithm is used to train feedforward multilayer perceptron ANNs. In [36], PSO was used for feature engineering of diabetic retinopathy, after which an NN classifier was applied for the classification of diabetic retinopathy disease.

After conducting a thorough literature review, we can infer that particle efficiency and convergence speed are highly dependent on the swarm initialization process. If the particles cover the entire search space with a proper pattern, there is a greater chance that the global optimum will be found at an early stage of PSO.

3. Particle Swarm Optimization

PSO is a global optimization technique that plays an important role in applied technology and has been widely deployed in numerous engineering applications, such as the preparation of heating systems, data mining, power allocation in cooperative communication networks, pattern recognition, machine learning, route selection, and information security, to name a few. PSO works on a population of candidates. To optimize a problem, each candidate, designated as a particle, represents a potential solution. The current location of a particle is defined in the n-dimensional search space and is represented by the position vector x, and each solution is translated into a fitness score carried by the particle. The velocity vector v describes the motion of the particles, that is, the step size of the entire swarm in the search space, as distinct from the position vector.

PSO begins with a population consisting of n particles that fly through the d-dimensional search space over k_i iterations to look for the optimal solution. Swarm mutation can transform the objective feature into the desired candidate solution. The following two equations are used to update the velocity and position of the particles:

\[ v_{z+1} = v_z + c_1 \times \left(p_z^{best} - x_z\right) + c_2 \times \left(g_z^{best} - x_z\right), \tag{1} \]

\[ x_{z+1} = x_z + v_{z+1}. \tag{2} \]

In the above equations, v_z and x_z are the velocity and position vectors, respectively; p_z^best denotes the local best solution of a particle, acquired from its own previous experience, and g_z^best reflects the global best solution, acquired from the N-dimensional experience of its neighbours. With c_1 → c_1 r_1 and c_2 → c_2 r_2, c_1 and c_2 are the acceleration factors that influence the acceleration weights, and r_1 and r_2 are two random numbers produced by the random number generator. x_{z+1} is the updated position vector that guides to the new point at the kth iteration for the current particle, and v_{z+1} is the newly updated velocity. Three different factors can be derived from equation (1). The momentum factor, v_z, represents the old velocity. The cognitive factor, c_1 × (p_z^best − x_z), gives the local best fitness taken from all previous fitnesses. The social factor, c_2 × (g_z^best − x_z), provides the best global solution amplified by the intact neighbouring particles. The pseudocode of the fundamental PSO is presented in Algorithm 1.
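To make the update rule concrete, the following is a minimal C++ sketch of equations (1) and (2) for a one-dimensional particle. The structure and names (Particle, updateParticle) and the fixed acceleration factors c1 = c2 = 1.45 (the values reported later in Section 10) are illustrative assumptions, not code from the paper.

```cpp
#include <random>

// One-dimensional PSO update following equations (1) and (2).
struct Particle {
    double x;      // position x_z
    double v;      // velocity v_z
    double pbest;  // personal best position p_z^best
};

void updateParticle(Particle& p, double gbest, std::mt19937& rng) {
    const double c1 = 1.45, c2 = 1.45;           // acceleration factors
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double r1 = u(rng), r2 = u(rng);             // random factors (c1 -> c1*r1, c2 -> c2*r2)
    p.v = p.v + c1 * r1 * (p.pbest - p.x)        // cognitive component
              + c2 * r2 * (gbest  - p.x);        // social component, equation (1)
    p.x = p.x + p.v;                             // equation (2)
}
```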

4. Training of the Neural Networks

The artificial neural network (ANN) is perceived as a most effective approximation technique, used to approximate nonlinear functions and their relationships. The ANN model is capable of generalizing, learning, organizing, and adapting data. The ANN architecture is based on an interlinked series of synchronized neurons, and a multiprocessing layer is used to compute the encoding of information [37]. An ANN is a computational mathematical model that regulates the relationship between the input and output layers through different nonlinear functions [38]. In this study, we have used the feedforward neural network shown in Figure 1, which is the most frequently used and most popular ANN architecture. The feedforward neural network is defined by three layers, i.e., the input layer, the sandwich (hidden) layer, and the output layer, respectively. The input layer serves as the NN gateway where the data frame is inserted; the intermediate task of the sandwich layer is to process the data frame received from the input layer; and the outcomes are derived from the output layer [39]. The units of adjacent layers are connected with the serial layer nodes, and the links between the nodes are structured in the feedforward manner. Bias is a component of each unit and has a value of −1, as in [24].

For weight optimization of the NN, the position of each particle in the swarm encodes a set of weights for the current epoch or iteration. The dimensionality of each particle is the number of weights associated with the network. A particle moves within the weight space attempting to minimize the learning error (mean squared error (MSE) or sum of squared errors (SSE)). To change the weights of the neural network, a change in position occurs that reduces the error in the current epoch. There is no backpropagation concept in PSO-NN; instead, the feedforward NN produces the learning error (the particle fitness) based on its set of weights and biases (the PSO positions).
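The weight-encoding idea can be sketched as follows: a particle's position vector is decoded sequentially into the weights and biases of a single-hidden-layer feedforward network and scored by MSE. The layer layout, names, and sigmoid activation below are assumptions for illustration only, not the exact network used in the paper.

```cpp
#include <cmath>
#include <vector>

// Interpret a particle's position vector as flattened NN weights and score it
// by mean squared error. position.size() must equal
// nHidden*(nIn+1) + (nHidden+1) for this layout.
double evaluateFitness(const std::vector<double>& position,
                       const std::vector<std::vector<double>>& X,  // inputs
                       const std::vector<double>& y,               // targets
                       int nIn, int nHidden) {
    double sse = 0.0;
    for (size_t s = 0; s < X.size(); ++s) {
        size_t k = 0;
        // Hidden layer: sigmoid(bias + W * x), weights read sequentially.
        std::vector<double> h(nHidden);
        for (int j = 0; j < nHidden; ++j) {
            double a = position[k++];                              // bias
            for (int i = 0; i < nIn; ++i) a += position[k++] * X[s][i];
            h[j] = 1.0 / (1.0 + std::exp(-a));
        }
        // Output layer: single linear unit.
        double out = position[k++];                                // bias
        for (int j = 0; j < nHidden; ++j) out += position[k++] * h[j];
        double e = y[s] - out;
        sse += e * e;                                              // squared error
    }
    return sse / X.size();  // MSE = learning error (particle fitness)
}
```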

The challenge of premature convergence arises in the weight optimization problem of the ANN [40, 41]. The primary objective of the ANN model is to achieve a set of optimal parameters and weights. The two major classification approaches used to segregate positive entities from negative ones are gradient descent and error correction, respectively. Gradient descent-based techniques perform poorly where the problems are high-dimensional and the parameters depend exclusively on the structure; due to this fact, they get stuck in local minima. Backpropagation is one of the gradient descent techniques most commonly used to train neural network models and to solve complex multimodal real-world problems, as mentioned in [24].

5. Random Number Generator

A built-in library function is used to construct the mesh of numbers randomly at uniform locations through Rand(x_min, x_max) [42]. A continuous uniform distribution probability density function describes the effect of uniformity on any sequence. The probability density function can be characterized as given in the following equation:

\[ f(t) = \begin{cases} \dfrac{1}{q - p}, & p < t < q, \\[4pt] 0, & t < p \ \text{or} \ t > q, \end{cases} \tag{3} \]

where p and q represent the maximum likelihood parameters. Because the value of f(t) at the boundary points p and q has zero impact on integrals of f(t) dt over any interval, f(t) is immaterial there. The maximum likelihood parameters are determined from the estimated log-likelihood function, which is given in

\[ l(p, q \mid t) = -n \log(q - p). \tag{4} \]
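As an illustration, a Rand(x_min, x_max) initializer can be sketched in C++ with the standard library's Mersenne Twister engine, std::mt19937, which belongs to the generator family cited in [42]; the seed, bounds, and swarm shape below are illustrative choices only.

```cpp
#include <iostream>
#include <random>
#include <vector>

// Rand(x_min, x_max): uniform initialization of a swarm (sketch).
int main() {
    const int particles = 40, dims = 10;        // swarm size used in the paper
    const double xmin = -5.12, xmax = 5.12;     // e.g., the Sphere function bounds
    std::mt19937 rng(12345);                    // fixed seed for reproducibility
    std::uniform_real_distribution<double> rnd(xmin, xmax);
    std::vector<std::vector<double>> swarm(particles, std::vector<double>(dims));
    for (auto& p : swarm)
        for (auto& xi : p) xi = rnd(rng);       // each coordinate ~ U(x_min, x_max)
    std::cout << "first coordinate: " << swarm[0][0] << "\n";
    return 0;
}
```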

PSO(object of particles)
(1) Input: particles {p_z} with undefined locations
(2) Output: particles {p_z} with best fitness score
(3) For each particle {p_1, p_2, p_3, ..., p_z}:
(4)   For each dimension {d_1, d_2, d_3, ..., d_z}:
        (a) initialize x_z as x_z = Rand(x_min, x_max)
        (b) if x_z reaches a better fitness than p_z^best, replace p_z^best by x_z
        (c) initialize v_z as v_z = Rand(x_min, x_max)
(5) Declare one global solution g_z^best from all the optimal p_z^best
(6) Repeat the process up to k_z iterations; for each particle {p_1, ..., p_z}, update:
        compute v_{z+1} using equation (1)
        compute x_{z+1} using equation (2)
        if x_{z+1} is better than p_z^best, set p_z^best = x_{z+1}
        if x_{z+1} is better than g_z^best, set g_z^best = x_{z+1}
(7) Return particles {p_z} containing the global optimal solution

Algorithm 1: Standard PSO pseudocode.

Figure 1: Feedforward neural network trained with PSO. (Flow shown in the figure: (1) initialize particle positions using QRS; (2) train the NN using the initial particle positions; (3) compute the learning error, setting the overall best error as Gbest and each particle's best error as Pbest; (4) calculate velocities and update positions based on the Gbest and Pbest particles; (5) train the NN using the new particle positions; (6) repeat steps 3, 4, and 5 until the targeted learning error or the maximum number of iterations is reached.)

6. The Sobol Sequence

The Sobol distribution was introduced for the construction of coordinates in [43]. A linear recurrence relation is included for the coordinate of each dimension d_z, and the binary expansion of a nonnegative instance a_z can be written as in

\[ a = a_1 2^0 + a_2 2^1 + a_3 2^2 + \cdots + a_z 2^{z-1}. \tag{5} \]

For dimension d_z, instance i can be generated using

\[ x_i^D = i_1 v_1^D + i_2 v_2^D + \cdots + i_z v_z^D, \tag{6} \]

where v_1^D denotes the kth direction binary function of an instance v_i^D at dimension d_z, and v_i^D can be computed using

\[ V_k^D = c_1 v_{k-1}^D + c_2 v_{k-2}^D + \cdots + c_z v_{z-1}^D + \frac{v_{i-z}^D}{2^z}, \tag{7} \]

where c_z describes the polynomial coefficients, with k > z.
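As a hedged illustration of the construction, the sketch below generates only the first coordinate of a Sobol sequence with the Gray-code method, using the simplest direction integers v_k = 2^(32−k); full multidimensional Sobol points require per-dimension direction numbers obtained from a recurrence such as (7).

```cpp
#include <cstdint>
#include <iostream>

// First coordinate of a Sobol sequence via the Gray-code construction.
int main() {
    const int N = 8;
    uint32_t v[32];
    for (int k = 0; k < 32; ++k) v[k] = 1u << (31 - k);  // direction integers
    uint32_t x = 0;                                      // current Sobol integer
    for (uint32_t i = 1; i <= N; ++i) {
        int c = 0;
        uint32_t g = i - 1;
        while (g & 1) { g >>= 1; ++c; }                  // lowest zero bit of i-1
        x ^= v[c];                                       // XOR in the direction integer
        std::cout << x / 4294967296.0 << "\n";           // scale to [0, 1)
    }
    return 0;
}
```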

7. The Halton Sequence

In [44], the authors proposed the Halton sequence as an improved variant of the van der Corput sequence. For generating random points, Halton sequences use coprime bases. Algorithm 2 shows the pseudocode for generating the Halton sequences.

8. The WELL Sequence

Panneton et al. [45] suggested the Well Equidistributed Long-period Linear (WELL) sequence, initially presented as an improved variant of the Mersenne Twister algorithm. The WELL distribution algorithm is given as Algorithm 3.

The algorithm describes the general recurrence of the WELL distribution. Its terms are defined as follows: x and r are two integers with r > 0 and 0 < x < k, where k = r·w − x and w is the word-length (weight) factor of the distribution. The binary matrices of size r·w acting on r-bit blocks are denoted A0 to A7; m_x describes the bitmask that holds the first w − x bits; and t0 to t7 are temporary vector variables.

The random points drawn from the uniform, Sobol, Halton, and WELL distributions are shown as bubble plots in Figures 2–5, in which the y-axis represents the random values and the x-axis the corresponding index of each point.

9. Methodology

The objective of this paper is to examine the purity of one of the proposed pseudorandom sequences. Pseudorandom sequences are much more random than quasirandom sequences. PSO is random in nature, so it does not follow a specific pattern that guarantees the global optimum solution. Therefore, we have suggested WELL distribution-based PSO (WE-PSO), taking advantage of randomness in the PSO. We have compared WE-PSO with uniform distribution-based PSO and with PSO based on other quasirandom distributions, i.e., the Sobol distribution (SO-PSO) and the Halton distribution (H-PSO), to ensure the integrity of the proposed approach. Moreover, by training nine real-world NN problems, we have tested the proposed technique on NN classifiers. The experimental outcomes reflect an unusual improvement over standard PSO with uniform distribution, and the WE-PSO approach also outperforms the SO-PSO and H-PSO approaches, as is evident in the results. Numerical results have shown that using the WELL distribution to initialize the swarm enhances the efficiency of population-based algorithms in evolutionary computing. The pseudocode for the proposed technique is presented in Algorithm 4.

10. Results and Discussion

The WELL-PSO (WE-PSO) technique was implemented in C++ and run on a computer with a 2.3 GHz Core 2 Duo CPU. A group of fifteen nonlinear benchmark test functions is used to compare WE-PSO with standard PSO, SO-PSO, and H-PSO in order to measure the performance of the WELL-based PSO (WE-PSO) algorithm. These functions are commonly applied to investigate the performance of optimization techniques; therefore, we used them to examine the optimization results of WE-PSO in our study. A list of these functions can be found in Table 1, where D denotes the dimensionality of the problem, S represents the interval of the variables, and f_min denotes the global optimum (minimum) value. For the simulation parameters, the inertia weight w is decreased over the interval [0.9, 0.4], c1 = c2 = 1.45, and the swarm size is 40. The function dimensions are D = 10, 20, and 30, and the cumulative number of epochs is 3000. All techniques were run with the same parameters for a comparatively fair evaluation, and each algorithm was tested over 30 runs.

10.1. Discussion. The purpose of this study is to observe the unique characteristics of the standard benchmark functions based on the dimensions of the experimental results. Three simulation tests were performed, in which the following characteristics were observed:

(i) the effect of using different PSO initialization approaches;
(ii) the effect of using different problem dimensions;
(iii) a comparative analysis.

The objective of the first experiment was to find the most suitable initialization approach for PSO by comparing WE-PSO with other approaches such as SO-PSO, H-PSO, and standard PSO. The purpose of the second simulation is to determine the influence of the dimension on standard function optimization.


Finally, the simulation results of WE-PSO were compared with standard PSO, SO-PSO, and H-PSO, respectively. The simulation effects are addressed in depth in the remainder of the article.

The graphical comparison of WE-PSO with PSO, H-PSO, and SO-PSO is shown in Figures 6 to 20. For WE-PSO, we can observe that the majority of the estimates have a better convergence curve.

Halton()
Input: size z and base b_cm with dimension d
Output: population instances p
  Fix the interval: max ← 1, min ← 0
  For each iteration (k_1, k_2, k_3, ..., k_z) do
    For each particle (p_1, p_2, p_3, ..., p_z):
      max ← max / b_cm
      min ← min + max * (z mod b_cm)
      z ← z / b_cm

Algorithm 2: Halton sequences.
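A compact C++ realization of the radical-inverse idea behind Algorithm 2 might look as follows; the function name and the choice of bases 2 and 3 are illustrative assumptions.

```cpp
#include <iostream>

// Halton radical inverse: the i-th point in base b reverses the base-b digits
// of i around the radix point. Coprime bases (2, 3, 5, ...) per dimension.
double haltonPoint(int i, int base) {
    double f = 1.0, result = 0.0;
    while (i > 0) {
        f /= base;                 // shift one digit position to the right
        result += f * (i % base);  // append the next base-b digit of i
        i /= base;
    }
    return result;                 // value in [0, 1)
}

int main() {
    for (int i = 1; i <= 5; ++i)   // first five 2-D Halton points, bases 2 and 3
        std::cout << haltonPoint(i, 2) << " " << haltonPoint(i, 3) << "\n";
    return 0;
}
```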

(i) WELL()
(ii) t0 = (m_x & v_{k,r−1}) + (m_x & v_{k,r−2})
(iii) t1 = (A0 v_{k,0}) + (A1 v_{k,m1})
(iv) t2 = (A2 v_{k,m2}) + (A3 v_{k,m3})
(v) t3 = t2 + t1
(vi) t4 = t0 A4 + t1 A5 + t2 A6 + t3 A7
(vii) v_{k+1,r−1} = v_{k,r−2} & m_x
(viii) for i = r − 2 down to 2 do v_{k+1,i} = v_{k,i−1}
(ix) v_{k+1,1} = t3
(x) v_{k+1,0} = t4
(xi) return y_k = v_{k,0}

Algorithm 3: WELL sequences.
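For a flavour of what such a recurrence looks like in code, the sketch below follows a widely circulated compact WELL512a implementation (after C. Lomont); the seeding routine is an added assumption, and production use should rely on the reference code of Panneton et al. [45].

```cpp
#include <cstdint>
#include <iostream>

// Compact WELL512a-style generator: state of 16 x 32-bit words, period ~2^512 - 1.
struct Well512 {
    uint32_t state[16];
    uint32_t index = 0;
    explicit Well512(uint32_t seed) {
        for (uint32_t i = 0; i < 16; ++i)            // simple seeding (illustrative)
            state[i] = seed = 1812433253u * (seed ^ (seed >> 30)) + i;
    }
    uint32_t next() {
        uint32_t a = state[index];
        uint32_t c = state[(index + 13) & 15];
        uint32_t b = a ^ c ^ (a << 16) ^ (c << 15);  // linear recurrence modulo 2
        c = state[(index + 9) & 15];
        c ^= (c >> 11);
        a = state[index] = b ^ c;
        uint32_t d = a ^ ((a << 5) & 0xDA442D24u);
        index = (index + 15) & 15;
        a = state[index];
        state[index] = a ^ b ^ d ^ (a << 2) ^ (b << 18) ^ (c << 28);
        return state[index];
    }
    double uniform01() { return next() / 4294967296.0; }  // map to [0, 1)
};

int main() {
    Well512 rng(2021);
    for (int i = 0; i < 3; ++i) std::cout << rng.uniform01() << "\n";
    return 0;
}
```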

Figure 2: Population initialization using uniform distribution.

Figure 3: Population initialization using Sobol distribution.

Figure 4: Population initialization using Halton distribution.

Figure 5: Population initialization using WELL distribution.

(In each of Figures 2–5, the y-axis shows the generated values in [0, 1] and the x-axis the index of the point, for 500 points.)


The problem dimensions 10, 20, and 30 are given on the x-axis, while the y-axis represents the mean best value for each dimension of the problem.

10.1.1. Effect of Using Different PSO Initialization Approaches. In this simulation, PSO is initialized with the WELL sequence (WE-PSO) instead of the uniform distribution. The WE-PSO variant is compared with the other initialization approaches, including the Sobol sequence (SO-PSO), the Halton sequence (H-PSO), and standard PSO. The experimental findings indicate that the advantage is greater in higher dimensions.

10.1.2. Effect of Using Different Problem Dimensions. The core objective of this simulation setup is to examine how the outcomes depend on the dimension of the optimization functions. Three dimensions were used for the benchmark functions in the experiments: D = 10, D = 20, and D = 30. The simulation results are presented in Table 2.

Step 1: initialize the swarm.
  Set epoch count I = 0, population size N_z, problem dimension D_z, w_max, and w_min.
  For each particle P_z:
    Step 1.1: initialize x_z as x_z = WELL(x_min, x_max)
    Step 1.2: initialize the particle velocity as v_z = Rand(x_min, x_max)
    Step 1.3: compute the fitness score f_z
    Step 1.4: set the global best position g_z^best as max(f_1, f_2, f_3, ..., f_z), where f_z is the globally optimal fitness
    Step 1.5: set the local best position p_z^best as max(f_1, f_2, f_3, ..., f_z), where f_z is the locally optimal fitness
Step 2: compare the current particle's fitness score x_z in the swarm with its old local best position p_z^best; if the current fitness score x_z is better than p_z^best, substitute p_z^best with x_z; else retain x_z unchanged.
Step 3: compare the current particle's fitness score x_z in the swarm with its old global best position g_z^best; if the current fitness score x_z is better than g_z^best, substitute g_z^best with x_z; else retain x_z unchanged.
Step 4: using equation (1), compute the updated velocity vector v_{z+1}; using equation (2), compute the updated position vector x_{z+1}.
Step 5: go to Step 2 if the stopping criterion is not met; otherwise terminate.

Algorithm 4: Proposed PSO pseudocode.
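Steps 1.1 and 1.2 of Algorithm 4 can be sketched as follows: particle positions are drawn from a WELL-driven uniform stream, while velocities remain purely random. The function signature and the wellUniform01 wrapper (e.g., around the Well512 sketch above) are assumptions for illustration, not the authors' implementation.

```cpp
#include <functional>
#include <random>
#include <vector>

// WE-PSO swarm initialization sketch (Algorithm 4, Steps 1.1-1.2).
// 'wellUniform01' is assumed to wrap a WELL generator returning values in [0,1).
void initializeSwarm(std::vector<std::vector<double>>& pos,
                     std::vector<std::vector<double>>& vel,
                     double xmin, double xmax,
                     std::function<double()> wellUniform01,
                     std::mt19937& rng) {
    std::uniform_real_distribution<double> rnd(xmin, xmax);
    for (size_t p = 0; p < pos.size(); ++p) {
        for (size_t d = 0; d < pos[p].size(); ++d) {
            // x_z = WELL(x_min, x_max): quasi-uniform coverage of the space
            pos[p][d] = xmin + (xmax - xmin) * wellUniform01();
            // v_z = Rand(x_min, x_max): velocities stay purely random
            vel[p][d] = rnd(rng);
        }
    }
}
```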

Table 1: Standard objective functions and their optima.

SR | Function name | Objective function | Search space | Optimal value
F1 | Sphere | f(x) = \sum_{i=1}^{n} x_i^2 | −5.12 ≤ x_i ≤ 5.12 | 0
F2 | Rastrigin | f(x) = \sum_{i=1}^{n} [x_i^2 − 10 cos(2πx_i) + 10] | −5.12 ≤ x_i ≤ 5.12 | 0
F3 | Axis parallel hyper-ellipsoid | f(x) = \sum_{i=1}^{n} i · x_i^2 | −5.12 ≤ x_i ≤ 5.12 | 0
F4 | Rotated hyper-ellipsoid | f(x) = \sum_{i=1}^{n} (\sum_{j=1}^{i} x_j)^2 | −65.536 ≤ x_i ≤ 65.536 | 0
F5 | Moved axis parallel hyper-ellipsoid | f(x) = \sum_{i=1}^{n} 5i · x_i^2 | −5.12 ≤ x_i ≤ 5.12 | 0
F6 | Sum of different powers | f(x) = \sum_{i=1}^{n} |x_i|^{i+1} | −1 ≤ x_i ≤ 1 | 0
F7 | Chung Reynolds | f(x) = (\sum_{i=1}^{n} x_i^2)^2 | −100 ≤ x_i ≤ 100 | 0
F8 | Csendes | f(x) = \sum_{i=1}^{n} x_i^6 (2 + sin(1/x_i)) | −1 ≤ x_i ≤ 1 | 0
F9 | Schaffer | f(x) = \sum_{i=1}^{n} [0.5 + (sin^2(\sqrt{x_i^2 + x_{i+1}^2}) − 0.5) / (1 + 0.001(x_i^2 + x_{i+1}^2))^2] | −100 ≤ x_i ≤ 100 | 0
F10 | Schumer Steiglitz | f(x) = \sum_{i=1}^{n} x_i^4 | −5.12 ≤ x_i ≤ 5.12 | 0
F11 | Schwefel | f(x) = \sum_{i=1}^{n} x_i^α | −100 ≤ x_i ≤ 100 | 0
F12 | Schwefel 1.2 | f(x) = \sum_{i=1}^{D} (\sum_{j=1}^{i} x_j)^2 | −100 ≤ x_i ≤ 100 | 0
F13 | Schwefel 2.21 | f(x) = max_i |x_i|, 1 < i < D | −100 ≤ x_i ≤ 100 | 0
F14 | Schwefel 2.22 | f(x) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{n} |x_i| | −100 ≤ x_i ≤ 100 | 0
F15 | Schwefel 2.23 | f(x) = \sum_{i=1}^{n} x_i^{10} | −10 ≤ x_i ≤ 10 | 0

From these simulation results, it was observed that the optimization of higher-dimensional functions is more complex, as can be seen in Table 2 for dimension sizes D = 20 and D = 30.

10.1.3. A Comparative Analysis. WE-PSO is compared with the other approaches, namely, SO-PSO, H-PSO, and standard PSO, where the true value of each technique on the same problem is provided for comparison purposes. Table 1 shows the standard benchmark functions and their parameter settings. Table 2 reveals that WE-PSO is better than standard PSO, SO-PSO, and H-PSO at dimension D = 30 and outperforms them in convergence. The comparative analysis in Table 2 shows that standard PSO performs well at smaller dimension sizes (D = 10, 20), while the proposed WE-PSO performs considerably better in convergence as the dimension size increases; hence, WE-PSO is appropriate for higher dimensions. Simulation runs were carried out on an HP Compaq machine with an Intel Core i7-3200 processor at 3.8 GHz and 6 GB of RAM.

Figure 6: Mean value of function F1 (mean fitness versus problem dimension for PSO, SO-PSO, H-PSO, and WE-PSO; the same layout applies to Figures 7–20).

Figure 7: Mean value of function F2.

Figure 8: Mean value of function F3.

Figure 9: Mean value of function F4.

In contrast with the findings for SO-PSO, H-PSO, and traditional PSO, the experimental results in Table 2 reveal that WE-PSO surpasses the results of the aforementioned variants of PSO. It can be observed that WE-PSO outperforms the other techniques on all functions, while the other approaches perform as follows: H-PSO performs better on functions F4, F1, and F2 for 20-D but gives overall poor results in higher dimensions, and SO-PSO gives slightly better results on functions F8, F9, and F15 in 10-D but the worst results in larger dimensions. Figures 7 to 15 depict that WE-PSO outperforms the other approaches for dimension sizes D = 10, D = 20, and D = 30 on the standard benchmark test functions.

10.1.4. Statistical Test. To objectively verify the consistency of the findings, Student's t-test is performed. For comparing the competing algorithms, the t-value is computed using

\[ t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{SD_1^2/(n_1 - 1) + SD_2^2/(n_2 - 1)}}. \tag{8} \]

Figure 10: Mean value of function F5.

Figure 11: Mean value of function F6.

Figure 12: Mean value of function F7.

Figure 13: Mean value of function F8.

The t-value in the above equation can be positive or negative, where X̄1 and X̄2 reflect the mean values of the first and second samples, n1 and n2 are the sample sizes, and SD1^2 and SD2^2 are the variances of the two samples. A positive value indicates that WE-PSO outperforms the other approach. Student's t-test results are presented in Table 3.
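Equation (8) translates directly into code; the following C++ sketch computes the t-value from sample means, standard deviations, and sizes. The numbers in main are placeholders, not values from Table 3.

```cpp
#include <cmath>
#include <iostream>

// Student's t-value of equation (8):
// t = (mean1 - mean2) / sqrt(sd1^2/(n1-1) + sd2^2/(n2-1)).
double tValue(double mean1, double sd1, int n1,
              double mean2, double sd2, int n2) {
    return (mean1 - mean2) /
           std::sqrt(sd1 * sd1 / (n1 - 1) + sd2 * sd2 / (n2 - 1));
}

int main() {
    // Illustrative numbers only (30 runs each, as in the experiments);
    // a positive value means the first sample's mean is larger.
    std::cout << tValue(1.12, 2.39, 30, 0.912, 2.23, 30) << "\n";
    return 0;
}
```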

11. Experiments for Data Classification

A comparative analysis on real-world benchmark dataset problems is carried out for the training of neural networks to validate the efficiency of WE-PSO. We conducted experiments using nine benchmark datasets (Iris, Diabetes, Heart, Wine, Seed, Vertebral, Blood Tissue, Horse, and Mammography) from the well-known UCI machine-learning repository. Training weights are initialized randomly within the interval [−50, 50]. Feedforward neural network accuracy is measured in terms of root mean squared error (RMSE). The features of the datasets used can be seen in Table 4.

11.1. Discussion. The multilayer feedforward neural network is trained with the backpropagation algorithm and with standard PSO, SO-PSO, H-PSO, and WE-PSO.

Figure 14: Mean value of function F9.

Figure 15: Mean value of function F10.

Figure 16: Mean value of function F11.

These training approaches are compared on real classification datasets taken from the UCI repository. The cross-validation method is used to assess the efficiency of the various classification techniques. In this paper, the k-fold cross-validation method, with k = 10, is used for training neural networks with standard PSO, SO-PSO, H-PSO, and the proposed WE-PSO algorithm. The dataset is fragmented into 10 chunks, each comprising the same proportion of each class of the dataset; one chunk is used for the testing phase, while the remaining nine chunks are used for the training phase. The experimental results of standard PSO, SO-PSO, H-PSO, and WE-PSO were compared on the nine well-known real-world UCI datasets to evaluate performance, as sketched in the fold-splitting example below.
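The 10-fold split described above can be sketched as follows; the round-robin assignment after a single shuffle is an illustrative simplification of the stratified, class-proportional partition used in the paper.

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// 10-fold cross-validation split (sketch): fold f serves as the test set
// while the remaining k-1 folds form the training set. A stratified split
// would apply the same round-robin within each class separately.
std::vector<std::vector<int>> makeFolds(int nSamples, int k, unsigned seed) {
    std::vector<int> idx(nSamples);
    std::iota(idx.begin(), idx.end(), 0);               // 0, 1, ..., n-1
    std::shuffle(idx.begin(), idx.end(), std::mt19937(seed));
    std::vector<std::vector<int>> folds(k);
    for (int i = 0; i < nSamples; ++i)
        folds[i % k].push_back(idx[i]);                 // round-robin assignment
    return folds;
}
```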

Figure 17: Mean value of function F12.

Figure 18: Mean value of function F13.

Figure 19: Mean value of function F14.

Figure 20: Mean value of function F15.

After the simulation, the results showed that training neural networks with the WE-PSO algorithm is better in terms of precision, and its efficiency is much higher than that of the traditional approaches. The WE-PSO algorithm can also be applied successfully to data classification and statistical problems in the future. The classification accuracy findings are summarized in Table 5.

Table 2: Comparative results among the four PSO algorithms on 15 benchmark test functions.

F | Iter | DIM | PSO Mean | PSO Std dev | SO-PSO Mean | SO-PSO Std dev | H-PSO Mean | H-PSO Std dev | WE-PSO Mean | WE-PSO Std dev
F1 | 1000 | 10 | 2.33E-74 | 7.36E-74 | 2.74E-76 | 8.66E-76 | 3.10E-77 | 9.79E-77 | 5.91E-78 | 1.87E-77
F1 | 2000 | 20 | 1.02E-84 | 3.22E-84 | 8.20E-88 | 2.59E-87 | 1.76E-90 | 5.58E-90 | 4.95E-90 | 1.48E-89
F1 | 3000 | 30 | 1.77E-26 | 5.32E-26 | 7.67E-20 | 2.30E-19 | 4.13E-32 | 1.24E-31 | 1.30E-42 | 3.90E-42
F2 | 1000 | 10 | 4.97E-01 | 1.49E+00 | 4.97E-01 | 1.49E+00 | 7.96E-01 | 2.39E+00 | 2.98E-01 | 8.95E-01
F2 | 2000 | 20 | 8.17E+00 | 2.29E+01 | 6.47E+00 | 1.91E+01 | 3.58E+00 | 9.79E+00 | 3.11E+00 | 1.10E+01
F2 | 3000 | 30 | 1.01E+01 | 2.95E+01 | 9.86E+00 | 2.76E+01 | 9.45E+00 | 27.6991 | 7.76E+00 | 2.20E+01
F3 | 1000 | 10 | 8.70E-80 | 2.61E-79 | 1.79E-79 | 5.37E-79 | 4.87E-79 | 1.46E-78 | 4.40E-81 | 1.32E-80
F3 | 2000 | 20 | 2.62144 | 7.86E+00 | 7.86432 | 2.36E+01 | 2.62144 | 7.86E+00 | 1.78E-89 | 5.33E-89
F3 | 3000 | 30 | 2.62E+01 | 7.86E+01 | 1.57E+01 | 4.72E+01 | 1.05E+01 | 31.4573 | 3.87E-57 | 1.16E-56
F4 | 1000 | 10 | 4.46E-147 | 1.34E-146 | 3.86E-147 | 1.16E-146 | 9.78E-145 | 2.93E-144 | 1.24E-150 | 3.73E-150
F4 | 2000 | 20 | 3.14E-155 | 9.41E-155 | 9.27E-154 | 2.78E-153 | 2.75E-159 | 8.24E-159 | 4.96E-159 | 1.49E-158
F4 | 3000 | 30 | 1.82E-133 | 5.45E-133 | 2.36E-135 | 7.09E-135 | 8.53E-130 | 2.56E-129 | 2.54E-136 | 7.62E-136
F5 | 1000 | 10 | 4.35E-79 | 1.30E-78 | 8.95E-79 | 2.69E-78 | 2.43E-78 | 7.30E-78 | 2.20E-80 | 6.61E-80
F5 | 2000 | 20 | 1.31E+01 | 3.93E+01 | 3.93E+01 | 1.18E+02 | 1.31E+01 | 3.93E+01 | 3.12E-89 | 9.36E-89
F5 | 3000 | 30 | 1.31E+02 | 3.93E+02 | 7.86E+01 | 2.36E+02 | 5.24E+01 | 1.57E+02 | 1.94E-56 | 5.81E-56
F6 | 1000 | 10 | 1.70E-61 | 5.11E-61 | 4.45E-64 | 1.33E-63 | 7.29E-66 | 2.19E-65 | 4.62E-66 | 1.39E-65
F6 | 2000 | 20 | 3.25E-112 | 9.74E-112 | 4.39E-112 | 1.32E-111 | 5.01E-109 | 1.50E-108 | 4.45E-113 | 1.34E-112
F6 | 3000 | 30 | 7.21E-135 | 2.16E-134 | 4.10E-124 | 1.23E-123 | 1.51E-134 | 4.54E-134 | 6.96E-135 | 2.09E-134
F7 | 1000 | 10 | 2.96E-157 | 8.87E-157 | 2.39E-157 | 7.18E-157 | 1.28E-157 | 3.84E-157 | 2.47E-163 | 0.00E+00
F7 | 2000 | 20 | 8.79E-177 | 0.00E+00 | 1.77E-184 | 0.00E+00 | 3.49E-183 | 0.00E+00 | 3.41E-186 | 0.00E+00
F7 | 3000 | 30 | 1.23E-82 | 3.68E-82 | 1.25E-116 | 3.74E-116 | 5.99E-130 | 5.99E-130 | 4.60E-134 | 1.38E-133
F8 | 1000 | 10 | 4.39E-200 | 0.00E+00 | 1.98E-194 | 0.00E+00 | 4.51E-197 | 0.00E+00 | 8.99E-201 | 0.00E+00
F8 | 2000 | 20 | 1.57E-20 | 4.70E-20 | 1.04E-93 | 3.13E-93 | 1.10E-148 | 3.30E-148 | 4.09E-151 | 1.23E-150
F8 | 3000 | 30 | 1.89E-09 | 5.68E-09 | 4.54E-10 | 1.36E-09 | 1.14E-08 | 3.43E-08 | 1.34E-09 | 4.03E-09
F9 | 1000 | 10 | 5.49E-01 | 6.72E-01 | 1.30E-01 | 2.02E-01 | 2.02E-01 | 5.73E-01 | 1.42E-01 | 1.42E-01
F9 | 2000 | 20 | 2.05E+00 | 1.31E+00 | 7.83E-01 | 1.43E+00 | 6.83E-01 | 1.29E+00 | 4.32E-01 | 1.08E+00
F9 | 3000 | 30 | 1.12E+00 | 2.39E+00 | 9.99E-01 | 2.30E+00 | 9.56E-01 | 2.52E+00 | 9.12E-01 | 2.23E+00
F10 | 1000 | 10 | 2.23E-138 | 2.23E-138 | 2.23E-138 | 3.15E-137 | 4.35E-137 | 1.31E-136 | 1.10E-139 | 3.31E-139
F10 | 2000 | 20 | 3.79E-148 | 1.14E-147 | 7.87E-149 | 2.36E-148 | 4.19E-147 | 1.26E-146 | 8.73E-153 | 2.62E-152
F10 | 3000 | 30 | 4.43E-126 | 1.33E-125 | 7.52E-133 | 2.26E-132 | 1.57E-128 | 4.71E-128 | 1.38E-133 | 4.14E-133
F11 | 1000 | 10 | 3.75E-187 | 0.00E+00 | 1.57E-192 | 0.00E+00 | 2.15E-191 | 0.00E+00 | 8.99E-198 | 0.00E+00
F11 | 2000 | 20 | 5.29E-193 | 0.00E+00 | 2.53E-195 | 0.00E+00 | 8.45E-195 | 0.00E+00 | 9.83E-197 | 0.00E+00
F11 | 3000 | 30 | 4.82E-154 | 1.44E-153 | 8.84E-159 | 2.65E-158 | 5.49E-168 | 0.00E+00 | 5.75E-173 | 0.00E+00
F12 | 1000 | 10 | 1.13E-01 | 3.40E-01 | 1.67E-02 | 5.02E-02 | 2.28E-02 | 6.85E-02 | 2.89E-03 | 8.66E-03
F12 | 2000 | 20 | 1.39E+01 | 4.12E+01 | 5.03E+00 | 1.50E+01 | 2.95E+00 | 8.84E+00 | 1.67E+00 | 5.01E+00
F12 | 3000 | 30 | 7.45E+00 | 2.23E+01 | 1.22E+01 | 3.66E+01 | 8.74E+00 | 2.60E+01 | 4.94E+00 | 1.48E+01
F13 | 1000 | 10 | 8.04E-26 | 2.41E-25 | 8.01E-27 | 2.40E-26 | 3.59E-27 | 1.08E-26 | 1.41E-27 | 1.02E-26
F13 | 2000 | 20 | 1.42E-08 | 4.26E-08 | 2.64E-11 | 7.93E-11 | 3.29E-10 | 9.86E-10 | 2.14E-12 | 6.43E-12
F13 | 3000 | 30 | 6.20E-03 | 1.86E-02 | 1.41E-03 | 4.23E-03 | 9.36E-03 | 2.81E-02 | 1.41E-03 | 3.83E-03
F14 | 1000 | 10 | 3.62E-38 | 1.09E-37 | 3.62E-38 | 1.09E-37 | 5.92E-36 | 1.77E-35 | 1.95E-38 | 5.86E-38
F14 | 2000 | 20 | 6.27E-10 | 1.88E-09 | 1.38E-09 | 4.14E-09 | 7.91E-13 | 2.37E-12 | 1.17E-13 | 3.51E-13
F14 | 3000 | 30 | 2.56E-06 | 7.67E-06 | 4.80E+01 | 1.44E+02 | 1.34E-06 | 4.03E-06 | 4.88E-09 | 1.46E-08
F15 | 1000 | 10 | 1.10E-294 | 0.00E+00 | 3.19E-301 | 0.00E+00 | 2.78E-307 | 0.00E+00 | 3.21E-308 | 0.00E+00
F15 | 2000 | 20 | 6.16E-271 | 0.00E+00 | 5.09E-276 | 0.00E+00 | 3.74E-270 | 0.00E+00 | 4.85E-268 | 0.00E+00
F15 | 3000 | 30 | 3.08E-207 | 0.00E+00 | 1.04E-200 | 0.00E+00 | 8.12E-209 | 0.00E+00 | 3.06E-212 | 0.00E+00

Note: "Mean" shows the mean value and "Std dev" indicates the standard deviation. The best results among the four PSO algorithms are presented in bold in the original typeset table.

Table 3: Results of Student's t-test for all techniques.

F | Iter | DIM | WE-PSO vs PSO (T-value, Sig.) | WE-PSO vs SO-PSO (T-value, Sig.) | WE-PSO vs H-PSO (T-value, Sig.)
F1 | 1000 | 10 | +1.02 (WE-PSO) | +0.99 (WE-PSO) | +0.75 (WE-PSO)
F1 | 2000 | 20 | +1.00 (WE-PSO) | +0.48 (WE-PSO) | −0.83 (H-PSO)
F1 | 3000 | 30 | +1.00 (WE-PSO) | +1.00 (WE-PSO) | +1.00 (WE-PSO)
F2 | 1000 | 10 | +30.71 (WE-PSO) | +15.67 (WE-PSO) | +1.21 (WE-PSO)
F2 | 2000 | 20 | +8.82 (WE-PSO) | +107.56 (WE-PSO) | +11.13 (WE-PSO)
F2 | 3000 | 30 | +0.63 (WE-PSO) | +34.29 (WE-PSO) | +0.65 (WE-PSO)
F3 | 1000 | 10 | +0.99 (WE-PSO) | +1.00 (WE-PSO) | +0.83 (WE-PSO)
F3 | 2000 | 20 | +1.00 (WE-PSO) | +1.00 (WE-PSO) | +263.14 (WE-PSO)
F3 | 3000 | 30 | +1.00 (WE-PSO) | +525.29 (WE-PSO) | +0.93 (WE-PSO)
F4 | 1000 | 10 | +0.19 (WE-PSO) | +0.99 (WE-PSO) | +1.00 (WE-PSO)
F4 | 2000 | 20 | +0.99 (WE-PSO) | +0.84 (WE-PSO) | −0.98 (H-PSO)
F4 | 3000 | 30 | +0.86 (WE-PSO) | +0.26 (WE-PSO) | +0.97 (WE-PSO)
F5 | 1000 | 10 | +0.79 (WE-PSO) | +0.44 (WE-PSO) | +0.98 (WE-PSO)
F5 | 2000 | 20 | +0.29 (WE-PSO) | +0.57 (WE-PSO) | +263.14 (WE-PSO)
F5 | 3000 | 30 | +0.06 (WE-PSO) | +2622.44 (WE-PSO) | +0.96 (WE-PSO)
F6 | 1000 | 10 | +0.80 (WE-PSO) | +0.98 (WE-PSO) | +0.17 (WE-PSO)
F6 | 2000 | 20 | +0.86 (WE-PSO) | +0.96 (WE-PSO) | +0.96 (WE-PSO)
F6 | 3000 | 30 | +0.99 (WE-PSO) | +0.98 (WE-PSO) | +0.89 (WE-PSO)
F7 | 1000 | 10 | +0.90 (WE-PSO) | +0.95 (WE-PSO) | +1.00 (WE-PSO)
F7 | 2000 | 20 | +1.00 (WE-PSO) | +1.00 (WE-PSO) | +1.00 (WE-PSO)
F7 | 3000 | 30 | +1.00 (WE-PSO) | +1.00 (WE-PSO) | +1.00 (WE-PSO)
F8 | 1000 | 10 | +0.75 (WE-PSO) | +0.98 (WE-PSO) | +0.55 (WE-PSO)
F8 | 2000 | 20 | +483.97 (WE-PSO) | +1.00 (WE-PSO) | +0.91 (WE-PSO)
F8 | 3000 | 30 | +1.41 (WE-PSO) | −6.89 (SO-PSO) | +522.24 (WE-PSO)
F9 | 1000 | 10 | +53.67 (WE-PSO) | −3.00 (SO-PSO) | +82.30 (WE-PSO)
F9 | 2000 | 20 | +84.84 (WE-PSO) | +33.46 (WE-PSO) | +16.08 (WE-PSO)
F9 | 3000 | 30 | +470.01 (WE-PSO) | +390.54 (WE-PSO) | +416.26 (WE-PSO)
F10 | 1000 | 10 | +1.00 (WE-PSO) | +0.84 (WE-PSO) | +0.67 (WE-PSO)
F10 | 2000 | 20 | +1.00 (WE-PSO) | +0.81 (WE-PSO) | +0.89 (WE-PSO)
F10 | 3000 | 30 | +1.00 (WE-PSO) | +0.98 (WE-PSO) | +0.95 (WE-PSO)
F11 | 1000 | 10 | +0.97 (WE-PSO) | +1.92 (WE-PSO) | +1.00 (WE-PSO)
F11 | 2000 | 20 | +1.00 (WE-PSO) | +1.00 (WE-PSO) | +1.00 (WE-PSO)
F11 | 3000 | 30 | +0.87 (WE-PSO) | +0.98 (WE-PSO) | +1.00 (WE-PSO)
F12 | 1000 | 10 | +0.91 (WE-PSO) | +0.58 (WE-PSO) | +0.27 (WE-PSO)
F12 | 2000 | 20 | +2.26 (WE-PSO) | +1.08 (WE-PSO) | +0.27 (WE-PSO)
F12 | 3000 | 30 | +1.84 (WE-PSO) | +2.25 (WE-PSO) | +2.41 (WE-PSO)
F13 | 1000 | 10 | +0.98 (WE-PSO) | +0.48 (WE-PSO) | +0.84 (WE-PSO)
F13 | 2000 | 20 | +0.72 (WE-PSO) | +0.78 (WE-PSO) | +0.98 (WE-PSO)
F13 | 3000 | 30 | +0.11 (WE-PSO) | +0.39 (WE-PSO) | +0.86 (WE-PSO)
F14 | 1000 | 10 | +0.57 (WE-PSO) | +0.15 (WE-PSO) | +0.82 (WE-PSO)
F14 | 2000 | 20 | +1.51 (WE-PSO) | +1.49 (WE-PSO) | +1.50 (WE-PSO)
F14 | 3000 | 30 | +0.90 (WE-PSO) | +1.32 (WE-PSO) | +1.32 (WE-PSO)
F15 | 1000 | 10 | +1.00 (WE-PSO) | +1.00 (WE-PSO) | +1.00 (WE-PSO)
F15 | 2000 | 20 | +1.00 (WE-PSO) | −0.50 (SO-PSO) | +0.99 (WE-PSO)
F15 | 3000 | 30 | +0.83 (WE-PSO) | +1.00 (WE-PSO) | +1.00 (WE-PSO)

Table 4: Dataset description.

S no | Dataset | Number of total units | Disc. feature | Nature | No. of inputs | No. of classes
1 | Iris | 150 | - | Real | 4 | 3
2 | Diabetes | 768 | - | Real | 8 | 2
3 | Heart | 270 | - | Real | 13 | 2
4 | Wine | 178 | - | Real | 13 | 3
5 | Seed | 210 | - | Real | 7 | 3
6 | Vertebral | 310 | - | Real | 6 | 2
7 | Blood tissue | 748 | - | Real | 5 | 2
8 | Horse | 368 | - | Real | 27 | 2
9 | Mammography | 961 | - | Real | 6 | 2

12. Conclusion

The performance of PSO depends on the initialization of the population. In our work, we have initialized the particles of PSO by using a novel quasirandom sequence called the WELL sequence, while the velocity and position vectors of the particles are still updated in the usual random-sequence fashion. The importance of initializing the particles with a quasirandom sequence is highlighted in this study. The experimental results explicitly state that the WELL sequence is well suited for population initialization due to its random nature. Moreover, the simulation results have shown that WE-PSO outperforms the PSO, SO-PSO, and H-PSO approaches. The techniques were also applied to neural network training and provide significantly better results than the conventional training algorithms, including the standard PSO, SO-PSO, and H-PSO approaches, respectively. The solution provides higher diversity and increases the potential for local search. The experimental results depict that our approach has excellent convergence accuracy and avoids local optima. Our technique is much better when compared with traditional PSO and the other initialization approaches for PSO, as is evident in Figure 21. The use of mutation operators together with the initialization technique may be evaluated on large-scale search spaces in the future. The core objective of this research is universal and is relevant to other stochastic-based metaheuristic algorithms, which will establish our future direction.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon reasonable request.

Disclosure

This work is part of the PhD thesis of the student.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Table 5: Classification accuracy results.

S no | Dataset | Type | BPA Tr acc (%) | BPA Ts acc (%) | PSO Tr acc (%) | PSO Ts acc (%) | SO-PSO Tr acc (%) | SO-PSO Ts acc (%) | H-PSO Tr acc (%) | H-PSO Ts acc (%) | WE-PSO Tr acc (%) | WE-PSO Ts acc (%)
1 | Iris | 3-class | 98.2 | 95.7 | 99 | 96.6 | 98.8 | 97.3 | 98.9 | 96 | 99.2 | 98
2 | Diabetes | 2-class | 86.1 | 65.3 | 88.7 | 69.1 | 89.3 | 69.1 | 88.4 | 71.6 | 90.4 | 74.1
3 | Heart | 2-class | 78.5 | 68.3 | 99.5 | 72.5 | 99.13 | 67.5 | 99.13 | 72.5 | 100 | 77.5
4 | Wine | 3-class | 67.3 | 62.17 | 74.24 | 61.11 | 81.81 | 66.66 | 75.75 | 67.44 | 75.75 | 69.6
5 | Seed | 3-class | 84.2 | 70.56 | 97.57 | 77.77 | 87.27 | 84.44 | 98.18 | 77.77 | 98.18 | 91.11
6 | Vertebral | 2-class | 91.4 | 84.95 | 96.03 | 92.85 | 96.42 | 92.85 | 96.40 | 92.85 | 97.61 | 94.64
7 | Blood tissue | 2-class | 76.3 | 73.47 | 90.8 | 78.6 | 86.94 | 78.66 | 83.89 | 70 | 84.74 | 84
8 | Horse | 2-class | 64.4 | 57.87 | 69.02 | 50 | 74.19 | 52 | 72.90 | 56 | 79.35 | 58
9 | Mammography | 2-class | 77.36 | 71.26 | 80.82 | 76.66 | 68.94 | 63 | 88 | 85 | 97.71 | 96.66

Figure 21: Classification testing accuracy results (%) for BPA, PSO, SO-PSO, H-PSO, and WE-PSO across the nine datasets.

References

[1] K. Deb, "Multi-objective optimization," in Search Methodologies, pp. 403–449, Springer, Berlin, Germany, 2014.
[2] R. Vandenberghe, N. Nelissen, E. Salmon et al., "Binary classification of 18F-flutemetamol PET using machine learning: comparison with visual reads and structural MRI," NeuroImage, vol. 64, pp. 517–525, 2013.
[3] V. Ganganwar, "An overview of classification algorithms for imbalanced datasets," International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 4, pp. 42–47, 2012.
[4] J. Kennedy, "Swarm intelligence," in Handbook of Nature-Inspired and Innovative Computing, pp. 187–219, Springer, Berlin, Germany, 2006.
[5] G. Beni and J. Wang, "Swarm intelligence in cellular robotic systems," in Robots and Biological Systems: Towards a New Bionics, pp. 703–712, Springer, Berlin, Germany, 1993.
[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of ICNN'95 - International Conference on Neural Networks, pp. 1942–1948, 1995.
[7] J. Salerno, "Using the particle swarm optimization technique to train a recurrent neural model," in Proceedings of the Ninth IEEE International Conference on Tools with Artificial Intelligence, pp. 45–49, Newport Beach, CA, USA, November 1997.
[8] R. Storn and K. Price, "Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[9] P. P. Palmes, T. Hayasaka, and S. Usui, "Mutation-based genetic neural network," IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 587–600, 2005.
[10] W. H. Bangyal, J. Ahmad, and H. T. Rauf, "Optimization of neural network using improved bat algorithm for data classification," Journal of Medical Imaging and Health Informatics, vol. 9, no. 4, pp. 670–681, 2019.
[11] A. Cervantes, I. M. Galvan, and P. Isasi, "AMPSO: a new particle swarm method for nearest neighborhood classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 5, pp. 1082–1091, 2009.
[12] W. H. Bangyal, J. Ahmed, and H. T. Rauf, "A modified bat algorithm with torus walk for solving global optimisation problems," International Journal of Bio-Inspired Computation, vol. 15, no. 1, pp. 1–13, 2020.
[13] C. Grosan, A. Abraham, and M. Nicoara, "Search optimization using hybrid particle sub-swarms and evolutionary algorithms," International Journal of Simulation Systems, Science & Technology, vol. 6, no. 10, pp. 60–79, 2005.
[14] M. Junaid, W. H. Bangyal, and J. Ahmed, "A novel bat algorithm using sobol sequence for the initialization of population," in IEEE 23rd International Multitopic Conference (INMIC), pp. 1–6, Bahawalpur, Pakistan, November 2020.
[15] W. H. Bangyal, J. Ahmed, H. T. Rauf, and S. Pervaiz, "An overview of mutation strategies in bat algorithm," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 9, pp. 523–534, 2018.
[16] D. E. Knuth, Fundamental Algorithms: The Art of Computer Programming, Addison-Wesley, Boston, MA, USA, 1973.
[17] J. E. Gentle, Random Number Generation and Monte Carlo Methods, Springer Science & Business Media, Berlin, Germany, 2006.
[18] N. Q. Uy, N. X. Hoai, R. I. McKay, and P. M. Tuan, "Initialising PSO with randomised low-discrepancy sequences: the comparative results," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2007), pp. 1985–1992, Singapore, September 2007.
[19] S. Kimura and K. Matsumura, "Genetic algorithms using low-discrepancy sequences," in Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, ACM, pp. 1341–1346, Washington, DC, USA, June 2005.
[20] R. Brits, A. P. Engelbrecht, and F. Van den Bergh, "A niching particle swarm optimizer," in Proceedings of the 4th Asia-Pacific Conference on Simulated Evolution and Learning, pp. 692–696, Orchid Country Club, Singapore, November 2002.
[21] J. van der Corput, "Verteilungsfunktionen I & II," Nederl. Akad. Wetensch. Proc., vol. 38, pp. 1058–1066, 1935.
[22] R. A. Krohling and L. dos Santos Coelho, "PSO-E: particle swarm with exponential distribution," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2006), pp. 1428–1433, Vancouver, Canada, July 2006.
[23] R. Thangaraj, M. Pant, and K. Deep, "Initializing PSO with probability distributions and low-discrepancy sequences: the comparative results," in Proceedings of the World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), pp. 1121–1126, IEEE, Coimbatore, India, December 2009.
[24] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986.
[25] K. E. Parsopoulos and M. N. Vrahatis, "Initializing the particle swarm optimizer using the nonlinear simplex method," in Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation, World Scientific and Engineering Academy and Society Press, Stevens Point, WI, USA, 2002.
[26] M. Richards and D. Ventura, "Choosing a starting configuration for particle swarm optimization," in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 2309–2312, Budapest, Hungary, July 2004.
[27] H. Jabeen, Z. Jalil, and A. R. Baig, "Opposition based initialization in particle swarm optimization (O-PSO)," in Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, pp. 2047–2052, Montreal, Quebec, Canada, July 2009.
[28] A. L. Gutierrez, "Comparison of different PSO initialization techniques for high dimensional search space problems: a test with FSS and antenna arrays," in Proceedings of the 5th European Conference on Antennas and Propagation (EUCAP), pp. 965–969, IEEE, Rome, Italy, April 2011.
[29] A. Subasi, "Classification of EMG signals using PSO optimized SVM for diagnosis of neuromuscular disorders," Computers in Biology and Medicine, vol. 43, no. 5, pp. 576–586, 2013.
[30] S. Dehuri, R. Roy, S.-B. Cho, and A. Ghosh, "An improved swarm optimized functional link artificial neural network (ISO-FLANN) for classification," Journal of Systems and Software, vol. 85, no. 6, pp. 1333–1345, 2012.
[31] Z. Liu, P. Zhu, W. Chen, and R.-J. Yang, "Improved particle swarm optimization algorithm using design of experiment and data mining techniques," Structural and Multidisciplinary Optimization, vol. 52, no. 4, pp. 813–826, 2015.
[32] S. Chatterjee, S. Sarkar, S. Hore, N. Dey, A. S. Ashour, and V. E. Balas, "Particle swarm optimization trained neural network for structural failure prediction of multistoried RC buildings," Neural Computing and Applications, vol. 28, no. 8, pp. 2005–2016, 2016.
[33] Y. Xue, T. Tang, and A. X. Liu, "Large-scale feedforward neural network optimization by a self-adaptive strategy and parameter based particle swarm optimization," IEEE Access, vol. 7, pp. 52473–52483, 2019.
[34] F. E. F. Junior and G. G. Yen, "Particle swarm optimization of deep neural networks architectures for image classification," Swarm and Evolutionary Computation, vol. 49, pp. 62–74, 2019.
[35] O. Tarkhaneh and H. Shen, "Training of feedforward neural networks for data classification using hybrid particle swarm optimization, Mantegna Levy flight and neighbourhood search," Heliyon, vol. 5, no. 4, Article ID e01275, 2019.
[36] A. Herliana, T. Arifin, S. Susanti, and A. B. Hikmah, "Feature selection of diabetic retinopathy disease using particle swarm optimization and neural network," in Proceedings of the 2018 6th International Conference on Cyber and IT Service Management (CITSM), pp. 1–4, Parapat, Indonesia, August 2018.
[37] M. K. Sarkaleh and A. Shahbahrami, "Classification of ECG arrhythmias using discrete wavelet transform and neural networks," International Journal of Computer Science, Engineering and Applications, vol. 2, no. 1, pp. 1–13, 2012.
[38] R. J. Schalkoff, Artificial Neural Networks, McGraw-Hill, New York, NY, USA, 1997.
[39] G. P. Zhang, "Neural networks for classification: a survey," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 30, no. 4, pp. 451–462, 2000.
[40] M. Castellani, "Evolutionary generation of neural network classifiers - an empirical comparison," Neurocomputing, vol. 99, pp. 214–229, 2013.
[41] G. E. Hinton, J. L. McClelland, and D. Rumelhart, "Distributed representations," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations, MIT Press, Cambridge, MA, USA, 1986.
[42] M. Matsumoto and T. Nishimura, "Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator," ACM Transactions on Modeling and Computer Simulation (TOMACS), vol. 8, no. 1, pp. 3–30, 1998.
[43] I. M. Sobol', "On the distribution of points in a cube and the approximate evaluation of integrals," Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, vol. 7, no. 4, pp. 784–802, 1967.
[44] J. H. Halton, "Algorithm 247: radical-inverse quasi-random point sequence," Communications of the ACM, vol. 7, no. 12, pp. 701–702, 1964.
[45] F. Panneton, P. L'Ecuyer, and M. Matsumoto, "Improved long-period generators based on linear recurrences modulo 2," ACM Transactions on Mathematical Software (TOMS), vol. 32, no. 1, pp. 1–16, 2006.

[40] M Castellani ldquoEvolutionary generation of neural networkclassifiers-An empirical comparisonrdquo Neurocomputingvol 99 pp 214ndash229 2013

[41] G E Hinton J L Mcclelland and D Rumelhart ldquoDistributedrepresentationsrdquo Parallel Distributed Processingexplorationsin the Microstructure of Cognition Foundation MIT PressCambridge MA USA 1986

[42] M Matsumoto and T Nishimura ldquoMersenne twister a 623-dimensionally equidistributed uniform pseudo-randomnumber generatorrdquo ACM Transactions on Modeling andComputer Simulation (TOMACS) vol 8 no 1 pp 3ndash30 1995

[43] I Y M Sobolrsquo ldquoOn the distribution of points in a cube and theapproximate evaluation of integralsrdquo Zhurnal VychislitelrsquonoiMatematiki I Matematicheskoi Fiziki vol 7 no 4 pp 784ndash802 1967

[44] J H Halton ldquoAlgorithm 247 radical-inverse quasi-randompoint sequencerdquo Communications of the ACM vol 7 no 12pp 701-702 1964

[45] F Panneton P Lrsquoecuyer and M Matsumoto ldquoImprovedlong-period generators based on linear recurrences modulo2rdquo ACM Transactions on Mathematical Software (TOMS)vol 32 pp 11ndash16 2006

Computational Intelligence and Neuroscience 17

Page 4: A New Initialization Approach in Particle Swarm Optimization ...encouragedby other socialanimal colonies,such as bird flocking or fish schooling [4]. In the research of cellular

An architecture-search algorithm based on PSO, termed psoCNN, was proposed in [34]; it can spontaneously finalize the most appropriate architecture of deep convolutional neural networks (CNNs) for image classification. A novel NN training algorithm incorporating PSO, called LPSONS, was proposed in [35]. In the LPSONS algorithm, the velocity parameter of PSO is combined with the Mantegna Levy flight distribution for improved diversity, and the algorithm is used to train feedforward multilayer perceptron ANNs. In [36], PSO was used for feature selection on diabetic retinopathy data, after which an NN classifier was applied to classify the disease.

After conducting a thorough literature review, we can infer that particle efficiency and convergence speed are highly dependent on the swarm initialization process. If the particles cover the entire search space in a proper pattern, there is a better chance that the global optimum will be found at an early stage of PSO.

3. Particle Swarm Optimization

PSO is a global optimization technique that plays an important role in applied technology and has been widely deployed in numerous engineering applications, such as the design of heating systems, data mining, power allocation in cooperative communication networks, pattern recognition, machine learning, route selection, and information security, to name a few. PSO operates on a population of candidates. To optimize a problem, each candidate, designated as a particle, represents a potential solution. The current location of a particle is defined in the n-dimensional search space and represented by the position vector x; each solution is evaluated in the form of a fitness score. The velocity vector v describes the motion of a particle, that is, the step size taken by the swarm through the search space, as distinct from the position vector.

PSO begins with a population of n particles that fly through the d-dimensional search space at each iteration k_i to look for the optimal solution. The swarm's movement gradually transforms the candidate solutions toward the desired solution. The following two equations are used to update the velocity and position of the particles:

v_{z+1} = v_z + c_1 (p_z^{best} - x_z) + c_2 (g_z^{best} - x_z),   (1)

x_{z+1} = x_z + v_{z+1}.   (2)

In the above equations, x_z and v_z are the position and velocity vectors, respectively; p_z^{best} is the local best solution acquired from the particle's own previous experience, and g_z^{best} is the global best solution acquired from the N-dimensional experience of its neighbours. Here c_1 ⟶ c_1 r_1 and c_2 ⟶ c_2 r_2, where c_1 and c_2 are the acceleration factors that influence the attraction weights and r_1 and r_2 are two random numbers produced by the random number generator. x_{z+1} is the updated position vector that guides the particle to a new point at the kth iteration, and v_{z+1} is the newly updated velocity. Three factors can be identified in equation (1). The momentum factor, v_z, represents the old velocity. The cognitive factor, c_1 (p_z^{best} - x_z), attracts the particle towards the best fitness found in its own history. The social factor, c_2 (g_z^{best} - x_z), attracts it towards the best global solution found by the whole neighbourhood. The pseudocode of the fundamental PSO is presented in Algorithm 1.
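As an illustration, a minimal C++ sketch of one particle update per equations (1) and (2) is given below; the function and parameter names are ours, and the inertia weight w multiplying the old velocity follows the experimental settings described in Section 10.

#include <cstddef>
#include <random>
#include <vector>

// One velocity/position update per equations (1) and (2).
void updateParticle(std::vector<double>& x, std::vector<double>& v,
                    const std::vector<double>& pbest,
                    const std::vector<double>& gbest,
                    double w, double c1, double c2, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    for (std::size_t d = 0; d < x.size(); ++d) {
        double r1 = u(rng), r2 = u(rng);       // fresh uniform(0,1) draws
        v[d] = w * v[d]                        // momentum factor
             + c1 * r1 * (pbest[d] - x[d])     // cognitive factor
             + c2 * r2 * (gbest[d] - x[d]);    // social factor, eq. (1)
        x[d] += v[d];                          // eq. (2)
    }
}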

4. Training of the Neural Networks

The artificial neural network (ANN) is perceived as a highly effective approximation technique, used to approximate nonlinear functions and their relationships. The ANN model is capable of generalizing, learning, organizing, and adapting data. The ANN architecture is based on an interlinked series of synchronized neurons, where multiple processing layers compute the encoding of information [37]. An ANN is a computational mathematical model that regulates the relationship between the input and output layers through different nonlinear functions [38]. In this study, we have used the feedforward neural network shown in Figure 1, which is the most frequently used and popular ANN architecture. The feedforward neural network is defined by three layers: the input layer, the hidden (sandwich) layer, and the output layer. The input layer serves as the gateway of the NN, where the data frame is inserted. The intermediate task of the hidden layer is to process the data frame coming from the input layer. The outcomes are derived from the output layer [39]. The units of each layer are connected to the nodes of the next layer, and the links between nodes are structured in a feedforward manner. Each unit has a bias component with a value of -1, as in [24].

For weight optimization of the NN, the position of each particle in the swarm represents a set of weights for the current epoch or iteration. The dimensionality of each particle is the number of weights associated with the network. A particle moves within the weight space, attempting to minimize the learning error (mean squared error (MSE) or sum of squared error (SSE)). To change the weights of the neural network, a change in position occurs that reduces the error in the current epoch. There is no backpropagation concept in PSO-NN; the feedforward NN produces the learning error (particle fitness) from a set of weights and biases (the PSO positions), as sketched below.
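A minimal sketch of this fitness evaluation follows. All names are ours, assuming one hidden layer with sigmoid units and a single sigmoid output; pos must hold nHid*(nIn+1) + nHid + 1 values.

#include <cmath>
#include <cstddef>
#include <vector>

// Interpret a particle position as the weight/bias vector of a
// feedforward network and return its MSE, i.e., the particle fitness.
double particleFitness(const std::vector<double>& pos,
                       const std::vector<std::vector<double>>& X,
                       const std::vector<double>& y,
                       int nIn, int nHid) {
    auto sig = [](double z) { return 1.0 / (1.0 + std::exp(-z)); };
    double sse = 0.0;
    for (std::size_t s = 0; s < X.size(); ++s) {
        std::size_t k = 0;
        std::vector<double> h(nHid);
        for (int j = 0; j < nHid; ++j) {           // input -> hidden
            double a = pos[k++];                   // hidden bias
            for (int i = 0; i < nIn; ++i) a += pos[k++] * X[s][i];
            h[j] = sig(a);
        }
        double o = pos[k++];                       // output bias
        for (int j = 0; j < nHid; ++j) o += pos[k++] * h[j];
        double e = y[s] - sig(o);
        sse += e * e;                              // squared error
    }
    return sse / static_cast<double>(X.size());    // mean squared error
}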

The challenge of premature convergence arises in the weight optimization problem of ANNs [40, 41]. The primary objective of the ANN model is to obtain a set of optimal parameters and weights. The two major classification approaches used to segregate positive from negative entities are gradient descent and error correction, respectively. Gradient descent-based techniques perform poorly when the problem is high dimensional and the parameters depend exclusively on the structure; due to this fact, they tend to get stuck in local minima. Backpropagation is one of the gradient descent techniques


that is most commonly used to train neural network models and to solve complex real-world multimodal problems, as mentioned in [24].

5. Random Number Generator

A built-in library function, Rand(x_min, x_max), is used to construct a mesh of uniformly distributed random numbers [42]. The probability density function of the continuous uniform distribution describes the effect of uniformity on any sequence. The probability density function can be characterized as given in the following equation:

f(t) = 1/(q - p)  for p < t < q,
f(t) = 0          for t < p or t > q,   (3)

where p and q are the parameters to be estimated by maximum likelihood. Since the integral of f(t) dt over any interval of zero length vanishes, the value of f(t) at the boundary points p and q is immaterial. The maximum likelihood estimate is determined from the log-likelihood function, which is given in

l(p, q | t) = -n log(q - p).   (4)
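In practice, such a uniform generator is typically driven by the Mersenne Twister [42]; a small C++ sketch of the Rand(x_min, x_max) call follows (function and parameter names are ours).

#include <random>
#include <vector>

// n uniform points over [xmin, xmax], generated with the Mersenne
// Twister engine that usually backs Rand(x_min, x_max).
std::vector<double> uniformPoints(int n, double xmin, double xmax,
                                  unsigned seed) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> dist(xmin, xmax);
    std::vector<double> p(n);
    for (double& v : p) v = dist(gen);
    return p;
}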

PSO (set of particles)
(1) input: particles {p_z} with undefined locations
(2) output: particles {p_z} with best fitness score
(3) for each particle {p_1, p_2, p_3, ..., p_z} do
(4)   for each dimension {d_1, d_2, d_3, ..., d_z} do
      (a) initialize x_z as x_z = Rand(x_min, x_max)
      (b) initialize v_z as v_z = Rand(x_min, x_max)
      (c) if x_z reaches a better fitness than p_z^{best}, replace p_z^{best} with x_z
(5) declare one global solution g_z^{best} from all the optimal p_z^{best}
(6) repeat for up to k_z iterations:
      for each particle {p_1, p_2, p_3, ..., p_z} do
        compute v_{z+1} using equation (1)
        compute x_{z+1} using equation (2)
        if fitness(x_{z+1}) > fitness(p_z^{best}), then p_z^{best} = x_{z+1}
        if fitness(x_{z+1}) > fitness(g_z^{best}), then g_z^{best} = x_{z+1}
(7) return particles {p_z} containing the global optimal solution

Algorithm 1: Standard PSO pseudocode.

[Figure 1: Feedforward neural network trained by PSO. Flow: (1) set the total number of particles; (2) initialize particle positions using QRS; (3) train the NN using the initial particle positions; (4) compute the learning error, setting the overall best error as Gbest and each particle's best error as Pbest; (5) calculate velocities and update positions based on the Gbest and Pbest particles; (6) train the NN using the new particle positions; repeat steps 4, 5, and 3 until the targeted learning error or the maximum number of iterations is reached.]

6. The Sobol Sequence

The Sobol distribution was introduced for the construction of point coordinates in [43]. A linear recurrence relation is used for the coordinate of each dimension d_z, and the binary expansion of a nonnegative integer a_z can be written as in

a = a_1 2^0 + a_2 2^1 + a_3 2^2 + ... + a_z 2^{z-1}.   (5)

For dimension d_z, instance i can be generated using

x_i^D = i_1 v_1^D + i_2 v_2^D + ... + i_z v_z^D,   (6)

where v_1^D denotes the kth direction number of an instance v_i^D at dimension d_z, and v_i^D can be computed using

v_k^D = c_1 v_{k-1}^D + c_2 v_{k-2}^D + ... + c_z v_{k-z}^D + v_{k-z}^D / 2^z,   (7)

where c_z denotes a polynomial coefficient and k > z.
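For intuition, the first coordinate of the Sobol sequence can be generated with the Gray-code form of this recurrence, as in the following C++ sketch (names are ours; higher dimensions additionally require direction numbers derived from primitive polynomials).

#include <cstdint>
#include <vector>

// First-dimension Sobol points: x_n = x_{n-1} XOR v_c, where c is the
// index of the lowest zero bit of n-1 and v_k = 2^(31-k) are the
// direction numbers of the first coordinate.
std::vector<double> sobol1D(int n) {
    std::vector<uint32_t> v(32);
    for (int k = 0; k < 32; ++k) v[k] = 1u << (31 - k);
    std::vector<double> pts(n, 0.0);
    uint32_t x = 0;
    for (int i = 1; i < n; ++i) {
        uint32_t c = 0, m = i - 1;
        while (m & 1u) { m >>= 1; ++c; }   // lowest zero bit of i-1
        x ^= v[c];
        pts[i] = x / 4294967296.0;         // scale to [0, 1)
    }
    return pts;
}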

7. The Halton Sequence

In [44], the Halton sequence was proposed as an improved variant of the van der Corput sequence. For generating points, Halton sequences use pairwise coprime bases. Algorithm 2 shows the pseudocode for generating the Halton sequence.

8. The WELL Sequence

Panneton et al. [45] proposed the Well Equidistributed Long-period Linear (WELL) sequence, initially presented as an improved variant of the Mersenne Twister algorithm. The WELL generation algorithm is given as Algorithm 3.

The algorithm above describes the general recurrence of the WELL generator. Its parameters are as follows: x and r are two integers with r > 0 and 0 < x < k, where k = r*w - x and w is the word size of the generator. The binary matrices of size r*w operating on the r-bit blocks of the state are expressed by A0 to A7, m_x denotes the bitmask that holds the first w - x bits, and t0 to t7 are temporary vector variables.

The random points of the uniform, Sobol, Halton, and WELL distributions are shown in Figures 2-5 as bubble plots, in which the y-axis represents the random values and the x-axis the corresponding index of each point.

9. Methodology

The objective of this paper is to examine the effectiveness of the proposed pseudorandom sequence. Pseudorandom sequences are much more random than quasirandom sequences. PSO is random in nature, so it does not follow a specific pattern that guarantees the global optimum solution.

Therefore, we have proposed the WELL distribution-based PSO (WE-PSO), taking advantage of randomness in the PSO. We have compared WE-PSO with the uniform distribution-based PSO and with other quasirandom distribution-based PSO variants, that is, the Sobol distribution (SO-PSO) and the Halton distribution (H-PSO), to ensure the integrity of the proposed approach. Moreover, we have tested the proposed technique on NN classifiers by training them on nine real-world problems. The experimental outcomes reflect a marked improvement over standard PSO with uniform initialization; the WE-PSO approach also outperforms the SO-PSO and H-PSO approaches, as is evident in the results. Numerical results have shown that using the WELL distribution to initialize the swarm enhances the efficiency of population-based algorithms in evolutionary computing. The pseudocode of the proposed technique is presented in Algorithm 4.

10. Results and Discussion

The WELL-PSO (WE-PSO) technique was simulated in C++ on a computer with a 2.3 GHz Core 2 Duo CPU. A group of fifteen nonlinear benchmark test functions was used to compare WE-PSO with standard PSO, SO-PSO, and H-PSO. These functions are commonly applied to investigate the performance of optimization techniques; therefore, we used them to examine the optimization results of WE-PSO in our study. The list of functions is given in Table 1, where D denotes the dimensionality of the problem, S represents the interval of the variables, and f_min denotes the global optimum (minimum) value. The inertia weight w decreases over the interval [0.9, 0.4], the acceleration coefficients are c1 = c2 = 1.45, and the swarm size is 40. The function dimensions are D = 10, 20, and 30, and the cumulative number of epochs is 3000. All techniques were run with the same parameters for a fair comparison, and each algorithm was tested over 30 independent runs.

10.1. Discussion. The purpose of this study is to observe the characteristics of the standard benchmark functions with respect to problem dimension in the experimental results. Three simulation tests were performed, in which the following WE-PSO characteristics were observed:

(i) Effect of using different initialization approaches for PSO
(ii) Effect of using different problem dimensions
(iii) A comparative analysis

The objective of the first experiment was to find the most suitable initialization approach for PSO by comparing WE-PSO with the other approaches, namely, SO-PSO, H-PSO, and standard PSO. The purpose of the second simulation is to determine the effect of dimension on standard function optimization.


Finally, the simulation results of WE-PSO were compared with standard PSO, SO-PSO, and H-PSO, respectively. The simulation effects are addressed in depth in the remainder of the article.

The graphical comparison of WE-PSO with PSO, H-PSO, and SO-PSO is shown in Figures 6-20. For WE-PSO, we can observe that the majority of the estimates have a better convergence curve.

Halton()
input: size z and base b_cm with dimension d
output: population instances p
  fix the interval: max ⟶ 1, min ⟶ 0
  for each iteration {k_1, k_2, k_3, ..., k_z} do
    for each particle {p_1, p_2, p_3, ..., p_z} do
      max = max / b_cm
      min = min + max * (z mod b_cm)
      z = z / b_cm

Algorithm 2: Halton sequence.
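Equivalently, the z-th Halton point in one dimension is the radical inverse of z in base b_cm; a C++ sketch of Algorithm 2 follows (names are ours; a d-dimensional point uses, for example, the first d primes as bases).

// Radical inverse of i in base b: the digits of i are mirrored about
// the radix point, exactly the max/min loop of Algorithm 2.
double radicalInverse(unsigned i, unsigned base) {
    double f = 1.0, r = 0.0;
    while (i > 0) {
        f /= base;            // max = max / b_cm
        r += f * (i % base);  // min = min + max * (z mod b_cm)
        i /= base;            // z = z / b_cm
    }
    return r;                 // Halton coordinate in [0, 1)
}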

WELL()
(i)    t0 = (m_x & v_{k,r-1}) + (m_x & v_{k,r-2})
(ii)   t1 = (A0 v_{k,0}) + (A1 v_{k,m1})
(iii)  t2 = (A2 v_{k,m2}) + (A3 v_{k,m3})
(iv)   t3 = t2 + t1
(v)    t4 = t0 A4 + t1 A5 + t2 A6 + t3 A7
(vi)   v_{k+1,r-1} = v_{k,r-2} & m_x
(vii)  for i = r-2 down to 2 do v_{k+1,i} = v_{k,i-1}
(viii) v_{k+1,1} = t3
(ix)   v_{k+1,0} = t4
(x)    return y_k = v_{k,0}

Algorithm 3: WELL sequence.
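As a concrete instance of this recurrence, the widely circulated public-domain WELL512a implementation (after [45]) fixes the matrices A0-A7 to simple shift-and-mask operations; the following C-style sketch reproduces it, with seeding code omitted.

#include <cstdint>

static uint32_t state[16];   // seed with nonzero random bits
static unsigned idx = 0;     // reset to 0 when reseeding

uint32_t well512(void) {
    uint32_t a, b, c, d;
    a = state[idx];
    c = state[(idx + 13) & 15];
    b = a ^ c ^ (a << 16) ^ (c << 15);
    c = state[(idx + 9) & 15];
    c ^= (c >> 11);
    a = state[idx] = b ^ c;
    d = a ^ ((a << 5) & 0xDA442D24u);
    idx = (idx + 15) & 15;
    a = state[idx];
    state[idx] = a ^ b ^ d ^ (a << 2) ^ (b << 18) ^ (c << 28);
    return state[idx];
}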

[Figure 2: Population initialization using uniform distribution.]
[Figure 3: Population initialization using Sobol distribution.]
[Figure 4: Population initialization using Halton distribution.]
[Figure 5: Population initialization using WELL distribution.]


The problem dimensions 10, 20, and 30 are given on the x-axis, while the y-axis represents the mean best fitness for each dimension of the problem.

10.1.1. Effect of Using Different PSO Initialization Approaches. In this simulation, PSO is initialized with the WELL sequence (WE-PSO) instead of the uniform distribution. The WE-PSO variant is compared with the other initialization approaches, including the Sobol sequence (SO-PSO), the Halton sequence (H-PSO), and standard PSO. The experimental findings indicate that the advantage is more pronounced at higher dimensions.

10.1.2. Effect of Using Different Problem Dimensions. The core objective of this simulation setup is to assess how the outcomes depend on the dimension of the optimization functions. Three dimensions were used for the benchmark functions in the experiments: D = 10, D = 20, and D = 30. The simulation results are

Step 1: initialize the swarm.
  Set epoch count I = 0, population size N_z, problem dimension D_z, w_max, and w_min.
  For each particle P_z:
    Step 1.1: initialize x_z as x_z = WELL(x_min, x_max)
    Step 1.2: initialize the particle velocity as v_z = Rand(x_min, x_max)
    Step 1.3: compute the fitness score f_z
    Step 1.4: set the global best position g_z^{best} as max(f_1, f_2, f_3, ..., f_z), where f_z is the globally optimal fitness
    Step 1.5: set the local best position p_z^{best} as max(f_1, f_2, f_3, ..., f_z), where f_z is the locally optimal fitness
Step 2: compare the fitness score of the current particle x_z with its old local best p_z^{best}; if the current fitness score is better, substitute p_z^{best} with x_z, else leave it unchanged.
Step 3: compare the fitness score of the current particle x_z with the old global best g_z^{best}; if the current fitness score is better, substitute g_z^{best} with x_z, else leave it unchanged.
Step 4: compute the updated velocity vector v_{z+1} using equation (1) and the updated position vector x_{z+1} using equation (2).
Step 5: go to Step 2 if the stopping criterion is not met; else terminate.

Algorithm 4: Proposed WE-PSO pseudocode.
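Step 1.1 then amounts to mapping one WELL draw onto the search interval; a short C++ sketch is given below (wellInit is our name, reusing the well512() sketch from Section 8).

// Map one 32-bit WELL output onto [xmin, xmax] for a single
// coordinate of a particle position.
double wellInit(double xmin, double xmax) {
    double u = well512() / 4294967296.0;   // scale to [0, 1)
    return xmin + u * (xmax - xmin);
}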

Table 1: Standard objective functions, search spaces, and optimal values.

F1, Sphere: min f(x) = sum_{i=1}^{n} x_i^2; -5.12 <= x_i <= 5.12; optimum 0.
F2, Rastrigin: min f(x) = sum_{i=1}^{n} [x_i^2 - 10 cos(2*pi*x_i) + 10]; -5.12 <= x_i <= 5.12; optimum 0.
F3, Axis parallel hyper-ellipsoid: min f(x) = sum_{i=1}^{n} i * x_i^2; -5.12 <= x_i <= 5.12; optimum 0.
F4, Rotated hyper-ellipsoid: min f(x) = sum_{i=1}^{n} (sum_{j=1}^{i} x_j)^2; -65.536 <= x_i <= 65.536; optimum 0.
F5, Moved axis parallel hyper-ellipsoid: min f(x) = sum_{i=1}^{n} 5i * x_i^2; -5.12 <= x_i <= 5.12; optimum 0.
F6, Sum of different powers: min f(x) = sum_{i=1}^{n} |x_i|^{i+1}; -1 <= x_i <= 1; optimum 0.
F7, Chung Reynolds: min f(x) = (sum_{i=1}^{n} x_i^2)^2; -100 <= x_i <= 100; optimum 0.
F8, Csendes: min f(x) = sum_{i=1}^{n} x_i^6 (2 + sin(1/x_i)); -1 <= x_i <= 1; optimum 0.
F9, Schaffer: min f(x) = sum_{i=1}^{n} [0.5 + (sin^2(sqrt(x_i^2 + x_{i+1}^2)) - 0.5) / (1 + 0.001(x_i^2 + x_{i+1}^2))^2]; -100 <= x_i <= 100; optimum 0.
F10, Schumer Steiglitz: min f(x) = sum_{i=1}^{n} x_i^4; -5.12 <= x_i <= 5.12; optimum 0.
F11, Schwefel: min f(x) = sum_{i=1}^{n} x_i^alpha; -100 <= x_i <= 100; optimum 0.
F12, Schwefel 1.2: min f(x) = sum_{i=1}^{D} (sum_{j=1}^{i} x_j)^2; -100 <= x_i <= 100; optimum 0.
F13, Schwefel 2.21: min f(x) = max_{1<i<D} |x_i|; -100 <= x_i <= 100; optimum 0.
F14, Schwefel 2.22: min f(x) = sum_{i=1}^{D} |x_i| + prod_{i=1}^{n} |x_i|; -100 <= x_i <= 100; optimum 0.
F15, Schwefel 2.23: min f(x) = sum_{i=1}^{n} x_i^10; -10 <= x_i <= 10; optimum 0.


presented in Table 2. From these simulation results, it was observed that the optimization of higher-dimensional functions is more complex, as can be seen in Table 2 for dimension sizes D = 20 and D = 30.

10.1.3. A Comparative Analysis. WE-PSO is compared with the other approaches, namely, SO-PSO, H-PSO, and standard PSO, where each technique is evaluated on the same problems for a fair comparison. Table 1 shows the standard benchmark functions and their parameter settings. Table 2 reveals that WE-PSO is better than standard PSO, SO-PSO, and H-PSO at dimension D = 30 and outperforms them in convergence. From the comparative analysis in Table 2, standard PSO performs well at smaller dimension sizes (D = 10, 20), while the proposed WE-PSO performs considerably better in convergence as the dimension size increases; hence, WE-PSO is appropriate for higher dimensions. Simulation runs were carried out on an HP Compaq machine with an Intel Core i7-3200 processor at 3.8 GHz and 6 GB of RAM.

[Figure 6: Mean value of function F1 over dimensions 10, 20, and 30 for PSO, SO-PSO, H-PSO, and WE-PSO.]
[Figure 7: Mean value of function F2 over dimensions 10, 20, and 30 for PSO, SO-PSO, H-PSO, and WE-PSO.]
[Figure 8: Mean value of function F3 over dimensions 10, 20, and 30 for PSO, SO-PSO, H-PSO, and WE-PSO.]
[Figure 9: Mean value of function F4 over dimensions 10, 20, and 30 for PSO, SO-PSO, H-PSO, and WE-PSO.]


In contrast with the findings for SO-PSO, H-PSO, and traditional PSO, the experimental results in Table 2 reveal that WE-PSO surpasses the aforementioned PSO variants. WE-PSO outperforms the other techniques on all functions, while the other approaches perform as follows: H-PSO performs better on functions F4, F1, and F2 for 20-D but gives overall poor results at higher dimensions, and SO-PSO gives slightly better results on functions F8, F9, and F15 at 10-D but the worst results at larger dimensions. Figures 7-15 depict that WE-PSO outperforms the other approaches in the simulation results for dimension sizes D = 10, D = 20, and D = 30 on the standard benchmark test functions.

10.1.4. Statistical Test. To objectively verify the consistency of the findings, Student's t-test is performed. To compare the competing algorithms, the t value is computed using

t = (X̄_1 - X̄_2) / sqrt( SD_1^2 / (n_1 - 1) + SD_2^2 / (n_2 - 1) ).   (8)

[Figure 10: Mean value of function F5 over dimensions 10, 20, and 30 for PSO, SO-PSO, H-PSO, and WE-PSO.]
[Figure 11: Mean value of function F6 over dimensions 10, 20, and 30 for PSO, SO-PSO, H-PSO, and WE-PSO.]
[Figure 12: Mean value of function F7 over dimensions 10, 20, and 30 for PSO, SO-PSO, H-PSO, and WE-PSO.]
[Figure 13: Mean value of function F8 over dimensions 10, 20, and 30 for PSO, SO-PSO, H-PSO, and WE-PSO.]


The t value in the above equation can be positive or negative, where X̄_1 and X̄_2 denote the mean values of the first and second samples, n_1 and n_2 are the sample sizes, and SD_1^2 and SD_2^2 are the variances of the two samples. A positive value indicates that WE-PSO outperforms the competing approach, while a negative value indicates the opposite. The results of Student's t-test are presented in Table 3.
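Equation (8) translates directly into code; a C++ sketch follows (names are ours, with n1 = n2 = 30 runs in our setting).

#include <cmath>

// t value per equation (8), from the two samples' means, standard
// deviations, and sizes.
double tValue(double mean1, double mean2, double sd1, double sd2,
              int n1, int n2) {
    return (mean1 - mean2)
         / std::sqrt(sd1 * sd1 / (n1 - 1) + sd2 * sd2 / (n2 - 1));
}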

11. Experiments for Data Classification

To validate the efficiency of WE-PSO, a comparative analysis is performed on real-world benchmark datasets used for training neural networks. We conducted experiments using nine benchmark datasets (Iris, Diabetes, Heart, Wine, Seed, Vertebral, Blood Tissue, Horse, and Mammography) from the well-known UCI machine-learning repository. Training weights are initialized randomly within the interval [-50, 50]. Feedforward neural network accuracy is measured in the form of root mean squared error (RMSE). The features of the datasets used are listed in Table 4.

11.1. Discussion. The multilayer feedforward neural network is trained with the backpropagation algorithm and with standard PSO, SO-PSO, H-PSO, and WE-PSO. A comparison of these training approaches is performed on real classification datasets taken from the UCI repository.

[Figure 14: Mean value of function F9 over dimensions 10, 20, and 30 for PSO, SO-PSO, H-PSO, and WE-PSO.]
[Figure 15: Mean value of function F10 over dimensions 10, 20, and 30 for PSO, SO-PSO, H-PSO, and WE-PSO.]
[Figure 16: Mean value of function F11 over dimensions 10, 20, and 30 for PSO, SO-PSO, H-PSO, and WE-PSO.]


The cross-validation method is used to assess the efficiency of the various classification techniques. In this paper, k-fold cross-validation with k = 10 is used for training the neural networks with standard PSO, SO-PSO, H-PSO, and the proposed WE-PSO. The dataset is fragmented into 10 chunks, each comprising the same proportion of every class. One chunk is used for the testing phase, while the remaining nine chunks are used for the training phase, as sketched below.
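A minimal C++ sketch of this stratified 10-fold split follows (all names are ours).

#include <cstddef>
#include <unordered_map>
#include <vector>

// Assign each sample to one of k folds, cycling within each class so
// that every fold keeps roughly the same class proportions; fold f is
// then used once for testing while the other k-1 folds train.
std::vector<int> stratifiedFolds(const std::vector<int>& labels, int k) {
    std::vector<int> fold(labels.size());
    std::unordered_map<int, int> next;   // per-class rotating counter
    for (std::size_t i = 0; i < labels.size(); ++i)
        fold[i] = next[labels[i]]++ % k;
    return fold;
}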

[Figure 17: Mean value of function F12 over dimensions 10, 20, and 30 for PSO, SO-PSO, H-PSO, and WE-PSO.]
[Figure 18: Mean value of function F13 over dimensions 10, 20, and 30 for PSO, SO-PSO, H-PSO, and WE-PSO.]
[Figure 19: Mean value of function F14 over dimensions 10, 20, and 30 for PSO, SO-PSO, H-PSO, and WE-PSO.]
[Figure 20: Mean value of function F15 over dimensions 10, 20, and 30 for PSO, SO-PSO, H-PSO, and WE-PSO.]


The experimental results of standard PSO, SO-PSO, H-PSO, and WE-PSO on the nine well-known real-world UCI datasets are compared to evaluate performance. After the simulations, the results showed that training neural networks with the WE-PSO algorithm is better in terms of precision, and its efficiency is much higher than that of the traditional approaches.

The WE-PSO algorithm can also be applied successfully to data classification and statistical problems in the future. The findings on classification accuracy are summarized in Table 5.

Table 2: Comparative results among the four PSO algorithms on 15 benchmark test functions. Each entry gives the mean value with the standard deviation in parentheses.

F1 (1000 iter., D=10): PSO 2.33E-74 (7.36E-74); SO-PSO 2.74E-76 (8.66E-76); H-PSO 3.10E-77 (9.79E-77); WE-PSO 5.91E-78 (1.87E-77)
F1 (2000 iter., D=20): PSO 1.02E-84 (3.22E-84); SO-PSO 8.20E-88 (2.59E-87); H-PSO 1.76E-90 (5.58E-90); WE-PSO 4.95E-90 (1.48E-89)
F1 (3000 iter., D=30): PSO 1.77E-26 (5.32E-26); SO-PSO 7.67E-20 (2.30E-19); H-PSO 4.13E-32 (1.24E-31); WE-PSO 1.30E-42 (3.90E-42)
F2 (1000 iter., D=10): PSO 4.97E-01 (1.49E+00); SO-PSO 4.97E-01 (1.49E+00); H-PSO 7.96E-01 (2.39E+00); WE-PSO 2.98E-01 (8.95E-01)
F2 (2000 iter., D=20): PSO 8.17E+00 (2.29E+01); SO-PSO 6.47E+00 (1.91E+01); H-PSO 3.58E+00 (9.79E+00); WE-PSO 3.11E+00 (1.10E+01)
F2 (3000 iter., D=30): PSO 1.01E+01 (2.95E+01); SO-PSO 9.86E+00 (2.76E+01); H-PSO 9.45E+00 (2.77E+01); WE-PSO 7.76E+00 (2.20E+01)
F3 (1000 iter., D=10): PSO 8.70E-80 (2.61E-79); SO-PSO 1.79E-79 (5.37E-79); H-PSO 4.87E-79 (1.46E-78); WE-PSO 4.40E-81 (1.32E-80)
F3 (2000 iter., D=20): PSO 2.62E+00 (7.86E+00); SO-PSO 7.86E+00 (2.36E+01); H-PSO 2.62E+00 (7.86E+00); WE-PSO 1.78E-89 (5.33E-89)
F3 (3000 iter., D=30): PSO 2.62E+01 (7.86E+01); SO-PSO 1.57E+01 (4.72E+01); H-PSO 1.05E+01 (3.15E+00); WE-PSO 3.87E-57 (1.16E-56)
F4 (1000 iter., D=10): PSO 4.46E-147 (1.34E-146); SO-PSO 3.86E-147 (1.16E-146); H-PSO 9.78E-145 (2.93E-144); WE-PSO 1.24E-150 (3.73E-150)
F4 (2000 iter., D=20): PSO 3.14E-155 (9.41E-155); SO-PSO 9.27E-154 (2.78E-153); H-PSO 2.75E-159 (8.24E-159); WE-PSO 4.96E-159 (1.49E-158)
F4 (3000 iter., D=30): PSO 1.82E-133 (5.45E-133); SO-PSO 2.36E-135 (7.09E-135); H-PSO 8.53E-130 (2.56E-129); WE-PSO 2.54E-136 (7.62E-136)
F5 (1000 iter., D=10): PSO 4.35E-79 (1.30E-78); SO-PSO 8.95E-79 (2.69E-78); H-PSO 2.43E-78 (7.30E-78); WE-PSO 2.20E-80 (6.61E-80)
F5 (2000 iter., D=20): PSO 1.31E+01 (3.93E+01); SO-PSO 3.93E+01 (1.18E+02); H-PSO 1.31E+01 (3.93E+01); WE-PSO 3.12E-89 (9.36E-89)
F5 (3000 iter., D=30): PSO 1.31E+02 (3.93E+02); SO-PSO 7.86E+01 (2.36E+02); H-PSO 5.24E+01 (1.57E+02); WE-PSO 1.94E-56 (5.81E-56)
F6 (1000 iter., D=10): PSO 1.70E-61 (5.11E-61); SO-PSO 4.45E-64 (1.33E-63); H-PSO 7.29E-66 (2.19E-65); WE-PSO 4.62E-66 (1.39E-65)
F6 (2000 iter., D=20): PSO 3.25E-112 (9.74E-112); SO-PSO 4.39E-112 (1.32E-111); H-PSO 5.01E-109 (1.50E-108); WE-PSO 4.45E-113 (1.34E-112)
F6 (3000 iter., D=30): PSO 7.21E-135 (2.16E-134); SO-PSO 4.10E-124 (1.23E-123); H-PSO 1.51E-134 (4.54E-134); WE-PSO 6.96E-135 (2.09E-134)
F7 (1000 iter., D=10): PSO 2.96E-157 (8.87E-157); SO-PSO 2.39E-157 (7.18E-157); H-PSO 1.28E-157 (3.84E-157); WE-PSO 2.47E-163 (0.00E+00)
F7 (2000 iter., D=20): PSO 8.79E-177 (0.00E+00); SO-PSO 1.77E-184 (0.00E+00); H-PSO 3.49E-183 (0.00E+00); WE-PSO 3.41E-186 (0.00E+00)
F7 (3000 iter., D=30): PSO 1.23E-82 (3.68E-82); SO-PSO 1.25E-116 (3.74E-116); H-PSO 5.99E-130 (5.99E-130); WE-PSO 4.60E-134 (1.38E-133)
F8 (1000 iter., D=10): PSO 4.39E-200 (0.00E+00); SO-PSO 1.98E-194 (0.00E+00); H-PSO 4.51E-197 (0.00E+00); WE-PSO 8.99E-201 (0.00E+00)
F8 (2000 iter., D=20): PSO 1.57E-20 (4.70E-20); SO-PSO 1.04E-93 (3.13E-93); H-PSO 1.10E-148 (3.30E-148); WE-PSO 4.09E-151 (1.23E-150)
F8 (3000 iter., D=30): PSO 1.89E-09 (5.68E-09); SO-PSO 4.54E-10 (1.36E-09); H-PSO 1.14E-08 (3.43E-08); WE-PSO 1.34E-09 (4.03E-09)
F9 (1000 iter., D=10): PSO 5.49E-01 (6.72E-01); SO-PSO 1.30E-01 (2.02E-01); H-PSO 2.02E-01 (5.73E-01); WE-PSO 1.42E-01 (1.42E-01)
F9 (2000 iter., D=20): PSO 2.05E+00 (1.31E+00); SO-PSO 7.83E-01 (1.43E+00); H-PSO 6.83E-01 (1.29E+00); WE-PSO 4.32E-01 (1.08E+00)
F9 (3000 iter., D=30): PSO 1.12E+00 (2.39E+00); SO-PSO 9.99E-01 (2.30E+00); H-PSO 9.56E-01 (2.52E+00); WE-PSO 9.12E-01 (2.23E+00)
F10 (1000 iter., D=10): PSO 2.23E-138 (2.23E-138); SO-PSO 2.23E-138 (3.15E-137); H-PSO 4.35E-137 (1.31E-136); WE-PSO 1.10E-139 (3.31E-139)
F10 (2000 iter., D=20): PSO 3.79E-148 (1.14E-147); SO-PSO 7.87E-149 (2.36E-148); H-PSO 4.19E-147 (1.26E-146); WE-PSO 8.73E-153 (2.62E-152)
F10 (3000 iter., D=30): PSO 4.43E-126 (1.33E-125); SO-PSO 7.52E-133 (2.26E-132); H-PSO 1.57E-128 (4.71E-128); WE-PSO 1.38E-133 (4.14E-133)
F11 (1000 iter., D=10): PSO 3.75E-187 (0.00E+00); SO-PSO 1.57E-192 (0.00E+00); H-PSO 2.15E-191 (0.00E+00); WE-PSO 8.99E-198 (0.00E+00)
F11 (2000 iter., D=20): PSO 5.29E-193 (0.00E+00); SO-PSO 2.53E-195 (0.00E+00); H-PSO 8.45E-195 (0.00E+00); WE-PSO 9.83E-197 (0.00E+00)
F11 (3000 iter., D=30): PSO 4.82E-154 (1.44E-153); SO-PSO 8.84E-159 (2.65E-158); H-PSO 5.49E-168 (0.00E+00); WE-PSO 5.75E-173 (0.00E+00)
F12 (1000 iter., D=10): PSO 1.13E-01 (3.40E-01); SO-PSO 1.67E-02 (5.02E-02); H-PSO 2.28E-02 (6.85E-02); WE-PSO 2.89E-03 (8.66E-03)
F12 (2000 iter., D=20): PSO 1.39E+01 (4.12E+01); SO-PSO 5.03E+00 (1.50E+01); H-PSO 2.95E+00 (8.84E+00); WE-PSO 1.67E+00 (5.01E+00)
F12 (3000 iter., D=30): PSO 7.45E+00 (2.23E+01); SO-PSO 1.22E+01 (3.66E+01); H-PSO 8.74E+00 (2.60E+01); WE-PSO 4.94E+00 (1.48E+01)
F13 (1000 iter., D=10): PSO 8.04E-26 (2.41E-25); SO-PSO 8.01E-27 (2.40E-26); H-PSO 3.59E-27 (1.08E-26); WE-PSO 1.41E-27 (1.02E-26)
F13 (2000 iter., D=20): PSO 1.42E-08 (4.26E-08); SO-PSO 2.64E-11 (7.93E-11); H-PSO 3.29E-10 (9.86E-10); WE-PSO 2.14E-12 (6.43E-12)
F13 (3000 iter., D=30): PSO 6.20E-03 (1.86E-02); SO-PSO 1.41E-03 (4.23E-03); H-PSO 9.36E-03 (2.81E-02); WE-PSO 1.41E-03 (3.83E-03)
F14 (1000 iter., D=10): PSO 3.62E-38 (1.09E-37); SO-PSO 3.62E-38 (1.09E-37); H-PSO 5.92E-36 (1.77E-35); WE-PSO 1.95E-38 (5.86E-38)
F14 (2000 iter., D=20): PSO 6.27E-10 (1.88E-09); SO-PSO 1.38E-09 (4.14E-09); H-PSO 7.91E-13 (2.37E-12); WE-PSO 1.17E-13 (3.51E-13)
F14 (3000 iter., D=30): PSO 2.56E-06 (7.67E-06); SO-PSO 4.80E+01 (1.44E+02); H-PSO 1.34E-06 (4.03E-06); WE-PSO 4.88E-09 (1.46E-08)
F15 (1000 iter., D=10): PSO 1.10E-294 (0.00E+00); SO-PSO 3.19E-301 (0.00E+00); H-PSO 2.78E-307 (0.00E+00); WE-PSO 3.21E-308 (0.00E+00)
F15 (2000 iter., D=20): PSO 6.16E-271 (0.00E+00); SO-PSO 5.09E-276 (0.00E+00); H-PSO 3.74E-270 (0.00E+00); WE-PSO 4.85E-268 (0.00E+00)
F15 (3000 iter., D=30): PSO 3.08E-207 (0.00E+00); SO-PSO 1.04E-200 (0.00E+00); H-PSO 8.12E-209 (0.00E+00); WE-PSO 3.06E-212 (0.00E+00)


Table 3: Results of Student's t-test for all techniques. Each entry gives the t value and the significantly better algorithm (Sig).

F1 (1000 iter., D=10): vs PSO +1.02 (WE-PSO); vs SO-PSO +0.99 (WE-PSO); vs H-PSO +0.75 (WE-PSO)
F1 (2000 iter., D=20): vs PSO +1.00 (WE-PSO); vs SO-PSO +0.48 (WE-PSO); vs H-PSO -0.83 (H-PSO)
F1 (3000 iter., D=30): vs PSO +1.00 (WE-PSO); vs SO-PSO +1.00 (WE-PSO); vs H-PSO +1.00 (WE-PSO)
F2 (1000 iter., D=10): vs PSO +30.71 (WE-PSO); vs SO-PSO +15.67 (WE-PSO); vs H-PSO +1.21 (WE-PSO)
F2 (2000 iter., D=20): vs PSO +8.82 (WE-PSO); vs SO-PSO +107.56 (WE-PSO); vs H-PSO +11.13 (WE-PSO)
F2 (3000 iter., D=30): vs PSO +0.63 (WE-PSO); vs SO-PSO +34.29 (WE-PSO); vs H-PSO +0.65 (WE-PSO)
F3 (1000 iter., D=10): vs PSO +0.99 (WE-PSO); vs SO-PSO +1.00 (WE-PSO); vs H-PSO +0.83 (WE-PSO)
F3 (2000 iter., D=20): vs PSO +1.00 (WE-PSO); vs SO-PSO +1.00 (WE-PSO); vs H-PSO +263.14 (WE-PSO)
F3 (3000 iter., D=30): vs PSO +1.00 (WE-PSO); vs SO-PSO +525.29 (WE-PSO); vs H-PSO +0.93 (WE-PSO)
F4 (1000 iter., D=10): vs PSO +0.19 (WE-PSO); vs SO-PSO +0.99 (WE-PSO); vs H-PSO +1.00 (WE-PSO)
F4 (2000 iter., D=20): vs PSO +0.99 (WE-PSO); vs SO-PSO +0.84 (WE-PSO); vs H-PSO -0.98 (H-PSO)
F4 (3000 iter., D=30): vs PSO +0.86 (WE-PSO); vs SO-PSO +0.26 (WE-PSO); vs H-PSO +0.97 (WE-PSO)
F5 (1000 iter., D=10): vs PSO +0.79 (WE-PSO); vs SO-PSO +0.44 (WE-PSO); vs H-PSO +0.98 (WE-PSO)
F5 (2000 iter., D=20): vs PSO +0.29 (WE-PSO); vs SO-PSO +0.57 (WE-PSO); vs H-PSO +263.14 (WE-PSO)
F5 (3000 iter., D=30): vs PSO +0.06 (WE-PSO); vs SO-PSO +2622.44 (WE-PSO); vs H-PSO +0.96 (WE-PSO)
F6 (1000 iter., D=10): vs PSO +0.80 (WE-PSO); vs SO-PSO +0.98 (WE-PSO); vs H-PSO +0.17 (WE-PSO)
F6 (2000 iter., D=20): vs PSO +0.86 (WE-PSO); vs SO-PSO +0.96 (WE-PSO); vs H-PSO +0.96 (WE-PSO)
F6 (3000 iter., D=30): vs PSO +0.99 (WE-PSO); vs SO-PSO +0.98 (WE-PSO); vs H-PSO +0.89 (WE-PSO)
F7 (1000 iter., D=10): vs PSO +0.90 (WE-PSO); vs SO-PSO +0.95 (WE-PSO); vs H-PSO +1.00 (WE-PSO)
F7 (2000 iter., D=20): vs PSO +1.00 (WE-PSO); vs SO-PSO +1.00 (WE-PSO); vs H-PSO +1.00 (WE-PSO)
F7 (3000 iter., D=30): vs PSO +1.00 (WE-PSO); vs SO-PSO +1.00 (WE-PSO); vs H-PSO +1.00 (WE-PSO)
F8 (1000 iter., D=10): vs PSO +0.75 (WE-PSO); vs SO-PSO +0.98 (WE-PSO); vs H-PSO +0.55 (WE-PSO)
F8 (2000 iter., D=20): vs PSO +483.97 (WE-PSO); vs SO-PSO +1.00 (WE-PSO); vs H-PSO +0.91 (WE-PSO)
F8 (3000 iter., D=30): vs PSO +1.41 (WE-PSO); vs SO-PSO -6.89 (SO-PSO); vs H-PSO +522.24 (WE-PSO)
F9 (1000 iter., D=10): vs PSO +53.67 (WE-PSO); vs SO-PSO -3.00 (SO-PSO); vs H-PSO +82.30 (WE-PSO)
F9 (2000 iter., D=20): vs PSO +84.84 (WE-PSO); vs SO-PSO +33.46 (WE-PSO); vs H-PSO +16.08 (WE-PSO)
F9 (3000 iter., D=30): vs PSO +470.01 (WE-PSO); vs SO-PSO +390.54 (WE-PSO); vs H-PSO +416.26 (WE-PSO)
F10 (1000 iter., D=10): vs PSO +1.00 (WE-PSO); vs SO-PSO +0.84 (WE-PSO); vs H-PSO +0.67 (WE-PSO)
F10 (2000 iter., D=20): vs PSO +1.00 (WE-PSO); vs SO-PSO +0.81 (WE-PSO); vs H-PSO +0.89 (WE-PSO)
F10 (3000 iter., D=30): vs PSO +1.00 (WE-PSO); vs SO-PSO +0.98 (WE-PSO); vs H-PSO +0.95 (WE-PSO)
F11 (1000 iter., D=10): vs PSO +0.97 (WE-PSO); vs SO-PSO +1.92 (WE-PSO); vs H-PSO +1.00 (WE-PSO)
F11 (2000 iter., D=20): vs PSO +1.00 (WE-PSO); vs SO-PSO +1.00 (WE-PSO); vs H-PSO +1.00 (WE-PSO)
F11 (3000 iter., D=30): vs PSO +0.87 (WE-PSO); vs SO-PSO +0.98 (WE-PSO); vs H-PSO +1.00 (WE-PSO)
F12 (1000 iter., D=10): vs PSO +0.91 (WE-PSO); vs SO-PSO +0.58 (WE-PSO); vs H-PSO +0.27 (WE-PSO)
F12 (2000 iter., D=20): vs PSO +2.26 (WE-PSO); vs SO-PSO +1.08 (WE-PSO); vs H-PSO +0.27 (WE-PSO)
F12 (3000 iter., D=30): vs PSO +1.84 (WE-PSO); vs SO-PSO +2.25 (WE-PSO); vs H-PSO +2.41 (WE-PSO)
F13 (1000 iter., D=10): vs PSO +0.98 (WE-PSO); vs SO-PSO +0.48 (WE-PSO); vs H-PSO +0.84 (WE-PSO)
F13 (2000 iter., D=20): vs PSO +0.72 (WE-PSO); vs SO-PSO +0.78 (WE-PSO); vs H-PSO +0.98 (WE-PSO)
F13 (3000 iter., D=30): vs PSO +0.11 (WE-PSO); vs SO-PSO +0.39 (WE-PSO); vs H-PSO +0.86 (WE-PSO)
F14 (1000 iter., D=10): vs PSO +0.57 (WE-PSO); vs SO-PSO +0.15 (WE-PSO); vs H-PSO +0.82 (WE-PSO)
F14 (2000 iter., D=20): vs PSO +1.51 (WE-PSO); vs SO-PSO +1.49 (WE-PSO); vs H-PSO +1.50 (WE-PSO)
F14 (3000 iter., D=30): vs PSO +0.90 (WE-PSO); vs SO-PSO +1.32 (WE-PSO); vs H-PSO +1.32 (WE-PSO)
F15 (1000 iter., D=10): vs PSO +1.00 (WE-PSO); vs SO-PSO +1.00 (WE-PSO); vs H-PSO +1.00 (WE-PSO)
F15 (2000 iter., D=20): vs PSO +1.00 (WE-PSO); vs SO-PSO -0.50 (SO-PSO); vs H-PSO +0.99 (WE-PSO)
F15 (3000 iter., D=30): vs PSO +0.83 (WE-PSO); vs SO-PSO +1.00 (WE-PSO); vs H-PSO +1.00 (WE-PSO)

Table 4: Dataset description.

1. Iris: 150 instances, real-valued features, 4 inputs, 3 classes
2. Diabetes: 768 instances, real-valued features, 8 inputs, 2 classes
3. Heart: 270 instances, real-valued features, 13 inputs, 2 classes
4. Wine: 178 instances, real-valued features, 13 inputs, 3 classes
5. Seed: 210 instances, real-valued features, 7 inputs, 3 classes
6. Vertebral: 310 instances, real-valued features, 6 inputs, 2 classes
7. Blood tissue: 748 instances, real-valued features, 5 inputs, 2 classes
8. Horse: 368 instances, real-valued features, 27 inputs, 2 classes
9. Mammography: 961 instances, real-valued features, 6 inputs, 2 classes


12. Conclusion

The performance of PSO depends on the initialization of the population. In this work, we have initialized the particles of PSO using the WELL sequence, while the velocity and position vectors of the particles are still updated in the usual random fashion. This study highlights the importance of initializing the particles with such a sequence. The experimental results explicitly state that the WELL sequence is well suited for population initialization due to its random nature. Moreover, the simulation results have shown that WE-PSO outperforms the PSO, SO-PSO, and H-PSO approaches. The techniques were also applied to neural network training and provide significantly better results than conventional training algorithms, including the standard PSO, SO-PSO, and H-PSO approaches, respectively. The solution provides higher diversity and increases the potential for local search. The experimental results depict that our approach has excellent convergence accuracy and avoids local optima. Our technique performs much better when compared to traditional PSO and to the other initialization approaches for PSO, as evident in Figure 21. The use of mutation operators together with the initialization technique may be evaluated on large-scale search spaces in the future. The core objective of this research is universal and relevant to other stochastic metaheuristic algorithms, which will establish our future direction.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon reasonable request.

Disclosure

This work is part of the PhD thesis of the student.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Table 5: Classification accuracy results. Each entry gives training accuracy (Tr) / testing accuracy (Ts) in percent.

1. Iris (3-class): BPA 98.2/95.7; PSO 99/96.6; SO-PSO 98.8/97.3; H-PSO 98.9/96; WE-PSO 99.2/98
2. Diabetes (2-class): BPA 86.1/65.3; PSO 88.7/69.1; SO-PSO 89.3/69.1; H-PSO 88.4/71.6; WE-PSO 90.4/74.1
3. Heart (2-class): BPA 78.5/68.3; PSO 99.5/72.5; SO-PSO 99.13/67.5; H-PSO 99.13/72.5; WE-PSO 100/77.5
4. Wine (3-class): BPA 67.3/62.17; PSO 74.24/61.11; SO-PSO 81.81/66.66; H-PSO 75.75/67.44; WE-PSO 75.75/69.6
5. Seed (3-class): BPA 84.2/70.56; PSO 97.57/77.77; SO-PSO 87.27/84.44; H-PSO 98.18/77.77; WE-PSO 98.18/91.11
6. Vertebral (2-class): BPA 91.4/84.95; PSO 96.03/92.85; SO-PSO 96.42/92.85; H-PSO 96.40/92.85; WE-PSO 97.61/94.64
7. Blood tissue (2-class): BPA 76.3/73.47; PSO 90.8/78.6; SO-PSO 86.94/78.66; H-PSO 83.89/70; WE-PSO 84.74/84
8. Horse (2-class): BPA 64.4/57.87; PSO 69.02/50; SO-PSO 74.19/52; H-PSO 72.90/56; WE-PSO 79.35/58
9. Mammography (2-class): BPA 77.36/71.26; PSO 80.82/76.66; SO-PSO 68.94/63; H-PSO 88/85; WE-PSO 97.71/96.66

[Figure 21: Classification testing accuracy (%) on the nine datasets for BPA, PSO, SO-PSO, H-PSO, and WE-PSO.]

References

[1] K. Deb, "Multi-objective optimization," in Search Methodologies, pp. 403–449, Springer, Berlin, Germany, 2014.
[2] R. Vandenberghe, N. Nelissen, E. Salmon et al., "Binary classification of 18F-flutemetamol PET using machine learning: comparison with visual reads and structural MRI," NeuroImage, vol. 64, pp. 517–525, 2013.
[3] V. Ganganwar, "An overview of classification algorithms for imbalanced datasets," International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 4, pp. 42–47, 2012.
[4] J. Kennedy, "Swarm intelligence," in Handbook of Nature-Inspired and Innovative Computing, pp. 187–219, Springer, Berlin, Germany, 2006.
[5] G. Beni and J. Wang, "Swarm intelligence in cellular robotic systems," in Robots and Biological Systems: Towards a New Bionics, pp. 703–712, Springer, Berlin, Germany, 1993.
[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of ICNN'95 – International Conference on Neural Networks, pp. 1942–1948, 1995.
[7] J. Salerno, "Using the particle swarm optimization technique to train a recurrent neural model," in Proceedings of the Ninth IEEE International Conference on Tools with Artificial Intelligence, pp. 45–49, Newport Beach, CA, USA, November 1997.
[8] R. Storn and K. Price, "Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[9] P. P. Palmes, T. Hayasaka, and S. Usui, "Mutation-based genetic neural network," IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 587–600, 2005.
[10] W. H. Bangyal, J. Ahmad, and H. T. Rauf, "Optimization of neural network using improved bat algorithm for data classification," Journal of Medical Imaging and Health Informatics, vol. 9, no. 4, pp. 670–681, 2019.
[11] A. Cervantes, I. M. Galvan, and P. Isasi, "AMPSO: a new particle swarm method for nearest neighborhood classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 5, pp. 1082–1091, 2009.
[12] W. H. Bangyal, J. Ahmed, and H. T. Rauf, "A modified bat algorithm with torus walk for solving global optimisation problems," International Journal of Bio-Inspired Computation, vol. 15, no. 1, pp. 1–13, 2020.
[13] C. Grosan, A. Abraham, and M. Nicoara, "Search optimization using hybrid particle sub-swarms and evolutionary algorithms," International Journal of Simulation Systems, Science & Technology, vol. 6, no. 10, pp. 60–79, 2005.
[14] M. Junaid, W. H. Bangyal, and J. Ahmed, "A novel bat algorithm using sobol sequence for the initialization of population," in IEEE 23rd International Multitopic Conference (INMIC), pp. 1–6, Bahawalpur, Pakistan, November 2020.
[15] W. H. Bangyal, J. Ahmed, H. T. Rauf, and S. Pervaiz, "An overview of mutation strategies in bat algorithm," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 9, pp. 523–534, 2018.
[16] D. E. Knuth, Fundamental Algorithms: The Art of Computer Programming, Addison-Wesley, Boston, MA, USA, 1973.
[17] J. E. Gentle, Random Number Generation and Monte Carlo Methods, Springer Science & Business Media, Berlin, Germany, 2006.
[18] N. Q. Uy, N. X. Hoai, R. I. McKay, and P. M. Tuan, "Initialising PSO with randomised low-discrepancy sequences: the comparative results," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2007), pp. 1985–1992, Singapore, September 2007.
[19] S. Kimura and K. Matsumura, "Genetic algorithms using low-discrepancy sequences," in Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, ACM, pp. 1341–1346, Washington, DC, USA, June 2005.
[20] R. Brits, A. P. Engelbrecht, and F. Van den Bergh, "A niching particle swarm optimizer," in Proceedings of the 4th Asia-Pacific Conference on Simulated Evolution and Learning, pp. 692–696, Orchid Country Club, Singapore, November 2002.
[21] J. van der Corput, "Verteilungsfunktionen I & II," Nederl. Akad. Wetensch. Proc., vol. 38, pp. 1058–1066, 1935.
[22] R. A. Krohling and L. dos Santos Coelho, "PSO-E: particle swarm with exponential distribution," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2006), pp. 1428–1433, Vancouver, Canada, July 2006.
[23] R. Thangaraj, M. Pant, and K. Deep, "Initializing PSO with probability distributions and low-discrepancy sequences: the comparative results," in Proceedings of the World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), pp. 1121–1126, IEEE, Coimbatore, India, December 2009.
[24] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986.
[25] K. E. Parsopoulos and M. N. Vrahatis, "Initializing the particle swarm optimizer using the nonlinear simplex method," in Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation, World Scientific and Engineering Academy and Society Press, Stevens Point, WI, USA, 2002.
[26] M. Richards and D. Ventura, "Choosing a starting configuration for particle swarm optimization," in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 2309–2312, Budapest, Hungary, July 2004.
[27] H. Jabeen, Z. Jalil, and A. R. Baig, "Opposition based initialization in particle swarm optimization (O-PSO)," in Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, pp. 2047–2052, Montreal, Quebec, Canada, July 2009.
[28] A. L. Gutierrez, "Comparison of different PSO initialization techniques for high dimensional search space problems: a test with FSS and antenna arrays," in Proceedings of the 5th European Conference on Antennas and Propagation (EUCAP), pp. 965–969, IEEE, Rome, Italy, April 2011.
[29] A. Subasi, "Classification of EMG signals using PSO optimized SVM for diagnosis of neuromuscular disorders," Computers in Biology and Medicine, vol. 43, no. 5, pp. 576–586, 2013.
[30] S. Dehuri, R. Roy, S.-B. Cho, and A. Ghosh, "An improved swarm optimized functional link artificial neural network (ISO-FLANN) for classification," Journal of Systems and Software, vol. 85, no. 6, pp. 1333–1345, 2012.
[31] Z. Liu, P. Zhu, W. Chen, and R.-J. Yang, "Improved particle swarm optimization algorithm using design of experiment and data mining techniques," Structural and Multidisciplinary Optimization, vol. 52, no. 4, pp. 813–826, 2015.
[32] S. Chatterjee, S. Sarkar, S. Hore, N. Dey, A. S. Ashour, and V. E. Balas, "Particle swarm optimization trained neural network for structural failure prediction of multistoried RC buildings," Neural Computing and Applications, vol. 28, no. 8, pp. 2005–2016, 2016.
[33] Y. Xue, T. Tang, and A. X. Liu, "Large-scale feedforward neural network optimization by a self-adaptive strategy and parameter based particle swarm optimization," IEEE Access, vol. 7, pp. 52473–52483, 2019.
[34] F. E. F. Junior and G. G. Yen, "Particle swarm optimization of deep neural networks architectures for image classification," Swarm and Evolutionary Computation, vol. 49, pp. 62–74, 2019.
[35] O. Tarkhaneh and H. Shen, "Training of feedforward neural networks for data classification using hybrid particle swarm optimization, Mantegna Levy flight and neighbourhood search," Heliyon, vol. 5, no. 4, Article ID e01275, 2019.
[36] A. Herliana, T. Arifin, S. Susanti, and A. B. Hikmah, "Feature selection of diabetic retinopathy disease using particle swarm optimization and neural network," in Proceedings of the 2018 6th International Conference on Cyber and IT Service Management (CITSM), pp. 1–4, Parapat, Indonesia, August 2018.
[37] M. K. Sarkaleh and A. Shahbahrami, "Classification of ECG arrhythmias using discrete wavelet transform and neural networks," International Journal of Computer Science, Engineering and Applications, vol. 2, no. 1, pp. 1–13, 2012.
[38] R. J. Schalkoff, Artificial Neural Networks, McGraw-Hill, New York, NY, USA, 1997.
[39] G. P. Zhang, "Neural networks for classification: a survey," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 30, no. 4, pp. 451–462, 2000.
[40] M. Castellani, "Evolutionary generation of neural network classifiers - an empirical comparison," Neurocomputing, vol. 99, pp. 214–229, 2013.
[41] G. E. Hinton, J. L. McClelland, and D. E. Rumelhart, "Distributed representations," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, MIT Press, Cambridge, MA, USA, 1986.
[42] M. Matsumoto and T. Nishimura, "Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator," ACM Transactions on Modeling and Computer Simulation (TOMACS), vol. 8, no. 1, pp. 3–30, 1998.
[43] I. M. Sobol', "On the distribution of points in a cube and the approximate evaluation of integrals," Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, vol. 7, no. 4, pp. 784–802, 1967.
[44] J. H. Halton, "Algorithm 247: radical-inverse quasi-random point sequence," Communications of the ACM, vol. 7, no. 12, pp. 701–702, 1964.
[45] F. Panneton, P. L'Ecuyer, and M. Matsumoto, "Improved long-period generators based on linear recurrences modulo 2," ACM Transactions on Mathematical Software (TOMS), vol. 32, pp. 1–16, 2006.

Page 5: A New Initialization Approach in Particle Swarm Optimization ...encouragedby other socialanimal colonies,such as bird flocking or fish schooling [4]. In the research of cellular

which is most commonly used to train the neural networkmodels and solve complex multimodal problems in the real-world as mentioned in [24]

5 Random Number Generator

)e built-in library function is used to construct the mesh ofnumbers randomly at uniform locations through Rand(x_(min) x_max) in [42] A continuous uniform distributionprobability density function describes the effect of unifor-mity on any sequence It is possible to characterize theprobability density function as given in the followingequation

f(t)

1p minus q

forplt tlt q

0 for tltp or tgt q

⎧⎪⎪⎪⎨

⎪⎪⎪⎩

(3)

where p and q represent themaximum likelihood parameterDue to the zero impact on the f (t) dt integrals over anylength the value of f (t) is useless at the boundary of p and q)e calculation of maximum probability parameter is de-termined by the estimated probability function which isgiven in

l(p q|t) nlog(q minus p) (4)

PSO (object of particles)(1) input Particles⟶ pz1113864 1113865⟶ with undefined locations(2) output Particles⟶ pz1113864 1113865⟶ with best fitness score(3) For each particle p1 p2 p3 p4 p5 pz1113864 1113865

(4) For each Dimension d1 d2 d3 d4 d5 dz1113864 1113865

(a) Initialize xz as xz Rand(xmin xmax)

(b) if xz reach to best fitness than pbestz replace pbest

z by xz

(c) Initialize vz as xz Rand(xmin xmax)

(5) Declared one global solution as gbestz from all the optimal pbest

z

(6) Repeat the process up to kz(d) For each particle p1 p2 p3 p4 p5 pz1113864 1113865 update

Using equation (1) compute vz+1Using equation (2) compute xz+1If xz+1 gtpbest

z

pbestz xz+1

If xz+1 gt gbestz

gbestz xz+1

(7) Return particles pz1113864 1113865⟶ contains global optimal solution

ALGORITHM 1 Standard PSO pseudocode

Calculate velocity and update positionbased on Gbest and Pbest particles

Initialize particleposition using QRS

Particle 02

Particle 03

Particle 01

ndash4235 3562 2002

ndash0587 0544

3226ndash1953ndash084210050285

1

23

4

5

6

Repeat process 4 5 and 3 until meetingtargeted learning error or maximum

number of iteration

Train NN using newparticle position

Total number ofparticles

Train NN using initialparticle position

Learning error (setoverall best error as

Gbest and each particlebest error as Pbest )

Neural network (feedforward)

Figure 1 Feedforward neural network

Computational Intelligence and Neuroscience 5

6 The Sobol Sequence

)e Sobol distribution was undertaken for the reconstruc-tion of coordinates in [43] )e relation of linear recurrencesis included for each dimension dz coordinate and the binaryexpression for linear recurrence can be defined for thenonnegative instance az as present in

a a120

+ a221

+ a322

+ middot middot middot + az2zminus1

(5)

For dimension dz the instance i can be generated using

xDi i1v

D1 + i2v

D2 + middot middot middot + izv

Dz (6)

vD1 denotes the k th direction binary function of an in-

stance vDi at the dimension dz and vD

i can be computedusing

VDk c1v

Dkminus1 + c2v

Dkminus2 + middot middot middot + czv

Dzminus1 +

vDiminusz

2z1113888 1113889 (7)

where cz describes polynomial coefficient where kgt z

7 The Halton Sequence

In [44] the authors proposed the Halton sequence as animproved variant of the van der Corput sequence Forgenerating random points Halton sequences use a coprimebase Algorithm 2 shows the pseudocode for generating theHalton sequences

8 The WELL Sequence

Panneton et al [45] suggested the Well Equi-distributedLong-period Linear (WELL) sequence Initially it wasperformed as a modified variant of the Mersenne Twisteralgorithm )e WELL distribution algorithm is given as inAlgorithm 3

For the WELL distribution the algorithm mentionedabove describes the general recurrence )e algorithmdefinition is as follows x and r are two integers with aninterval of rgt 0 and 0lt xlt k and k rlowastw minus x and w is theweight factor of distribution )e binary matrix of size rlowastw

having the r bit block is expressed by A0 to A7 mx describesthe bitmask that holds the first wmdashx bits t0 to t7 aretemporary vector variables

)e random points in Figures 2ndash5 are the uniform andSobol Halton and WELL distributions are represented bythe bubble plot in which the y-axis is represented by therandom values and the x-axis is shown in the table by therelevant index of the point concerned

9 Methodology

)e objective of this paper is to work out the purity of one ofthe proposed pseudorandom sequences Pseudorandomsequences are much more random than quasirandom se-quences PSO is random in nature so it does not have aspecific pattern to guarantee the global optimum solution

)erefore we have suggested the WELL distribution-basedPSO (WE-PSO) by taking advantage of randomness in thePSO We have compared the WE-PSO with the uniformdistribution-based PSO and other quasirandom distribu-tions-based PSO ie Sobol distribution (SO-PSO) andHalton distribution (H-PSO) to ensure the integrity of theproposed approach Moreover by training the nine real-world NN problems we have tested the proposed techniqueover NN classifiers )e experimental outcomes reflect anunusual improvement over standard PSO with uniformdistribution WE-PSO approach also outperforms SO-PSOand H-PSO approaches as evident in results Numericalresults have shown that the use of WELL distribution toinitialize the swarm enhances the efficiency of population-based algorithms in evolutionary computing In Algo-rithm 4 the pseudocode for the proposed technique ispresented

10 Results and Discussion

WELL-PSO (WE-PSO) technique was simulated in C++ andapplied to a computer with the 23GHz Core (M) 2 DuoCPU processor specification A group of fifteen nonlinearbenchmark test functions are used to compare the WE-PSOwith standard PSO SO-PSO and H-PSO for measuring theexecution of the WELL-based PSO (WE-PSO) algorithmNormally these functions are applied to investigate theperformance of any technique )erefore we used it toexamine the optimization results of WE-PSO in our study Alist of such functions can be found in Table 1 )e di-mensionality of the problem is seen in Table 1 as D Srepresents the interval of the variables and fmin denotes theglobal optimumminimum value)e simulation parametersare used in the interval [09 04] where c1 c2145 inertiaweight w is used and swarm size is 40 )e function di-mensions are D 10 20 and 30 for simulation and a cu-mulative number of epochs is 3000 All techniques have beenapplied to similar parameters for comparatively effectiveresults To check the performance of each technique allalgorithms were tested for 30 runs

101 Discussion )e purpose of this study is to observe theunique characteristics of the standard benchmark functionsbased on the dimensions of the experimental results )reesimulation tests were performed in the experiments wherethe following TW-BA characteristics were observed

(i) Effect of using different initializing PSO approaches(ii) Effect of using different dimensions for problems(iii) A comparative analysis

The objective of the first experiment was to find the most suitable initialization approach for PSO by comparing WE-PSO with the other approaches, namely SO-PSO, H-PSO, and standard PSO. The purpose of the second simulation is to examine the effect of the problem dimension on standard function optimization.


Finally, the simulation results of WE-PSO were compared with those of standard PSO, SO-PSO, and H-PSO, respectively. The simulation effects are addressed in depth in the remainder of the article.

The graphical comparison of WE-PSO with PSO, H-PSO, and SO-PSO is shown in Figures 6 to 20. For WE-PSO, we can observe that the majority of the estimates have a better convergence curve. The dimensions 10,

Halton()
input: size z and base b_cm with dimension d
output: population instances p
Fix the interval: max ⟶ 1, min ⟶ 0
For each iteration (k_1, k_2, k_3, ..., k_z) do
  For each particle p_1, p_2, p_3, ..., p_z:
    max = max / b_cm
    min = min + max · (z mod b_cm)
    z = z / b_cm

ALGORITHM 2: Halton sequences.
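A compact, runnable version of the radical-inverse computation underlying Algorithm 2 is sketched below (the function name is illustrative); halton(k, b) returns the k-th point of the base-b Halton sequence in [0, 1):

// Radical-inverse of index k in base b: the core of the Halton sequence.
double halton(unsigned k, unsigned b) {
    double f = 1.0, r = 0.0;        // f plays the role of `max`, r of `min`
    while (k > 0) {
        f /= b;                      // max = max / b_cm
        r += f * (k % b);            // min = min + max * (z mod b_cm)
        k /= b;                      // z = z / b_cm
    }
    return r;
}
// For a d-dimensional point, use the first d primes as coprime bases,
// e.g. halton(k, 2), halton(k, 3), halton(k, 5), ...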

(i) WELL()
(ii) t0 = (mx & v_{k,r-1}) + (mx & v_{k,r-2})
(iii) t1 = (A0 · v_{k,0}) + (A1 · v_{k,m1})
(iv) t2 = (A2 · v_{k,m2}) + (A3 · v_{k,m3})
(v) t3 = t2 + t1
(vi) t4 = t0 A4 + t1 A5 + t2 A6 + t3 A7
(vii) v_{k+1,r-1} = v_{k,r-2} & mx
(viii) for i ⟶ r-2, ..., 2 do: v_{k+1,i} = v_{k,i-1}
(ix) v_{k+1,1} = t3
(x) v_{k+1,0} = t4
(xi) return y_k = v_{k,0}

ALGORITHM 3: WELL sequences.

Figure 2: Population initialization using uniform distribution.
Figure 3: Population initialization using Sobol distribution.
Figure 4: Population initialization using Halton distribution.
Figure 5: Population initialization using WELL distribution.
(In Figures 2-5, each bubble plot shows the generated values on the y-axis against the point index, 0-500, on the x-axis.)

20, and 30 of the problem are shown on the x-axis, while the y-axis represents the mean best value for each dimension of the problem.

10.1.1. Effect of Using Different Initializing PSO Approaches. In this simulation, PSO is initialized with the WELL sequence (WE-PSO) instead of the uniform distribution. The variant WE-PSO is compared with the other initialization approaches, including the Sobol sequence (SO-PSO), the Halton sequence (H-PSO), and standard PSO. The experimental findings indicate that the advantage of WE-PSO is more pronounced at higher dimensions.

10.1.2. Effect of Using Different Dimensions for Problems. The core objective of this simulation setup is to assess how the outcomes depend on the dimension of the optimization functions. Three dimensions were used for the benchmark functions, namely D = 10, D = 20, and D = 30. In Table 2, the simulation results were

Step 1: Initialize the swarm.
  Set the epoch count I = 0, population size N_z, problem dimension D_z, w_max, and w_min.
  For each particle P_z:
    Step 1.1: initialize x_z as x_z = WELL(x_min, x_max)
    Step 1.2: initialize the particle velocity as v_z = Rand(x_min, x_max)
    Step 1.3: compute the fitness score f_z
    Step 1.4: set the global best position gbest_z as max(f_1, f_2, f_3, ..., f_z), where f_z ∈ globally optimal fitness
    Step 1.5: set the local best position pbest_z as max(f_1, f_2, f_3, ..., f_z), where f_z ∈ locally optimal fitness
Step 2: Compare the current particle's fitness score x_z in the swarm with its old local best location pbest_z. If the current fitness score x_z is greater than pbest_z, then substitute pbest_z with x_z; else retain x_z unchanged.
Step 3: Compare the current particle's fitness score x_z in the swarm with its old global best location gbest_z. If the current fitness score x_z is greater than gbest_z, then substitute gbest_z with x_z; else retain x_z unchanged.
Step 4: Using equation (1), compute v_{z+1}, the updated velocity vector. Using equation (2), compute x_{z+1}, the updated position vector.
Step 5: Go to Step 2 if the stopping criterion is not met; else terminate.

ALGORITHM 4: Proposed PSO pseudocode.
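Step 4 refers to the velocity and position updates of equations (1) and (2) given earlier in the paper. Assuming the standard PSO form of those equations, a hedged sketch of the per-particle update is:

#include <vector>

// One particle update, assuming the standard PSO equations (1) and (2):
//   v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x),   x <- x + v.
// Well512 is the generator sketched earlier.
void updateParticle(std::vector<double>& x, std::vector<double>& v,
                    const std::vector<double>& pbest,
                    const std::vector<double>& gbest,
                    double w, double c1, double c2, Well512& rng) {
    for (std::size_t j = 0; j < x.size(); ++j) {
        double r1 = rng.nextDouble(), r2 = rng.nextDouble();
        v[j] = w * v[j] + c1 * r1 * (pbest[j] - x[j])
                        + c2 * r2 * (gbest[j] - x[j]);
        x[j] += v[j];
    }
}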

Table 1: Standard objective functions and their optima.

SR | Function name | Objective function | Search space | Optimal value
F1 | Sphere | $\min f(x) = \sum_{i=1}^{n} x_i^2$ | $-5.12 \le x_i \le 5.12$ | 0
F2 | Rastrigin | $\min f(x) = \sum_{i=1}^{n} [x_i^2 - 10\cos(2\pi x_i) + 10]$ | $-5.12 \le x_i \le 5.12$ | 0
F3 | Axis parallel hyper-ellipsoid | $\min f(x) = \sum_{i=1}^{n} i \cdot x_i^2$ | $-5.12 \le x_i \le 5.12$ | 0
F4 | Rotated hyper-ellipsoid | $\min f(x) = \sum_{i=1}^{n} (\sum_{j=1}^{i} x_j)^2$ | $-65.536 \le x_i \le 65.536$ | 0
F5 | Moved axis parallel hyper-ellipsoid | $\min f(x) = \sum_{i=1}^{n} 5i \cdot x_i^2$ | $-5.12 \le x_i \le 5.12$ | 0
F6 | Sum of different powers | $\min f(x) = \sum_{i=1}^{n} |x_i|^{i+1}$ | $-1 \le x_i \le 1$ | 0
F7 | Chung Reynolds | $\min f(x) = (\sum_{i=1}^{n} x_i^2)^2$ | $-100 \le x_i \le 100$ | 0
F8 | Csendes | $\min f(x) = \sum_{i=1}^{n} x_i^6 (2 + \sin(1/x_i))$ | $-1 \le x_i \le 1$ | 0
F9 | Schaffer | $\min f(x) = \sum_{i=1}^{n} 0.5 + (\sin^2\sqrt{x_i^2 + x_{i+1}^2} - 0.5) / [1 + 0.001(x_i^2 + x_{i+1}^2)]^2$ | $-100 \le x_i \le 100$ | 0
F10 | Schumer Steiglitz | $\min f(x) = \sum_{i=1}^{n} x_i^4$ | $-5.12 \le x_i \le 5.12$ | 0
F11 | Schwefel | $\min f(x) = \sum_{i=1}^{n} x_i^{\alpha}$ | $-100 \le x_i \le 100$ | 0
F12 | Schwefel 1.2 | $\min f(x) = \sum_{i=1}^{D} (\sum_{j=1}^{i} x_j)^2$ | $-100 \le x_i \le 100$ | 0
F13 | Schwefel 2.21 | $\min f(x) = \max_i |x_i|, \; 1 \le i \le D$ | $-100 \le x_i \le 100$ | 0
F14 | Schwefel 2.22 | $\min f(x) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{n} |x_i|$ | $-100 \le x_i \le 100$ | 0
F15 | Schwefel 2.23 | $\min f(x) = \sum_{i=1}^{n} x_i^{10}$ | $-10 \le x_i \le 10$ | 0
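As an example of how the entries of Table 1 translate into code, the Sphere function F1 is shown below; the other benchmarks follow the same pattern:

#include <vector>

// F1 (Sphere): f(x) = sum_i x_i^2, global minimum 0 at the origin,
// evaluated over -5.12 <= x_i <= 5.12.
double sphere(const std::vector<double>& x) {
    double s = 0.0;
    for (double xi : x) s += xi * xi;
    return s;
}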


presented. From these simulation results, it was observed that the optimization of higher-dimensional functions is more complex, as can be seen in Table 2 for dimension sizes D = 20 and D = 30.

10.1.3. A Comparative Analysis. WE-PSO is compared with the other approaches, namely SO-PSO, H-PSO, and the standard PSO, where the true value of each technique on the same problems is provided for comparison purposes. Table 1 shows the standard benchmark functions and their parameter settings. Table 2 reveals that WE-PSO is better than the standard PSO, SO-PSO, and H-PSO at dimension D = 30 and outperforms them in convergence. The comparative analysis can be seen in Table 2, in which the standard PSO performs well at the smaller dimension sizes (D = 10, 20), while the proposed WE-PSO converges considerably better as the dimension size increases. Hence, WE-PSO is appropriate for higher dimensions. Simulation runs were carried out on an HP Compaq machine with an Intel Core i7-3200 processor at 3.8 GHz and 6 GB of RAM.

Figure 6: Mean value of function F1.
Figure 7: Mean value of function F2.
Figure 8: Mean value of function F3.
Figure 9: Mean value of function F4.
(In Figures 6-20, the x-axis is the problem dimension, DIM = 10, 20, 30, the y-axis is the mean fitness, and curves are shown for PSO, SO-PSO, H-PSO, and WE-PSO.)

In contrast with the findings of SO-PSO, H-PSO, and traditional PSO, the experimental results in Table 2 reveal that WE-PSO surpasses the aforementioned variants of PSO. It can be observed that WE-PSO outperforms the other techniques on all functions, while the other approaches perform as follows: H-PSO performs better on functions F4, F1, and F2 for 20-D but gives overall poor results at higher dimensions, and SO-PSO gives slightly better results on functions F8, F9, and F15 at 10-D but gives the worst results at larger dimensions. Figures 7 to 15 depict that WE-PSO outperforms the other approaches in the simulation results for dimension sizes D = 10, D = 20, and D = 30 on the standard benchmark test functions.

10.1.4. Statistical Test. To objectively verify the consistency of the findings, Student's t-test is performed. For comparing the competing algorithms, the T value is computed using

$t = \dfrac{\bar{X}_1 - \bar{X}_2}{\sqrt{\left(\mathrm{SD}_1^2/(n_1 - 1)\right) + \left(\mathrm{SD}_2^2/(n_2 - 1)\right)}}$.  (8)

Figure 10: Mean value of function F5.
Figure 11: Mean value of function F6.
Figure 12: Mean value of function F7.
Figure 13: Mean value of function F8.

The T value in the above equation can be positive or negative, where $\bar{X}_1$ and $\bar{X}_2$ reflect the mean values of the first and second samples. The sample sizes are referred to as $n_1$ and $n_2$, and the standard deviations of the two samples are $\mathrm{SD}_1$ and $\mathrm{SD}_2$. A positive value indicates that WE-PSO outperforms the competing approach, while a negative value indicates the opposite. Student's t-test results are presented in Table 3.
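A direct transcription of equation (8) into a small helper (hypothetical name) is:

#include <cmath>

// t-value of equation (8); mean1/mean2, sd1/sd2, and n1/n2 are the two
// samples' means, standard deviations, and sizes (30 runs each here).
double tValue(double mean1, double mean2, double sd1, double sd2,
              int n1, int n2) {
    return (mean1 - mean2) /
           std::sqrt(sd1 * sd1 / (n1 - 1) + sd2 * sd2 / (n2 - 1));
}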

11. Experiments for Data Classification

A comparative analysis on real-world benchmark dataset problems is carried out for the training of neural networks to validate the efficiency of WE-PSO. We conducted experiments using nine benchmark datasets (Iris, Diabetes, Heart, Wine, Seed, Vertebral, Blood Tissue, Horse, and Mammography) from the well-known UCI machine-learning repository. Training weights are initialized randomly within the interval [-50, 50]. Feedforward neural network accuracy is measured by the root mean squared error (RMSE). The features of the datasets used can be seen in Table 4.
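RMSE here is taken to be the usual definition, $\mathrm{RMSE} = \sqrt{(1/N)\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}$, where $y_i$ is the target, $\hat{y}_i$ is the network output, and $N$ is the number of training patterns; the paper does not restate the formula explicitly.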

11.1. Discussion. Multilayer feedforward neural networks are trained with the backpropagation algorithm and with standard PSO, SO-PSO, H-PSO, and WE-PSO. A comparison of

Figure 14: Mean value of function F9.
Figure 15: Mean value of function F10.
Figure 16: Mean value of function F11.

these training approaches is performed on real classification datasets taken from the UCI repository. The cross-validation method is used to assess the efficiency of the various classification techniques. In this paper, the k-fold cross-validation method, with k = 10, is used for training the neural networks with standard PSO, SO-PSO, H-PSO, and the proposed WE-PSO algorithm. The dataset is fragmented into 10 chunks, and each chunk comprises the same proportion of each class of the dataset. One chunk is used for the testing phase, while nine chunks are used for the training phase. The experimental results of standard PSO, SO-PSO, H-PSO, and WE-PSO on the nine well-known real-world UCI datasets are used for

Figure 17: Mean value of function F12.
Figure 18: Mean value of function F13.
Figure 19: Mean value of function F14.
Figure 20: Mean value of function F15.

evaluating the performance. After the simulation, the results showed that training neural networks with the WE-PSO algorithm is better in terms of precision, and its efficiency is much higher than that of the traditional approaches.
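A minimal sketch of the 10-fold split described above is given below (illustrative code; the paper does not publish its implementation). For the class-proportional chunks, the assignment would be applied per class:

#include <vector>

// Assign each of nSamples to one of k folds; fold f is the test chunk in
// round f, and the remaining k-1 folds form the training set.
std::vector<int> foldAssignments(int nSamples, int k = 10) {
    std::vector<int> fold(nSamples);
    for (int i = 0; i < nSamples; ++i)
        fold[i] = i % k;
    return fold;
}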

The WE-PSO algorithm can also be applied successfully to data classification and statistical problems in the future. The findings on classification accuracy are summarized in Table 5.
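To make the training setup concrete, the sketch below (illustrative, not the authors' code) shows the usual way a particle is decoded into the weights of a one-hidden-layer feedforward network, whose RMSE over the training set then serves as the particle's fitness:

#include <vector>
#include <cmath>

// Decode a particle position w into the weights of an nIn -> nHid -> nOut
// network and compute its output for one input pattern. w must have
// nHid*(nIn+1) + nOut*(nHid+1) entries (biases included).
std::vector<double> forward(const std::vector<double>& w, int nIn, int nHid,
                            int nOut, const std::vector<double>& in) {
    std::vector<double> hid(nHid), out(nOut);
    std::size_t p = 0;
    for (int h = 0; h < nHid; ++h) {
        double s = w[p++];                        // hidden bias
        for (int i = 0; i < nIn; ++i) s += w[p++] * in[i];
        hid[h] = 1.0 / (1.0 + std::exp(-s));      // sigmoid activation
    }
    for (int o = 0; o < nOut; ++o) {
        double s = w[p++];                        // output bias
        for (int h = 0; h < nHid; ++h) s += w[p++] * hid[h];
        out[o] = 1.0 / (1.0 + std::exp(-s));
    }
    return out;
}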

Table 2: Comparative results among the four PSO algorithms on 15 benchmark test functions.

F | Iter | DIM | PSO Mean | PSO Std dev | SO-PSO Mean | SO-PSO Std dev | H-PSO Mean | H-PSO Std dev | WE-PSO Mean | WE-PSO Std dev
F1 | 1000 | 10 | 2.33E-74 | 7.36E-74 | 2.74E-76 | 8.66E-76 | 3.10E-77 | 9.79E-77 | 5.91E-78 | 1.87E-77
F1 | 2000 | 20 | 1.02E-84 | 3.22E-84 | 8.20E-88 | 2.59E-87 | 1.76E-90 | 5.58E-90 | 4.95E-90 | 1.48E-89
F1 | 3000 | 30 | 1.77E-26 | 5.32E-26 | 7.67E-20 | 2.30E-19 | 4.13E-32 | 1.24E-31 | 1.30E-42 | 3.90E-42
F2 | 1000 | 10 | 4.97E-01 | 1.49E+00 | 4.97E-01 | 1.49E+00 | 7.96E-01 | 2.39E+00 | 2.98E-01 | 8.95E-01
F2 | 2000 | 20 | 8.17E+00 | 2.29E+01 | 6.47E+00 | 1.91E+01 | 3.58E+00 | 9.79E+00 | 3.11E+00 | 1.10E+01
F2 | 3000 | 30 | 1.01E+01 | 2.95E+01 | 9.86E+00 | 2.76E+01 | 9.45E+00 | 2.77E+01 | 7.76E+00 | 2.20E+01
F3 | 1000 | 10 | 8.70E-80 | 2.61E-79 | 1.79E-79 | 5.37E-79 | 4.87E-79 | 1.46E-78 | 4.40E-81 | 1.32E-80
F3 | 2000 | 20 | 2.62E+00 | 7.86E+00 | 7.86E+00 | 2.36E+01 | 2.62E+00 | 7.86E+00 | 1.78E-89 | 5.33E-89
F3 | 3000 | 30 | 2.62E+01 | 7.86E+01 | 1.57E+01 | 4.72E+01 | 1.05E+01 | 3.15E+01 | 3.87E-57 | 1.16E-56
F4 | 1000 | 10 | 4.46E-147 | 1.34E-146 | 3.86E-147 | 1.16E-146 | 9.78E-145 | 2.93E-144 | 1.24E-150 | 3.73E-150
F4 | 2000 | 20 | 3.14E-155 | 9.41E-155 | 9.27E-154 | 2.78E-153 | 2.75E-159 | 8.24E-159 | 4.96E-159 | 1.49E-158
F4 | 3000 | 30 | 1.82E-133 | 5.45E-133 | 2.36E-135 | 7.09E-135 | 8.53E-130 | 2.56E-129 | 2.54E-136 | 7.62E-136
F5 | 1000 | 10 | 4.35E-79 | 1.30E-78 | 8.95E-79 | 2.69E-78 | 2.43E-78 | 7.30E-78 | 2.20E-80 | 6.61E-80
F5 | 2000 | 20 | 1.31E+01 | 3.93E+01 | 3.93E+01 | 1.18E+02 | 1.31E+01 | 3.93E+01 | 3.12E-89 | 9.36E-89
F5 | 3000 | 30 | 1.31E+02 | 3.93E+02 | 7.86E+01 | 2.36E+02 | 5.24E+01 | 1.57E+02 | 1.94E-56 | 5.81E-56
F6 | 1000 | 10 | 1.70E-61 | 5.11E-61 | 4.45E-64 | 1.33E-63 | 7.29E-66 | 2.19E-65 | 4.62E-66 | 1.39E-65
F6 | 2000 | 20 | 3.25E-112 | 9.74E-112 | 4.39E-112 | 1.32E-111 | 5.01E-109 | 1.50E-108 | 4.45E-113 | 1.34E-112
F6 | 3000 | 30 | 7.21E-135 | 2.16E-134 | 4.10E-124 | 1.23E-123 | 1.51E-134 | 4.54E-134 | 6.96E-135 | 2.09E-134
F7 | 1000 | 10 | 2.96E-157 | 8.87E-157 | 2.39E-157 | 7.18E-157 | 1.28E-157 | 3.84E-157 | 2.47E-163 | 0.00E+00
F7 | 2000 | 20 | 8.79E-177 | 0.00E+00 | 1.77E-184 | 0.00E+00 | 3.49E-183 | 0.00E+00 | 3.41E-186 | 0.00E+00
F7 | 3000 | 30 | 1.23E-82 | 3.68E-82 | 1.25E-116 | 3.74E-116 | 5.99E-130 | 5.99E-130 | 4.60E-134 | 1.38E-133
F8 | 1000 | 10 | 4.39E-200 | 0.00E+00 | 1.98E-194 | 0.00E+00 | 4.51E-197 | 0.00E+00 | 8.99E-201 | 0.00E+00
F8 | 2000 | 20 | 1.57E-20 | 4.70E-20 | 1.04E-93 | 3.13E-93 | 1.10E-148 | 3.30E-148 | 4.09E-151 | 1.23E-150
F8 | 3000 | 30 | 1.89E-09 | 5.68E-09 | 4.54E-10 | 1.36E-09 | 1.14E-08 | 3.43E-08 | 1.34E-09 | 4.03E-09
F9 | 1000 | 10 | 5.49E-01 | 6.72E-01 | 1.30E-01 | 2.02E-01 | 2.02E-01 | 5.73E-01 | 1.42E-01 | 1.42E-01
F9 | 2000 | 20 | 2.05E+00 | 1.31E+00 | 7.83E-01 | 1.43E+00 | 6.83E-01 | 1.29E+00 | 4.32E-01 | 1.08E+00
F9 | 3000 | 30 | 1.12E+00 | 2.39E+00 | 9.99E-01 | 2.30E+00 | 9.56E-01 | 2.52E+00 | 9.12E-01 | 2.23E+00
F10 | 1000 | 10 | 2.23E-138 | 2.23E-138 | 2.23E-138 | 3.15E-137 | 4.35E-137 | 1.31E-136 | 1.10E-139 | 3.31E-139
F10 | 2000 | 20 | 3.79E-148 | 1.14E-147 | 7.87E-149 | 2.36E-148 | 4.19E-147 | 1.26E-146 | 8.73E-153 | 2.62E-152
F10 | 3000 | 30 | 4.43E-126 | 1.33E-125 | 7.52E-133 | 2.26E-132 | 1.57E-128 | 4.71E-128 | 1.38E-133 | 4.14E-133
F11 | 1000 | 10 | 3.75E-187 | 0.00E+00 | 1.57E-192 | 0.00E+00 | 2.15E-191 | 0.00E+00 | 8.99E-198 | 0.00E+00
F11 | 2000 | 20 | 5.29E-193 | 0.00E+00 | 2.53E-195 | 0.00E+00 | 8.45E-195 | 0.00E+00 | 9.83E-197 | 0.00E+00
F11 | 3000 | 30 | 4.82E-154 | 1.44E-153 | 8.84E-159 | 2.65E-158 | 5.49E-168 | 0.00E+00 | 5.75E-173 | 0.00E+00
F12 | 1000 | 10 | 1.13E-01 | 3.40E-01 | 1.67E-02 | 5.02E-02 | 2.28E-02 | 6.85E-02 | 2.89E-03 | 8.66E-03
F12 | 2000 | 20 | 1.39E+01 | 4.12E+01 | 5.03E+00 | 1.50E+01 | 2.95E+00 | 8.84E+00 | 1.67E+00 | 5.01E+00
F12 | 3000 | 30 | 7.45E+00 | 2.23E+01 | 1.22E+01 | 3.66E+01 | 8.74E+00 | 2.60E+01 | 4.94E+00 | 1.48E+01
F13 | 1000 | 10 | 8.04E-26 | 2.41E-25 | 8.01E-27 | 2.40E-26 | 3.59E-27 | 1.08E-26 | 1.41E-27 | 1.02E-26
F13 | 2000 | 20 | 1.42E-08 | 4.26E-08 | 2.64E-11 | 7.93E-11 | 3.29E-10 | 9.86E-10 | 2.14E-12 | 6.43E-12
F13 | 3000 | 30 | 6.20E-03 | 1.86E-02 | 1.41E-03 | 4.23E-03 | 9.36E-03 | 2.81E-02 | 1.41E-03 | 3.83E-03
F14 | 1000 | 10 | 3.62E-38 | 1.09E-37 | 3.62E-38 | 1.09E-37 | 5.92E-36 | 1.77E-35 | 1.95E-38 | 5.86E-38
F14 | 2000 | 20 | 6.27E-10 | 1.88E-09 | 1.38E-09 | 4.14E-09 | 7.91E-13 | 2.37E-12 | 1.17E-13 | 3.51E-13
F14 | 3000 | 30 | 2.56E-06 | 7.67E-06 | 4.80E+01 | 1.44E+02 | 1.34E-06 | 4.03E-06 | 4.88E-09 | 1.46E-08
F15 | 1000 | 10 | 1.10E-294 | 0.00E+00 | 3.19E-301 | 0.00E+00 | 2.78E-307 | 0.00E+00 | 3.21E-308 | 0.00E+00
F15 | 2000 | 20 | 6.16E-271 | 0.00E+00 | 5.09E-276 | 0.00E+00 | 3.74E-270 | 0.00E+00 | 4.85E-268 | 0.00E+00
F15 | 3000 | 30 | 3.08E-207 | 0.00E+00 | 1.04E-200 | 0.00E+00 | 8.12E-209 | 0.00E+00 | 3.06E-212 | 0.00E+00

Note: "Mean" shows the mean value and "Std dev" indicates the standard deviation. The best results among the four PSO algorithms are presented in bold.


Table 3: Results of Student's t-test for all techniques.

F | Iter | DIM | WE-PSO vs PSO: T (Sig) | WE-PSO vs SO-PSO: T (Sig) | WE-PSO vs H-PSO: T (Sig)
F1 | 1000 | 10 | +1.02 (WE-PSO) | +0.99 (WE-PSO) | +0.75 (WE-PSO)
F1 | 2000 | 20 | +1.00 (WE-PSO) | +0.48 (WE-PSO) | -0.83 (H-PSO)
F1 | 3000 | 30 | +1.00 (WE-PSO) | +1.00 (WE-PSO) | +1.00 (WE-PSO)
F2 | 1000 | 10 | +30.71 (WE-PSO) | +15.67 (WE-PSO) | +1.21 (WE-PSO)
F2 | 2000 | 20 | +8.82 (WE-PSO) | +107.56 (WE-PSO) | +11.13 (WE-PSO)
F2 | 3000 | 30 | +0.63 (WE-PSO) | +34.29 (WE-PSO) | +0.65 (WE-PSO)
F3 | 1000 | 10 | +0.99 (WE-PSO) | +1.00 (WE-PSO) | +0.83 (WE-PSO)
F3 | 2000 | 20 | +1.00 (WE-PSO) | +1.00 (WE-PSO) | +263.14 (WE-PSO)
F3 | 3000 | 30 | +1.00 (WE-PSO) | +525.29 (WE-PSO) | +0.93 (WE-PSO)
F4 | 1000 | 10 | +0.19 (WE-PSO) | +0.99 (WE-PSO) | +1.00 (WE-PSO)
F4 | 2000 | 20 | +0.99 (WE-PSO) | +0.84 (WE-PSO) | -0.98 (H-PSO)
F4 | 3000 | 30 | +0.86 (WE-PSO) | +0.26 (WE-PSO) | +0.97 (WE-PSO)
F5 | 1000 | 10 | +0.79 (WE-PSO) | +0.44 (WE-PSO) | +0.98 (WE-PSO)
F5 | 2000 | 20 | +0.29 (WE-PSO) | +0.57 (WE-PSO) | +263.14 (WE-PSO)
F5 | 3000 | 30 | +0.06 (WE-PSO) | +2622.44 (WE-PSO) | +0.96 (WE-PSO)
F6 | 1000 | 10 | +0.80 (WE-PSO) | +0.98 (WE-PSO) | +0.17 (WE-PSO)
F6 | 2000 | 20 | +0.86 (WE-PSO) | +0.96 (WE-PSO) | +0.96 (WE-PSO)
F6 | 3000 | 30 | +0.99 (WE-PSO) | +0.98 (WE-PSO) | +0.89 (WE-PSO)
F7 | 1000 | 10 | +0.90 (WE-PSO) | +0.95 (WE-PSO) | +1.00 (WE-PSO)
F7 | 2000 | 20 | +1.00 (WE-PSO) | +1.00 (WE-PSO) | +1.00 (WE-PSO)
F7 | 3000 | 30 | +1.00 (WE-PSO) | +1.00 (WE-PSO) | +1.00 (WE-PSO)
F8 | 1000 | 10 | +0.75 (WE-PSO) | +0.98 (WE-PSO) | +0.55 (WE-PSO)
F8 | 2000 | 20 | +483.97 (WE-PSO) | +1.00 (WE-PSO) | +0.91 (WE-PSO)
F8 | 3000 | 30 | +1.41 (WE-PSO) | -6.89 (SO-PSO) | +522.24 (WE-PSO)
F9 | 1000 | 10 | +53.67 (WE-PSO) | -3.00 (SO-PSO) | +82.30 (WE-PSO)
F9 | 2000 | 20 | +84.84 (WE-PSO) | +33.46 (WE-PSO) | +16.08 (WE-PSO)
F9 | 3000 | 30 | +470.01 (WE-PSO) | +390.54 (WE-PSO) | +416.26 (WE-PSO)
F10 | 1000 | 10 | +1.00 (WE-PSO) | +0.84 (WE-PSO) | +0.67 (WE-PSO)
F10 | 2000 | 20 | +1.00 (WE-PSO) | +0.81 (WE-PSO) | +0.89 (WE-PSO)
F10 | 3000 | 30 | +1.00 (WE-PSO) | +0.98 (WE-PSO) | +0.95 (WE-PSO)
F11 | 1000 | 10 | +0.97 (WE-PSO) | +1.92 (WE-PSO) | +1.00 (WE-PSO)
F11 | 2000 | 20 | +1.00 (WE-PSO) | +1.00 (WE-PSO) | +1.00 (WE-PSO)
F11 | 3000 | 30 | +0.87 (WE-PSO) | +0.98 (WE-PSO) | +1.00 (WE-PSO)
F12 | 1000 | 10 | +0.91 (WE-PSO) | +0.58 (WE-PSO) | +0.27 (WE-PSO)
F12 | 2000 | 20 | +2.26 (WE-PSO) | +1.08 (WE-PSO) | +0.27 (WE-PSO)
F12 | 3000 | 30 | +1.84 (WE-PSO) | +2.25 (WE-PSO) | +2.41 (WE-PSO)
F13 | 1000 | 10 | +0.98 (WE-PSO) | +0.48 (WE-PSO) | +0.84 (WE-PSO)
F13 | 2000 | 20 | +0.72 (WE-PSO) | +0.78 (WE-PSO) | +0.98 (WE-PSO)
F13 | 3000 | 30 | +0.11 (WE-PSO) | +0.39 (WE-PSO) | +0.86 (WE-PSO)
F14 | 1000 | 10 | +0.57 (WE-PSO) | +0.15 (WE-PSO) | +0.82 (WE-PSO)
F14 | 2000 | 20 | +0.151 (WE-PSO) | +1.49 (WE-PSO) | +1.50 (WE-PSO)
F14 | 3000 | 30 | +0.90 (WE-PSO) | +1.32 (WE-PSO) | +1.32 (WE-PSO)
F15 | 1000 | 10 | +1.00 (WE-PSO) | +1.00 (WE-PSO) | +1.00 (WE-PSO)
F15 | 2000 | 20 | +1.00 (WE-PSO) | -0.50 (SO-PSO) | +0.99 (WE-PSO)
F15 | 3000 | 30 | +0.83 (WE-PSO) | +1.00 (WE-PSO) | +1.00 (WE-PSO)

Table 4: Dataset description.

S no | Dataset | Number of total units | Disc feature | Nature | No of inputs | No of classes
1 | Iris | 150 | — | Real | 4 | 3
2 | Diabetes | 768 | — | Real | 8 | 2
3 | Heart | 270 | — | Real | 13 | 2
4 | Wine | 178 | — | Real | 13 | 3
5 | Seed | 210 | — | Real | 7 | 3
6 | Vertebral | 310 | — | Real | 6 | 2
7 | Blood tissue | 748 | — | Real | 5 | 2
8 | Horse | 368 | — | Real | 27 | 2
9 | Mammography | 961 | — | Real | 6 | 2


12. Conclusion

The performance of PSO depends on the initialization of the population. In our work, we have initialized the particles of PSO using a novel quasirandom sequence called the WELL sequence, while the velocity and position vectors of the particles are updated in the usual random fashion. The importance of initializing the particles with a quasirandom sequence is highlighted in this study. The experimental results explicitly state that the WELL sequence is well suited for population initialization due to its random nature. Moreover, the simulation results have shown that WE-PSO outperforms the PSO, SO-PSO, and H-PSO approaches. The techniques are also applied to neural network training and provide significantly better results than conventional training algorithms, including the standard PSO, SO-PSO, and H-PSO approaches, respectively. The solution provides higher diversity and increases the potential for local search. The experimental results depict that our approach has excellent convergence accuracy and avoids local optima. Our technique compares favourably with traditional PSO and with the other initialization approaches for PSO, as evident in Figure 21. The use of mutation operators together with the initialization technique may be evaluated on large-scale search spaces in the future. The core objective of this research is universal and also relevant to other stochastic metaheuristic algorithms, which will establish our future direction.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon reasonable request.

Disclosure

This work is part of the PhD thesis of the student.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Table 5: Classification accuracy results (Tr acc = training accuracy, Ts acc = testing accuracy).

S no | Dataset | Type | BPA Tr/Ts (%) | PSO Tr/Ts (%) | SO-PSO Tr/Ts (%) | H-PSO Tr/Ts (%) | WE-PSO Tr/Ts (%)
1 | Iris | 3-class | 98.2 / 95.7 | 99 / 96.6 | 98.8 / 97.3 | 98.9 / 96 | 99.2 / 98
2 | Diabetes | 2-class | 86.1 / 65.3 | 88.7 / 69.1 | 89.3 / 69.1 | 88.4 / 71.6 | 90.4 / 74.1
3 | Heart | 2-class | 78.5 / 68.3 | 99.5 / 72.5 | 99.13 / 67.5 | 99.13 / 72.5 | 100 / 77.5
4 | Wine | 3-class | 67.3 / 62.17 | 74.24 / 61.11 | 81.81 / 66.66 | 75.75 / 67.44 | 75.75 / 69.6
5 | Seed | 3-class | 84.2 / 70.56 | 97.57 / 77.77 | 87.27 / 84.44 | 98.18 / 77.77 | 98.18 / 91.11
6 | Vertebral | 2-class | 91.4 / 84.95 | 96.03 / 92.85 | 96.42 / 92.85 | 96.40 / 92.85 | 97.61 / 94.64
7 | Blood tissue | 2-class | 76.3 / 73.47 | 90.8 / 78.6 | 86.94 / 78.66 | 83.89 / 70 | 84.74 / 84
8 | Horse | 2-class | 64.4 / 57.87 | 69.02 / 50 | 74.19 / 52 | 72.90 / 56 | 79.35 / 58
9 | Mammography | 2-class | 77.36 / 71.26 | 80.82 / 76.66 | 68.94 / 63 | 88 / 85 | 97.71 / 96.66

Figure 21: Classification testing accuracy results. (Bar chart of testing accuracy, in percent, per dataset: Iris, Diabetes, Heart, Wine, Seed, Vertebral, Blood tissue, Horse, and Mammography, for BPA, PSO, SO-PSO, H-PSO, and WE-PSO.)

References

[1] K. Deb, "Multi-objective optimization," in Search Methodologies, pp. 403-449, Springer, Berlin, Germany, 2014.
[2] R. Vandenberghe, N. Nelissen, E. Salmon et al., "Binary classification of 18F-flutemetamol PET using machine learning: comparison with visual reads and structural MRI," NeuroImage, vol. 64, pp. 517-525, 2013.
[3] V. Ganganwar, "An overview of classification algorithms for imbalanced datasets," International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 4, pp. 42-47, 2012.
[4] J. Kennedy, "Swarm intelligence," in Handbook of Nature-Inspired and Innovative Computing, pp. 187-219, Springer, Berlin, Germany, 2006.
[5] G. Beni and J. Wang, "Swarm intelligence in cellular robotic systems," in Robots and Biological Systems: Towards a New Bionics, pp. 703-712, Springer, Berlin, Germany, 1993.
[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of ICNN'95, International Conference on Neural Networks, pp. 1942-1948, 1995.
[7] J. Salerno, "Using the particle swarm optimization technique to train a recurrent neural model," in Proceedings of the Ninth IEEE International Conference on Tools with Artificial Intelligence, pp. 45-49, Newport Beach, CA, USA, November 1997.
[8] R. Storn and K. Price, "Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341-359, 1997.
[9] P. P. Palmes, T. Hayasaka, and S. Usui, "Mutation-based genetic neural network," IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 587-600, 2005.
[10] W. H. Bangyal, J. Ahmad, and H. T. Rauf, "Optimization of neural network using improved bat algorithm for data classification," Journal of Medical Imaging and Health Informatics, vol. 9, no. 4, pp. 670-681, 2019.
[11] A. Cervantes, I. M. Galvan, and P. Isasi, "AMPSO: a new particle swarm method for nearest neighborhood classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 5, pp. 1082-1091, 2009.
[12] W. H. Bangyal, J. Ahmed, and H. T. Rauf, "A modified bat algorithm with torus walk for solving global optimisation problems," International Journal of Bio-Inspired Computation, vol. 15, no. 1, pp. 1-13, 2020.
[13] C. Grosan, A. Abraham, and M. Nicoara, "Search optimization using hybrid particle sub-swarms and evolutionary algorithms," International Journal of Simulation Systems, Science & Technology, vol. 6, no. 10, pp. 60-79, 2005.
[14] M. Junaid, W. H. Bangyal, and J. Ahmed, "A novel bat algorithm using sobol sequence for the initialization of population," in IEEE 23rd International Multitopic Conference (INMIC), pp. 1-6, Bahawalpur, Pakistan, November 2020.
[15] W. H. Bangyal, J. Ahmed, H. T. Rauf, and S. Pervaiz, "An overview of mutation strategies in bat algorithm," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 9, pp. 523-534, 2018.
[16] D. E. Knuth, Fundamental Algorithms: The Art of Computer Programming, Addison-Wesley, Boston, MA, USA, 1973.
[17] J. E. Gentle, Random Number Generation and Monte Carlo Methods, Springer Science & Business Media, Berlin, Germany, 2006.
[18] N. Q. Uy, N. X. Hoai, R. I. McKay, and P. M. Tuan, "Initialising PSO with randomised low-discrepancy sequences: the comparative results," in Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2007, pp. 1985-1992, Singapore, September 2007.
[19] S. Kimura and K. Matsumura, "Genetic algorithms using low-discrepancy sequences," in Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, ACM, pp. 1341-1346, Washington, DC, USA, June 2005.
[20] R. Brits, A. P. Engelbrecht, and F. Van den Bergh, "A niching particle swarm optimizer," in Proceedings of the 4th Asia-Pacific Conference on Simulated Evolution and Learning, pp. 692-696, Orchid Country Club, Singapore, November 2002.
[21] J. van der Corput, "Verteilungsfunktionen I & II," Nederl. Akad. Wetensch. Proc., vol. 38, pp. 1058-1066, 1935.
[22] R. A. Krohling and L. dos Santos Coelho, "PSO-E: particle swarm with exponential distribution," in Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2006, pp. 1428-1433, Vancouver, Canada, July 2006.
[23] R. Thangaraj, M. Pant, and K. Deep, "Initializing PSO with probability distributions and low-discrepancy sequences: the comparative results," in Proceedings of the World Congress on Nature & Biologically Inspired Computing, NaBIC 2009, pp. 1121-1126, IEEE, Coimbatore, India, December 2009.
[24] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533-536, 1986.
[25] K. E. Parsopoulos and M. N. Vrahatis, "Initializing the particle swarm optimizer using the nonlinear simplex method," in Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation, World Scientific and Engineering Academy and Society Press, Stevens Point, WI, USA, 2002.
[26] M. Richards and D. Ventura, "Choosing a starting configuration for particle swarm optimization," in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 2309-2312, Budapest, Hungary, July 2004.
[27] H. Jabeen, Z. Jalil, and A. R. Baig, "Opposition based initialization in particle swarm optimization (O-PSO)," in Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, pp. 2047-2052, Montreal, Quebec, Canada, July 2009.
[28] A. L. Gutierrez, "Comparison of different PSO initialization techniques for high dimensional search space problems: a test with FSS and antenna arrays," in Proceedings of the 5th European Conference on Antennas and Propagation (EUCAP), pp. 965-969, IEEE, Rome, Italy, April 2011.
[29] A. Subasi, "Classification of EMG signals using PSO optimized SVM for diagnosis of neuromuscular disorders," Computers in Biology and Medicine, vol. 43, no. 5, pp. 576-586, 2013.
[30] S. Dehuri, R. Roy, S.-B. Cho, and A. Ghosh, "An improved swarm optimized functional link artificial neural network (ISO-FLANN) for classification," Journal of Systems and Software, vol. 85, no. 6, pp. 1333-1345, 2012.
[31] Z. Liu, P. Zhu, W. Chen, and R.-J. Yang, "Improved particle swarm optimization algorithm using design of experiment and data mining techniques," Structural and Multidisciplinary Optimization, vol. 52, no. 4, pp. 813-826, 2015.
[32] S. Chatterjee, S. Sarkar, S. Hore, N. Dey, A. S. Ashour, and V. E. Balas, "Particle swarm optimization trained neural network for structural failure prediction of multistoried RC buildings," Neural Computing and Applications, vol. 28, no. 8, pp. 2005-2016, 2016.
[33] Y. Xue, T. Tang, and A. X. Liu, "Large-scale feedforward neural network optimization by a self-adaptive strategy and parameter based particle swarm optimization," IEEE Access, vol. 7, pp. 52473-52483, 2019.
[34] F. E. F. Junior and G. G. Yen, "Particle swarm optimization of deep neural networks architectures for image classification," Swarm and Evolutionary Computation, vol. 49, pp. 62-74, 2019.
[35] O. Tarkhaneh and H. Shen, "Training of feedforward neural networks for data classification using hybrid particle swarm optimization, Mantegna Levy flight and neighbourhood search," Heliyon, vol. 5, no. 4, Article ID e01275, 2019.
[36] A. Herliana, T. Arifin, S. Susanti, and A. B. Hikmah, "Feature selection of diabetic retinopathy disease using particle swarm optimization and neural network," in Proceedings of the 2018 6th International Conference on Cyber and IT Service Management (CITSM), pp. 1-4, Parapat, Indonesia, August 2018.
[37] M. K. Sarkaleh and A. Shahbahrami, "Classification of ECG arrhythmias using discrete wavelet transform and neural networks," International Journal of Computer Science, Engineering and Applications, vol. 2, no. 1, pp. 1-13, 2012.
[38] R. J. Schalkoff, Artificial Neural Networks, McGraw-Hill, New York, NY, USA, 1997.
[39] G. P. Zhang, "Neural networks for classification: a survey," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 30, no. 4, pp. 451-462, 2000.
[40] M. Castellani, "Evolutionary generation of neural network classifiers: an empirical comparison," Neurocomputing, vol. 99, pp. 214-229, 2013.
[41] G. E. Hinton, J. L. McClelland, and D. Rumelhart, "Distributed representations," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations, MIT Press, Cambridge, MA, USA, 1986.
[42] M. Matsumoto and T. Nishimura, "Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator," ACM Transactions on Modeling and Computer Simulation (TOMACS), vol. 8, no. 1, pp. 3-30, 1998.
[43] I. M. Sobol', "On the distribution of points in a cube and the approximate evaluation of integrals," Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, vol. 7, no. 4, pp. 784-802, 1967.
[44] J. H. Halton, "Algorithm 247: radical-inverse quasi-random point sequence," Communications of the ACM, vol. 7, no. 12, pp. 701-702, 1964.
[45] F. Panneton, P. L'Ecuyer, and M. Matsumoto, "Improved long-period generators based on linear recurrences modulo 2," ACM Transactions on Mathematical Software (TOMS), vol. 32, no. 1, pp. 1-16, 2006.


Page 6: A New Initialization Approach in Particle Swarm Optimization ...encouragedby other socialanimal colonies,such as bird flocking or fish schooling [4]. In the research of cellular

6 The Sobol Sequence

)e Sobol distribution was undertaken for the reconstruc-tion of coordinates in [43] )e relation of linear recurrencesis included for each dimension dz coordinate and the binaryexpression for linear recurrence can be defined for thenonnegative instance az as present in

a a120

+ a221

+ a322

+ middot middot middot + az2zminus1

(5)

For dimension dz the instance i can be generated using

xDi i1v

D1 + i2v

D2 + middot middot middot + izv

Dz (6)

vD1 denotes the k th direction binary function of an in-

stance vDi at the dimension dz and vD

i can be computedusing

VDk c1v

Dkminus1 + c2v

Dkminus2 + middot middot middot + czv

Dzminus1 +

vDiminusz

2z1113888 1113889 (7)

where cz describes polynomial coefficient where kgt z

7 The Halton Sequence

In [44] the authors proposed the Halton sequence as animproved variant of the van der Corput sequence Forgenerating random points Halton sequences use a coprimebase Algorithm 2 shows the pseudocode for generating theHalton sequences

8 The WELL Sequence

Panneton et al [45] suggested the Well Equi-distributedLong-period Linear (WELL) sequence Initially it wasperformed as a modified variant of the Mersenne Twisteralgorithm )e WELL distribution algorithm is given as inAlgorithm 3

For the WELL distribution the algorithm mentionedabove describes the general recurrence )e algorithmdefinition is as follows x and r are two integers with aninterval of rgt 0 and 0lt xlt k and k rlowastw minus x and w is theweight factor of distribution )e binary matrix of size rlowastw

having the r bit block is expressed by A0 to A7 mx describesthe bitmask that holds the first wmdashx bits t0 to t7 aretemporary vector variables

)e random points in Figures 2ndash5 are the uniform andSobol Halton and WELL distributions are represented bythe bubble plot in which the y-axis is represented by therandom values and the x-axis is shown in the table by therelevant index of the point concerned

9 Methodology

)e objective of this paper is to work out the purity of one ofthe proposed pseudorandom sequences Pseudorandomsequences are much more random than quasirandom se-quences PSO is random in nature so it does not have aspecific pattern to guarantee the global optimum solution

)erefore we have suggested the WELL distribution-basedPSO (WE-PSO) by taking advantage of randomness in thePSO We have compared the WE-PSO with the uniformdistribution-based PSO and other quasirandom distribu-tions-based PSO ie Sobol distribution (SO-PSO) andHalton distribution (H-PSO) to ensure the integrity of theproposed approach Moreover by training the nine real-world NN problems we have tested the proposed techniqueover NN classifiers )e experimental outcomes reflect anunusual improvement over standard PSO with uniformdistribution WE-PSO approach also outperforms SO-PSOand H-PSO approaches as evident in results Numericalresults have shown that the use of WELL distribution toinitialize the swarm enhances the efficiency of population-based algorithms in evolutionary computing In Algo-rithm 4 the pseudocode for the proposed technique ispresented

10 Results and Discussion

WELL-PSO (WE-PSO) technique was simulated in C++ andapplied to a computer with the 23GHz Core (M) 2 DuoCPU processor specification A group of fifteen nonlinearbenchmark test functions are used to compare the WE-PSOwith standard PSO SO-PSO and H-PSO for measuring theexecution of the WELL-based PSO (WE-PSO) algorithmNormally these functions are applied to investigate theperformance of any technique )erefore we used it toexamine the optimization results of WE-PSO in our study Alist of such functions can be found in Table 1 )e di-mensionality of the problem is seen in Table 1 as D Srepresents the interval of the variables and fmin denotes theglobal optimumminimum value)e simulation parametersare used in the interval [09 04] where c1 c2145 inertiaweight w is used and swarm size is 40 )e function di-mensions are D 10 20 and 30 for simulation and a cu-mulative number of epochs is 3000 All techniques have beenapplied to similar parameters for comparatively effectiveresults To check the performance of each technique allalgorithms were tested for 30 runs

101 Discussion )e purpose of this study is to observe theunique characteristics of the standard benchmark functionsbased on the dimensions of the experimental results )reesimulation tests were performed in the experiments wherethe following TW-BA characteristics were observed

(i) Effect of using different initializing PSO approaches(ii) Effect of using different dimensions for problems(iii) A comparative analysis

)e objective of this study was to find the most suitableinitialization approach for the PSO and to explore WE-PSOwith other approaches such as SO-PSO H-PSO andstandard PSO during the first experiment )e purpose ofthe second simulation is to define the essence of the di-mension concerning the standard function optimization

6 Computational Intelligence and Neuroscience

Finally the simulation results of WE-PSO were comparedwith the standard PSO SO-PSO and H-PSO respectivelySimulation effects have been addressed in depth in the re-mainder of the article

)e graphical representation of the similarities of WE-PSO with PSO H-PSO and SO-PSO is shown in Figures 6 to20 For WE-PSO we can observe that majority of the es-timates have a better convergence curve )e dimensions 10

Halton ()input Size z and base b_cm with Dimension doutput population instances p

Fix the interval overmaxminus⟶ 1minminus⟶ 0For each iteration (k_1 k_2 k_3 k_z)doFor each particle p_1 p_2 p_3 p_zmaxmaxb_cmminmin+maxlowast z mod b_cmz zb_cm

ALGORITHM 2 Halton sequences

(i) WELL ()(ii) t0 (mxampvkrminus1) + (mxampvkrminus2)

(iii) t1 (A0vk0) + (A1vkm1)

(iv) t2 (A2vkm2) + (A3vkm3

)

(v) t3 t2 + t1(vi) t4 t0A4 + t1A5 + t2A6 + t3A7(vii) vk+1rminus1 vkrminus2 ampmx

(viii) for i minus ⟶ r minus 2 2 do vk+1ivkiminus1(ix) vk+11 t3(x) vk+10 t4(xi) Returnykvk0

ALGORITHM 3 WELL sequences

100806

s040200

0 100 200Index

300 400 500

Figure 3 Population initialization using Sobol distribution

100806

r040200

0 100 200Index

300 400 500

Figure 2 Population initialization using uniform distribution

100806

h040200

0 100 200Index

300 400 500

Figure 4 Population initialization using uniform distribution

08

w 04

000 100 200

Index300 400 500

Figure 5 Population initialization using WELL distribution

Computational Intelligence and Neuroscience 7

20 and 30 of the problem are described in the x-axis whilethe y-axis represents the mean best against each dimensionof the problem

1011 Effect of Using Different Initializing PSO ApproachesIn this simulation PSO is initialized with WELL sequence(WE-PSO) instead of the uniform distribution )e variantWE-PSO is compared with the other initialized approachesincluding Sobol sequence (SO-PSO) Halton Sequence (H-

PSO) and standard PSO)e experimental findings indicatethat the higher dimensions are better

1012 Effect of Using Different Dimensions for Problems)e core objective of this simulation setup is to find thesupremacy of the outcomes based on the dimension of theoptimization functions )ree dimensions were used forbench mark functions such as D 10 D 20 and D 30 inexperiments In Table 2 the simulation results were

Step 1 initialize the swarmSet epoch count I 0 population size Nz dimension of the problem Dz wmax and wminFor each particle Pz

Step 11 initialize xz as xz WELL(xmin xmax)

Step 12 initialize the particle velocity as vz Rand(xmin xmax)

Step 13 compute the fitness score fz

Step 14 set global best position gbestz as max(f1 f2 f3 fz)) where fz isin globally optimal fitness

Step 15 set local best position pbestz as max(f1 f2 f3 fz)) where fz isin locally optimal fitness

Step 2Compare the current particlersquos fitness score xz in the swarm and its old local best location pbest

z If the current fitness score xz is greaterthan pbest

z then substitute pbestz with xz else retain the xz unchanged

Step 3Compare the current particlersquos fitness score xz in the swarm and its old global best location gbest

z If the current fitness score xz isgreater than gbest

z then substitute gbestz with xz else retain the xz unchanged

Step 4Using equation (1) compute vz+1⟶ updated velocity vectorUsing equation (2) compute xz+1⟶ updated position vectorStep 5Go to step 2 if the stopping criteria does not met else terminate

ALGORITHM 4 Proposed PSO pseudocode

Table 1 Standard objective functions and their optimal

SR Function name Objective function Search space Optimalvalue

F1 Sphere Minf(x) 1113936ni1 x2

i minus512lexi le 512 0F2 Rastrigin Minf(x) 1113936

ni1 [x2

i minus 10 cos(2πx) + 10]i minus512lexi le 512 0

F3 Axis parallel hyper-ellipsoid Minf(x) 1113936

ni1 i middot x2

i minus512lexi le 512 0

F4 Rotated hyper-ellipsoid Minf(x) 1113936ni (1113936

ij1 xj)

2 minus65536lexi le 65536 0

F5 Moved axis parallelhyper-ellipsoid Minf(x) 1113936

ni1 5i middot x2

i minus512lexi le 512 0

F6 Sum of different power Minf(x) 1113936ni1 |xi|

i+1 minus1lexi le 1 0F7 Chung Reynolds Minf(x) (1113936

ni1 x2

i )2 minus100lexi le 100 0F8 Csendes Minf(x) 1113936

ni1 x6

i (2 + sin(1xi)) minus1lexi le 1 0

F9 Schaffer Minf(x) 1113936ni1 05 + (sin2

x2

i + x2i+1

1113969minus 05[1 + 0001(x2

i + x2i+1)]

2) minus100lexi le 100 0

F10 Schumer Steiglitz Minf(x) 1113936ni1 x4

i minus512lexi le 512 0F11 Schwefel Minf(x) 1113936

ni1 xα

i minus100lexi le 100 0

F12 Schwefel 12 Min f(x) 1113936Di (1113936

ij1 xj)

2 minus100lexi le 100 0

F13 Schwefel 221 Minf(x) max|xi|1ltiltD minus100lexi le 100 0

F14 Schwefel 222 Minf(x) 1113936Di1 |xi| + 1113937

ni1 |xi| minus100lexi le 100 0

F15 Schwefel 223 Minf(x) 1113936ni1 x10

i minus10lexi le 10 0

8 Computational Intelligence and Neuroscience

presented From these simulation results it was observedthat the optimization of higher-dimensional functions ismore complex which can be seen from Table 2 where thedimension size is D 20 and D 30

1013 A Comparative Analysis WE-PSO is compared tothe other approaches namely SO-PSO H-PSO and thestandard PSO where the true value of each technique withthe same nature of the problem is provided for comparisonpurposes Table 1 shows the standard benchmark functions

and their parameter settings Table 2 reveals that WE-PSO isbetter than the standard PSO SO-PSO and H-PSO withdimension D-30 and outperforms in convergence )ecomparative analysis can be seen from Table 2 in which thestandard PSO of the smaller dimension size (D 10 20)performs well while the proposed WE-PSO considerablyperforms well in convergence as the dimension size in-creases Hence WE-PSO is appropriate for higher dimen-sions Simulation runs were carried out on HP Compaq withthe Intel Core i7-3200 configuration with a speed of 38GHzwith RAM of 6GB

100

10ndash20

10ndash40

10ndash60

10ndash80

10ndash100

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 6 Mean value of function F1

102

101

100

10ndash1

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 7 Mean value of function F2

100

10ndash50

10ndash100

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 8 Mean value of function F3

10ndash120

10ndash140

10ndash130

10ndash150

10ndash160

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 9 Mean value of function F4

Computational Intelligence and Neuroscience 9

In contrast with the findings of SO-PSO H-PSO andtraditional PSO the experimental results from Table 2 revealthat WE-PSO surpasses the results of the aforementionedvariants of PSO It can be observed that the WE-PSOoutperforms in all functions when compared to othertechniques while the other approaches perform as followsH-PSO performs better on functions F4 F1 and F2 for 20Dbut H-PSO gives overall poor results on higher dimensionsand SO-PSO gives slightly better results on the functions F8F9 and F15 on 10-D but gives worst result on larger di-mensions Figures from Figures 7 to 15 depict that WE-PSOoutperforms in simulation results than other approaches for

solving the dim size D 10 D 20 and D 30 on thestandard benchmark test functions

1014 Statistical Test To objectively verify the consistencyof the findings the Student T-test is performed statisticallyFor the success of the competing algorithms the T value iscomputed using

t X1 minus X2

SD21 n1 minus 1( 11138571113872 1113873 + SD2

2 n2 minus 1( 11138571113872 11138731113872 1113873

1113969 (8)

100

10ndash50

10ndash100

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 10 Mean value of function F5

10ndash60

10ndash100

10ndash80

10ndash120

10ndash140

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 11 Mean value of function F6

Mea

n fit

ness

10ndash100

10ndash150

10ndash200

10 20

DIM

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 12 Mean value of function F7

100

10ndash50

10ndash100

10ndash150

10ndash200

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO3

WE-PSO

Figure 13 Mean value of function F8

10 Computational Intelligence and Neuroscience

T value can be positive or negative in the above equationwhere X1 and X2 reflect the mean value of the first andsecond samples )e sample size is referred to as n1 and n2for both samples )e standard deviations for both samplesare SD2

1 and SD22 Positive and negative values indicate that

WE-PSO outperforms other approaches Studentrsquos T-testresults are presented in Table 3

11 Experiments for Data Classification

A comparative analysis on the real-world benchmark datasetproblem is evaluated for the training of neural networks tovalidate the efficiency of the WE-PSO Using nine

benchmark datasets (Iris Diabetes Heart Wine SeedVertebral Blood Tissue Horse and Mammography) fromthe world-famous UCI machine-learning repository weconducted experiments Training weights are initializedrandomly within the interval [minus50 50] Feedforward neuralnetwork accuracy is tested in the form of root mean squarederror (RMSE) )e features of the datasets that are used canbe seen in Table 4

111 Discussion Backpropagation algorithms using stan-dard PSO SO-PSO H-PSO and WE-PSO are trained in themultilayer feedforward neural network Comparison of

Mea

n fit

ness

100

10 20

DIM

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 14 Mean value of function F9

10ndash130

10ndash140

10ndash150

10 20

DIM

Mea

n fit

ness

30

PSO

S-PSO

SO-PSO

WE-PSO

Figure 15 Mean value of function F10

10ndash150

10ndash160

10ndash170

10ndash190

10ndash180

10ndash200

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 16 Mean value of function F11

Computational Intelligence and Neuroscience 11

these training approaches is tested on real classificationdatasets that are taken from the UCI repository )e cross-validation method is used to assess the efficiency of variousclassification techniques)e k-fold cross-validationmethodis used in this paper for the training of neural networks withthe standard PSO SO-PSO H-PSO and proposed algorithmWE-PSO )e k-fold is used with the value k 10 in the

experiments )e dataset has been fragmented into 10chunks each data chunk comprises the same proportion ofeach class of dataset One chunk is used for the testing phasewhile nine chunks were used for the training phase Ninewell-known real-world datasets which were taken from UCIwere compared with the experimental results of algorithmsstandard PSO SO-PSO H-PSO and WE-PSO are used for

10ndash10

100

10ndash20

10ndash30

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 18 Mean value of function F13

10ndash10

100

1010

10ndash20

10ndash30

10ndash40

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 19 Mean value of function F14

1e ndash 312

1e ndash 200

Mea

n fit

ness

10 2015 25

DIM

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 20 Mean value of function F15

102

101

100

10ndash2

10ndash1

10ndash3

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 17 Mean value of function F12

12 Computational Intelligence and Neuroscience

evaluating the performance After the simulation the resultsshowed that the training of neural networks with the WE-PSO algorithm is better in terms of precision and its effi-ciency is much higher than the traditional approaches

)e WE-PSO algorithm can also be used successfully inthe future for data classification and statistical problems)e findings of classification accuracy are summarized inTable 5

Table 2 Comparative results among the four PSO algorithms on 15 benchmark test functions

F Iter DIMPSO SO-PSO H-PSO WE-PSO

Mean Std dev Mean Std dev Mean Std dev Mean Std dev

11000 10 233Eminus 74 736Eminus 74 274Eminus 76 866Eminus 76 310Eminus 77 979Eminus 77 591Eminus 78 187Eminus 772000 20 102Eminus 84 322Eminus 84 820Eminus 88 259Eminus 87 176Eminus 90 558Eminus 90 495Eminus 90 148Eminus 893000 30 177Eminus 26 532Eminus 26 767Eminus 20 230Eminus 19 413Eminus 32 124Eminus 31 130Eminus 42 390Eminus 42

21000 10 497Eminus 01 149E+ 00 497Eminus 01 149E+ 00 796Eminus 01 239E+ 00 298Eminus 01 895Eminus 012000 20 817E+ 00 229E+ 01 647E+ 00 191E+ 01 358E+ 00 979E+ 00 311E+ 00 110E+ 013000 30 101E+ 01 295E+ 01 986E+ 00 276E+ 01 945E+ 00 276991 776E+ 00 220E+ 01

31000 10 870Eminus 80 261Eminus 79 179Eminus 79 537Eminus 79 487Eminus 79 146Eminus 78 440Eminus 81 132Eminus 802000 20 262144 786E+ 00 786432 236E+ 01 262144 786E+ 00 178Eminus 89 533Eminus 893000 30 262E+ 01 786E+ 01 157E+ 01 472E+ 01 105E+ 01 314573 387Eminus 57 116Eminus 56

41000 10 446Eminus 147 134Eminus 146 386Eminus 147 116Eminus 146 978Eminus 145 293Eminus 144 124Eminus 150 373Eminus 1502000 20 314Eminus 155 941Eminus 155 927Eminus 154 278Eminus 153 275Eminus 159 824Eminus 159 496Eminus 159 149Eminus 1583000 30 182Eminus 133 545Eminus 133 236Eminus 135 709Eminus 135 853Eminus 130 256Eminus 129 254Eminus 136 762Eminus 136

51000 10 435Eminus 79 130Eminus 78 895Eminus 79 269Eminus 78 243Eminus 78 730Eminus 78 220Eminus 80 661Eminus 802000 20 131E+ 01 393E+ 01 393E+ 01 118E+ 02 131E+ 01 393E+ 01 312Eminus 89 936Eminus 893000 30 131E+ 02 393E+ 02 786E+ 01 236E+ 02 524E+ 01 157E+ 02 194Eminus 56 581Eminus 56

61000 10 170Eminus 61 511Eminus 61 445Eminus 64 133Eminus 63 729Eminus 66 219Eminus 65 462Eminus 66 139Eminus 652000 20 325Eminus 112 974Eminus 112 439Eminus 112 132Eminus 111 501Eminus 109 150Eminus 108 445Eminus 113 134Eminus 1123000 30 721Eminus 135 216Eminus 134 410Eminus 124 123Eminus 123 151Eminus 134 454Eminus 134 696Eminus 135 209Eminus 134

71000 10 296Eminus 157 887Eminus 157 239Eminus 157 718Eminus 157 128Eminus 157 384Eminus 157 247Eminus 163 000E+ 002000 20 879Eminus 177 000E+ 00 177Eminus 184 000E+ 00 349Eminus 183 000E+00 341Eminus186 000E+ 003000 30 123Eminus 82 368Evminus 82 125Eminus 116 374Eminus 116 599Eminus 130 599Eminus 130 460Eminus 134 138Eminus 133

81000 10 439Eminus 200 000E+ 00 198Eminus 194 000E+ 00 451Eminus 197 000E+ 00 899Eminus 201 000E+ 002000 20 157Eminus 20 470Eminus 20 104Eminus 93 313Eminus 93 110Eminus 148 330Eminus 148 409Eminus 151 123Eminus 1503000 30 189Eminus 09 568Eminus 09 454Eminus 10 136Eminus 09 114Eminus 08 343Eminus 08 134Eminus 09 403Eminus 09

91000 10 549Eminus 01 672Eminus 01 130Eminus 01 202Eminus 01 202Eminus01 573Eminus 01 142Eminus 01 142Eminus 012000 20 205E+ 00 131E+ 00 783Eminus 01 143E+ 00 683Eminus 01 129E+ 00 432Eminus 01 108E+ 003000 30 112E+ 00 239E+ 00 999Eminus 01 230E+ 00 956Eminus 01 252E+ 00 912Eminus 01 223E+ 00

101000 10 223Eminus 138 223Eminus 138 223Eminus 138 315Eminus 137 435Eminus 137 131Eminus 136 110Eminus 139 331Eminus 1392000 20 379Eminus 148 114Eminus 147 787Eminus 149 236Eminus 148 419Eminus 147 126Eminus 146 873Eminus 153 262Eminus 1523000 30 443Eminus 126 133Eminus 125 752Eminus 133 226Eminus 132 157Eminus 128 471Eminus 128 138Eminus 133 414Eminus 133

111000 10 375Eminus 187 000E+ 00 157Eminus 192 000E+ 00 215Eminus 191 000E+ 00 899Eminus 198 000E+ 002000 20 529Eminus 193 000E+ 00 253Eminus 195 000E+ 00 845Eminus 195 000E+ 00 983Eminus 197 000E+ 003000 30 482Eminus 154 144Eminus 153 884Eminus 159 265Eminus 158 549Eminus 168 000E+ 00 575Eminus173 000E+ 00

121000 10 113Eminus 01 340Eminus 01 167Eminus 02 502Eminus 02 228Eminus 02 685Eminus 02 289Eminus 03 866Eminus 032000 20 139E+ 01 412E+ 01 503E+ 00 150E+ 01 295E+ 00 884E+ 00 167E+ 00 501E+ 003000 30 745E+ 00 223E+ 01 122E+ 01 366E+ 01 874E+ 00 260E+ 01 494E+ 00 148E+ 01

131000 10 804Eminus 26 241Eminus 25 801Eminus 27 240Eminus 26 359Eminus 27 108Eminus 26 141Eminus 27 102Eminus 262000 20 142Eminus 08 426Eminus 08 264Eminus 11 793Eminus 11 329Eminus 10 986Eminus 10 214Eminus 12 643Eminus 123000 30 620Eminus 03 186Eminus 02 141Eminus 03 423Eminus 03 936Eminus 03 281Eminus 02 141Eminus 03 383Eminus 03

141000 10 362Eminus 38 109Eminus 37 362Eminus 38 109Eminus 37 592Eminus 36 177Eminus 35 195Eminus 38 586Eminus 382000 20 627Eminus 10 188Eminus 09 138Eminus 09 414Eminus 09 791Eminus 13 237Eminus 12 117Eminus13 351Eminus 133000 30 256Eminus 06 767Eminus 06 480E+ 01 144E+ 02 134Eminus 06 403Eminus 06 488Eminus 09 146Eminus 08

151000 10 110Eminus 294 000E+ 00 319Eminus 301 000E+ 00 278Eminus 307 000E+ 00 321Eminus 308 000E+ 002000 20 616Eminus 271 000E+ 00 509Eminus 276 000E+ 00 374Eminus 270 000E+ 00 485Eminus 268 000E+ 003000 30 308Eminus 207 000E+ 00 104Eminus 200 000E+ 00 812Eminus 209 000E+ 00 306Eminus 212 000E+ 00

Note ldquolsquoMeanrdquorsquo shows mean value and ldquoStd devrdquo indicates the standard deviation )e best results among the four PSO algorithms are presented in bold

Computational Intelligence and Neuroscience 13

Table 3 Results of Studentrsquos T-test for all techniques

F Iter DIMWE-PSO vs PSO WE-PSO vs SO-PSO WE-PSO vs H-PSO

T-value Sig T-value Sig T-value Sig

11000 10 +102 WE-PSO +099 WE-PSO +075 WE-PSO2000 20 +100 WE-PSO +048 WE-PSO minus083 H-PSO3000 30 +100 WE-PSO +100 WE-PSO +100 WE-PSO

21000 10 +3071 WE-PSO +1567 WE-PSO +121 WE-PSO2000 20 +882 WE-PSO +10756 WE-PSO +1113 WE-PSO3000 30 +063 WE-PSO +3429 WE-PSO +065 WE-PSO

31000 10 +099 WE-PSO +100 WE-PSO +083 WE-PSO2000 20 +100 WE-PSO +100 WE-PSO +26314 WE-PSO3000 30 +100 WE-PSO +52529 WE-PSO +093 WE-PSO

41000 10 +019 WE-PSO +099 WE-PSO +100 WE-PSO2000 20 +099 WE-PSO +084 WE-PSO minus098 H-PSO3000 30 +086 WE-PSO +026 WE-PSO +097 WE-PSO

51000 10 +079 WE-PSO +044 WE-PSO +098 WE-PSO2000 20 +029 WE-PSO +057 WE-PSO +26314 WE-PSO3000 30 +006 WE-PSO +262244 WE-PSO +096 WE-PSO

61000 10 +080 WE-PSO +098 WE-PSO +017 WE-PSO2000 20 +086 WE-PSO +096 WE-PSO +096 WE-PSO3000 30 +099 WE-PSO +098 WE-PSO +089 WE-PSO

71000 10 +090 WE-PSO +095 WE-PSO +100 WE-PSO2000 20 +100 WE-PSO +100 WE-PSO +100 WE-PSO3000 30 +100 WE-PSO +100 WE-PSO +100 WE-PSO

81000 10 +075 WE-PSO +098 WE-PSO +055 WE-PSO2000 20 +48397 WE-PSO +100 WE-PSO +091 WE-PSO3000 30 +141 WE-PSO minus689 SO-PSO +52224 WE-PSO

91000 10 +5367 WE-PSO minus300 SO-PSO +8230 WE-PSO2000 20 +8484 WE-PSO +3346 WE-PSO +1608 WE-PSO3000 30 +47001 WE-PSO +39054 WE-PSO +41626 WE-PSO

101000 10 +100 WE-PSO +084 WE-PSO +067 WE-PSO2000 20 +100 WE-PSO +081 WE-PSO +089 WE-PSO3000 30 +100 WE-PSO +098 WE-PSO +095 WE-PSO

111000 10 +097 WE-PSO +192 WE-PSO +100 WE-PSO2000 20 +100 WE-PSO +100 WE-PSO +100 WE-PSO3000 30 +087 WE-PSO +098 WE-PSO +100 WE-PSO

121000 10 +091 WE-PSO +058 WE-PSO +027 WE-PSO2000 20 +226 WE-PSO +108 WE-PSO +027 WE-PSO3000 30 +184 WE-PSO +225 WE-PSO +241 WE-PSO

131000 10 +098 WE-PSO +048 WE-PSO +084 WE-PSO2000 20 +072 WE-PSO +078 WE-PSO +098 WE-PSO3000 30 +011 WE-PSO +039 WE-PSO +086 WE-PSO

141000 10 +057 WE-PSO +015 WE-PSO +082 WE-PSO2000 20 +0151 WE-PSO +149 WE-PSO +150 WE-PSO3000 30 +090 WE-PSO +132 WE-PSO +132 WE-PSO

151000 10 +100 WE-PSO +100 WE-PSO +100 WE-PSO2000 20 +100 WE-PSO minus050 SO-PSO +099 WE-PSO3000 30 +083 WE-PSO +100 WE-PSO +100 WE-PSO

Table 4 Dataset description

S no Datasets Number of total units Disc feature Nature No of inputs No of classes1 Iris 150 mdash Real 4 32 Diabetes 768 mdash Real 8 23 Heart 270 mdash Real 13 24 Wine 178 mdash Real 13 35 Seed 210 mdash Real 7 36 Vertebral 310 mdash Real 6 27 Blood tissue 748 mdash Real 5 28 Horse 368 mdash Real 27 29 Mammography 961 mdash Real 6 2

14 Computational Intelligence and Neuroscience

12 Conclusion

The performance of PSO depends on the initialization of the population. In this work, we initialized the particles of PSO using a novel quasirandom sequence, the WELL sequence, while the velocity and position vectors of the particles are still updated in the usual pseudorandom fashion. This study highlights the importance of initializing the particles with a quasirandom sequence: the experimental results explicitly show that the WELL sequence is well suited to population initialization because of how evenly its points cover the search space. Moreover, the simulation results show that WE-PSO outperforms the standard PSO, SO-PSO, and H-PSO approaches. The techniques were also applied to neural network training, where WE-PSO gives significantly better results than the conventional training algorithms, including standard PSO, SO-PSO, and H-PSO. The proposed initialization provides higher diversity and increases the potential of the local search. The experimental results show that our approach converges accurately and avoids getting trapped in local optima, and it clearly improves on traditional PSO and the other initialization approaches for PSO, as is evident in Figure 21.

In future work, the use of mutation operators together with the proposed initialization technique may be evaluated on large-scale search spaces. The core idea of this research is general and carries over to other stochastic metaheuristic algorithms, which will set our future direction.
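For concreteness, the initialization scheme summarized above (Algorithm 4, Step 1.1: positions drawn from the WELL sequence; Step 1.2: velocities drawn pseudo-randomly) can be sketched in a few lines. The sketch below is an illustration under stated assumptions, not the authors' implementation: NumPy/SciPy ship no WELL generator, so SciPy's Sobol sequence stands in for WELL here, and the swarm size and bounds are arbitrary.

import numpy as np
from scipy.stats import qmc

def initialize_swarm(n_particles, dim, x_min, x_max, seed=1):
    # Positions from a low-discrepancy sequence (Sobol as a stand-in
    # for WELL); velocities from an ordinary pseudo-random generator.
    u = qmc.Sobol(d=dim, seed=seed).random(n_particles)  # points in [0, 1)^dim
    positions = x_min + u * (x_max - x_min)
    velocities = np.random.default_rng(seed).uniform(x_min, x_max,
                                                     size=(n_particles, dim))
    return positions, velocities

pos, vel = initialize_swarm(32, 30, -5.12, 5.12)  # e.g., the Sphere bounds

Replacing the Sobol generator with a WELL generator (reference [45]) reproduces the proposed WE-PSO initialization; everything downstream of Step 1 is unchanged.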

Data Availability

The data used to support the findings of this study are available from the corresponding author upon reasonable request.

Disclosure

This work is part of the PhD thesis of the student.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Table 5: Classification accuracy results (Tr. = training accuracy %, Ts. = testing accuracy %).

S. no | Dataset | Type | BPA Tr./Ts. | PSO Tr./Ts. | SO-PSO Tr./Ts. | H-PSO Tr./Ts. | WE-PSO Tr./Ts.
1 | Iris | 3-class | 98.2 / 95.7 | 99 / 96.6 | 98.8 / 97.3 | 98.9 / 96 | 99.2 / 98
2 | Diabetes | 2-class | 86.1 / 65.3 | 88.7 / 69.1 | 89.3 / 69.1 | 88.4 / 71.6 | 90.4 / 74.1
3 | Heart | 2-class | 78.5 / 68.3 | 99.5 / 72.5 | 99.13 / 67.5 | 99.13 / 72.5 | 100 / 77.5
4 | Wine | 3-class | 67.3 / 62.17 | 74.24 / 61.11 | 81.81 / 66.66 | 75.75 / 67.44 | 75.75 / 69.6
5 | Seed | 3-class | 84.2 / 70.56 | 97.57 / 77.77 | 87.27 / 84.44 | 98.18 / 77.77 | 98.18 / 91.11
6 | Vertebral | 2-class | 91.4 / 84.95 | 96.03 / 92.85 | 96.42 / 92.85 | 96.40 / 92.85 | 97.61 / 94.64
7 | Blood tissue | 2-class | 76.3 / 73.47 | 90.8 / 78.6 | 86.94 / 78.66 | 83.89 / 70 | 84.74 / 84
8 | Horse | 2-class | 64.4 / 57.87 | 69.02 / 50 | 74.19 / 52 | 72.90 / 56 | 79.35 / 58
9 | Mammography | 2-class | 77.36 / 71.26 | 80.82 / 76.66 | 68.94 / 63 | 88 / 85 | 97.71 / 96.66
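The accuracies in Table 5 come from the 10-fold cross-validation protocol of Section 11, in which every fold preserves the class proportions of the dataset, nine folds train the network, and one fold tests it. A minimal sketch of that split follows; scikit-learn is our choice of tooling here (the paper does not name one), and the network training itself is elided:

from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold

X, y = load_iris(return_X_y=True)  # Iris, the first dataset in Table 4
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    # Each split keeps the per-class proportions of the full dataset.
    X_train, y_train = X[train_idx], y[train_idx]
    X_test, y_test = X[test_idx], y[test_idx]
    # ... train the PSO-initialized feedforward network on the nine
    # training folds and record its accuracy on the held-out fold ...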

Figure 21: Classification testing accuracy results — testing accuracy (%) of BPA, PSO, SO-PSO, H-PSO, and WE-PSO across the nine datasets (Iris, Diabetes, Heart, Wine, Seed, Vertebral, Horse, Mammography, and Blood tissue).


References

[1] K. Deb, "Multi-objective optimization," in Search Methodologies, pp. 403–449, Springer, Berlin, Germany, 2014.
[2] R. Vandenberghe, N. Nelissen, E. Salmon et al., "Binary classification of 18F-flutemetamol PET using machine learning: comparison with visual reads and structural MRI," NeuroImage, vol. 64, pp. 517–525, 2013.
[3] V. Ganganwar, "An overview of classification algorithms for imbalanced datasets," International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 4, pp. 42–47, 2012.
[4] J. Kennedy, "Swarm intelligence," in Handbook of Nature-Inspired and Innovative Computing, pp. 187–219, Springer, Berlin, Germany, 2006.
[5] G. Beni and J. Wang, "Swarm intelligence in cellular robotic systems," in Robots and Biological Systems: Towards a New Bionics, pp. 703–712, Springer, Berlin, Germany, 1993.
[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of ICNN'95—International Conference on Neural Networks, pp. 1942–1948, 1995.
[7] J. Salerno, "Using the particle swarm optimization technique to train a recurrent neural model," in Proceedings of the Ninth IEEE International Conference on Tools with Artificial Intelligence, pp. 45–49, Newport Beach, CA, USA, November 1997.
[8] R. Storn and K. Price, "Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[9] P. P. Palmes, T. Hayasaka, and S. Usui, "Mutation-based genetic neural network," IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 587–600, 2005.
[10] W. H. Bangyal, J. Ahmad, and H. T. Rauf, "Optimization of neural network using improved bat algorithm for data classification," Journal of Medical Imaging and Health Informatics, vol. 9, no. 4, pp. 670–681, 2019.
[11] A. Cervantes, I. M. Galvan, and P. Isasi, "AMPSO: a new particle swarm method for nearest neighborhood classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 5, pp. 1082–1091, 2009.
[12] W. H. Bangyal, J. Ahmed, and H. T. Rauf, "A modified bat algorithm with torus walk for solving global optimisation problems," International Journal of Bio-Inspired Computation, vol. 15, no. 1, pp. 1–13, 2020.
[13] C. Grosan, A. Abraham, and M. Nicoara, "Search optimization using hybrid particle sub-swarms and evolutionary algorithms," International Journal of Simulation: Systems, Science & Technology, vol. 6, no. 10, pp. 60–79, 2005.
[14] M. Junaid, W. H. Bangyal, and J. Ahmed, "A novel bat algorithm using Sobol sequence for the initialization of population," in Proceedings of the IEEE 23rd International Multitopic Conference (INMIC), pp. 1–6, Bahawalpur, Pakistan, November 2020.
[15] W. H. Bangyal, J. Ahmed, H. T. Rauf, and S. Pervaiz, "An overview of mutation strategies in bat algorithm," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 9, pp. 523–534, 2018.
[16] D. E. Knuth, Fundamental Algorithms: The Art of Computer Programming, Addison-Wesley, Boston, MA, USA, 1973.
[17] J. E. Gentle, Random Number Generation and Monte Carlo Methods, Springer Science & Business Media, Berlin, Germany, 2006.
[18] N. Q. Uy, N. X. Hoai, R. I. McKay, and P. M. Tuan, "Initialising PSO with randomised low-discrepancy sequences: the comparative results," in Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2007, pp. 1985–1992, Singapore, September 2007.
[19] S. Kimura and K. Matsumura, "Genetic algorithms using low-discrepancy sequences," in Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, ACM, pp. 1341–1346, Washington, DC, USA, June 2005.
[20] R. Brits, A. P. Engelbrecht, and F. Van den Bergh, "A niching particle swarm optimizer," in Proceedings of the 4th Asia-Pacific Conference on Simulated Evolution and Learning, pp. 692–696, Orchid Country Club, Singapore, November 2002.
[21] J. van der Corput, "Verteilungsfunktionen I & II," Nederl. Akad. Wetensch. Proc., vol. 38, pp. 1058–1066, 1935.
[22] R. A. Krohling and L. dos Santos Coelho, "PSO-E: particle swarm with exponential distribution," in Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2006, pp. 1428–1433, Vancouver, Canada, July 2006.
[23] R. Thangaraj, M. Pant, and K. Deep, "Initializing PSO with probability distributions and low-discrepancy sequences: the comparative results," in Proceedings of the World Congress on Nature & Biologically Inspired Computing, NaBIC 2009, pp. 1121–1126, IEEE, Coimbatore, India, December 2009.
[24] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986.
[25] K. E. Parsopoulos and M. N. Vrahatis, "Initializing the particle swarm optimizer using the nonlinear simplex method," in Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation, World Scientific and Engineering Academy and Society Press, Stevens Point, WI, USA, 2002.
[26] M. Richards and D. Ventura, "Choosing a starting configuration for particle swarm optimization," in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 2309–2312, Budapest, Hungary, July 2004.
[27] H. Jabeen, Z. Jalil, and A. R. Baig, "Opposition based initialization in particle swarm optimization (O-PSO)," in Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, pp. 2047–2052, Montreal, Quebec, Canada, July 2009.
[28] A. L. Gutierrez, "Comparison of different PSO initialization techniques for high dimensional search space problems: a test with FSS and antenna arrays," in Proceedings of the 5th European Conference on Antennas and Propagation (EUCAP), pp. 965–969, IEEE, Rome, Italy, April 2011.
[29] A. Subasi, "Classification of EMG signals using PSO optimized SVM for diagnosis of neuromuscular disorders," Computers in Biology and Medicine, vol. 43, no. 5, pp. 576–586, 2013.
[30] S. Dehuri, R. Roy, S.-B. Cho, and A. Ghosh, "An improved swarm optimized functional link artificial neural network (ISO-FLANN) for classification," Journal of Systems and Software, vol. 85, no. 6, pp. 1333–1345, 2012.
[31] Z. Liu, P. Zhu, W. Chen, and R.-J. Yang, "Improved particle swarm optimization algorithm using design of experiment and data mining techniques," Structural and Multidisciplinary Optimization, vol. 52, no. 4, pp. 813–826, 2015.
[32] S. Chatterjee, S. Sarkar, S. Hore, N. Dey, A. S. Ashour, and V. E. Balas, "Particle swarm optimization trained neural network for structural failure prediction of multistoried RC buildings," Neural Computing and Applications, vol. 28, no. 8, pp. 2005–2016, 2016.
[33] Y. Xue, T. Tang, and A. X. Liu, "Large-scale feedforward neural network optimization by a self-adaptive strategy and parameter based particle swarm optimization," IEEE Access, vol. 7, pp. 52473–52483, 2019.
[34] F. E. F. Junior and G. G. Yen, "Particle swarm optimization of deep neural networks architectures for image classification," Swarm and Evolutionary Computation, vol. 49, pp. 62–74, 2019.
[35] O. Tarkhaneh and H. Shen, "Training of feedforward neural networks for data classification using hybrid particle swarm optimization, Mantegna Levy flight and neighbourhood search," Heliyon, vol. 5, no. 4, Article ID e01275, 2019.
[36] A. Herliana, T. Arifin, S. Susanti, and A. B. Hikmah, "Feature selection of diabetic retinopathy disease using particle swarm optimization and neural network," in Proceedings of the 2018 6th International Conference on Cyber and IT Service Management (CITSM), pp. 1–4, Parapat, Indonesia, August 2018.
[37] M. K. Sarkaleh and A. Shahbahrami, "Classification of ECG arrhythmias using discrete wavelet transform and neural networks," International Journal of Computer Science, Engineering and Applications, vol. 2, no. 1, pp. 1–13, 2012.
[38] R. J. Schalkoff, Artificial Neural Networks, McGraw-Hill, New York, NY, USA, 1997.
[39] G. P. Zhang, "Neural networks for classification: a survey," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 30, no. 4, pp. 451–462, 2000.
[40] M. Castellani, "Evolutionary generation of neural network classifiers—an empirical comparison," Neurocomputing, vol. 99, pp. 214–229, 2013.
[41] G. E. Hinton, J. L. McClelland, and D. Rumelhart, "Distributed representations," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations, MIT Press, Cambridge, MA, USA, 1986.
[42] M. Matsumoto and T. Nishimura, "Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator," ACM Transactions on Modeling and Computer Simulation (TOMACS), vol. 8, no. 1, pp. 3–30, 1998.
[43] I. M. Sobol', "On the distribution of points in a cube and the approximate evaluation of integrals," Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, vol. 7, no. 4, pp. 784–802, 1967.
[44] J. H. Halton, "Algorithm 247: radical-inverse quasi-random point sequence," Communications of the ACM, vol. 7, no. 12, pp. 701–702, 1964.
[45] F. Panneton, P. L'Ecuyer, and M. Matsumoto, "Improved long-period generators based on linear recurrences modulo 2," ACM Transactions on Mathematical Software (TOMS), vol. 32, no. 1, pp. 1–16, 2006.



Page 8: A New Initialization Approach in Particle Swarm Optimization ...encouragedby other socialanimal colonies,such as bird flocking or fish schooling [4]. In the research of cellular

20 and 30 of the problem are described in the x-axis whilethe y-axis represents the mean best against each dimensionof the problem

1011 Effect of Using Different Initializing PSO ApproachesIn this simulation PSO is initialized with WELL sequence(WE-PSO) instead of the uniform distribution )e variantWE-PSO is compared with the other initialized approachesincluding Sobol sequence (SO-PSO) Halton Sequence (H-

PSO) and standard PSO)e experimental findings indicatethat the higher dimensions are better

1012 Effect of Using Different Dimensions for Problems)e core objective of this simulation setup is to find thesupremacy of the outcomes based on the dimension of theoptimization functions )ree dimensions were used forbench mark functions such as D 10 D 20 and D 30 inexperiments In Table 2 the simulation results were

Step 1 initialize the swarmSet epoch count I 0 population size Nz dimension of the problem Dz wmax and wminFor each particle Pz

Step 11 initialize xz as xz WELL(xmin xmax)

Step 12 initialize the particle velocity as vz Rand(xmin xmax)

Step 13 compute the fitness score fz

Step 14 set global best position gbestz as max(f1 f2 f3 fz)) where fz isin globally optimal fitness

Step 15 set local best position pbestz as max(f1 f2 f3 fz)) where fz isin locally optimal fitness

Step 2Compare the current particlersquos fitness score xz in the swarm and its old local best location pbest

z If the current fitness score xz is greaterthan pbest

z then substitute pbestz with xz else retain the xz unchanged

Step 3Compare the current particlersquos fitness score xz in the swarm and its old global best location gbest

z If the current fitness score xz isgreater than gbest

z then substitute gbestz with xz else retain the xz unchanged

Step 4Using equation (1) compute vz+1⟶ updated velocity vectorUsing equation (2) compute xz+1⟶ updated position vectorStep 5Go to step 2 if the stopping criteria does not met else terminate

ALGORITHM 4 Proposed PSO pseudocode

Table 1 Standard objective functions and their optimal

SR Function name Objective function Search space Optimalvalue

F1 Sphere Minf(x) 1113936ni1 x2

i minus512lexi le 512 0F2 Rastrigin Minf(x) 1113936

ni1 [x2

i minus 10 cos(2πx) + 10]i minus512lexi le 512 0

F3 Axis parallel hyper-ellipsoid Minf(x) 1113936

ni1 i middot x2

i minus512lexi le 512 0

F4 Rotated hyper-ellipsoid Minf(x) 1113936ni (1113936

ij1 xj)

2 minus65536lexi le 65536 0

F5 Moved axis parallelhyper-ellipsoid Minf(x) 1113936

ni1 5i middot x2

i minus512lexi le 512 0

F6 Sum of different power Minf(x) 1113936ni1 |xi|

i+1 minus1lexi le 1 0F7 Chung Reynolds Minf(x) (1113936

ni1 x2

i )2 minus100lexi le 100 0F8 Csendes Minf(x) 1113936

ni1 x6

i (2 + sin(1xi)) minus1lexi le 1 0

F9 Schaffer Minf(x) 1113936ni1 05 + (sin2

x2

i + x2i+1

1113969minus 05[1 + 0001(x2

i + x2i+1)]

2) minus100lexi le 100 0

F10 Schumer Steiglitz Minf(x) 1113936ni1 x4

i minus512lexi le 512 0F11 Schwefel Minf(x) 1113936

ni1 xα

i minus100lexi le 100 0

F12 Schwefel 12 Min f(x) 1113936Di (1113936

ij1 xj)

2 minus100lexi le 100 0

F13 Schwefel 221 Minf(x) max|xi|1ltiltD minus100lexi le 100 0

F14 Schwefel 222 Minf(x) 1113936Di1 |xi| + 1113937

ni1 |xi| minus100lexi le 100 0

F15 Schwefel 223 Minf(x) 1113936ni1 x10

i minus10lexi le 10 0

8 Computational Intelligence and Neuroscience

presented From these simulation results it was observedthat the optimization of higher-dimensional functions ismore complex which can be seen from Table 2 where thedimension size is D 20 and D 30

1013 A Comparative Analysis WE-PSO is compared tothe other approaches namely SO-PSO H-PSO and thestandard PSO where the true value of each technique withthe same nature of the problem is provided for comparisonpurposes Table 1 shows the standard benchmark functions

and their parameter settings Table 2 reveals that WE-PSO isbetter than the standard PSO SO-PSO and H-PSO withdimension D-30 and outperforms in convergence )ecomparative analysis can be seen from Table 2 in which thestandard PSO of the smaller dimension size (D 10 20)performs well while the proposed WE-PSO considerablyperforms well in convergence as the dimension size in-creases Hence WE-PSO is appropriate for higher dimen-sions Simulation runs were carried out on HP Compaq withthe Intel Core i7-3200 configuration with a speed of 38GHzwith RAM of 6GB

100

10ndash20

10ndash40

10ndash60

10ndash80

10ndash100

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 6 Mean value of function F1

102

101

100

10ndash1

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 7 Mean value of function F2

100

10ndash50

10ndash100

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 8 Mean value of function F3

10ndash120

10ndash140

10ndash130

10ndash150

10ndash160

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 9 Mean value of function F4

Computational Intelligence and Neuroscience 9

In contrast with the findings of SO-PSO H-PSO andtraditional PSO the experimental results from Table 2 revealthat WE-PSO surpasses the results of the aforementionedvariants of PSO It can be observed that the WE-PSOoutperforms in all functions when compared to othertechniques while the other approaches perform as followsH-PSO performs better on functions F4 F1 and F2 for 20Dbut H-PSO gives overall poor results on higher dimensionsand SO-PSO gives slightly better results on the functions F8F9 and F15 on 10-D but gives worst result on larger di-mensions Figures from Figures 7 to 15 depict that WE-PSOoutperforms in simulation results than other approaches for

solving the dim size D 10 D 20 and D 30 on thestandard benchmark test functions

1014 Statistical Test To objectively verify the consistencyof the findings the Student T-test is performed statisticallyFor the success of the competing algorithms the T value iscomputed using

t X1 minus X2

SD21 n1 minus 1( 11138571113872 1113873 + SD2

2 n2 minus 1( 11138571113872 11138731113872 1113873

1113969 (8)

100

10ndash50

10ndash100

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 10 Mean value of function F5

10ndash60

10ndash100

10ndash80

10ndash120

10ndash140

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 11 Mean value of function F6

Mea

n fit

ness

10ndash100

10ndash150

10ndash200

10 20

DIM

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 12 Mean value of function F7

100

10ndash50

10ndash100

10ndash150

10ndash200

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO3

WE-PSO

Figure 13 Mean value of function F8

10 Computational Intelligence and Neuroscience

T value can be positive or negative in the above equationwhere X1 and X2 reflect the mean value of the first andsecond samples )e sample size is referred to as n1 and n2for both samples )e standard deviations for both samplesare SD2

1 and SD22 Positive and negative values indicate that

WE-PSO outperforms other approaches Studentrsquos T-testresults are presented in Table 3

11 Experiments for Data Classification

A comparative analysis on the real-world benchmark datasetproblem is evaluated for the training of neural networks tovalidate the efficiency of the WE-PSO Using nine

benchmark datasets (Iris Diabetes Heart Wine SeedVertebral Blood Tissue Horse and Mammography) fromthe world-famous UCI machine-learning repository weconducted experiments Training weights are initializedrandomly within the interval [minus50 50] Feedforward neuralnetwork accuracy is tested in the form of root mean squarederror (RMSE) )e features of the datasets that are used canbe seen in Table 4

111 Discussion Backpropagation algorithms using stan-dard PSO SO-PSO H-PSO and WE-PSO are trained in themultilayer feedforward neural network Comparison of

Mea

n fit

ness

100

10 20

DIM

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 14 Mean value of function F9

10ndash130

10ndash140

10ndash150

10 20

DIM

Mea

n fit

ness

30

PSO

S-PSO

SO-PSO

WE-PSO

Figure 15 Mean value of function F10

10ndash150

10ndash160

10ndash170

10ndash190

10ndash180

10ndash200

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 16 Mean value of function F11

Computational Intelligence and Neuroscience 11

these training approaches is tested on real classificationdatasets that are taken from the UCI repository )e cross-validation method is used to assess the efficiency of variousclassification techniques)e k-fold cross-validationmethodis used in this paper for the training of neural networks withthe standard PSO SO-PSO H-PSO and proposed algorithmWE-PSO )e k-fold is used with the value k 10 in the

experiments )e dataset has been fragmented into 10chunks each data chunk comprises the same proportion ofeach class of dataset One chunk is used for the testing phasewhile nine chunks were used for the training phase Ninewell-known real-world datasets which were taken from UCIwere compared with the experimental results of algorithmsstandard PSO SO-PSO H-PSO and WE-PSO are used for

10ndash10

100

10ndash20

10ndash30

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 18 Mean value of function F13

10ndash10

100

1010

10ndash20

10ndash30

10ndash40

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 19 Mean value of function F14

1e ndash 312

1e ndash 200

Mea

n fit

ness

10 2015 25

DIM

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 20 Mean value of function F15

102

101

100

10ndash2

10ndash1

10ndash3

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 17 Mean value of function F12

12 Computational Intelligence and Neuroscience

evaluating the performance After the simulation the resultsshowed that the training of neural networks with the WE-PSO algorithm is better in terms of precision and its effi-ciency is much higher than the traditional approaches

)e WE-PSO algorithm can also be used successfully inthe future for data classification and statistical problems)e findings of classification accuracy are summarized inTable 5

Table 2 Comparative results among the four PSO algorithms on 15 benchmark test functions

F Iter DIMPSO SO-PSO H-PSO WE-PSO

Mean Std dev Mean Std dev Mean Std dev Mean Std dev

11000 10 233Eminus 74 736Eminus 74 274Eminus 76 866Eminus 76 310Eminus 77 979Eminus 77 591Eminus 78 187Eminus 772000 20 102Eminus 84 322Eminus 84 820Eminus 88 259Eminus 87 176Eminus 90 558Eminus 90 495Eminus 90 148Eminus 893000 30 177Eminus 26 532Eminus 26 767Eminus 20 230Eminus 19 413Eminus 32 124Eminus 31 130Eminus 42 390Eminus 42

F2    1000  10   4.97E-01   1.49E+00    4.97E-01   1.49E+00    7.96E-01   2.39E+00    2.98E-01   8.95E-01
F2    2000  20   8.17E+00   2.29E+01    6.47E+00   1.91E+01    3.58E+00   9.79E+00    3.11E+00   1.10E+01
F2    3000  30   1.01E+01   2.95E+01    9.86E+00   2.76E+01    9.45E+00   2.77E+01    7.76E+00   2.20E+01

F3    1000  10   8.70E-80   2.61E-79    1.79E-79   5.37E-79    4.87E-79   1.46E-78    4.40E-81   1.32E-80
F3    2000  20   2.62E+00   7.86E+00    7.86E+00   2.36E+01    2.62E+00   7.86E+00    1.78E-89   5.33E-89
F3    3000  30   2.62E+01   7.86E+01    1.57E+01   4.72E+01    1.05E+01   3.15E+01    3.87E-57   1.16E-56

F4    1000  10   4.46E-147  1.34E-146   3.86E-147  1.16E-146   9.78E-145  2.93E-144   1.24E-150  3.73E-150
F4    2000  20   3.14E-155  9.41E-155   9.27E-154  2.78E-153   2.75E-159  8.24E-159   4.96E-159  1.49E-158
F4    3000  30   1.82E-133  5.45E-133   2.36E-135  7.09E-135   8.53E-130  2.56E-129   2.54E-136  7.62E-136

F5    1000  10   4.35E-79   1.30E-78    8.95E-79   2.69E-78    2.43E-78   7.30E-78    2.20E-80   6.61E-80
F5    2000  20   1.31E+01   3.93E+01    3.93E+01   1.18E+02    1.31E+01   3.93E+01    3.12E-89   9.36E-89
F5    3000  30   1.31E+02   3.93E+02    7.86E+01   2.36E+02    5.24E+01   1.57E+02    1.94E-56   5.81E-56

F6    1000  10   1.70E-61   5.11E-61    4.45E-64   1.33E-63    7.29E-66   2.19E-65    4.62E-66   1.39E-65
F6    2000  20   3.25E-112  9.74E-112   4.39E-112  1.32E-111   5.01E-109  1.50E-108   4.45E-113  1.34E-112
F6    3000  30   7.21E-135  2.16E-134   4.10E-124  1.23E-123   1.51E-134  4.54E-134   6.96E-135  2.09E-134

F7    1000  10   2.96E-157  8.87E-157   2.39E-157  7.18E-157   1.28E-157  3.84E-157   2.47E-163  0.00E+00
F7    2000  20   8.79E-177  0.00E+00    1.77E-184  0.00E+00    3.49E-183  0.00E+00    3.41E-186  0.00E+00
F7    3000  30   1.23E-82   3.68E-82    1.25E-116  3.74E-116   5.99E-130  5.99E-130   4.60E-134  1.38E-133

F8    1000  10   4.39E-200  0.00E+00    1.98E-194  0.00E+00    4.51E-197  0.00E+00    8.99E-201  0.00E+00
F8    2000  20   1.57E-20   4.70E-20    1.04E-93   3.13E-93    1.10E-148  3.30E-148   4.09E-151  1.23E-150
F8    3000  30   1.89E-09   5.68E-09    4.54E-10   1.36E-09    1.14E-08   3.43E-08    1.34E-09   4.03E-09

F9    1000  10   5.49E-01   6.72E-01    1.30E-01   2.02E-01    2.02E-01   5.73E-01    1.42E-01   1.42E-01
F9    2000  20   2.05E+00   1.31E+00    7.83E-01   1.43E+00    6.83E-01   1.29E+00    4.32E-01   1.08E+00
F9    3000  30   1.12E+00   2.39E+00    9.99E-01   2.30E+00    9.56E-01   2.52E+00    9.12E-01   2.23E+00

F10   1000  10   2.23E-138  2.23E-138   2.23E-138  3.15E-137   4.35E-137  1.31E-136   1.10E-139  3.31E-139
F10   2000  20   3.79E-148  1.14E-147   7.87E-149  2.36E-148   4.19E-147  1.26E-146   8.73E-153  2.62E-152
F10   3000  30   4.43E-126  1.33E-125   7.52E-133  2.26E-132   1.57E-128  4.71E-128   1.38E-133  4.14E-133

F11   1000  10   3.75E-187  0.00E+00    1.57E-192  0.00E+00    2.15E-191  0.00E+00    8.99E-198  0.00E+00
F11   2000  20   5.29E-193  0.00E+00    2.53E-195  0.00E+00    8.45E-195  0.00E+00    9.83E-197  0.00E+00
F11   3000  30   4.82E-154  1.44E-153   8.84E-159  2.65E-158   5.49E-168  0.00E+00    5.75E-173  0.00E+00

F12   1000  10   1.13E-01   3.40E-01    1.67E-02   5.02E-02    2.28E-02   6.85E-02    2.89E-03   8.66E-03
F12   2000  20   1.39E+01   4.12E+01    5.03E+00   1.50E+01    2.95E+00   8.84E+00    1.67E+00   5.01E+00
F12   3000  30   7.45E+00   2.23E+01    1.22E+01   3.66E+01    8.74E+00   2.60E+01    4.94E+00   1.48E+01

F13   1000  10   8.04E-26   2.41E-25    8.01E-27   2.40E-26    3.59E-27   1.08E-26    1.41E-27   1.02E-26
F13   2000  20   1.42E-08   4.26E-08    2.64E-11   7.93E-11    3.29E-10   9.86E-10    2.14E-12   6.43E-12
F13   3000  30   6.20E-03   1.86E-02    1.41E-03   4.23E-03    9.36E-03   2.81E-02    1.41E-03   3.83E-03

F14   1000  10   3.62E-38   1.09E-37    3.62E-38   1.09E-37    5.92E-36   1.77E-35    1.95E-38   5.86E-38
F14   2000  20   6.27E-10   1.88E-09    1.38E-09   4.14E-09    7.91E-13   2.37E-12    1.17E-13   3.51E-13
F14   3000  30   2.56E-06   7.67E-06    4.80E+01   1.44E+02    1.34E-06   4.03E-06    4.88E-09   1.46E-08

F15   1000  10   1.10E-294  0.00E+00    3.19E-301  0.00E+00    2.78E-307  0.00E+00    3.21E-308  0.00E+00
F15   2000  20   6.16E-271  0.00E+00    5.09E-276  0.00E+00    3.74E-270  0.00E+00    4.85E-268  0.00E+00
F15   3000  30   3.08E-207  0.00E+00    1.04E-200  0.00E+00    8.12E-209  0.00E+00    3.06E-212  0.00E+00

Note: "Mean" shows the mean value and "Std dev" indicates the standard deviation. The best results among the four PSO algorithms are presented in bold.


Table 3: Results of Student's t-test for all techniques.

F     Iter   DIM    WE-PSO vs PSO          WE-PSO vs SO-PSO       WE-PSO vs H-PSO
                    T-value    Sig         T-value    Sig         T-value    Sig

F1    1000   10     +1.02      WE-PSO      +0.99      WE-PSO      +0.75      WE-PSO
F1    2000   20     +1.00      WE-PSO      +0.48      WE-PSO      -0.83      H-PSO
F1    3000   30     +1.00      WE-PSO      +1.00      WE-PSO      +1.00      WE-PSO

F2    1000   10     +30.71     WE-PSO      +15.67     WE-PSO      +1.21      WE-PSO
F2    2000   20     +8.82      WE-PSO      +107.56    WE-PSO      +11.13     WE-PSO
F2    3000   30     +0.63      WE-PSO      +34.29     WE-PSO      +0.65      WE-PSO

F3    1000   10     +0.99      WE-PSO      +1.00      WE-PSO      +0.83      WE-PSO
F3    2000   20     +1.00      WE-PSO      +1.00      WE-PSO      +263.14    WE-PSO
F3    3000   30     +1.00      WE-PSO      +525.29    WE-PSO      +0.93      WE-PSO

F4    1000   10     +0.19      WE-PSO      +0.99      WE-PSO      +1.00      WE-PSO
F4    2000   20     +0.99      WE-PSO      +0.84      WE-PSO      -0.98      H-PSO
F4    3000   30     +0.86      WE-PSO      +0.26      WE-PSO      +0.97      WE-PSO

F5    1000   10     +0.79      WE-PSO      +0.44      WE-PSO      +0.98      WE-PSO
F5    2000   20     +0.29      WE-PSO      +0.57      WE-PSO      +263.14    WE-PSO
F5    3000   30     +0.06      WE-PSO      +2622.44   WE-PSO      +0.96      WE-PSO

F6    1000   10     +0.80      WE-PSO      +0.98      WE-PSO      +0.17      WE-PSO
F6    2000   20     +0.86      WE-PSO      +0.96      WE-PSO      +0.96      WE-PSO
F6    3000   30     +0.99      WE-PSO      +0.98      WE-PSO      +0.89      WE-PSO

F7    1000   10     +0.90      WE-PSO      +0.95      WE-PSO      +1.00      WE-PSO
F7    2000   20     +1.00      WE-PSO      +1.00      WE-PSO      +1.00      WE-PSO
F7    3000   30     +1.00      WE-PSO      +1.00      WE-PSO      +1.00      WE-PSO

F8    1000   10     +0.75      WE-PSO      +0.98      WE-PSO      +0.55      WE-PSO
F8    2000   20     +483.97    WE-PSO      +1.00      WE-PSO      +0.91      WE-PSO
F8    3000   30     +1.41      WE-PSO      -6.89      SO-PSO      +522.24    WE-PSO

F9    1000   10     +53.67     WE-PSO      -3.00      SO-PSO      +82.30     WE-PSO
F9    2000   20     +84.84     WE-PSO      +33.46     WE-PSO      +16.08     WE-PSO
F9    3000   30     +470.01    WE-PSO      +390.54    WE-PSO      +416.26    WE-PSO

F10   1000   10     +1.00      WE-PSO      +0.84      WE-PSO      +0.67      WE-PSO
F10   2000   20     +1.00      WE-PSO      +0.81      WE-PSO      +0.89      WE-PSO
F10   3000   30     +1.00      WE-PSO      +0.98      WE-PSO      +0.95      WE-PSO

F11   1000   10     +0.97      WE-PSO      +1.92      WE-PSO      +1.00      WE-PSO
F11   2000   20     +1.00      WE-PSO      +1.00      WE-PSO      +1.00      WE-PSO
F11   3000   30     +0.87      WE-PSO      +0.98      WE-PSO      +1.00      WE-PSO

F12   1000   10     +0.91      WE-PSO      +0.58      WE-PSO      +0.27      WE-PSO
F12   2000   20     +2.26      WE-PSO      +1.08      WE-PSO      +0.27      WE-PSO
F12   3000   30     +1.84      WE-PSO      +2.25      WE-PSO      +2.41      WE-PSO

F13   1000   10     +0.98      WE-PSO      +0.48      WE-PSO      +0.84      WE-PSO
F13   2000   20     +0.72      WE-PSO      +0.78      WE-PSO      +0.98      WE-PSO
F13   3000   30     +0.11      WE-PSO      +0.39      WE-PSO      +0.86      WE-PSO

F14   1000   10     +0.57      WE-PSO      +0.15      WE-PSO      +0.82      WE-PSO
F14   2000   20     +0.151     WE-PSO      +1.49      WE-PSO      +1.50      WE-PSO
F14   3000   30     +0.90      WE-PSO      +1.32      WE-PSO      +1.32      WE-PSO

F15   1000   10     +1.00      WE-PSO      +1.00      WE-PSO      +1.00      WE-PSO
F15   2000   20     +1.00      WE-PSO      -0.50      SO-PSO      +0.99      WE-PSO
F15   3000   30     +0.83      WE-PSO      +1.00      WE-PSO      +1.00      WE-PSO
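The T values in Table 3 follow equation (8) from the statistical-test subsection. The snippet below is a minimal Python sketch of that computation, not the authors' code: the number of independent runs per setting is not restated here, so n = 30 in the worked example is an assumption, and the competitor's statistics are passed first so that a positive t favours WE-PSO on these minimization problems.

```python
import math

def t_value(mean_a, std_a, n_a, mean_b, std_b, n_b):
    # Student's t statistic in the unequal-variance form of equation (8):
    # t = (X1 - X2) / sqrt(SD1^2/(n1 - 1) + SD2^2/(n2 - 1))
    return (mean_a - mean_b) / math.sqrt(std_a ** 2 / (n_a - 1) + std_b ** 2 / (n_b - 1))

# Example: F2 at 3000 iterations, D = 30, with means/std devs taken from Table 2.
# Passing PSO first makes a positive t mean "WE-PSO achieved the lower mean error".
n = 30  # assumed number of independent runs per setting
t = t_value(1.01e1, 2.95e1, n, 7.76e0, 2.20e1, n)
print(f"WE-PSO vs PSO on F2 (3000 iter, D=30): t = {t:+.2f}")
```

Because the true run count is not given here, the example will not reproduce the tabulated values exactly; it only illustrates how the sign convention and the Sig column are derived.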

Table 4: Dataset description.

S. no   Dataset        Total instances   Disc. feature   Nature   No. of inputs   No. of classes
1       Iris           150               —               Real     4               3
2       Diabetes       768               —               Real     8               2
3       Heart          270               —               Real     13              2
4       Wine           178               —               Real     13              3
5       Seed           210               —               Real     7               3
6       Vertebral      310               —               Real     6               2
7       Blood tissue   748               —               Real     5               2
8       Horse          368               —               Real     27              2
9       Mammography    961               —               Real     6               2


12. Conclusion

The performance of PSO depends on the initialization of the population. In this work, we initialized the particles of PSO using a novel quasirandom sequence, the WELL sequence, while the velocity and position vectors of the particles are still updated in the usual random fashion. The study highlights the importance of initializing the particles with a quasirandom sequence: the experimental results explicitly show that the WELL sequence is well suited to population initialization because of the evenly spread, yet random, coverage it gives the search space. Moreover, the simulation results have shown that WE-PSO outperforms the standard PSO, SO-PSO, and H-PSO approaches. The techniques were also applied to neural network training, where WE-PSO provides significantly better results than the conventional training algorithms, including the standard PSO, SO-PSO, and H-PSO. The proposed solution yields higher diversity and increases the potential for local search. The experimental results show that our approach converges accurately and avoids entrapment in local optima, and it is clearly better than the traditional PSO and the other initialization approaches for PSO, as is evident in Figure 21.

The use of mutation operators together with the proposed initialization technique may be evaluated on large-scale search spaces in the future. The core objective of this research is universal and carries over to other stochastic metaheuristic algorithms, which will establish our future direction.
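For readers who wish to reproduce the initialization step, the following is a minimal Python sketch. The paper does not publish its implementation, so everything beyond the overall scheme is an assumption: the generator shown is a port of C. Lomont's public-domain WELL512a code (from the WELL family of Panneton, L'Ecuyer, and Matsumoto [45]), and the seeding and velocity scaling are illustrative choices. Only the division of labour (positions drawn from the WELL stream, velocities left to the ordinary pseudorandom generator) follows the text above.

```python
import random

class Well512:
    """WELL512a generator (WELL family of Panneton, L'Ecuyer & Matsumoto [45]);
    port of C. Lomont's public-domain C implementation."""
    MASK = 0xFFFFFFFF

    def __init__(self, seed=1):
        # Fill the 16-word state from the seed with a simple splitter (an assumption;
        # the paper does not describe its seeding scheme).
        x, self.state = seed & self.MASK, []
        for _ in range(16):
            x = (1812433253 * (x ^ (x >> 30)) + 1) & self.MASK
            self.state.append(x)
        self.index = 0

    def next_u32(self):
        s, i, M = self.state, self.index, self.MASK
        a = s[i]
        c = s[(i + 13) & 15]
        b = (a ^ c ^ ((a << 16) & M) ^ ((c << 15) & M)) & M
        c = s[(i + 9) & 15]
        c ^= c >> 11
        a = s[i] = b ^ c
        d = a ^ ((a << 5) & 0xDA442D24)
        i = self.index = (i + 15) & 15
        a = s[i]
        s[i] = (a ^ b ^ d ^ ((a << 2) & M) ^ ((b << 18) & M) ^ ((c << 28) & M)) & M
        return s[i]

    def random(self):
        # Uniform double in [0, 1).
        return self.next_u32() / 2 ** 32

def init_swarm(n_particles, dim, lo, hi, rng):
    """WE-PSO-style initialization: positions come from the WELL stream,
    velocities from the ordinary pseudorandom generator."""
    positions = [[lo + (hi - lo) * rng.random() for _ in range(dim)]
                 for _ in range(n_particles)]
    vmax = 0.1 * (hi - lo)  # velocity scale is an illustrative choice
    velocities = [[random.uniform(-vmax, vmax) for _ in range(dim)]
                  for _ in range(n_particles)]
    return positions, velocities

positions, velocities = init_swarm(n_particles=30, dim=30,
                                   lo=-100.0, hi=100.0, rng=Well512(seed=42))
```

Swapping Well512 for a Sobol or Halton point generator in the same harness would reproduce the SO-PSO and H-PSO baselines, since the three variants differ only in the source of the initial positions.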

Data Availability

The data used to support the findings of this study are available from the corresponding author upon reasonable request.

Disclosure

This work is part of the PhD thesis of the student.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Table 5: Classification accuracy results (training and testing accuracy, %).

S. no   Dataset        Type      BPA             PSO             SO-PSO          H-PSO           WE-PSO
                                 Tr      Ts      Tr      Ts      Tr      Ts      Tr      Ts      Tr      Ts
1       Iris           3-Class   98.2    95.7    99      96.6    98.8    97.3    98.9    96      99.2    98
2       Diabetes       2-Class   86.1    65.3    88.7    69.1    89.3    69.1    88.4    71.6    90.4    74.1
3       Heart          2-Class   78.5    68.3    99.5    72.5    99.13   67.5    99.13   72.5    100     77.5
4       Wine           3-Class   67.3    62.17   74.24   61.11   81.81   66.66   75.75   67.44   75.75   69.6
5       Seed           3-Class   84.2    70.56   97.57   77.77   87.27   84.44   98.18   77.77   98.18   91.11
6       Vertebral      2-Class   91.4    84.95   96.03   92.85   96.42   92.85   96.40   92.85   97.61   94.64
7       Blood tissue   2-Class   76.3    73.47   90.8    78.6    86.94   78.66   83.89   70      84.74   84
8       Horse          2-Class   64.4    57.87   69.02   50      74.19   52      72.90   56      79.35   58
9       Mammography    2-Class   77.36   71.26   80.82   76.66   68.94   63      88      85      97.71   96.66


Figure 21: Classification testing accuracy results (testing accuracy, %, per dataset for BPA, PSO, SO-PSO, H-PSO, and WE-PSO).


References

[1] K. Deb, "Multi-objective optimization," in Search Methodologies, pp. 403–449, Springer, Berlin, Germany, 2014.

[2] R. Vandenberghe, N. Nelissen, E. Salmon et al., "Binary classification of 18F-flutemetamol PET using machine learning: comparison with visual reads and structural MRI," NeuroImage, vol. 64, pp. 517–525, 2013.

[3] V. Ganganwar, "An overview of classification algorithms for imbalanced datasets," International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 4, pp. 42–47, 2012.

[4] J. Kennedy, "Swarm intelligence," in Handbook of Nature-Inspired and Innovative Computing, pp. 187–219, Springer, Berlin, Germany, 2006.

[5] G. Beni and J. Wang, "Swarm intelligence in cellular robotic systems," in Robots and Biological Systems: Towards a New Bionics, pp. 703–712, Springer, Berlin, Germany, 1993.

[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of ICNN'95 – International Conference on Neural Networks, pp. 1942–1948, 1995.

[7] J. Salerno, "Using the particle swarm optimization technique to train a recurrent neural model," in Proceedings of the Ninth IEEE International Conference on Tools with Artificial Intelligence, pp. 45–49, Newport Beach, CA, USA, November 1997.

[8] R. Storn and K. Price, "Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.

[9] P. P. Palmes, T. Hayasaka, and S. Usui, "Mutation-based genetic neural network," IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 587–600, 2005.

[10] W. H. Bangyal, J. Ahmad, and H. T. Rauf, "Optimization of neural network using improved bat algorithm for data classification," Journal of Medical Imaging and Health Informatics, vol. 9, no. 4, pp. 670–681, 2019.

[11] A. Cervantes, I. M. Galvan, and P. Isasi, "AMPSO: a new particle swarm method for nearest neighborhood classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 5, pp. 1082–1091, 2009.

[12] W. H. Bangyal, J. Ahmed, and H. T. Rauf, "A modified bat algorithm with torus walk for solving global optimisation problems," International Journal of Bio-Inspired Computation, vol. 15, no. 1, pp. 1–13, 2020.

[13] C. Grosan, A. Abraham, and M. Nicoara, "Search optimization using hybrid particle sub-swarms and evolutionary algorithms," International Journal of Simulation Systems, Science & Technology, vol. 6, no. 10, pp. 60–79, 2005.

[14] M. Junaid, W. H. Bangyal, and J. Ahmed, "A novel bat algorithm using sobol sequence for the initialization of population," in IEEE 23rd International Multitopic Conference (INMIC), pp. 1–6, Bahawalpur, Pakistan, November 2020.

[15] W. H. Bangyal, J. Ahmed, H. T. Rauf, and S. Pervaiz, "An overview of mutation strategies in bat algorithm," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 9, pp. 523–534, 2018.

[16] D. E. Knuth, Fundamental Algorithms: The Art of Computer Programming, Addison-Wesley, Boston, MA, USA, 1973.

[17] J. E. Gentle, Random Number Generation and Monte Carlo Methods, Springer Science & Business Media, Berlin, Germany, 2006.

[18] N. Q. Uy, N. X. Hoai, R. I. McKay, and P. M. Tuan, "Initialising PSO with randomised low-discrepancy sequences: the comparative results," in Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2007, pp. 1985–1992, Singapore, September 2007.

[19] S. Kimura and K. Matsumura, "Genetic algorithms using low-discrepancy sequences," in Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, ACM, pp. 1341–1346, Washington, DC, USA, June 2005.

[20] R. Brits, A. P. Engelbrecht, and F. Van den Bergh, "A niching particle swarm optimizer," in Proceedings of the 4th Asia-Pacific Conference on Simulated Evolution and Learning, pp. 692–696, Orchid Country Club, Singapore, November 2002.

[21] J. G. van der Corput, "Verteilungsfunktionen I & II," Nederl. Akad. Wetensch. Proc., vol. 38, pp. 1058–1066, 1935.

[22] R. A. Krohling and L. dos Santos Coelho, "PSO-E: particle swarm with exponential distribution," in Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2006, pp. 1428–1433, Vancouver, Canada, July 2006.

[23] R. Thangaraj, M. Pant, and K. Deep, "Initializing PSO with probability distributions and low-discrepancy sequences: the comparative results," in Proceedings of the World Congress on Nature & Biologically Inspired Computing, NaBIC 2009, pp. 1121–1126, IEEE, Coimbatore, India, December 2009.

[24] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986.

[25] K. E. Parsopoulos and M. N. Vrahatis, "Initializing the particle swarm optimizer using the nonlinear simplex method," in Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation, World Scientific and Engineering Academy and Society Press, Stevens Point, WI, USA, 2002.

[26] M. Richards and D. Ventura, "Choosing a starting configuration for particle swarm optimization," in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 2309–2312, Budapest, Hungary, July 2004.

[27] H. Jabeen, Z. Jalil, and A. R. Baig, "Opposition based initialization in particle swarm optimization (O-PSO)," in Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, pp. 2047–2052, Montreal, Quebec, Canada, July 2009.

[28] A. L. Gutierrez, "Comparison of different PSO initialization techniques for high dimensional search space problems: a test with FSS and antenna arrays," in Proceedings of the 5th European Conference on Antennas and Propagation (EUCAP), pp. 965–969, IEEE, Rome, Italy, April 2011.

[29] A. Subasi, "Classification of EMG signals using PSO optimized SVM for diagnosis of neuromuscular disorders," Computers in Biology and Medicine, vol. 43, no. 5, pp. 576–586, 2013.

[30] S. Dehuri, R. Roy, S.-B. Cho, and A. Ghosh, "An improved swarm optimized functional link artificial neural network (ISO-FLANN) for classification," Journal of Systems and Software, vol. 85, no. 6, pp. 1333–1345, 2012.

[31] Z. Liu, P. Zhu, W. Chen, and R.-J. Yang, "Improved particle swarm optimization algorithm using design of experiment and data mining techniques," Structural and Multidisciplinary Optimization, vol. 52, no. 4, pp. 813–826, 2015.

[32] S. Chatterjee, S. Sarkar, S. Hore, N. Dey, A. S. Ashour, and V. E. Balas, "Particle swarm optimization trained neural network for structural failure prediction of multistoried RC buildings," Neural Computing and Applications, vol. 28, no. 8, pp. 2005–2016, 2016.

[33] Y. Xue, T. Tang, and A. X. Liu, "Large-scale feedforward neural network optimization by a self-adaptive strategy and parameter based particle swarm optimization," IEEE Access, vol. 7, pp. 52473–52483, 2019.

[34] F. E. F. Junior and G. G. Yen, "Particle swarm optimization of deep neural networks architectures for image classification," Swarm and Evolutionary Computation, vol. 49, pp. 62–74, 2019.

[35] O. Tarkhaneh and H. Shen, "Training of feedforward neural networks for data classification using hybrid particle swarm optimization, Mantegna Levy flight and neighbourhood search," Heliyon, vol. 5, no. 4, Article ID e01275, 2019.

[36] A. Herliana, T. Arifin, S. Susanti, and A. B. Hikmah, "Feature selection of diabetic retinopathy disease using particle swarm optimization and neural network," in Proceedings of the 2018 6th International Conference on Cyber and IT Service Management (CITSM), pp. 1–4, Parapat, Indonesia, August 2018.

[37] M. K. Sarkaleh and A. Shahbahrami, "Classification of ECG arrhythmias using discrete wavelet transform and neural networks," International Journal of Computer Science, Engineering and Applications, vol. 2, no. 1, pp. 1–13, 2012.

[38] R. J. Schalkoff, Artificial Neural Networks, McGraw-Hill, New York, NY, USA, 1997.

[39] G. P. Zhang, "Neural networks for classification: a survey," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 30, no. 4, pp. 451–462, 2000.

[40] M. Castellani, "Evolutionary generation of neural network classifiers – an empirical comparison," Neurocomputing, vol. 99, pp. 214–229, 2013.

[41] G. E. Hinton, J. L. McClelland, and D. Rumelhart, "Distributed representations," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations, MIT Press, Cambridge, MA, USA, 1986.

[42] M. Matsumoto and T. Nishimura, "Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator," ACM Transactions on Modeling and Computer Simulation (TOMACS), vol. 8, no. 1, pp. 3–30, 1998.

[43] I. M. Sobol', "On the distribution of points in a cube and the approximate evaluation of integrals," Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, vol. 7, no. 4, pp. 784–802, 1967.

[44] J. H. Halton, "Algorithm 247: radical-inverse quasi-random point sequence," Communications of the ACM, vol. 7, no. 12, pp. 701–702, 1964.

[45] F. Panneton, P. L'Ecuyer, and M. Matsumoto, "Improved long-period generators based on linear recurrences modulo 2," ACM Transactions on Mathematical Software (TOMS), vol. 32, no. 1, pp. 1–16, 2006.

Computational Intelligence and Neuroscience 17

Page 9: A New Initialization Approach in Particle Swarm Optimization ...encouragedby other socialanimal colonies,such as bird flocking or fish schooling [4]. In the research of cellular

presented From these simulation results it was observedthat the optimization of higher-dimensional functions ismore complex which can be seen from Table 2 where thedimension size is D 20 and D 30

1013 A Comparative Analysis WE-PSO is compared tothe other approaches namely SO-PSO H-PSO and thestandard PSO where the true value of each technique withthe same nature of the problem is provided for comparisonpurposes Table 1 shows the standard benchmark functions

and their parameter settings Table 2 reveals that WE-PSO isbetter than the standard PSO SO-PSO and H-PSO withdimension D-30 and outperforms in convergence )ecomparative analysis can be seen from Table 2 in which thestandard PSO of the smaller dimension size (D 10 20)performs well while the proposed WE-PSO considerablyperforms well in convergence as the dimension size in-creases Hence WE-PSO is appropriate for higher dimen-sions Simulation runs were carried out on HP Compaq withthe Intel Core i7-3200 configuration with a speed of 38GHzwith RAM of 6GB

100

10ndash20

10ndash40

10ndash60

10ndash80

10ndash100

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 6 Mean value of function F1

102

101

100

10ndash1

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 7 Mean value of function F2

100

10ndash50

10ndash100

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 8 Mean value of function F3

10ndash120

10ndash140

10ndash130

10ndash150

10ndash160

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 9 Mean value of function F4

Computational Intelligence and Neuroscience 9

In contrast with the findings of SO-PSO H-PSO andtraditional PSO the experimental results from Table 2 revealthat WE-PSO surpasses the results of the aforementionedvariants of PSO It can be observed that the WE-PSOoutperforms in all functions when compared to othertechniques while the other approaches perform as followsH-PSO performs better on functions F4 F1 and F2 for 20Dbut H-PSO gives overall poor results on higher dimensionsand SO-PSO gives slightly better results on the functions F8F9 and F15 on 10-D but gives worst result on larger di-mensions Figures from Figures 7 to 15 depict that WE-PSOoutperforms in simulation results than other approaches for

solving the dim size D 10 D 20 and D 30 on thestandard benchmark test functions

1014 Statistical Test To objectively verify the consistencyof the findings the Student T-test is performed statisticallyFor the success of the competing algorithms the T value iscomputed using

t X1 minus X2

SD21 n1 minus 1( 11138571113872 1113873 + SD2

2 n2 minus 1( 11138571113872 11138731113872 1113873

1113969 (8)

100

10ndash50

10ndash100

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 10 Mean value of function F5

10ndash60

10ndash100

10ndash80

10ndash120

10ndash140

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 11 Mean value of function F6

Mea

n fit

ness

10ndash100

10ndash150

10ndash200

10 20

DIM

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 12 Mean value of function F7

100

10ndash50

10ndash100

10ndash150

10ndash200

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO3

WE-PSO

Figure 13 Mean value of function F8

10 Computational Intelligence and Neuroscience

T value can be positive or negative in the above equationwhere X1 and X2 reflect the mean value of the first andsecond samples )e sample size is referred to as n1 and n2for both samples )e standard deviations for both samplesare SD2

1 and SD22 Positive and negative values indicate that

WE-PSO outperforms other approaches Studentrsquos T-testresults are presented in Table 3

11 Experiments for Data Classification

A comparative analysis on the real-world benchmark datasetproblem is evaluated for the training of neural networks tovalidate the efficiency of the WE-PSO Using nine

benchmark datasets (Iris Diabetes Heart Wine SeedVertebral Blood Tissue Horse and Mammography) fromthe world-famous UCI machine-learning repository weconducted experiments Training weights are initializedrandomly within the interval [minus50 50] Feedforward neuralnetwork accuracy is tested in the form of root mean squarederror (RMSE) )e features of the datasets that are used canbe seen in Table 4

111 Discussion Backpropagation algorithms using stan-dard PSO SO-PSO H-PSO and WE-PSO are trained in themultilayer feedforward neural network Comparison of

Mea

n fit

ness

100

10 20

DIM

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 14 Mean value of function F9

10ndash130

10ndash140

10ndash150

10 20

DIM

Mea

n fit

ness

30

PSO

S-PSO

SO-PSO

WE-PSO

Figure 15 Mean value of function F10

10ndash150

10ndash160

10ndash170

10ndash190

10ndash180

10ndash200

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 16 Mean value of function F11

Computational Intelligence and Neuroscience 11

these training approaches is tested on real classificationdatasets that are taken from the UCI repository )e cross-validation method is used to assess the efficiency of variousclassification techniques)e k-fold cross-validationmethodis used in this paper for the training of neural networks withthe standard PSO SO-PSO H-PSO and proposed algorithmWE-PSO )e k-fold is used with the value k 10 in the

experiments )e dataset has been fragmented into 10chunks each data chunk comprises the same proportion ofeach class of dataset One chunk is used for the testing phasewhile nine chunks were used for the training phase Ninewell-known real-world datasets which were taken from UCIwere compared with the experimental results of algorithmsstandard PSO SO-PSO H-PSO and WE-PSO are used for

10ndash10

100

10ndash20

10ndash30

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 18 Mean value of function F13

10ndash10

100

1010

10ndash20

10ndash30

10ndash40

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 19 Mean value of function F14

1e ndash 312

1e ndash 200

Mea

n fit

ness

10 2015 25

DIM

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 20 Mean value of function F15

102

101

100

10ndash2

10ndash1

10ndash3

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 17 Mean value of function F12

12 Computational Intelligence and Neuroscience

evaluating the performance After the simulation the resultsshowed that the training of neural networks with the WE-PSO algorithm is better in terms of precision and its effi-ciency is much higher than the traditional approaches

)e WE-PSO algorithm can also be used successfully inthe future for data classification and statistical problems)e findings of classification accuracy are summarized inTable 5

Table 2 Comparative results among the four PSO algorithms on 15 benchmark test functions

F Iter DIMPSO SO-PSO H-PSO WE-PSO

Mean Std dev Mean Std dev Mean Std dev Mean Std dev

11000 10 233Eminus 74 736Eminus 74 274Eminus 76 866Eminus 76 310Eminus 77 979Eminus 77 591Eminus 78 187Eminus 772000 20 102Eminus 84 322Eminus 84 820Eminus 88 259Eminus 87 176Eminus 90 558Eminus 90 495Eminus 90 148Eminus 893000 30 177Eminus 26 532Eminus 26 767Eminus 20 230Eminus 19 413Eminus 32 124Eminus 31 130Eminus 42 390Eminus 42

21000 10 497Eminus 01 149E+ 00 497Eminus 01 149E+ 00 796Eminus 01 239E+ 00 298Eminus 01 895Eminus 012000 20 817E+ 00 229E+ 01 647E+ 00 191E+ 01 358E+ 00 979E+ 00 311E+ 00 110E+ 013000 30 101E+ 01 295E+ 01 986E+ 00 276E+ 01 945E+ 00 276991 776E+ 00 220E+ 01

31000 10 870Eminus 80 261Eminus 79 179Eminus 79 537Eminus 79 487Eminus 79 146Eminus 78 440Eminus 81 132Eminus 802000 20 262144 786E+ 00 786432 236E+ 01 262144 786E+ 00 178Eminus 89 533Eminus 893000 30 262E+ 01 786E+ 01 157E+ 01 472E+ 01 105E+ 01 314573 387Eminus 57 116Eminus 56

41000 10 446Eminus 147 134Eminus 146 386Eminus 147 116Eminus 146 978Eminus 145 293Eminus 144 124Eminus 150 373Eminus 1502000 20 314Eminus 155 941Eminus 155 927Eminus 154 278Eminus 153 275Eminus 159 824Eminus 159 496Eminus 159 149Eminus 1583000 30 182Eminus 133 545Eminus 133 236Eminus 135 709Eminus 135 853Eminus 130 256Eminus 129 254Eminus 136 762Eminus 136

51000 10 435Eminus 79 130Eminus 78 895Eminus 79 269Eminus 78 243Eminus 78 730Eminus 78 220Eminus 80 661Eminus 802000 20 131E+ 01 393E+ 01 393E+ 01 118E+ 02 131E+ 01 393E+ 01 312Eminus 89 936Eminus 893000 30 131E+ 02 393E+ 02 786E+ 01 236E+ 02 524E+ 01 157E+ 02 194Eminus 56 581Eminus 56

61000 10 170Eminus 61 511Eminus 61 445Eminus 64 133Eminus 63 729Eminus 66 219Eminus 65 462Eminus 66 139Eminus 652000 20 325Eminus 112 974Eminus 112 439Eminus 112 132Eminus 111 501Eminus 109 150Eminus 108 445Eminus 113 134Eminus 1123000 30 721Eminus 135 216Eminus 134 410Eminus 124 123Eminus 123 151Eminus 134 454Eminus 134 696Eminus 135 209Eminus 134

71000 10 296Eminus 157 887Eminus 157 239Eminus 157 718Eminus 157 128Eminus 157 384Eminus 157 247Eminus 163 000E+ 002000 20 879Eminus 177 000E+ 00 177Eminus 184 000E+ 00 349Eminus 183 000E+00 341Eminus186 000E+ 003000 30 123Eminus 82 368Evminus 82 125Eminus 116 374Eminus 116 599Eminus 130 599Eminus 130 460Eminus 134 138Eminus 133

81000 10 439Eminus 200 000E+ 00 198Eminus 194 000E+ 00 451Eminus 197 000E+ 00 899Eminus 201 000E+ 002000 20 157Eminus 20 470Eminus 20 104Eminus 93 313Eminus 93 110Eminus 148 330Eminus 148 409Eminus 151 123Eminus 1503000 30 189Eminus 09 568Eminus 09 454Eminus 10 136Eminus 09 114Eminus 08 343Eminus 08 134Eminus 09 403Eminus 09

91000 10 549Eminus 01 672Eminus 01 130Eminus 01 202Eminus 01 202Eminus01 573Eminus 01 142Eminus 01 142Eminus 012000 20 205E+ 00 131E+ 00 783Eminus 01 143E+ 00 683Eminus 01 129E+ 00 432Eminus 01 108E+ 003000 30 112E+ 00 239E+ 00 999Eminus 01 230E+ 00 956Eminus 01 252E+ 00 912Eminus 01 223E+ 00

101000 10 223Eminus 138 223Eminus 138 223Eminus 138 315Eminus 137 435Eminus 137 131Eminus 136 110Eminus 139 331Eminus 1392000 20 379Eminus 148 114Eminus 147 787Eminus 149 236Eminus 148 419Eminus 147 126Eminus 146 873Eminus 153 262Eminus 1523000 30 443Eminus 126 133Eminus 125 752Eminus 133 226Eminus 132 157Eminus 128 471Eminus 128 138Eminus 133 414Eminus 133

111000 10 375Eminus 187 000E+ 00 157Eminus 192 000E+ 00 215Eminus 191 000E+ 00 899Eminus 198 000E+ 002000 20 529Eminus 193 000E+ 00 253Eminus 195 000E+ 00 845Eminus 195 000E+ 00 983Eminus 197 000E+ 003000 30 482Eminus 154 144Eminus 153 884Eminus 159 265Eminus 158 549Eminus 168 000E+ 00 575Eminus173 000E+ 00

121000 10 113Eminus 01 340Eminus 01 167Eminus 02 502Eminus 02 228Eminus 02 685Eminus 02 289Eminus 03 866Eminus 032000 20 139E+ 01 412E+ 01 503E+ 00 150E+ 01 295E+ 00 884E+ 00 167E+ 00 501E+ 003000 30 745E+ 00 223E+ 01 122E+ 01 366E+ 01 874E+ 00 260E+ 01 494E+ 00 148E+ 01

131000 10 804Eminus 26 241Eminus 25 801Eminus 27 240Eminus 26 359Eminus 27 108Eminus 26 141Eminus 27 102Eminus 262000 20 142Eminus 08 426Eminus 08 264Eminus 11 793Eminus 11 329Eminus 10 986Eminus 10 214Eminus 12 643Eminus 123000 30 620Eminus 03 186Eminus 02 141Eminus 03 423Eminus 03 936Eminus 03 281Eminus 02 141Eminus 03 383Eminus 03

141000 10 362Eminus 38 109Eminus 37 362Eminus 38 109Eminus 37 592Eminus 36 177Eminus 35 195Eminus 38 586Eminus 382000 20 627Eminus 10 188Eminus 09 138Eminus 09 414Eminus 09 791Eminus 13 237Eminus 12 117Eminus13 351Eminus 133000 30 256Eminus 06 767Eminus 06 480E+ 01 144E+ 02 134Eminus 06 403Eminus 06 488Eminus 09 146Eminus 08

151000 10 110Eminus 294 000E+ 00 319Eminus 301 000E+ 00 278Eminus 307 000E+ 00 321Eminus 308 000E+ 002000 20 616Eminus 271 000E+ 00 509Eminus 276 000E+ 00 374Eminus 270 000E+ 00 485Eminus 268 000E+ 003000 30 308Eminus 207 000E+ 00 104Eminus 200 000E+ 00 812Eminus 209 000E+ 00 306Eminus 212 000E+ 00

Note ldquolsquoMeanrdquorsquo shows mean value and ldquoStd devrdquo indicates the standard deviation )e best results among the four PSO algorithms are presented in bold

Computational Intelligence and Neuroscience 13

Table 3 Results of Studentrsquos T-test for all techniques

F Iter DIMWE-PSO vs PSO WE-PSO vs SO-PSO WE-PSO vs H-PSO

T-value Sig T-value Sig T-value Sig

11000 10 +102 WE-PSO +099 WE-PSO +075 WE-PSO2000 20 +100 WE-PSO +048 WE-PSO minus083 H-PSO3000 30 +100 WE-PSO +100 WE-PSO +100 WE-PSO

21000 10 +3071 WE-PSO +1567 WE-PSO +121 WE-PSO2000 20 +882 WE-PSO +10756 WE-PSO +1113 WE-PSO3000 30 +063 WE-PSO +3429 WE-PSO +065 WE-PSO

31000 10 +099 WE-PSO +100 WE-PSO +083 WE-PSO2000 20 +100 WE-PSO +100 WE-PSO +26314 WE-PSO3000 30 +100 WE-PSO +52529 WE-PSO +093 WE-PSO

41000 10 +019 WE-PSO +099 WE-PSO +100 WE-PSO2000 20 +099 WE-PSO +084 WE-PSO minus098 H-PSO3000 30 +086 WE-PSO +026 WE-PSO +097 WE-PSO

51000 10 +079 WE-PSO +044 WE-PSO +098 WE-PSO2000 20 +029 WE-PSO +057 WE-PSO +26314 WE-PSO3000 30 +006 WE-PSO +262244 WE-PSO +096 WE-PSO

61000 10 +080 WE-PSO +098 WE-PSO +017 WE-PSO2000 20 +086 WE-PSO +096 WE-PSO +096 WE-PSO3000 30 +099 WE-PSO +098 WE-PSO +089 WE-PSO

71000 10 +090 WE-PSO +095 WE-PSO +100 WE-PSO2000 20 +100 WE-PSO +100 WE-PSO +100 WE-PSO3000 30 +100 WE-PSO +100 WE-PSO +100 WE-PSO

81000 10 +075 WE-PSO +098 WE-PSO +055 WE-PSO2000 20 +48397 WE-PSO +100 WE-PSO +091 WE-PSO3000 30 +141 WE-PSO minus689 SO-PSO +52224 WE-PSO

91000 10 +5367 WE-PSO minus300 SO-PSO +8230 WE-PSO2000 20 +8484 WE-PSO +3346 WE-PSO +1608 WE-PSO3000 30 +47001 WE-PSO +39054 WE-PSO +41626 WE-PSO

101000 10 +100 WE-PSO +084 WE-PSO +067 WE-PSO2000 20 +100 WE-PSO +081 WE-PSO +089 WE-PSO3000 30 +100 WE-PSO +098 WE-PSO +095 WE-PSO

111000 10 +097 WE-PSO +192 WE-PSO +100 WE-PSO2000 20 +100 WE-PSO +100 WE-PSO +100 WE-PSO3000 30 +087 WE-PSO +098 WE-PSO +100 WE-PSO

121000 10 +091 WE-PSO +058 WE-PSO +027 WE-PSO2000 20 +226 WE-PSO +108 WE-PSO +027 WE-PSO3000 30 +184 WE-PSO +225 WE-PSO +241 WE-PSO

131000 10 +098 WE-PSO +048 WE-PSO +084 WE-PSO2000 20 +072 WE-PSO +078 WE-PSO +098 WE-PSO3000 30 +011 WE-PSO +039 WE-PSO +086 WE-PSO

141000 10 +057 WE-PSO +015 WE-PSO +082 WE-PSO2000 20 +0151 WE-PSO +149 WE-PSO +150 WE-PSO3000 30 +090 WE-PSO +132 WE-PSO +132 WE-PSO

151000 10 +100 WE-PSO +100 WE-PSO +100 WE-PSO2000 20 +100 WE-PSO minus050 SO-PSO +099 WE-PSO3000 30 +083 WE-PSO +100 WE-PSO +100 WE-PSO

Table 4 Dataset description

S no Datasets Number of total units Disc feature Nature No of inputs No of classes1 Iris 150 mdash Real 4 32 Diabetes 768 mdash Real 8 23 Heart 270 mdash Real 13 24 Wine 178 mdash Real 13 35 Seed 210 mdash Real 7 36 Vertebral 310 mdash Real 6 27 Blood tissue 748 mdash Real 5 28 Horse 368 mdash Real 27 29 Mammography 961 mdash Real 6 2

14 Computational Intelligence and Neuroscience

12 Conclusion

)e performance of PSO depends on the initialization ofthe population In our work we have initialized theparticles of PSO by using a novel quasirandom sequencecalled the WELL sequence However the velocity andposition vector of particles are modified in a randomsequence fashion )e importance of initializing theparticles by using a quasirandom sequence is highlightedin this study )e experimental results explicitly statethat the WELL sequence is optimal for the populationinitialization due to its random nature Moreover thesimulation results have shown that WE-PSO outper-forms the PSO S-PSO and H-PSO approaches )etechniques are also applied to neural network trainingand provide significantly better results than conventionaltraining algorithms including standard PSO S-PSO andH-PSO approaches respectively )e solution provideshigher diversity and increases the potential to searchlocally )e experimental results depict that our ap-proach has excellent accuracy of convergence and pre-vents the local optima Our technique is much better

when it is compared to the traditional PSO and otherinitialization approaches for PSO as evident in Figure 21)e use of mutation operators with the initializationtechnique may be evaluated on large-scale search spacesin the future )e core objective of this research isuniversal but relevant to the other stochastic-basedmetaheuristic algorithm which will establish our futuredirection

Data Availability

)e data used to support the findings of this study are availablefrom the corresponding author upon reasonable request

Disclosure

)is work is part of the PhD thesis of the student

Conflicts of Interest

)e authors declare that there are no conflicts of interestregarding the publication of this paper

Table 5 Classification accuracy results

Sno Datasets Type

BPA PSO SO-PSO H-PSO WE-PSOTr

acc ()Ts acc()

Tr acc()

Ts acc()

Tr acc()

Ts acc()

Tr acc()

Ts acc()

Tr acc()

Ts acc()

1 Iris 3-Class 982 957 99 966 988 973 989 96 992 982 Diabetes 2-Class 861 653 887 691 893 691 884 716 904 7413 Heart 2-Class 785 683 995 725 9913 675 9913 725 100 7754 Wine 3-Class 673 6217 7424 6111 8181 6666 7575 6744 7575 6965 Seed 3-Class 842 7056 9757 7777 8727 8444 9818 7777 9818 91116 Vertebral 2-Class 914 8495 9603 9285 9642 9285 9640 9285 9761 94647 Blood tissue 2-Class 763 7347 908 786 8694 7866 8389 70 8474 848 Horse 2-Class 644 5787 6902 50 7419 52 7290 56 7935 589 Mammography 2-Class 7736 7126 8082 7666 6894 63 88 85 9771 9666

000

Iris

Dia

bete

s

Hea

rt

Win

e

Seed

Vert

ebra

l

Hor

se

Mam

mog

raph

y

Bloo

d tis

sue

2000

4000

6000()

8000

10000

12000

BPA

PSO

SO-PSO

H-PSO

WE-PSO

Figure 21 Classification testing accuracy results

Computational Intelligence and Neuroscience 15

References

[1] K Deb ldquoMulti-objective optimizationrdquo in Search Method-ologies pp 403ndash449 Springer Berlin Germany 2014

[2] R Vandenberghe N Nelissen E Salmon et al ldquoBinaryclassification of 18F-flutemetamol PET using machinelearning comparison with visual reads and structural MRIrdquoNeuroImage vol 64 pp 517ndash525 2013

[3] V Ganganwar ldquoAn overview of classification algorithms forimbalanced datasetsrdquo International Journal of EmergingTechnology and Advanced Engineering vol 2 no 4 pp 42ndash472012

[4] J Kennedy ldquoSwarm intelligencerdquo in Handbook of Nature-Inspired and Innovative Computing pp 187ndash219 SpringerBerlin Germany 2006

[5] G Beni and J Wang ldquoSwarm intelligence in cellular roboticsystemsrdquo in Robots and Biological Systems Towards a NewBionics pp 703ndash712 Springer Berlin Germany 1993

[6] J Kennedy and R C Eberhart ldquoParticle swarm optimizationrdquoProceedings of ICNNrsquo95mdashInternational Conference on NeuralNetworks pp 1942ndash1948 1995

[7] J Salerno ldquoUsing the particle swarm optimization techniqueto train a recurrent neural modelrdquo in Proceedings of the NinthIEEE International Conference on Tools with Artificial Intel-ligence pp 45ndash49 Newport Beach CA USA November 1997

[8] R Storn and K Price ldquoDifferential evolutionndasha simple andefficient heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4pp 341ndash359 1997

[9] P P Palmes T Hayasaka and S Usui ldquoMutation-basedgenetic neural networkrdquo IEEE Transactions on Neural Net-works vol 16 no 3 pp 587ndash600 2005

[10] W H Bangyal J Ahmad and H T Rauf ldquoOptimization ofneural network using improved bat algorithm for data clas-sificationrdquo Journal of Medical Imaging and Health Infor-matics vol 9 no 4 pp 670ndash681 2019

[11] A Cervantes I M Galvan and P Isasi ldquoAMPSO a newparticle swarm method for nearest neighborhood classifica-tionrdquo IEEE Transactions on Systems Man and CyberneticsPart B (Cybernetics) vol 39 no 5 pp 1082ndash1091 2009

[12] W H Bangyal J Ahmed and H T Rauf ldquoA modified batalgorithm with torus walk for solving global optimisationproblemsrdquo International Journal of Bio-Inspired Computa-tion vol 15 no 1 pp 1ndash13 2020

[13] C Grosan A Abraham and M Nicoara ldquoSearch optimi-zation using hybrid particle sub-swarms and evolutionaryalgorithmsrdquo International Journal of Simulation SystemsScience amp Technology vol 6 no 10 pp 60ndash79 2005

[14] M Junaid W H Bangyal and J Ahmed ldquoA novel bat al-gorithm using sobol sequence for the initialization of pop-ulationrdquo in IEEE 23rd International Multitopic Conference(INMIC) pp 1ndash6 Bahawalpur Pakistan November 2020

[15] W H Bangyal J Ahmed H T Rauf and S Pervaiz ldquoAnoverview of mutation strategies in bat algorithmrdquo Interna-tional Journal of Advanced Computer Science and Applications(IJACSA) vol 9 pp 523ndash534 2018

[16] D E Knuth Fundamental Algorithms Be Art of ComputerProgramming Addison-Wesley Boston MA USA 1973

[17] J E Gentle Random Number Generation and Monte CarloMethods Springer Science amp Business Media Berlin Ger-many 2006

[18] N Q Uy N X Hoai R I McKay and P M Tuan ldquoIniti-alising PSO with randomised low-discrepancy sequences thecomparative resultsrdquo in Proceedings of the IEEE Congress on

Evolutionary Computation CEC 2007 pp 1985ndash1992 Sin-gapore September 2007

[19] S Kimura and K Matsumura ldquoGenetic algorithms using low-discrepancy sequencesrdquo in Proceedings of the 7th AnnualConference on Genetic and Evolutionary Computation ACMpp 1341ndash1346 Washington DC USA June 2005

[20] R Brits A P Engelbrecht and F Van den Bergh ldquoA nichingparticle swarm optimizerrdquo in Proceedings of the 4th Asia-PacificConference on Simulated Evolution and Learning pp 692ndash696Orchid Country Club Singapore November 2002

[21] J Ander Coput ldquoVerteilungsfunktionen I amp IIrdquoNederl AkadWetensch Procvol 38 pp 1058ndash1066 1935

[22] R A Krohling and L dos Santos Coelho ldquoPSO-E particleswarm with exponential distributionrdquo in Proceedings of theIEEE Congress on Evolutionary Computation CEC 2006pp 1428ndash1433 Vancouver Canada July 2006

[23] R )angaraj M Pant and K Deep ldquoInitializing pso withprobability distributions and low-discrepancy sequencesthe comparative resultsrdquo in Proceedings of the WorldCongress on Nature amp Biologically Inspired ComputingNaBIC 2009 pp 1121ndash1126 IEEE Coimbatore IndiaDecember 2009

[24] D E Rumelhart G E Hinton and R J Williams ldquoLearningrepresentations by back-propagating errorsrdquoNature vol 323no 6088 pp 533ndash536 1986

[25] K E Parsopoulos andM N Vrahatis ldquoInitializing the particleswarm optimizer using the nonlinear simplex methodrdquo Ad-vances in Intelligent Systems Fuzzy Systems EvolutionaryComputation World Scientific and Engineering Academy andSociety Press Stevens Point WI USA 2002

[26] M Richards and D Ventura ldquoChoosing a starting configu-ration for particle swarm optimizationrdquo in Proceedings of theIEEE International Joint Conference on Neural Networkspp 2309ndash2312 Budapest Hungary July 2004

[27] H Jabeen Z Jalil and A R Baig ldquoOpposition based ini-tialization in particle swarm optimization (O-PSO)rdquo inProceedings of the 11th Annual Conference Companion onGenetic and Evolutionary Computation Conference LateBreaking Papers pp 2047ndash2052 Montreal Quebec CanadaJuly 2009

[28] A L Gutierrez ldquoComparison of different pso initializationtechniques for high dimensional search space problems a testwith fss and antenna arraysrdquo in Proceedings of the 5th Eu-ropean Conference on Antennas and Propagation (EUCAP)pp 965ndash969 IEEE Rome Italy April 2011

[29] A Subasi ldquoClassification of EMG signals using PSO opti-mized SVM for diagnosis of neuromuscular disordersrdquoComputers in Biology and Medicine vol 43 no 5 pp 576ndash586 2013

[30] S Dehuri R Roy S-B Cho and A Ghosh ldquoAn improvedswarm optimized functional link artificial neural network(ISO-FLANN) for classificationrdquo Journal of Systems andSoftware vol 85 no 6 pp 1333ndash1345 2012

[31] Z Liu P Zhu W Chen and R-J Yang ldquoImproved particleswarm optimization algorithm using design of experimentand data mining techniquesrdquo Structural andMultidisciplinaryOptimization vol 52 no 4 pp 813ndash826 2015

[32] S Chatterjee S Sarkar S Hore N Dey A S Ashour andV E Balas ldquoParticle swarm optimization trained neuralnetwork for structural failure prediction of multistoried RCbuildingsrdquo Neural Computing and Applications vol 28 no 8pp 2005ndash2016 2016

[33] Y Xue T Tang and A X Liu ldquoLarge-scale feedforwardneural network optimization by a self-adaptive strategy and

16 Computational Intelligence and Neuroscience

parameter based particle swarm optimizationrdquo IEEE Accessvol 7 pp 52473ndash52483 2019

[34] F E F Junior and G G Yen ldquoParticle swarm optimization ofdeep neural networks architectures for image classificationrdquoSwarm and Evolutionary Computations vol 49 pp 62ndash742019

[35] O Tarkhaneh and H Shen ldquoTraining of feedforward neuralnetworks for data classification using hybrid particle swarmoptimization Mantegna Levy flight and neighbourhoodsearchrdquo Heliyon vol 5 no 4 Article ID e01275 2019

[36] A Herliana T Arifin S Susanti and A B Hikmah ldquoFeatureselection of diabetic retinopathy disease using particle swarmoptimization and neural networkrdquo in Proceedings of the 20186th International Conference on Cyber and IT Service Man-agement (CITSM) pp 1ndash4 Parapat Indonesia August 2018

[37] M K Sarkaleh and A Shahbahrami ldquoClassification of ECGarrhythmias using discrete wavelet transform and neuralnetworksrdquo International Journal of Computer Science Engi-neering and Applications vol 2 no 1 pp 1ndash13 2012

[38] R J Schalkoff Artificial Neural Networks McGraw-Hill NewYork NY USA 1997

[39] G P Zhang ldquoNeural networks for classification a surveyrdquoIEEE Transactions on Systems Man and Cybernetics Part C(Applications and Reviews) vol 30 no 4 pp 451ndash462 2000

[40] M Castellani ldquoEvolutionary generation of neural networkclassifiers-An empirical comparisonrdquo Neurocomputingvol 99 pp 214ndash229 2013

[41] G E Hinton J L Mcclelland and D Rumelhart ldquoDistributedrepresentationsrdquo Parallel Distributed Processingexplorationsin the Microstructure of Cognition Foundation MIT PressCambridge MA USA 1986

[42] M Matsumoto and T Nishimura ldquoMersenne twister a 623-dimensionally equidistributed uniform pseudo-randomnumber generatorrdquo ACM Transactions on Modeling andComputer Simulation (TOMACS) vol 8 no 1 pp 3ndash30 1995

[43] I Y M Sobolrsquo ldquoOn the distribution of points in a cube and theapproximate evaluation of integralsrdquo Zhurnal VychislitelrsquonoiMatematiki I Matematicheskoi Fiziki vol 7 no 4 pp 784ndash802 1967

[44] J H Halton ldquoAlgorithm 247 radical-inverse quasi-randompoint sequencerdquo Communications of the ACM vol 7 no 12pp 701-702 1964

[45] F Panneton P Lrsquoecuyer and M Matsumoto ldquoImprovedlong-period generators based on linear recurrences modulo2rdquo ACM Transactions on Mathematical Software (TOMS)vol 32 pp 11ndash16 2006

Computational Intelligence and Neuroscience 17

Page 10: A New Initialization Approach in Particle Swarm Optimization ...encouragedby other socialanimal colonies,such as bird flocking or fish schooling [4]. In the research of cellular

In contrast with the findings of SO-PSO H-PSO andtraditional PSO the experimental results from Table 2 revealthat WE-PSO surpasses the results of the aforementionedvariants of PSO It can be observed that the WE-PSOoutperforms in all functions when compared to othertechniques while the other approaches perform as followsH-PSO performs better on functions F4 F1 and F2 for 20Dbut H-PSO gives overall poor results on higher dimensionsand SO-PSO gives slightly better results on the functions F8F9 and F15 on 10-D but gives worst result on larger di-mensions Figures from Figures 7 to 15 depict that WE-PSOoutperforms in simulation results than other approaches for

solving the dim size D 10 D 20 and D 30 on thestandard benchmark test functions

1014 Statistical Test To objectively verify the consistencyof the findings the Student T-test is performed statisticallyFor the success of the competing algorithms the T value iscomputed using

t X1 minus X2

SD21 n1 minus 1( 11138571113872 1113873 + SD2

2 n2 minus 1( 11138571113872 11138731113872 1113873

1113969 (8)

100

10ndash50

10ndash100

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 10 Mean value of function F5

10ndash60

10ndash100

10ndash80

10ndash120

10ndash140

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 11 Mean value of function F6

Mea

n fit

ness

10ndash100

10ndash150

10ndash200

10 20

DIM

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 12 Mean value of function F7

100

10ndash50

10ndash100

10ndash150

10ndash200

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO3

WE-PSO

Figure 13 Mean value of function F8

10 Computational Intelligence and Neuroscience

T value can be positive or negative in the above equationwhere X1 and X2 reflect the mean value of the first andsecond samples )e sample size is referred to as n1 and n2for both samples )e standard deviations for both samplesare SD2

1 and SD22 Positive and negative values indicate that

WE-PSO outperforms other approaches Studentrsquos T-testresults are presented in Table 3

11 Experiments for Data Classification

A comparative analysis on the real-world benchmark datasetproblem is evaluated for the training of neural networks tovalidate the efficiency of the WE-PSO Using nine

benchmark datasets (Iris Diabetes Heart Wine SeedVertebral Blood Tissue Horse and Mammography) fromthe world-famous UCI machine-learning repository weconducted experiments Training weights are initializedrandomly within the interval [minus50 50] Feedforward neuralnetwork accuracy is tested in the form of root mean squarederror (RMSE) )e features of the datasets that are used canbe seen in Table 4

111 Discussion Backpropagation algorithms using stan-dard PSO SO-PSO H-PSO and WE-PSO are trained in themultilayer feedforward neural network Comparison of

Mea

n fit

ness

100

10 20

DIM

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 14 Mean value of function F9

10ndash130

10ndash140

10ndash150

10 20

DIM

Mea

n fit

ness

30

PSO

S-PSO

SO-PSO

WE-PSO

Figure 15 Mean value of function F10

10ndash150

10ndash160

10ndash170

10ndash190

10ndash180

10ndash200

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 16 Mean value of function F11

Computational Intelligence and Neuroscience 11

these training approaches is tested on real classificationdatasets that are taken from the UCI repository )e cross-validation method is used to assess the efficiency of variousclassification techniques)e k-fold cross-validationmethodis used in this paper for the training of neural networks withthe standard PSO SO-PSO H-PSO and proposed algorithmWE-PSO )e k-fold is used with the value k 10 in the

experiments )e dataset has been fragmented into 10chunks each data chunk comprises the same proportion ofeach class of dataset One chunk is used for the testing phasewhile nine chunks were used for the training phase Ninewell-known real-world datasets which were taken from UCIwere compared with the experimental results of algorithmsstandard PSO SO-PSO H-PSO and WE-PSO are used for

10ndash10

100

10ndash20

10ndash30

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 18 Mean value of function F13

10ndash10

100

1010

10ndash20

10ndash30

10ndash40

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 19 Mean value of function F14

1e ndash 312

1e ndash 200

Mea

n fit

ness

10 2015 25

DIM

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 20 Mean value of function F15

102

101

100

10ndash2

10ndash1

10ndash3

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 17 Mean value of function F12

12 Computational Intelligence and Neuroscience

evaluating the performance After the simulation the resultsshowed that the training of neural networks with the WE-PSO algorithm is better in terms of precision and its effi-ciency is much higher than the traditional approaches

)e WE-PSO algorithm can also be used successfully inthe future for data classification and statistical problems)e findings of classification accuracy are summarized inTable 5

Table 2 Comparative results among the four PSO algorithms on 15 benchmark test functions

F Iter DIMPSO SO-PSO H-PSO WE-PSO

Mean Std dev Mean Std dev Mean Std dev Mean Std dev

11000 10 233Eminus 74 736Eminus 74 274Eminus 76 866Eminus 76 310Eminus 77 979Eminus 77 591Eminus 78 187Eminus 772000 20 102Eminus 84 322Eminus 84 820Eminus 88 259Eminus 87 176Eminus 90 558Eminus 90 495Eminus 90 148Eminus 893000 30 177Eminus 26 532Eminus 26 767Eminus 20 230Eminus 19 413Eminus 32 124Eminus 31 130Eminus 42 390Eminus 42

21000 10 497Eminus 01 149E+ 00 497Eminus 01 149E+ 00 796Eminus 01 239E+ 00 298Eminus 01 895Eminus 012000 20 817E+ 00 229E+ 01 647E+ 00 191E+ 01 358E+ 00 979E+ 00 311E+ 00 110E+ 013000 30 101E+ 01 295E+ 01 986E+ 00 276E+ 01 945E+ 00 276991 776E+ 00 220E+ 01

31000 10 870Eminus 80 261Eminus 79 179Eminus 79 537Eminus 79 487Eminus 79 146Eminus 78 440Eminus 81 132Eminus 802000 20 262144 786E+ 00 786432 236E+ 01 262144 786E+ 00 178Eminus 89 533Eminus 893000 30 262E+ 01 786E+ 01 157E+ 01 472E+ 01 105E+ 01 314573 387Eminus 57 116Eminus 56

41000 10 446Eminus 147 134Eminus 146 386Eminus 147 116Eminus 146 978Eminus 145 293Eminus 144 124Eminus 150 373Eminus 1502000 20 314Eminus 155 941Eminus 155 927Eminus 154 278Eminus 153 275Eminus 159 824Eminus 159 496Eminus 159 149Eminus 1583000 30 182Eminus 133 545Eminus 133 236Eminus 135 709Eminus 135 853Eminus 130 256Eminus 129 254Eminus 136 762Eminus 136

51000 10 435Eminus 79 130Eminus 78 895Eminus 79 269Eminus 78 243Eminus 78 730Eminus 78 220Eminus 80 661Eminus 802000 20 131E+ 01 393E+ 01 393E+ 01 118E+ 02 131E+ 01 393E+ 01 312Eminus 89 936Eminus 893000 30 131E+ 02 393E+ 02 786E+ 01 236E+ 02 524E+ 01 157E+ 02 194Eminus 56 581Eminus 56

61000 10 170Eminus 61 511Eminus 61 445Eminus 64 133Eminus 63 729Eminus 66 219Eminus 65 462Eminus 66 139Eminus 652000 20 325Eminus 112 974Eminus 112 439Eminus 112 132Eminus 111 501Eminus 109 150Eminus 108 445Eminus 113 134Eminus 1123000 30 721Eminus 135 216Eminus 134 410Eminus 124 123Eminus 123 151Eminus 134 454Eminus 134 696Eminus 135 209Eminus 134

71000 10 296Eminus 157 887Eminus 157 239Eminus 157 718Eminus 157 128Eminus 157 384Eminus 157 247Eminus 163 000E+ 002000 20 879Eminus 177 000E+ 00 177Eminus 184 000E+ 00 349Eminus 183 000E+00 341Eminus186 000E+ 003000 30 123Eminus 82 368Evminus 82 125Eminus 116 374Eminus 116 599Eminus 130 599Eminus 130 460Eminus 134 138Eminus 133

81000 10 439Eminus 200 000E+ 00 198Eminus 194 000E+ 00 451Eminus 197 000E+ 00 899Eminus 201 000E+ 002000 20 157Eminus 20 470Eminus 20 104Eminus 93 313Eminus 93 110Eminus 148 330Eminus 148 409Eminus 151 123Eminus 1503000 30 189Eminus 09 568Eminus 09 454Eminus 10 136Eminus 09 114Eminus 08 343Eminus 08 134Eminus 09 403Eminus 09

91000 10 549Eminus 01 672Eminus 01 130Eminus 01 202Eminus 01 202Eminus01 573Eminus 01 142Eminus 01 142Eminus 012000 20 205E+ 00 131E+ 00 783Eminus 01 143E+ 00 683Eminus 01 129E+ 00 432Eminus 01 108E+ 003000 30 112E+ 00 239E+ 00 999Eminus 01 230E+ 00 956Eminus 01 252E+ 00 912Eminus 01 223E+ 00

101000 10 223Eminus 138 223Eminus 138 223Eminus 138 315Eminus 137 435Eminus 137 131Eminus 136 110Eminus 139 331Eminus 1392000 20 379Eminus 148 114Eminus 147 787Eminus 149 236Eminus 148 419Eminus 147 126Eminus 146 873Eminus 153 262Eminus 1523000 30 443Eminus 126 133Eminus 125 752Eminus 133 226Eminus 132 157Eminus 128 471Eminus 128 138Eminus 133 414Eminus 133

111000 10 375Eminus 187 000E+ 00 157Eminus 192 000E+ 00 215Eminus 191 000E+ 00 899Eminus 198 000E+ 002000 20 529Eminus 193 000E+ 00 253Eminus 195 000E+ 00 845Eminus 195 000E+ 00 983Eminus 197 000E+ 003000 30 482Eminus 154 144Eminus 153 884Eminus 159 265Eminus 158 549Eminus 168 000E+ 00 575Eminus173 000E+ 00

121000 10 113Eminus 01 340Eminus 01 167Eminus 02 502Eminus 02 228Eminus 02 685Eminus 02 289Eminus 03 866Eminus 032000 20 139E+ 01 412E+ 01 503E+ 00 150E+ 01 295E+ 00 884E+ 00 167E+ 00 501E+ 003000 30 745E+ 00 223E+ 01 122E+ 01 366E+ 01 874E+ 00 260E+ 01 494E+ 00 148E+ 01

131000 10 804Eminus 26 241Eminus 25 801Eminus 27 240Eminus 26 359Eminus 27 108Eminus 26 141Eminus 27 102Eminus 262000 20 142Eminus 08 426Eminus 08 264Eminus 11 793Eminus 11 329Eminus 10 986Eminus 10 214Eminus 12 643Eminus 123000 30 620Eminus 03 186Eminus 02 141Eminus 03 423Eminus 03 936Eminus 03 281Eminus 02 141Eminus 03 383Eminus 03

141000 10 362Eminus 38 109Eminus 37 362Eminus 38 109Eminus 37 592Eminus 36 177Eminus 35 195Eminus 38 586Eminus 382000 20 627Eminus 10 188Eminus 09 138Eminus 09 414Eminus 09 791Eminus 13 237Eminus 12 117Eminus13 351Eminus 133000 30 256Eminus 06 767Eminus 06 480E+ 01 144E+ 02 134Eminus 06 403Eminus 06 488Eminus 09 146Eminus 08

151000 10 110Eminus 294 000E+ 00 319Eminus 301 000E+ 00 278Eminus 307 000E+ 00 321Eminus 308 000E+ 002000 20 616Eminus 271 000E+ 00 509Eminus 276 000E+ 00 374Eminus 270 000E+ 00 485Eminus 268 000E+ 003000 30 308Eminus 207 000E+ 00 104Eminus 200 000E+ 00 812Eminus 209 000E+ 00 306Eminus 212 000E+ 00

Note ldquolsquoMeanrdquorsquo shows mean value and ldquoStd devrdquo indicates the standard deviation )e best results among the four PSO algorithms are presented in bold

Computational Intelligence and Neuroscience 13

Table 3 Results of Studentrsquos T-test for all techniques

F Iter DIMWE-PSO vs PSO WE-PSO vs SO-PSO WE-PSO vs H-PSO

T-value Sig T-value Sig T-value Sig

11000 10 +102 WE-PSO +099 WE-PSO +075 WE-PSO2000 20 +100 WE-PSO +048 WE-PSO minus083 H-PSO3000 30 +100 WE-PSO +100 WE-PSO +100 WE-PSO

21000 10 +3071 WE-PSO +1567 WE-PSO +121 WE-PSO2000 20 +882 WE-PSO +10756 WE-PSO +1113 WE-PSO3000 30 +063 WE-PSO +3429 WE-PSO +065 WE-PSO

31000 10 +099 WE-PSO +100 WE-PSO +083 WE-PSO2000 20 +100 WE-PSO +100 WE-PSO +26314 WE-PSO3000 30 +100 WE-PSO +52529 WE-PSO +093 WE-PSO

41000 10 +019 WE-PSO +099 WE-PSO +100 WE-PSO2000 20 +099 WE-PSO +084 WE-PSO minus098 H-PSO3000 30 +086 WE-PSO +026 WE-PSO +097 WE-PSO

51000 10 +079 WE-PSO +044 WE-PSO +098 WE-PSO2000 20 +029 WE-PSO +057 WE-PSO +26314 WE-PSO3000 30 +006 WE-PSO +262244 WE-PSO +096 WE-PSO

61000 10 +080 WE-PSO +098 WE-PSO +017 WE-PSO2000 20 +086 WE-PSO +096 WE-PSO +096 WE-PSO3000 30 +099 WE-PSO +098 WE-PSO +089 WE-PSO

71000 10 +090 WE-PSO +095 WE-PSO +100 WE-PSO2000 20 +100 WE-PSO +100 WE-PSO +100 WE-PSO3000 30 +100 WE-PSO +100 WE-PSO +100 WE-PSO

81000 10 +075 WE-PSO +098 WE-PSO +055 WE-PSO2000 20 +48397 WE-PSO +100 WE-PSO +091 WE-PSO3000 30 +141 WE-PSO minus689 SO-PSO +52224 WE-PSO

91000 10 +5367 WE-PSO minus300 SO-PSO +8230 WE-PSO2000 20 +8484 WE-PSO +3346 WE-PSO +1608 WE-PSO3000 30 +47001 WE-PSO +39054 WE-PSO +41626 WE-PSO

101000 10 +100 WE-PSO +084 WE-PSO +067 WE-PSO2000 20 +100 WE-PSO +081 WE-PSO +089 WE-PSO3000 30 +100 WE-PSO +098 WE-PSO +095 WE-PSO

111000 10 +097 WE-PSO +192 WE-PSO +100 WE-PSO2000 20 +100 WE-PSO +100 WE-PSO +100 WE-PSO3000 30 +087 WE-PSO +098 WE-PSO +100 WE-PSO

121000 10 +091 WE-PSO +058 WE-PSO +027 WE-PSO2000 20 +226 WE-PSO +108 WE-PSO +027 WE-PSO3000 30 +184 WE-PSO +225 WE-PSO +241 WE-PSO

131000 10 +098 WE-PSO +048 WE-PSO +084 WE-PSO2000 20 +072 WE-PSO +078 WE-PSO +098 WE-PSO3000 30 +011 WE-PSO +039 WE-PSO +086 WE-PSO

141000 10 +057 WE-PSO +015 WE-PSO +082 WE-PSO2000 20 +0151 WE-PSO +149 WE-PSO +150 WE-PSO3000 30 +090 WE-PSO +132 WE-PSO +132 WE-PSO

151000 10 +100 WE-PSO +100 WE-PSO +100 WE-PSO2000 20 +100 WE-PSO minus050 SO-PSO +099 WE-PSO3000 30 +083 WE-PSO +100 WE-PSO +100 WE-PSO

Table 4 Dataset description

S no Datasets Number of total units Disc feature Nature No of inputs No of classes1 Iris 150 mdash Real 4 32 Diabetes 768 mdash Real 8 23 Heart 270 mdash Real 13 24 Wine 178 mdash Real 13 35 Seed 210 mdash Real 7 36 Vertebral 310 mdash Real 6 27 Blood tissue 748 mdash Real 5 28 Horse 368 mdash Real 27 29 Mammography 961 mdash Real 6 2

14 Computational Intelligence and Neuroscience

12 Conclusion

)e performance of PSO depends on the initialization ofthe population In our work we have initialized theparticles of PSO by using a novel quasirandom sequencecalled the WELL sequence However the velocity andposition vector of particles are modified in a randomsequence fashion )e importance of initializing theparticles by using a quasirandom sequence is highlightedin this study )e experimental results explicitly statethat the WELL sequence is optimal for the populationinitialization due to its random nature Moreover thesimulation results have shown that WE-PSO outper-forms the PSO S-PSO and H-PSO approaches )etechniques are also applied to neural network trainingand provide significantly better results than conventionaltraining algorithms including standard PSO S-PSO andH-PSO approaches respectively )e solution provideshigher diversity and increases the potential to searchlocally )e experimental results depict that our ap-proach has excellent accuracy of convergence and pre-vents the local optima Our technique is much better

when it is compared to the traditional PSO and otherinitialization approaches for PSO as evident in Figure 21)e use of mutation operators with the initializationtechnique may be evaluated on large-scale search spacesin the future )e core objective of this research isuniversal but relevant to the other stochastic-basedmetaheuristic algorithm which will establish our futuredirection

Data Availability

)e data used to support the findings of this study are availablefrom the corresponding author upon reasonable request

Disclosure

)is work is part of the PhD thesis of the student

Conflicts of Interest

)e authors declare that there are no conflicts of interestregarding the publication of this paper

Table 5: Classification accuracy results. Each algorithm column lists training/testing accuracy (%).

S. no   Dataset        Type      BPA             PSO             SO-PSO          H-PSO           WE-PSO
1       Iris           3-Class   98.2 / 95.7     99 / 96.6       98.8 / 97.3     98.9 / 96       99.2 / 98
2       Diabetes       2-Class   86.1 / 65.3     88.7 / 69.1     89.3 / 69.1     88.4 / 71.6     90.4 / 74.1
3       Heart          2-Class   78.5 / 68.3     99.5 / 72.5     99.13 / 67.5    99.13 / 72.5    100 / 77.5
4       Wine           3-Class   67.3 / 62.17    74.24 / 61.11   81.81 / 66.66   75.75 / 67.44   75.75 / 69.6
5       Seed           3-Class   84.2 / 70.56    97.57 / 77.77   87.27 / 84.44   98.18 / 77.77   98.18 / 91.11
6       Vertebral      2-Class   91.4 / 84.95    96.03 / 92.85   96.42 / 92.85   96.40 / 92.85   97.61 / 94.64
7       Blood tissue   2-Class   76.3 / 73.47    90.8 / 78.6     86.94 / 78.66   83.89 / 70      84.74 / 84
8       Horse          2-Class   64.4 / 57.87    69.02 / 50      74.19 / 52      72.90 / 56      79.35 / 58
9       Mammography    2-Class   77.36 / 71.26   80.82 / 76.66   68.94 / 63      88 / 85         97.71 / 96.66
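As context for Table 5: in PSO-based training, every particle encodes the complete weight vector of the feedforward network, and the fitness minimized by the swarm is the training RMSE. The following minimal sketch illustrates that encoding under assumed choices (one tanh hidden layer, no bias terms, a hypothetical hidden-layer size); it is not the paper's exact network.

import numpy as np

def particle_to_weights(particle, n_in, n_hid, n_out):
    # Unpack one flat particle vector into the two weight matrices.
    split = n_in * n_hid
    w1 = particle[:split].reshape(n_in, n_hid)
    w2 = particle[split:split + n_hid * n_out].reshape(n_hid, n_out)
    return w1, w2

def rmse_fitness(particle, x, y_onehot, n_in, n_hid, n_out):
    # Training RMSE of the network encoded by this particle (lower is better).
    w1, w2 = particle_to_weights(particle, n_in, n_hid, n_out)
    output = np.tanh(np.tanh(x @ w1) @ w2)
    return float(np.sqrt(np.mean((output - y_onehot) ** 2)))

# Iris-like task: 4 inputs, 3 classes, hypothetical 6-unit hidden layer;
# particles live in [-50, 50]^dim, matching the weight range of the setup.
n_in, n_hid, n_out = 4, 6, 3
dim = n_in * n_hid + n_hid * n_out
particle = np.random.uniform(-50, 50, dim)
x = np.random.rand(10, n_in)                        # placeholder inputs
y = np.eye(n_out)[np.random.randint(0, n_out, 10)]  # placeholder one-hot labels
print(rmse_fitness(particle, x, y, n_in, n_hid, n_out))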

Figure 21: Classification testing accuracy results (testing accuracy, %, of BPA, PSO, SO-PSO, H-PSO, and WE-PSO on the nine datasets).


References

[1] K. Deb, "Multi-objective optimization," in Search Methodologies, pp. 403–449, Springer, Berlin, Germany, 2014.

[2] R. Vandenberghe, N. Nelissen, E. Salmon et al., "Binary classification of 18F-flutemetamol PET using machine learning: comparison with visual reads and structural MRI," NeuroImage, vol. 64, pp. 517–525, 2013.

[3] V. Ganganwar, "An overview of classification algorithms for imbalanced datasets," International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 4, pp. 42–47, 2012.

[4] J. Kennedy, "Swarm intelligence," in Handbook of Nature-Inspired and Innovative Computing, pp. 187–219, Springer, Berlin, Germany, 2006.

[5] G. Beni and J. Wang, "Swarm intelligence in cellular robotic systems," in Robots and Biological Systems: Towards a New Bionics, pp. 703–712, Springer, Berlin, Germany, 1993.

[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of ICNN'95 – International Conference on Neural Networks, pp. 1942–1948, 1995.

[7] J. Salerno, "Using the particle swarm optimization technique to train a recurrent neural model," in Proceedings of the Ninth IEEE International Conference on Tools with Artificial Intelligence, pp. 45–49, Newport Beach, CA, USA, November 1997.

[8] R. Storn and K. Price, "Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.

[9] P. P. Palmes, T. Hayasaka, and S. Usui, "Mutation-based genetic neural network," IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 587–600, 2005.

[10] W. H. Bangyal, J. Ahmad, and H. T. Rauf, "Optimization of neural network using improved bat algorithm for data classification," Journal of Medical Imaging and Health Informatics, vol. 9, no. 4, pp. 670–681, 2019.

[11] A. Cervantes, I. M. Galvan, and P. Isasi, "AMPSO: a new particle swarm method for nearest neighborhood classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 5, pp. 1082–1091, 2009.

[12] W. H. Bangyal, J. Ahmed, and H. T. Rauf, "A modified bat algorithm with torus walk for solving global optimisation problems," International Journal of Bio-Inspired Computation, vol. 15, no. 1, pp. 1–13, 2020.

[13] C. Grosan, A. Abraham, and M. Nicoara, "Search optimization using hybrid particle sub-swarms and evolutionary algorithms," International Journal of Simulation Systems, Science & Technology, vol. 6, no. 10, pp. 60–79, 2005.

[14] M. Junaid, W. H. Bangyal, and J. Ahmed, "A novel bat algorithm using Sobol sequence for the initialization of population," in IEEE 23rd International Multitopic Conference (INMIC), pp. 1–6, Bahawalpur, Pakistan, November 2020.

[15] W. H. Bangyal, J. Ahmed, H. T. Rauf, and S. Pervaiz, "An overview of mutation strategies in bat algorithm," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 9, pp. 523–534, 2018.

[16] D. E. Knuth, Fundamental Algorithms: The Art of Computer Programming, Addison-Wesley, Boston, MA, USA, 1973.

[17] J. E. Gentle, Random Number Generation and Monte Carlo Methods, Springer Science & Business Media, Berlin, Germany, 2006.

[18] N. Q. Uy, N. X. Hoai, R. I. McKay, and P. M. Tuan, "Initialising PSO with randomised low-discrepancy sequences: the comparative results," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2007), pp. 1985–1992, Singapore, September 2007.

[19] S. Kimura and K. Matsumura, "Genetic algorithms using low-discrepancy sequences," in Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, ACM, pp. 1341–1346, Washington, DC, USA, June 2005.

[20] R. Brits, A. P. Engelbrecht, and F. Van den Bergh, "A niching particle swarm optimizer," in Proceedings of the 4th Asia-Pacific Conference on Simulated Evolution and Learning, pp. 692–696, Orchid Country Club, Singapore, November 2002.

[21] J. G. van der Corput, "Verteilungsfunktionen I & II," Nederl. Akad. Wetensch. Proc., vol. 38, pp. 1058–1066, 1935.

[22] R. A. Krohling and L. dos Santos Coelho, "PSO-E: particle swarm with exponential distribution," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2006), pp. 1428–1433, Vancouver, Canada, July 2006.

[23] R. Thangaraj, M. Pant, and K. Deep, "Initializing PSO with probability distributions and low-discrepancy sequences: the comparative results," in Proceedings of the World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), pp. 1121–1126, IEEE, Coimbatore, India, December 2009.

[24] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986.

[25] K. E. Parsopoulos and M. N. Vrahatis, "Initializing the particle swarm optimizer using the nonlinear simplex method," in Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation, World Scientific and Engineering Academy and Society Press, Stevens Point, WI, USA, 2002.

[26] M. Richards and D. Ventura, "Choosing a starting configuration for particle swarm optimization," in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 2309–2312, Budapest, Hungary, July 2004.

[27] H. Jabeen, Z. Jalil, and A. R. Baig, "Opposition based initialization in particle swarm optimization (O-PSO)," in Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, pp. 2047–2052, Montreal, Quebec, Canada, July 2009.

[28] A. L. Gutierrez, "Comparison of different PSO initialization techniques for high dimensional search space problems: a test with FSS and antenna arrays," in Proceedings of the 5th European Conference on Antennas and Propagation (EUCAP), pp. 965–969, IEEE, Rome, Italy, April 2011.

[29] A. Subasi, "Classification of EMG signals using PSO optimized SVM for diagnosis of neuromuscular disorders," Computers in Biology and Medicine, vol. 43, no. 5, pp. 576–586, 2013.

[30] S. Dehuri, R. Roy, S.-B. Cho, and A. Ghosh, "An improved swarm optimized functional link artificial neural network (ISO-FLANN) for classification," Journal of Systems and Software, vol. 85, no. 6, pp. 1333–1345, 2012.

[31] Z. Liu, P. Zhu, W. Chen, and R.-J. Yang, "Improved particle swarm optimization algorithm using design of experiment and data mining techniques," Structural and Multidisciplinary Optimization, vol. 52, no. 4, pp. 813–826, 2015.

[32] S. Chatterjee, S. Sarkar, S. Hore, N. Dey, A. S. Ashour, and V. E. Balas, "Particle swarm optimization trained neural network for structural failure prediction of multistoried RC buildings," Neural Computing and Applications, vol. 28, no. 8, pp. 2005–2016, 2016.

[33] Y. Xue, T. Tang, and A. X. Liu, "Large-scale feedforward neural network optimization by a self-adaptive strategy and parameter based particle swarm optimization," IEEE Access, vol. 7, pp. 52473–52483, 2019.

[34] F. E. F. Junior and G. G. Yen, "Particle swarm optimization of deep neural networks architectures for image classification," Swarm and Evolutionary Computation, vol. 49, pp. 62–74, 2019.

[35] O. Tarkhaneh and H. Shen, "Training of feedforward neural networks for data classification using hybrid particle swarm optimization, Mantegna Levy flight and neighbourhood search," Heliyon, vol. 5, no. 4, Article ID e01275, 2019.

[36] A. Herliana, T. Arifin, S. Susanti, and A. B. Hikmah, "Feature selection of diabetic retinopathy disease using particle swarm optimization and neural network," in Proceedings of the 2018 6th International Conference on Cyber and IT Service Management (CITSM), pp. 1–4, Parapat, Indonesia, August 2018.

[37] M. K. Sarkaleh and A. Shahbahrami, "Classification of ECG arrhythmias using discrete wavelet transform and neural networks," International Journal of Computer Science, Engineering and Applications, vol. 2, no. 1, pp. 1–13, 2012.

[38] R. J. Schalkoff, Artificial Neural Networks, McGraw-Hill, New York, NY, USA, 1997.

[39] G. P. Zhang, "Neural networks for classification: a survey," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 30, no. 4, pp. 451–462, 2000.

[40] M. Castellani, "Evolutionary generation of neural network classifiers – an empirical comparison," Neurocomputing, vol. 99, pp. 214–229, 2013.

[41] G. E. Hinton, J. L. McClelland, and D. Rumelhart, "Distributed representations," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations, MIT Press, Cambridge, MA, USA, 1986.

[42] M. Matsumoto and T. Nishimura, "Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator," ACM Transactions on Modeling and Computer Simulation (TOMACS), vol. 8, no. 1, pp. 3–30, 1998.

[43] I. M. Sobol', "On the distribution of points in a cube and the approximate evaluation of integrals," Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, vol. 7, no. 4, pp. 784–802, 1967.

[44] J. H. Halton, "Algorithm 247: radical-inverse quasi-random point sequence," Communications of the ACM, vol. 7, no. 12, pp. 701–702, 1964.

[45] F. Panneton, P. L'Ecuyer, and M. Matsumoto, "Improved long-period generators based on linear recurrences modulo 2," ACM Transactions on Mathematical Software (TOMS), vol. 32, no. 1, pp. 1–16, 2006.

Computational Intelligence and Neuroscience 17

Page 11: A New Initialization Approach in Particle Swarm Optimization ...encouragedby other socialanimal colonies,such as bird flocking or fish schooling [4]. In the research of cellular

T value can be positive or negative in the above equationwhere X1 and X2 reflect the mean value of the first andsecond samples )e sample size is referred to as n1 and n2for both samples )e standard deviations for both samplesare SD2

1 and SD22 Positive and negative values indicate that

WE-PSO outperforms other approaches Studentrsquos T-testresults are presented in Table 3

11 Experiments for Data Classification

A comparative analysis on the real-world benchmark datasetproblem is evaluated for the training of neural networks tovalidate the efficiency of the WE-PSO Using nine

benchmark datasets (Iris Diabetes Heart Wine SeedVertebral Blood Tissue Horse and Mammography) fromthe world-famous UCI machine-learning repository weconducted experiments Training weights are initializedrandomly within the interval [minus50 50] Feedforward neuralnetwork accuracy is tested in the form of root mean squarederror (RMSE) )e features of the datasets that are used canbe seen in Table 4

111 Discussion Backpropagation algorithms using stan-dard PSO SO-PSO H-PSO and WE-PSO are trained in themultilayer feedforward neural network Comparison of

Mea

n fit

ness

100

10 20

DIM

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 14 Mean value of function F9

10ndash130

10ndash140

10ndash150

10 20

DIM

Mea

n fit

ness

30

PSO

S-PSO

SO-PSO

WE-PSO

Figure 15 Mean value of function F10

10ndash150

10ndash160

10ndash170

10ndash190

10ndash180

10ndash200

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 16 Mean value of function F11

Computational Intelligence and Neuroscience 11

these training approaches is tested on real classificationdatasets that are taken from the UCI repository )e cross-validation method is used to assess the efficiency of variousclassification techniques)e k-fold cross-validationmethodis used in this paper for the training of neural networks withthe standard PSO SO-PSO H-PSO and proposed algorithmWE-PSO )e k-fold is used with the value k 10 in the

experiments )e dataset has been fragmented into 10chunks each data chunk comprises the same proportion ofeach class of dataset One chunk is used for the testing phasewhile nine chunks were used for the training phase Ninewell-known real-world datasets which were taken from UCIwere compared with the experimental results of algorithmsstandard PSO SO-PSO H-PSO and WE-PSO are used for

10ndash10

100

10ndash20

10ndash30

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 18 Mean value of function F13

10ndash10

100

1010

10ndash20

10ndash30

10ndash40

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 19 Mean value of function F14

1e ndash 312

1e ndash 200

Mea

n fit

ness

10 2015 25

DIM

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 20 Mean value of function F15

102

101

100

10ndash2

10ndash1

10ndash3

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 17 Mean value of function F12

12 Computational Intelligence and Neuroscience

evaluating the performance After the simulation the resultsshowed that the training of neural networks with the WE-PSO algorithm is better in terms of precision and its effi-ciency is much higher than the traditional approaches

)e WE-PSO algorithm can also be used successfully inthe future for data classification and statistical problems)e findings of classification accuracy are summarized inTable 5

Table 2 Comparative results among the four PSO algorithms on 15 benchmark test functions

F Iter DIMPSO SO-PSO H-PSO WE-PSO

Mean Std dev Mean Std dev Mean Std dev Mean Std dev

11000 10 233Eminus 74 736Eminus 74 274Eminus 76 866Eminus 76 310Eminus 77 979Eminus 77 591Eminus 78 187Eminus 772000 20 102Eminus 84 322Eminus 84 820Eminus 88 259Eminus 87 176Eminus 90 558Eminus 90 495Eminus 90 148Eminus 893000 30 177Eminus 26 532Eminus 26 767Eminus 20 230Eminus 19 413Eminus 32 124Eminus 31 130Eminus 42 390Eminus 42

21000 10 497Eminus 01 149E+ 00 497Eminus 01 149E+ 00 796Eminus 01 239E+ 00 298Eminus 01 895Eminus 012000 20 817E+ 00 229E+ 01 647E+ 00 191E+ 01 358E+ 00 979E+ 00 311E+ 00 110E+ 013000 30 101E+ 01 295E+ 01 986E+ 00 276E+ 01 945E+ 00 276991 776E+ 00 220E+ 01

31000 10 870Eminus 80 261Eminus 79 179Eminus 79 537Eminus 79 487Eminus 79 146Eminus 78 440Eminus 81 132Eminus 802000 20 262144 786E+ 00 786432 236E+ 01 262144 786E+ 00 178Eminus 89 533Eminus 893000 30 262E+ 01 786E+ 01 157E+ 01 472E+ 01 105E+ 01 314573 387Eminus 57 116Eminus 56

41000 10 446Eminus 147 134Eminus 146 386Eminus 147 116Eminus 146 978Eminus 145 293Eminus 144 124Eminus 150 373Eminus 1502000 20 314Eminus 155 941Eminus 155 927Eminus 154 278Eminus 153 275Eminus 159 824Eminus 159 496Eminus 159 149Eminus 1583000 30 182Eminus 133 545Eminus 133 236Eminus 135 709Eminus 135 853Eminus 130 256Eminus 129 254Eminus 136 762Eminus 136

51000 10 435Eminus 79 130Eminus 78 895Eminus 79 269Eminus 78 243Eminus 78 730Eminus 78 220Eminus 80 661Eminus 802000 20 131E+ 01 393E+ 01 393E+ 01 118E+ 02 131E+ 01 393E+ 01 312Eminus 89 936Eminus 893000 30 131E+ 02 393E+ 02 786E+ 01 236E+ 02 524E+ 01 157E+ 02 194Eminus 56 581Eminus 56

61000 10 170Eminus 61 511Eminus 61 445Eminus 64 133Eminus 63 729Eminus 66 219Eminus 65 462Eminus 66 139Eminus 652000 20 325Eminus 112 974Eminus 112 439Eminus 112 132Eminus 111 501Eminus 109 150Eminus 108 445Eminus 113 134Eminus 1123000 30 721Eminus 135 216Eminus 134 410Eminus 124 123Eminus 123 151Eminus 134 454Eminus 134 696Eminus 135 209Eminus 134

71000 10 296Eminus 157 887Eminus 157 239Eminus 157 718Eminus 157 128Eminus 157 384Eminus 157 247Eminus 163 000E+ 002000 20 879Eminus 177 000E+ 00 177Eminus 184 000E+ 00 349Eminus 183 000E+00 341Eminus186 000E+ 003000 30 123Eminus 82 368Evminus 82 125Eminus 116 374Eminus 116 599Eminus 130 599Eminus 130 460Eminus 134 138Eminus 133

81000 10 439Eminus 200 000E+ 00 198Eminus 194 000E+ 00 451Eminus 197 000E+ 00 899Eminus 201 000E+ 002000 20 157Eminus 20 470Eminus 20 104Eminus 93 313Eminus 93 110Eminus 148 330Eminus 148 409Eminus 151 123Eminus 1503000 30 189Eminus 09 568Eminus 09 454Eminus 10 136Eminus 09 114Eminus 08 343Eminus 08 134Eminus 09 403Eminus 09

91000 10 549Eminus 01 672Eminus 01 130Eminus 01 202Eminus 01 202Eminus01 573Eminus 01 142Eminus 01 142Eminus 012000 20 205E+ 00 131E+ 00 783Eminus 01 143E+ 00 683Eminus 01 129E+ 00 432Eminus 01 108E+ 003000 30 112E+ 00 239E+ 00 999Eminus 01 230E+ 00 956Eminus 01 252E+ 00 912Eminus 01 223E+ 00

101000 10 223Eminus 138 223Eminus 138 223Eminus 138 315Eminus 137 435Eminus 137 131Eminus 136 110Eminus 139 331Eminus 1392000 20 379Eminus 148 114Eminus 147 787Eminus 149 236Eminus 148 419Eminus 147 126Eminus 146 873Eminus 153 262Eminus 1523000 30 443Eminus 126 133Eminus 125 752Eminus 133 226Eminus 132 157Eminus 128 471Eminus 128 138Eminus 133 414Eminus 133

111000 10 375Eminus 187 000E+ 00 157Eminus 192 000E+ 00 215Eminus 191 000E+ 00 899Eminus 198 000E+ 002000 20 529Eminus 193 000E+ 00 253Eminus 195 000E+ 00 845Eminus 195 000E+ 00 983Eminus 197 000E+ 003000 30 482Eminus 154 144Eminus 153 884Eminus 159 265Eminus 158 549Eminus 168 000E+ 00 575Eminus173 000E+ 00

121000 10 113Eminus 01 340Eminus 01 167Eminus 02 502Eminus 02 228Eminus 02 685Eminus 02 289Eminus 03 866Eminus 032000 20 139E+ 01 412E+ 01 503E+ 00 150E+ 01 295E+ 00 884E+ 00 167E+ 00 501E+ 003000 30 745E+ 00 223E+ 01 122E+ 01 366E+ 01 874E+ 00 260E+ 01 494E+ 00 148E+ 01

131000 10 804Eminus 26 241Eminus 25 801Eminus 27 240Eminus 26 359Eminus 27 108Eminus 26 141Eminus 27 102Eminus 262000 20 142Eminus 08 426Eminus 08 264Eminus 11 793Eminus 11 329Eminus 10 986Eminus 10 214Eminus 12 643Eminus 123000 30 620Eminus 03 186Eminus 02 141Eminus 03 423Eminus 03 936Eminus 03 281Eminus 02 141Eminus 03 383Eminus 03

141000 10 362Eminus 38 109Eminus 37 362Eminus 38 109Eminus 37 592Eminus 36 177Eminus 35 195Eminus 38 586Eminus 382000 20 627Eminus 10 188Eminus 09 138Eminus 09 414Eminus 09 791Eminus 13 237Eminus 12 117Eminus13 351Eminus 133000 30 256Eminus 06 767Eminus 06 480E+ 01 144E+ 02 134Eminus 06 403Eminus 06 488Eminus 09 146Eminus 08

151000 10 110Eminus 294 000E+ 00 319Eminus 301 000E+ 00 278Eminus 307 000E+ 00 321Eminus 308 000E+ 002000 20 616Eminus 271 000E+ 00 509Eminus 276 000E+ 00 374Eminus 270 000E+ 00 485Eminus 268 000E+ 003000 30 308Eminus 207 000E+ 00 104Eminus 200 000E+ 00 812Eminus 209 000E+ 00 306Eminus 212 000E+ 00

Note ldquolsquoMeanrdquorsquo shows mean value and ldquoStd devrdquo indicates the standard deviation )e best results among the four PSO algorithms are presented in bold

Computational Intelligence and Neuroscience 13

Table 3 Results of Studentrsquos T-test for all techniques

F Iter DIMWE-PSO vs PSO WE-PSO vs SO-PSO WE-PSO vs H-PSO

T-value Sig T-value Sig T-value Sig

11000 10 +102 WE-PSO +099 WE-PSO +075 WE-PSO2000 20 +100 WE-PSO +048 WE-PSO minus083 H-PSO3000 30 +100 WE-PSO +100 WE-PSO +100 WE-PSO

21000 10 +3071 WE-PSO +1567 WE-PSO +121 WE-PSO2000 20 +882 WE-PSO +10756 WE-PSO +1113 WE-PSO3000 30 +063 WE-PSO +3429 WE-PSO +065 WE-PSO

31000 10 +099 WE-PSO +100 WE-PSO +083 WE-PSO2000 20 +100 WE-PSO +100 WE-PSO +26314 WE-PSO3000 30 +100 WE-PSO +52529 WE-PSO +093 WE-PSO

41000 10 +019 WE-PSO +099 WE-PSO +100 WE-PSO2000 20 +099 WE-PSO +084 WE-PSO minus098 H-PSO3000 30 +086 WE-PSO +026 WE-PSO +097 WE-PSO

51000 10 +079 WE-PSO +044 WE-PSO +098 WE-PSO2000 20 +029 WE-PSO +057 WE-PSO +26314 WE-PSO3000 30 +006 WE-PSO +262244 WE-PSO +096 WE-PSO

61000 10 +080 WE-PSO +098 WE-PSO +017 WE-PSO2000 20 +086 WE-PSO +096 WE-PSO +096 WE-PSO3000 30 +099 WE-PSO +098 WE-PSO +089 WE-PSO

71000 10 +090 WE-PSO +095 WE-PSO +100 WE-PSO2000 20 +100 WE-PSO +100 WE-PSO +100 WE-PSO3000 30 +100 WE-PSO +100 WE-PSO +100 WE-PSO

81000 10 +075 WE-PSO +098 WE-PSO +055 WE-PSO2000 20 +48397 WE-PSO +100 WE-PSO +091 WE-PSO3000 30 +141 WE-PSO minus689 SO-PSO +52224 WE-PSO

91000 10 +5367 WE-PSO minus300 SO-PSO +8230 WE-PSO2000 20 +8484 WE-PSO +3346 WE-PSO +1608 WE-PSO3000 30 +47001 WE-PSO +39054 WE-PSO +41626 WE-PSO

101000 10 +100 WE-PSO +084 WE-PSO +067 WE-PSO2000 20 +100 WE-PSO +081 WE-PSO +089 WE-PSO3000 30 +100 WE-PSO +098 WE-PSO +095 WE-PSO

111000 10 +097 WE-PSO +192 WE-PSO +100 WE-PSO2000 20 +100 WE-PSO +100 WE-PSO +100 WE-PSO3000 30 +087 WE-PSO +098 WE-PSO +100 WE-PSO

121000 10 +091 WE-PSO +058 WE-PSO +027 WE-PSO2000 20 +226 WE-PSO +108 WE-PSO +027 WE-PSO3000 30 +184 WE-PSO +225 WE-PSO +241 WE-PSO

131000 10 +098 WE-PSO +048 WE-PSO +084 WE-PSO2000 20 +072 WE-PSO +078 WE-PSO +098 WE-PSO3000 30 +011 WE-PSO +039 WE-PSO +086 WE-PSO

141000 10 +057 WE-PSO +015 WE-PSO +082 WE-PSO2000 20 +0151 WE-PSO +149 WE-PSO +150 WE-PSO3000 30 +090 WE-PSO +132 WE-PSO +132 WE-PSO

151000 10 +100 WE-PSO +100 WE-PSO +100 WE-PSO2000 20 +100 WE-PSO minus050 SO-PSO +099 WE-PSO3000 30 +083 WE-PSO +100 WE-PSO +100 WE-PSO

Table 4 Dataset description

S no Datasets Number of total units Disc feature Nature No of inputs No of classes1 Iris 150 mdash Real 4 32 Diabetes 768 mdash Real 8 23 Heart 270 mdash Real 13 24 Wine 178 mdash Real 13 35 Seed 210 mdash Real 7 36 Vertebral 310 mdash Real 6 27 Blood tissue 748 mdash Real 5 28 Horse 368 mdash Real 27 29 Mammography 961 mdash Real 6 2

14 Computational Intelligence and Neuroscience

12 Conclusion

)e performance of PSO depends on the initialization ofthe population In our work we have initialized theparticles of PSO by using a novel quasirandom sequencecalled the WELL sequence However the velocity andposition vector of particles are modified in a randomsequence fashion )e importance of initializing theparticles by using a quasirandom sequence is highlightedin this study )e experimental results explicitly statethat the WELL sequence is optimal for the populationinitialization due to its random nature Moreover thesimulation results have shown that WE-PSO outper-forms the PSO S-PSO and H-PSO approaches )etechniques are also applied to neural network trainingand provide significantly better results than conventionaltraining algorithms including standard PSO S-PSO andH-PSO approaches respectively )e solution provideshigher diversity and increases the potential to searchlocally )e experimental results depict that our ap-proach has excellent accuracy of convergence and pre-vents the local optima Our technique is much better

when it is compared to the traditional PSO and otherinitialization approaches for PSO as evident in Figure 21)e use of mutation operators with the initializationtechnique may be evaluated on large-scale search spacesin the future )e core objective of this research isuniversal but relevant to the other stochastic-basedmetaheuristic algorithm which will establish our futuredirection

Data Availability

)e data used to support the findings of this study are availablefrom the corresponding author upon reasonable request

Disclosure

)is work is part of the PhD thesis of the student

Conflicts of Interest

)e authors declare that there are no conflicts of interestregarding the publication of this paper

Table 5 Classification accuracy results

Sno Datasets Type

BPA PSO SO-PSO H-PSO WE-PSOTr

acc ()Ts acc()

Tr acc()

Ts acc()

Tr acc()

Ts acc()

Tr acc()

Ts acc()

Tr acc()

Ts acc()

1 Iris 3-Class 982 957 99 966 988 973 989 96 992 982 Diabetes 2-Class 861 653 887 691 893 691 884 716 904 7413 Heart 2-Class 785 683 995 725 9913 675 9913 725 100 7754 Wine 3-Class 673 6217 7424 6111 8181 6666 7575 6744 7575 6965 Seed 3-Class 842 7056 9757 7777 8727 8444 9818 7777 9818 91116 Vertebral 2-Class 914 8495 9603 9285 9642 9285 9640 9285 9761 94647 Blood tissue 2-Class 763 7347 908 786 8694 7866 8389 70 8474 848 Horse 2-Class 644 5787 6902 50 7419 52 7290 56 7935 589 Mammography 2-Class 7736 7126 8082 7666 6894 63 88 85 9771 9666

000

Iris

Dia

bete

s

Hea

rt

Win

e

Seed

Vert

ebra

l

Hor

se

Mam

mog

raph

y

Bloo

d tis

sue

2000

4000

6000()

8000

10000

12000

BPA

PSO

SO-PSO

H-PSO

WE-PSO

Figure 21 Classification testing accuracy results

Computational Intelligence and Neuroscience 15

References

[1] K Deb ldquoMulti-objective optimizationrdquo in Search Method-ologies pp 403ndash449 Springer Berlin Germany 2014

[2] R Vandenberghe N Nelissen E Salmon et al ldquoBinaryclassification of 18F-flutemetamol PET using machinelearning comparison with visual reads and structural MRIrdquoNeuroImage vol 64 pp 517ndash525 2013

[3] V Ganganwar ldquoAn overview of classification algorithms forimbalanced datasetsrdquo International Journal of EmergingTechnology and Advanced Engineering vol 2 no 4 pp 42ndash472012

[4] J Kennedy ldquoSwarm intelligencerdquo in Handbook of Nature-Inspired and Innovative Computing pp 187ndash219 SpringerBerlin Germany 2006

[5] G Beni and J Wang ldquoSwarm intelligence in cellular roboticsystemsrdquo in Robots and Biological Systems Towards a NewBionics pp 703ndash712 Springer Berlin Germany 1993

[6] J Kennedy and R C Eberhart ldquoParticle swarm optimizationrdquoProceedings of ICNNrsquo95mdashInternational Conference on NeuralNetworks pp 1942ndash1948 1995

[7] J Salerno ldquoUsing the particle swarm optimization techniqueto train a recurrent neural modelrdquo in Proceedings of the NinthIEEE International Conference on Tools with Artificial Intel-ligence pp 45ndash49 Newport Beach CA USA November 1997

[8] R Storn and K Price ldquoDifferential evolutionndasha simple andefficient heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4pp 341ndash359 1997

[9] P P Palmes T Hayasaka and S Usui ldquoMutation-basedgenetic neural networkrdquo IEEE Transactions on Neural Net-works vol 16 no 3 pp 587ndash600 2005

[10] W H Bangyal J Ahmad and H T Rauf ldquoOptimization ofneural network using improved bat algorithm for data clas-sificationrdquo Journal of Medical Imaging and Health Infor-matics vol 9 no 4 pp 670ndash681 2019

[11] A Cervantes I M Galvan and P Isasi ldquoAMPSO a newparticle swarm method for nearest neighborhood classifica-tionrdquo IEEE Transactions on Systems Man and CyberneticsPart B (Cybernetics) vol 39 no 5 pp 1082ndash1091 2009

[12] W H Bangyal J Ahmed and H T Rauf ldquoA modified batalgorithm with torus walk for solving global optimisationproblemsrdquo International Journal of Bio-Inspired Computa-tion vol 15 no 1 pp 1ndash13 2020

[13] C Grosan A Abraham and M Nicoara ldquoSearch optimi-zation using hybrid particle sub-swarms and evolutionaryalgorithmsrdquo International Journal of Simulation SystemsScience amp Technology vol 6 no 10 pp 60ndash79 2005

[14] M Junaid W H Bangyal and J Ahmed ldquoA novel bat al-gorithm using sobol sequence for the initialization of pop-ulationrdquo in IEEE 23rd International Multitopic Conference(INMIC) pp 1ndash6 Bahawalpur Pakistan November 2020

[15] W H Bangyal J Ahmed H T Rauf and S Pervaiz ldquoAnoverview of mutation strategies in bat algorithmrdquo Interna-tional Journal of Advanced Computer Science and Applications(IJACSA) vol 9 pp 523ndash534 2018

[16] D E Knuth Fundamental Algorithms Be Art of ComputerProgramming Addison-Wesley Boston MA USA 1973

[17] J E Gentle Random Number Generation and Monte CarloMethods Springer Science amp Business Media Berlin Ger-many 2006

[18] N Q Uy N X Hoai R I McKay and P M Tuan ldquoIniti-alising PSO with randomised low-discrepancy sequences thecomparative resultsrdquo in Proceedings of the IEEE Congress on

Evolutionary Computation CEC 2007 pp 1985ndash1992 Sin-gapore September 2007

[19] S Kimura and K Matsumura ldquoGenetic algorithms using low-discrepancy sequencesrdquo in Proceedings of the 7th AnnualConference on Genetic and Evolutionary Computation ACMpp 1341ndash1346 Washington DC USA June 2005

[20] R Brits A P Engelbrecht and F Van den Bergh ldquoA nichingparticle swarm optimizerrdquo in Proceedings of the 4th Asia-PacificConference on Simulated Evolution and Learning pp 692ndash696Orchid Country Club Singapore November 2002

[21] J Ander Coput ldquoVerteilungsfunktionen I amp IIrdquoNederl AkadWetensch Procvol 38 pp 1058ndash1066 1935

[22] R A Krohling and L dos Santos Coelho ldquoPSO-E particleswarm with exponential distributionrdquo in Proceedings of theIEEE Congress on Evolutionary Computation CEC 2006pp 1428ndash1433 Vancouver Canada July 2006

[23] R )angaraj M Pant and K Deep ldquoInitializing pso withprobability distributions and low-discrepancy sequencesthe comparative resultsrdquo in Proceedings of the WorldCongress on Nature amp Biologically Inspired ComputingNaBIC 2009 pp 1121ndash1126 IEEE Coimbatore IndiaDecember 2009

[24] D E Rumelhart G E Hinton and R J Williams ldquoLearningrepresentations by back-propagating errorsrdquoNature vol 323no 6088 pp 533ndash536 1986

[25] K E Parsopoulos andM N Vrahatis ldquoInitializing the particleswarm optimizer using the nonlinear simplex methodrdquo Ad-vances in Intelligent Systems Fuzzy Systems EvolutionaryComputation World Scientific and Engineering Academy andSociety Press Stevens Point WI USA 2002

[26] M Richards and D Ventura ldquoChoosing a starting configu-ration for particle swarm optimizationrdquo in Proceedings of theIEEE International Joint Conference on Neural Networkspp 2309ndash2312 Budapest Hungary July 2004

[27] H Jabeen Z Jalil and A R Baig ldquoOpposition based ini-tialization in particle swarm optimization (O-PSO)rdquo inProceedings of the 11th Annual Conference Companion onGenetic and Evolutionary Computation Conference LateBreaking Papers pp 2047ndash2052 Montreal Quebec CanadaJuly 2009

[28] A L Gutierrez ldquoComparison of different pso initializationtechniques for high dimensional search space problems a testwith fss and antenna arraysrdquo in Proceedings of the 5th Eu-ropean Conference on Antennas and Propagation (EUCAP)pp 965ndash969 IEEE Rome Italy April 2011

[29] A Subasi ldquoClassification of EMG signals using PSO opti-mized SVM for diagnosis of neuromuscular disordersrdquoComputers in Biology and Medicine vol 43 no 5 pp 576ndash586 2013

[30] S Dehuri R Roy S-B Cho and A Ghosh ldquoAn improvedswarm optimized functional link artificial neural network(ISO-FLANN) for classificationrdquo Journal of Systems andSoftware vol 85 no 6 pp 1333ndash1345 2012

[31] Z Liu P Zhu W Chen and R-J Yang ldquoImproved particleswarm optimization algorithm using design of experimentand data mining techniquesrdquo Structural andMultidisciplinaryOptimization vol 52 no 4 pp 813ndash826 2015

[32] S Chatterjee S Sarkar S Hore N Dey A S Ashour andV E Balas ldquoParticle swarm optimization trained neuralnetwork for structural failure prediction of multistoried RCbuildingsrdquo Neural Computing and Applications vol 28 no 8pp 2005ndash2016 2016

[33] Y Xue T Tang and A X Liu ldquoLarge-scale feedforwardneural network optimization by a self-adaptive strategy and

16 Computational Intelligence and Neuroscience

parameter based particle swarm optimizationrdquo IEEE Accessvol 7 pp 52473ndash52483 2019

[34] F E F Junior and G G Yen ldquoParticle swarm optimization ofdeep neural networks architectures for image classificationrdquoSwarm and Evolutionary Computations vol 49 pp 62ndash742019

[35] O Tarkhaneh and H Shen ldquoTraining of feedforward neuralnetworks for data classification using hybrid particle swarmoptimization Mantegna Levy flight and neighbourhoodsearchrdquo Heliyon vol 5 no 4 Article ID e01275 2019

[36] A Herliana T Arifin S Susanti and A B Hikmah ldquoFeatureselection of diabetic retinopathy disease using particle swarmoptimization and neural networkrdquo in Proceedings of the 20186th International Conference on Cyber and IT Service Man-agement (CITSM) pp 1ndash4 Parapat Indonesia August 2018

[37] M K Sarkaleh and A Shahbahrami ldquoClassification of ECGarrhythmias using discrete wavelet transform and neuralnetworksrdquo International Journal of Computer Science Engi-neering and Applications vol 2 no 1 pp 1ndash13 2012

[38] R J Schalkoff Artificial Neural Networks McGraw-Hill NewYork NY USA 1997

[39] G P Zhang ldquoNeural networks for classification a surveyrdquoIEEE Transactions on Systems Man and Cybernetics Part C(Applications and Reviews) vol 30 no 4 pp 451ndash462 2000

[40] M Castellani ldquoEvolutionary generation of neural networkclassifiers-An empirical comparisonrdquo Neurocomputingvol 99 pp 214ndash229 2013

[41] G E Hinton J L Mcclelland and D Rumelhart ldquoDistributedrepresentationsrdquo Parallel Distributed Processingexplorationsin the Microstructure of Cognition Foundation MIT PressCambridge MA USA 1986

[42] M Matsumoto and T Nishimura ldquoMersenne twister a 623-dimensionally equidistributed uniform pseudo-randomnumber generatorrdquo ACM Transactions on Modeling andComputer Simulation (TOMACS) vol 8 no 1 pp 3ndash30 1995

[43] I Y M Sobolrsquo ldquoOn the distribution of points in a cube and theapproximate evaluation of integralsrdquo Zhurnal VychislitelrsquonoiMatematiki I Matematicheskoi Fiziki vol 7 no 4 pp 784ndash802 1967

[44] J H Halton ldquoAlgorithm 247 radical-inverse quasi-randompoint sequencerdquo Communications of the ACM vol 7 no 12pp 701-702 1964

[45] F Panneton P Lrsquoecuyer and M Matsumoto ldquoImprovedlong-period generators based on linear recurrences modulo2rdquo ACM Transactions on Mathematical Software (TOMS)vol 32 pp 11ndash16 2006

Computational Intelligence and Neuroscience 17

Page 12: A New Initialization Approach in Particle Swarm Optimization ...encouragedby other socialanimal colonies,such as bird flocking or fish schooling [4]. In the research of cellular

these training approaches is tested on real classificationdatasets that are taken from the UCI repository )e cross-validation method is used to assess the efficiency of variousclassification techniques)e k-fold cross-validationmethodis used in this paper for the training of neural networks withthe standard PSO SO-PSO H-PSO and proposed algorithmWE-PSO )e k-fold is used with the value k 10 in the

experiments )e dataset has been fragmented into 10chunks each data chunk comprises the same proportion ofeach class of dataset One chunk is used for the testing phasewhile nine chunks were used for the training phase Ninewell-known real-world datasets which were taken from UCIwere compared with the experimental results of algorithmsstandard PSO SO-PSO H-PSO and WE-PSO are used for

10ndash10

100

10ndash20

10ndash30

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 18 Mean value of function F13

10ndash10

100

1010

10ndash20

10ndash30

10ndash40

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 19 Mean value of function F14

1e ndash 312

1e ndash 200

Mea

n fit

ness

10 2015 25

DIM

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 20 Mean value of function F15

102

101

100

10ndash2

10ndash1

10ndash3

10 20

DIM

Mea

n fit

ness

30

PSO

SO-PSO

H-PSO

WE-PSO

Figure 17 Mean value of function F12

12 Computational Intelligence and Neuroscience

evaluating the performance After the simulation the resultsshowed that the training of neural networks with the WE-PSO algorithm is better in terms of precision and its effi-ciency is much higher than the traditional approaches

)e WE-PSO algorithm can also be used successfully inthe future for data classification and statistical problems)e findings of classification accuracy are summarized inTable 5

Table 2 Comparative results among the four PSO algorithms on 15 benchmark test functions

F Iter DIMPSO SO-PSO H-PSO WE-PSO

Mean Std dev Mean Std dev Mean Std dev Mean Std dev

11000 10 233Eminus 74 736Eminus 74 274Eminus 76 866Eminus 76 310Eminus 77 979Eminus 77 591Eminus 78 187Eminus 772000 20 102Eminus 84 322Eminus 84 820Eminus 88 259Eminus 87 176Eminus 90 558Eminus 90 495Eminus 90 148Eminus 893000 30 177Eminus 26 532Eminus 26 767Eminus 20 230Eminus 19 413Eminus 32 124Eminus 31 130Eminus 42 390Eminus 42

21000 10 497Eminus 01 149E+ 00 497Eminus 01 149E+ 00 796Eminus 01 239E+ 00 298Eminus 01 895Eminus 012000 20 817E+ 00 229E+ 01 647E+ 00 191E+ 01 358E+ 00 979E+ 00 311E+ 00 110E+ 013000 30 101E+ 01 295E+ 01 986E+ 00 276E+ 01 945E+ 00 276991 776E+ 00 220E+ 01

31000 10 870Eminus 80 261Eminus 79 179Eminus 79 537Eminus 79 487Eminus 79 146Eminus 78 440Eminus 81 132Eminus 802000 20 262144 786E+ 00 786432 236E+ 01 262144 786E+ 00 178Eminus 89 533Eminus 893000 30 262E+ 01 786E+ 01 157E+ 01 472E+ 01 105E+ 01 314573 387Eminus 57 116Eminus 56

41000 10 446Eminus 147 134Eminus 146 386Eminus 147 116Eminus 146 978Eminus 145 293Eminus 144 124Eminus 150 373Eminus 1502000 20 314Eminus 155 941Eminus 155 927Eminus 154 278Eminus 153 275Eminus 159 824Eminus 159 496Eminus 159 149Eminus 1583000 30 182Eminus 133 545Eminus 133 236Eminus 135 709Eminus 135 853Eminus 130 256Eminus 129 254Eminus 136 762Eminus 136

51000 10 435Eminus 79 130Eminus 78 895Eminus 79 269Eminus 78 243Eminus 78 730Eminus 78 220Eminus 80 661Eminus 802000 20 131E+ 01 393E+ 01 393E+ 01 118E+ 02 131E+ 01 393E+ 01 312Eminus 89 936Eminus 893000 30 131E+ 02 393E+ 02 786E+ 01 236E+ 02 524E+ 01 157E+ 02 194Eminus 56 581Eminus 56

61000 10 170Eminus 61 511Eminus 61 445Eminus 64 133Eminus 63 729Eminus 66 219Eminus 65 462Eminus 66 139Eminus 652000 20 325Eminus 112 974Eminus 112 439Eminus 112 132Eminus 111 501Eminus 109 150Eminus 108 445Eminus 113 134Eminus 1123000 30 721Eminus 135 216Eminus 134 410Eminus 124 123Eminus 123 151Eminus 134 454Eminus 134 696Eminus 135 209Eminus 134

71000 10 296Eminus 157 887Eminus 157 239Eminus 157 718Eminus 157 128Eminus 157 384Eminus 157 247Eminus 163 000E+ 002000 20 879Eminus 177 000E+ 00 177Eminus 184 000E+ 00 349Eminus 183 000E+00 341Eminus186 000E+ 003000 30 123Eminus 82 368Evminus 82 125Eminus 116 374Eminus 116 599Eminus 130 599Eminus 130 460Eminus 134 138Eminus 133

81000 10 439Eminus 200 000E+ 00 198Eminus 194 000E+ 00 451Eminus 197 000E+ 00 899Eminus 201 000E+ 002000 20 157Eminus 20 470Eminus 20 104Eminus 93 313Eminus 93 110Eminus 148 330Eminus 148 409Eminus 151 123Eminus 1503000 30 189Eminus 09 568Eminus 09 454Eminus 10 136Eminus 09 114Eminus 08 343Eminus 08 134Eminus 09 403Eminus 09

91000 10 549Eminus 01 672Eminus 01 130Eminus 01 202Eminus 01 202Eminus01 573Eminus 01 142Eminus 01 142Eminus 012000 20 205E+ 00 131E+ 00 783Eminus 01 143E+ 00 683Eminus 01 129E+ 00 432Eminus 01 108E+ 003000 30 112E+ 00 239E+ 00 999Eminus 01 230E+ 00 956Eminus 01 252E+ 00 912Eminus 01 223E+ 00

101000 10 223Eminus 138 223Eminus 138 223Eminus 138 315Eminus 137 435Eminus 137 131Eminus 136 110Eminus 139 331Eminus 1392000 20 379Eminus 148 114Eminus 147 787Eminus 149 236Eminus 148 419Eminus 147 126Eminus 146 873Eminus 153 262Eminus 1523000 30 443Eminus 126 133Eminus 125 752Eminus 133 226Eminus 132 157Eminus 128 471Eminus 128 138Eminus 133 414Eminus 133

111000 10 375Eminus 187 000E+ 00 157Eminus 192 000E+ 00 215Eminus 191 000E+ 00 899Eminus 198 000E+ 002000 20 529Eminus 193 000E+ 00 253Eminus 195 000E+ 00 845Eminus 195 000E+ 00 983Eminus 197 000E+ 003000 30 482Eminus 154 144Eminus 153 884Eminus 159 265Eminus 158 549Eminus 168 000E+ 00 575Eminus173 000E+ 00

121000 10 113Eminus 01 340Eminus 01 167Eminus 02 502Eminus 02 228Eminus 02 685Eminus 02 289Eminus 03 866Eminus 032000 20 139E+ 01 412E+ 01 503E+ 00 150E+ 01 295E+ 00 884E+ 00 167E+ 00 501E+ 003000 30 745E+ 00 223E+ 01 122E+ 01 366E+ 01 874E+ 00 260E+ 01 494E+ 00 148E+ 01

131000 10 804Eminus 26 241Eminus 25 801Eminus 27 240Eminus 26 359Eminus 27 108Eminus 26 141Eminus 27 102Eminus 262000 20 142Eminus 08 426Eminus 08 264Eminus 11 793Eminus 11 329Eminus 10 986Eminus 10 214Eminus 12 643Eminus 123000 30 620Eminus 03 186Eminus 02 141Eminus 03 423Eminus 03 936Eminus 03 281Eminus 02 141Eminus 03 383Eminus 03

141000 10 362Eminus 38 109Eminus 37 362Eminus 38 109Eminus 37 592Eminus 36 177Eminus 35 195Eminus 38 586Eminus 382000 20 627Eminus 10 188Eminus 09 138Eminus 09 414Eminus 09 791Eminus 13 237Eminus 12 117Eminus13 351Eminus 133000 30 256Eminus 06 767Eminus 06 480E+ 01 144E+ 02 134Eminus 06 403Eminus 06 488Eminus 09 146Eminus 08

151000 10 110Eminus 294 000E+ 00 319Eminus 301 000E+ 00 278Eminus 307 000E+ 00 321Eminus 308 000E+ 002000 20 616Eminus 271 000E+ 00 509Eminus 276 000E+ 00 374Eminus 270 000E+ 00 485Eminus 268 000E+ 003000 30 308Eminus 207 000E+ 00 104Eminus 200 000E+ 00 812Eminus 209 000E+ 00 306Eminus 212 000E+ 00

Note ldquolsquoMeanrdquorsquo shows mean value and ldquoStd devrdquo indicates the standard deviation )e best results among the four PSO algorithms are presented in bold

Computational Intelligence and Neuroscience 13

Table 3 Results of Studentrsquos T-test for all techniques

F Iter DIMWE-PSO vs PSO WE-PSO vs SO-PSO WE-PSO vs H-PSO

T-value Sig T-value Sig T-value Sig

11000 10 +102 WE-PSO +099 WE-PSO +075 WE-PSO2000 20 +100 WE-PSO +048 WE-PSO minus083 H-PSO3000 30 +100 WE-PSO +100 WE-PSO +100 WE-PSO

21000 10 +3071 WE-PSO +1567 WE-PSO +121 WE-PSO2000 20 +882 WE-PSO +10756 WE-PSO +1113 WE-PSO3000 30 +063 WE-PSO +3429 WE-PSO +065 WE-PSO

31000 10 +099 WE-PSO +100 WE-PSO +083 WE-PSO2000 20 +100 WE-PSO +100 WE-PSO +26314 WE-PSO3000 30 +100 WE-PSO +52529 WE-PSO +093 WE-PSO

41000 10 +019 WE-PSO +099 WE-PSO +100 WE-PSO2000 20 +099 WE-PSO +084 WE-PSO minus098 H-PSO3000 30 +086 WE-PSO +026 WE-PSO +097 WE-PSO

51000 10 +079 WE-PSO +044 WE-PSO +098 WE-PSO2000 20 +029 WE-PSO +057 WE-PSO +26314 WE-PSO3000 30 +006 WE-PSO +262244 WE-PSO +096 WE-PSO

61000 10 +080 WE-PSO +098 WE-PSO +017 WE-PSO2000 20 +086 WE-PSO +096 WE-PSO +096 WE-PSO3000 30 +099 WE-PSO +098 WE-PSO +089 WE-PSO

71000 10 +090 WE-PSO +095 WE-PSO +100 WE-PSO2000 20 +100 WE-PSO +100 WE-PSO +100 WE-PSO3000 30 +100 WE-PSO +100 WE-PSO +100 WE-PSO

81000 10 +075 WE-PSO +098 WE-PSO +055 WE-PSO2000 20 +48397 WE-PSO +100 WE-PSO +091 WE-PSO3000 30 +141 WE-PSO minus689 SO-PSO +52224 WE-PSO

91000 10 +5367 WE-PSO minus300 SO-PSO +8230 WE-PSO2000 20 +8484 WE-PSO +3346 WE-PSO +1608 WE-PSO3000 30 +47001 WE-PSO +39054 WE-PSO +41626 WE-PSO

101000 10 +100 WE-PSO +084 WE-PSO +067 WE-PSO2000 20 +100 WE-PSO +081 WE-PSO +089 WE-PSO3000 30 +100 WE-PSO +098 WE-PSO +095 WE-PSO

111000 10 +097 WE-PSO +192 WE-PSO +100 WE-PSO2000 20 +100 WE-PSO +100 WE-PSO +100 WE-PSO3000 30 +087 WE-PSO +098 WE-PSO +100 WE-PSO

121000 10 +091 WE-PSO +058 WE-PSO +027 WE-PSO2000 20 +226 WE-PSO +108 WE-PSO +027 WE-PSO3000 30 +184 WE-PSO +225 WE-PSO +241 WE-PSO

131000 10 +098 WE-PSO +048 WE-PSO +084 WE-PSO2000 20 +072 WE-PSO +078 WE-PSO +098 WE-PSO3000 30 +011 WE-PSO +039 WE-PSO +086 WE-PSO

141000 10 +057 WE-PSO +015 WE-PSO +082 WE-PSO2000 20 +0151 WE-PSO +149 WE-PSO +150 WE-PSO3000 30 +090 WE-PSO +132 WE-PSO +132 WE-PSO

151000 10 +100 WE-PSO +100 WE-PSO +100 WE-PSO2000 20 +100 WE-PSO minus050 SO-PSO +099 WE-PSO3000 30 +083 WE-PSO +100 WE-PSO +100 WE-PSO

Table 4 Dataset description

S no Datasets Number of total units Disc feature Nature No of inputs No of classes1 Iris 150 mdash Real 4 32 Diabetes 768 mdash Real 8 23 Heart 270 mdash Real 13 24 Wine 178 mdash Real 13 35 Seed 210 mdash Real 7 36 Vertebral 310 mdash Real 6 27 Blood tissue 748 mdash Real 5 28 Horse 368 mdash Real 27 29 Mammography 961 mdash Real 6 2

14 Computational Intelligence and Neuroscience

12 Conclusion

)e performance of PSO depends on the initialization ofthe population In our work we have initialized theparticles of PSO by using a novel quasirandom sequencecalled the WELL sequence However the velocity andposition vector of particles are modified in a randomsequence fashion )e importance of initializing theparticles by using a quasirandom sequence is highlightedin this study )e experimental results explicitly statethat the WELL sequence is optimal for the populationinitialization due to its random nature Moreover thesimulation results have shown that WE-PSO outper-forms the PSO S-PSO and H-PSO approaches )etechniques are also applied to neural network trainingand provide significantly better results than conventionaltraining algorithms including standard PSO S-PSO andH-PSO approaches respectively )e solution provideshigher diversity and increases the potential to searchlocally )e experimental results depict that our ap-proach has excellent accuracy of convergence and pre-vents the local optima Our technique is much better

when it is compared to the traditional PSO and otherinitialization approaches for PSO as evident in Figure 21)e use of mutation operators with the initializationtechnique may be evaluated on large-scale search spacesin the future )e core objective of this research isuniversal but relevant to the other stochastic-basedmetaheuristic algorithm which will establish our futuredirection

Data Availability

)e data used to support the findings of this study are availablefrom the corresponding author upon reasonable request

Disclosure

)is work is part of the PhD thesis of the student

Conflicts of Interest

)e authors declare that there are no conflicts of interestregarding the publication of this paper

Table 5 Classification accuracy results

Sno Datasets Type

BPA PSO SO-PSO H-PSO WE-PSOTr

acc ()Ts acc()

Tr acc()

Ts acc()

Tr acc()

Ts acc()

Tr acc()

Ts acc()

Tr acc()

Ts acc()

1 Iris 3-Class 982 957 99 966 988 973 989 96 992 982 Diabetes 2-Class 861 653 887 691 893 691 884 716 904 7413 Heart 2-Class 785 683 995 725 9913 675 9913 725 100 7754 Wine 3-Class 673 6217 7424 6111 8181 6666 7575 6744 7575 6965 Seed 3-Class 842 7056 9757 7777 8727 8444 9818 7777 9818 91116 Vertebral 2-Class 914 8495 9603 9285 9642 9285 9640 9285 9761 94647 Blood tissue 2-Class 763 7347 908 786 8694 7866 8389 70 8474 848 Horse 2-Class 644 5787 6902 50 7419 52 7290 56 7935 589 Mammography 2-Class 7736 7126 8082 7666 6894 63 88 85 9771 9666

000

Iris

Dia

bete

s

Hea

rt

Win

e

Seed

Vert

ebra

l

Hor

se

Mam

mog

raph

y

Bloo

d tis

sue

2000

4000

6000()

8000

10000

12000

BPA

PSO

SO-PSO

H-PSO

WE-PSO

Figure 21 Classification testing accuracy results

Computational Intelligence and Neuroscience 15

References

[1] K Deb ldquoMulti-objective optimizationrdquo in Search Method-ologies pp 403ndash449 Springer Berlin Germany 2014

[2] R Vandenberghe N Nelissen E Salmon et al ldquoBinaryclassification of 18F-flutemetamol PET using machinelearning comparison with visual reads and structural MRIrdquoNeuroImage vol 64 pp 517ndash525 2013

[3] V Ganganwar ldquoAn overview of classification algorithms forimbalanced datasetsrdquo International Journal of EmergingTechnology and Advanced Engineering vol 2 no 4 pp 42ndash472012

[4] J Kennedy ldquoSwarm intelligencerdquo in Handbook of Nature-Inspired and Innovative Computing pp 187ndash219 SpringerBerlin Germany 2006

[5] G Beni and J Wang ldquoSwarm intelligence in cellular roboticsystemsrdquo in Robots and Biological Systems Towards a NewBionics pp 703ndash712 Springer Berlin Germany 1993

[6] J Kennedy and R C Eberhart ldquoParticle swarm optimizationrdquoProceedings of ICNNrsquo95mdashInternational Conference on NeuralNetworks pp 1942ndash1948 1995

[7] J Salerno ldquoUsing the particle swarm optimization techniqueto train a recurrent neural modelrdquo in Proceedings of the NinthIEEE International Conference on Tools with Artificial Intel-ligence pp 45ndash49 Newport Beach CA USA November 1997

[8] R Storn and K Price ldquoDifferential evolutionndasha simple andefficient heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4pp 341ndash359 1997

[9] P P Palmes T Hayasaka and S Usui ldquoMutation-basedgenetic neural networkrdquo IEEE Transactions on Neural Net-works vol 16 no 3 pp 587ndash600 2005

[10] W H Bangyal J Ahmad and H T Rauf ldquoOptimization ofneural network using improved bat algorithm for data clas-sificationrdquo Journal of Medical Imaging and Health Infor-matics vol 9 no 4 pp 670ndash681 2019

[11] A Cervantes I M Galvan and P Isasi ldquoAMPSO a newparticle swarm method for nearest neighborhood classifica-tionrdquo IEEE Transactions on Systems Man and CyberneticsPart B (Cybernetics) vol 39 no 5 pp 1082ndash1091 2009

[12] W H Bangyal J Ahmed and H T Rauf ldquoA modified batalgorithm with torus walk for solving global optimisationproblemsrdquo International Journal of Bio-Inspired Computa-tion vol 15 no 1 pp 1ndash13 2020

[13] C Grosan A Abraham and M Nicoara ldquoSearch optimi-zation using hybrid particle sub-swarms and evolutionaryalgorithmsrdquo International Journal of Simulation SystemsScience amp Technology vol 6 no 10 pp 60ndash79 2005

[14] M Junaid W H Bangyal and J Ahmed ldquoA novel bat al-gorithm using sobol sequence for the initialization of pop-ulationrdquo in IEEE 23rd International Multitopic Conference(INMIC) pp 1ndash6 Bahawalpur Pakistan November 2020

[15] W H Bangyal J Ahmed H T Rauf and S Pervaiz ldquoAnoverview of mutation strategies in bat algorithmrdquo Interna-tional Journal of Advanced Computer Science and Applications(IJACSA) vol 9 pp 523ndash534 2018

[16] D E Knuth Fundamental Algorithms Be Art of ComputerProgramming Addison-Wesley Boston MA USA 1973

[17] J E Gentle Random Number Generation and Monte CarloMethods Springer Science amp Business Media Berlin Ger-many 2006

[18] N Q Uy N X Hoai R I McKay and P M Tuan ldquoIniti-alising PSO with randomised low-discrepancy sequences thecomparative resultsrdquo in Proceedings of the IEEE Congress on

Evolutionary Computation CEC 2007 pp 1985ndash1992 Sin-gapore September 2007

[19] S Kimura and K Matsumura ldquoGenetic algorithms using low-discrepancy sequencesrdquo in Proceedings of the 7th AnnualConference on Genetic and Evolutionary Computation ACMpp 1341ndash1346 Washington DC USA June 2005

[20] R Brits A P Engelbrecht and F Van den Bergh ldquoA nichingparticle swarm optimizerrdquo in Proceedings of the 4th Asia-PacificConference on Simulated Evolution and Learning pp 692ndash696Orchid Country Club Singapore November 2002

[21] J Ander Coput ldquoVerteilungsfunktionen I amp IIrdquoNederl AkadWetensch Procvol 38 pp 1058ndash1066 1935

[22] R A Krohling and L dos Santos Coelho ldquoPSO-E particleswarm with exponential distributionrdquo in Proceedings of theIEEE Congress on Evolutionary Computation CEC 2006pp 1428ndash1433 Vancouver Canada July 2006

[23] R )angaraj M Pant and K Deep ldquoInitializing pso withprobability distributions and low-discrepancy sequencesthe comparative resultsrdquo in Proceedings of the WorldCongress on Nature amp Biologically Inspired ComputingNaBIC 2009 pp 1121ndash1126 IEEE Coimbatore IndiaDecember 2009

[24] D E Rumelhart G E Hinton and R J Williams ldquoLearningrepresentations by back-propagating errorsrdquoNature vol 323no 6088 pp 533ndash536 1986

[25] K E Parsopoulos andM N Vrahatis ldquoInitializing the particleswarm optimizer using the nonlinear simplex methodrdquo Ad-vances in Intelligent Systems Fuzzy Systems EvolutionaryComputation World Scientific and Engineering Academy andSociety Press Stevens Point WI USA 2002

[26] M Richards and D Ventura ldquoChoosing a starting configu-ration for particle swarm optimizationrdquo in Proceedings of theIEEE International Joint Conference on Neural Networkspp 2309ndash2312 Budapest Hungary July 2004

[27] H Jabeen Z Jalil and A R Baig ldquoOpposition based ini-tialization in particle swarm optimization (O-PSO)rdquo inProceedings of the 11th Annual Conference Companion onGenetic and Evolutionary Computation Conference LateBreaking Papers pp 2047ndash2052 Montreal Quebec CanadaJuly 2009

[28] A L Gutierrez ldquoComparison of different pso initializationtechniques for high dimensional search space problems a testwith fss and antenna arraysrdquo in Proceedings of the 5th Eu-ropean Conference on Antennas and Propagation (EUCAP)pp 965ndash969 IEEE Rome Italy April 2011

[29] A Subasi ldquoClassification of EMG signals using PSO opti-mized SVM for diagnosis of neuromuscular disordersrdquoComputers in Biology and Medicine vol 43 no 5 pp 576ndash586 2013

[30] S Dehuri R Roy S-B Cho and A Ghosh ldquoAn improvedswarm optimized functional link artificial neural network(ISO-FLANN) for classificationrdquo Journal of Systems andSoftware vol 85 no 6 pp 1333ndash1345 2012

[31] Z Liu P Zhu W Chen and R-J Yang ldquoImproved particleswarm optimization algorithm using design of experimentand data mining techniquesrdquo Structural andMultidisciplinaryOptimization vol 52 no 4 pp 813ndash826 2015

[32] S Chatterjee S Sarkar S Hore N Dey A S Ashour andV E Balas ldquoParticle swarm optimization trained neuralnetwork for structural failure prediction of multistoried RCbuildingsrdquo Neural Computing and Applications vol 28 no 8pp 2005ndash2016 2016

[33] Y Xue T Tang and A X Liu ldquoLarge-scale feedforwardneural network optimization by a self-adaptive strategy and

16 Computational Intelligence and Neuroscience

parameter based particle swarm optimizationrdquo IEEE Accessvol 7 pp 52473ndash52483 2019

[34] F E F Junior and G G Yen ldquoParticle swarm optimization ofdeep neural networks architectures for image classificationrdquoSwarm and Evolutionary Computations vol 49 pp 62ndash742019

[35] O Tarkhaneh and H Shen ldquoTraining of feedforward neuralnetworks for data classification using hybrid particle swarmoptimization Mantegna Levy flight and neighbourhoodsearchrdquo Heliyon vol 5 no 4 Article ID e01275 2019

[36] A Herliana T Arifin S Susanti and A B Hikmah ldquoFeatureselection of diabetic retinopathy disease using particle swarmoptimization and neural networkrdquo in Proceedings of the 20186th International Conference on Cyber and IT Service Man-agement (CITSM) pp 1ndash4 Parapat Indonesia August 2018

[37] M K Sarkaleh and A Shahbahrami ldquoClassification of ECGarrhythmias using discrete wavelet transform and neuralnetworksrdquo International Journal of Computer Science Engi-neering and Applications vol 2 no 1 pp 1ndash13 2012

[38] R J Schalkoff Artificial Neural Networks McGraw-Hill NewYork NY USA 1997

[39] G P Zhang ldquoNeural networks for classification a surveyrdquoIEEE Transactions on Systems Man and Cybernetics Part C(Applications and Reviews) vol 30 no 4 pp 451ndash462 2000

[40] M Castellani ldquoEvolutionary generation of neural networkclassifiers-An empirical comparisonrdquo Neurocomputingvol 99 pp 214ndash229 2013

[41] G E Hinton J L Mcclelland and D Rumelhart ldquoDistributedrepresentationsrdquo Parallel Distributed Processingexplorationsin the Microstructure of Cognition Foundation MIT PressCambridge MA USA 1986

[42] M Matsumoto and T Nishimura ldquoMersenne twister a 623-dimensionally equidistributed uniform pseudo-randomnumber generatorrdquo ACM Transactions on Modeling andComputer Simulation (TOMACS) vol 8 no 1 pp 3ndash30 1995

[43] I Y M Sobolrsquo ldquoOn the distribution of points in a cube and theapproximate evaluation of integralsrdquo Zhurnal VychislitelrsquonoiMatematiki I Matematicheskoi Fiziki vol 7 no 4 pp 784ndash802 1967

[44] J H Halton ldquoAlgorithm 247 radical-inverse quasi-randompoint sequencerdquo Communications of the ACM vol 7 no 12pp 701-702 1964

[45] F Panneton P Lrsquoecuyer and M Matsumoto ldquoImprovedlong-period generators based on linear recurrences modulo2rdquo ACM Transactions on Mathematical Software (TOMS)vol 32 pp 11ndash16 2006

Computational Intelligence and Neuroscience 17

