This document is posted to help you gain knowledge. Please leave a comment to let me know what you think about it! Share it to your friends and learn new things together.
Transcript
Research ArticleBare-Bones Teaching-Learning-Based Optimization
Feng Zou12 Lei Wang1 Xinhong Hei1 Debao Chen2 Qiaoyong Jiang1 and Hongye Li1
1 School of Computer Science and Engineering Xirsquoan University of Technology Xirsquoan 710048 China2 School of Physics and Electronic Information Huaibei Normal University Huaibei 235000 China
Correspondence should be addressed to Lei Wang wangleeei163com
Received 20 February 2014 Accepted 7 April 2014 Published 10 June 2014
Academic Editors S Balochian and Y Zhang
Copyright copy 2014 Feng Zou et alThis is an open access article distributed under theCreative CommonsAttribution License whichpermits unrestricted use distribution and reproduction in any medium provided the original work is properly cited
Teaching-learning-based optimization (TLBO) algorithm which simulates the teaching-learning process of the class room is oneof the recently proposed swarm intelligent (SI) algorithms In this paper a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented to solve the global optimization problems In this method each learner of teacher phaseemploys an interactive learning strategy which is the hybridization of the learning strategy of teacher phase in the standard TLBOand Gaussian sampling learning based on neighborhood search and each learner of learner phase employs the learning strategyof learner phase in the standard TLBO or the new neighborhood search strategy To verify the performance of our approaches20 benchmark functions and two real-world problems are utilized Conducted experiments can been observed that the BBTLBOperforms significantly better than or at least comparable to TLBO and some existing bare-bones algorithms The results indicatethat the proposed algorithm is competitive to some other optimization algorithms
1 Introduction
Many real-life optimization problems are becomingmore andmore complex and difficult with the development of scientifictechnology So how to resolve these complex problems in anexactmanner within a reasonable time cost is very importantThe traditional optimization algorithms are difficult to solvethese complex nonlinear problems In recent years nature-inspired optimization algorithms which simulate naturalphenomena and have different design philosophies andcharacteristics such as evolutionary algorithms [1ndash3] andswarm intelligence algorithms [4ndash7] are a research fieldwhich simulates different natural phenomena to solve a widerange of problems In these algorithms the convergence rateof the algorithm is given prime importance for solving real-world optimization problems The ability of the algorithmsto obtain the global optima value is one aspect and the fasterconvergence is the other aspect
As a stochastic search scheme TLBO [8 9] is a newlypopulation-based algorithm based on swarm intelligence andhas characters of simple computation and rapid convergenceit has been extended to the function optimization engineer-ing optimization multiobjective optimization clustering
and so forth [9ndash17] TLBO is a parameter-free evolutionarytechnique and is also gaining popularity due to its abilityto achieve better results in comparatively faster convergencetime to genetic algorithms (GA) [1] particle swarm optimizer(PSO) [5] and artificial bee colony algorithm (ABC) [6]However in evolutionary computation research there havebeen always attempts to improve any given findings furtherand further This work is an attempt to improve the con-vergence characteristics of TLBO further without sacrificingthe accuracies obtained in TLBO and in some occasionstrying to even better the accuracies The aims of this paperare of threefold First authors propose an improved versionof TLBO namely BBTLBO Next the proposed techniqueis validated on unimodal and multimodal functions basedon different performance indicators The result of BBTLBOis compared with other algorithms Results of both thealgorithms are also compared using statistical paired 119905-testThirdly it is applied to solve the real-world optimizationproblem
The remainder of this paper is organized as followsThe TLBO algorithm is introduced in Section 2 Section 3presents a brief overview of some recently proposed
Hindawi Publishing Corporatione Scientific World JournalVolume 2014 Article ID 136920 17 pageshttpdxdoiorg1011552014136920
2 The Scientific World Journal
(1) Begin(2) Initialize119873 (number of learners) and119863 (number of dimensions)(3) Initialize learners119883 and evaluate all learners119883(4) Donate the best learner as Teacher and the mean of all learners119883 asMean(5) while (stopping condition not met)(6) for each learner119883
(25) endfor(26) Update the Teacher and theMean(27) endwhile(28) end
Algorithm 1 TLBO( )
bare-bones algorithms Section 4 describes the improvedteaching-learning-based optimization algorithm usingneighborhood search (BBTLBO) Section 5 presents the testson several benchmark functions and the experiments areconducted along with statistical tests The applications fortraining artificial neural network are shown in Section 6Conclusions are given in Section 7
2 Teaching-Learning-Based Optimization
Rao et al [8 9] first proposed a novel teaching-learning-based optimization (TLBO) inspired from the philosophy ofteaching and learning The TLBO algorithm is based on theeffect of the influence of a teacher on the output of learners ina class which is considered in terms of results or grades Theprocess ofworking of TLBO is divided into twopartsThefirstpart consists of ldquoteacher phaserdquo and the second part consistsof ldquolearner phaserdquo The ldquoteacher phaserdquo means learning fromthe teacher and the ldquolearner phaserdquo means learning throughthe interaction between learners
A good teacher is one who brings his or her learners up tohis or her level in terms of knowledge But in practice this isnot possible and a teacher can only move the mean of a classup to some extent depending on the capability of the classThis follows a random process depending on many factorsLet119872 be the mean and let 119879 be the teacher at any iteration 119879will try tomovemean119872 toward its own level so now the new
mean will be 119879 designated as119872new The solution is updatedaccording to the difference between the existing and the newmean according to the following expression
119899119890119908119883 = 119883 + 119903 times (119872new minus TF times119872) (1)
where TF is a teaching factor that decides the value of meanto be changed and 119903 is a random vector in which each elementis a random number in the range [0 1] The value of TF canbe either 1 or 2 which is again a heuristic step and decidedrandomly with equal probability as
TF = round [1 + rand (0 1)] (2)
Learners increase their knowledge by two differentmeans one through input from the teacher and the otherthrough interaction between themselves A learner interactsrandomly with other learners with the help of group discus-sions presentations formal communications and so forthA learner learns something new if the other learner hasmore knowledge than him or her Learner modification isexpressed as
119899119890119908119883119894=
119883119894+ 119903 lowast (119883
119894minus 119883119895) if119891 (119883
119894) lt 119891 (119883
119895)
119883119894+ 119903 lowast (119883
119895minus 119883119894) otherwise
(3)
As explained above the pseudocode for the implementa-tion of TLBO is summarized in Algorithm 1
The Scientific World Journal 3
3 Bare-Bones Algorithm
In this section we only presented a brief overview of somerecently proposed bare-bones algorithms
31 BBPSO and BBExp PSO is a swarm intelligence-basedalgorithm which is inspired by the behavior of birds flocking[5] In PSO each particle is attracted by its personal bestposition (119901best) and the global best position (119892best) foundso far Theoretical studies [18 19] proved that each particleconverges to the weighted average of 119901best and 119892best
lim119905rarrinfin
119883119894 (119905) =
1198881sdot 119892best + 1198882 sdot 119901best
1198881+ 1198882
(4)
where 1198881and 1198882are two leaning factors in PSO
Based on the convergence characteristic of PSO Kennedy[20] proposed a new PSO variant called bare-bones PSO(BBPSO) Bare-bones PSO retains the standard PSO socialcommunication but replaces dynamical particle update withsampling from a probability distribution based on 119892best and119901best119894 as follows
119909119894119895 (119905 + 1) = 119873(
119892best + 119901best119894119895 (119905)
2
100381610038161003816100381610038161003816119892best minus 119901best119894119895 (119905)
100381610038161003816100381610038161003816) (5)
where 119909119894119895(119905 + 1) is the 119895th dimension of the 119894th particle in
the population and119873 represents a Gaussian distributionwithmean (119892best + 119901best119894119895(119905))2 and standard deviation |119892best minus119901best119894119895(119905)|
Kennedy [20] proposed also an alternative version of theBBPSO denoted by BBExp where (5) is replaced by119909119894119895 (119905 + 1)
=
119873(
119892best + 119901best119894119895 (119905)
2
100381610038161003816100381610038161003816119892best minus 119901best119894119895 (119905)
(6)where rand (01) is a random value within [0 1] for the 119895thdimension For the alternative mechanism there is a 50chance that the search process is focusing on the previous bestpositions
32 BBDE GBDE and MGBDE Inspired by the BBPSOand DE Omran et al [21] proposed a new and efficientDE variant called bare-bones differential evolution (BBDE)The BBDE is a new almost parameter-free optimizationalgorithm that is a hybrid of the bare-bones particle swarmoptimizer and differential evolution Differential evolution isused tomutate for each particle the attractor associated withthat particle defined as a weighted average of its personal andneighborhood best positions For the BBDE the individual isupdated as follows119909119894119895 (119905 + 1)
where 119901best and 119892best are personal best position and the globalbest position 119903
1119895 is a random value within [0 1] for the 119895th
dimensionBased on the idea that the Gaussian sampling is a fine
tuning procedure which starts during exploration and iscontinued to exploitation Wang et al [22] proposed a newparameter-freeDE algorithm calledGBDE In theGBDE themutation strategy uses a Gaussian sampling method which isdefined by
10038161003816100381610038161003816119883best119895 (119905) minus 119909119894119895 (119905)
10038161003816100381610038161003816)
119909119894119895 (119905) otherwise(9)
where 119873 represents a Gaussian distribution with mean(119883best119895(119905)+119909119894119895(119905))2 and standard deviation |119883best119895(119905)minus119909119894119895(119905)|and CR is the probability of crossover
To balance the global search ability and convergence rateWang et al [22] proposed amodifiedGBDE (calledMGBDE)The mutation strategy uses a hybridization of GBDE andDEbest1 as follows
The bare-bones PSO utilizes this information by samplingcandidate solutions normally distributed around the for-mally derived attractor point That is the new position isgenerated by a Gaussian distribution for sampling the searchspace based on the 119892best and the 119901best at the current iterationAs a result the new position will be centered around theweighted average of 119901best and 119892best Generally speaking atthe initial evolutionary stages the search process focuses onexploration due to the large deviation With an increasingnumber of generations the deviation becomes smaller andthe search process will focus on exploitation From the searchbehavior of BBPSO the Gaussian sampling is a fine tuningprocedure which starts during exploration and is continuedto exploitation This can be beneficial for the search of manyevolutionary optimization algorithms Additionally the bare-bones PSO has no parameters to be tuned
Based on a previous explanation a new bare-bones TLBO(BBTLBO) with neighborhood search is proposed in this
4 The Scientific World Journal
Begin
Initialize learners size (NP) dimension (D) and hybridization factor (u)
Calculate the NTeacher and NMean of each learner
Modify each learner Xi in the class= + r lowast (NTeacher minus TF lowast NMean)= N((NTeacher + Xi)2 (NTeacher minus Xi))
newXi = u lowast newX1 + (1 minus u) lowast newX2Teacher phase
newXi better Xi
Xi = newXi Xi = Xi
Yes
Yes
Yes
Yes
No
No
No
No
Denote the NTeacheri and randomly select a Xk for each Xi
Learner phase
rand(0 1) lt 05
The original TLBO learning Neighborhood search strategy
newXi better Xi
Xi = newXi Xi = Xi
Termination criteria satisfied
End
gen = gen + 1
XiV1V2
Figure 1 Flow chart showing the working of BBTLBO algorithm
paper In fact for TLBO if the new learner has a betterfunction value than that of the old learner it is replacedwith the old one in the memory Otherwise the old one isretained in the memory In other words a greedy selectionmechanism is employed as the selection operation betweenthe old and the candidate one Hence the new teacher and thenew learner are the global best (119892best) and learnerrsquos personalbest (119901best) found so far respectively The complete flowchartof the BBTLBO algorithm is shown in Figure 1
41 Neighborhood Search It is known that birds of a featherflock together and people of a mind fall into the samegroup Just like evolutionary algorithms themselves thenotion of neighborhood is inspired by nature Neighborhoodtechnique is an efficient method to maintain diversity of
the solutions It plays an important role in evolutionaryalgorithms and is often introduced by researchers in orderto allow maintenance of a population of diverse individualsand improve the exploration capability of population-basedheuristic algorithms [23ndash26] In fact learners with similarinterests form different learning groups Because of his orher favor characteristic the learner maybe learns from theexcellent individual in the learning group
For the implementation of grouping various types ofconnected distances may be used Here we have used aring topology [27] based on the indexes of learners for thesake of simplicity In a ring topology the first individualis the neighbor of the last individual and vice versa Basedon the ring topology a 119896-neighborhood radius is definedwhere 119896 is a predefined integer number For each individual
The Scientific World Journal 5
NeighborhoodiXi
Ximinus1
Xi+1
Figure 2 Ring neighborhood topology with three members
its 119896-neighborhood radius consists of 2119896 + 1 individuals(including oneself) which are 119883
119894minus119896 119883
119894 119883
119894+119896 That is
the neighborhood size is 2119896 + 1 for a 119896-neighborhood Forsimplicity 119896 is set to 1 (Figure 2) in our algorithmThismeansthat there are 3 individuals in each learning group Oncegroups are constructed we can utilize them for updating thelearners of the corresponding group
42 Teacher Phase To balance the global and local searchability a modified interactive learning strategy is proposed inteacher phase In this learning phase each learner employs aninteractive learning strategy (the hybridization of the learningstrategy of teacher phase in the standard TLBO and Gaussiansampling learning) based on neighborhood search
In BBTLBO the updating formula of the learning for alearner 119883
119894in teacher phase is proposed by the hybridization
of the learning strategy of teacher phase and the Gaussiansampling learning as follows
where 119906 called the hybridization factor is a random numberin the range [0 1] for the 119895th dimension 119873119879119890119886119888ℎ119890119903 and119873119872119890119886119899 are the existing neighborhood best solution and theneighborhood mean solution of each learner and TF is ateaching factor which can be either 1 or 2 randomly
In the BBTLBO there is a (119906 lowast 100) chance that the119895th dimension of the 119894th learner in the population follows thebehavior of the learning strategy of teacher phase while theremaining (100 minus 119906lowast 100) follow the search behavior of theGaussian sampling in teacher phase This will be helpful tobalance the advantages of fast convergence rate (the attraction
of the learning strategy of teacher phase) and exploration (theGaussian sampling) in BBTLBO
43 Learner Phase At the same time in the learner phase alearner interacts randomly with other learners for enhancinghis or her knowledge in the class This learning method canbe treated as the global search strategy (shown in (3))
In this paper we introduce a new learning strategy inwhich each learner learns from the neighborhood teacher andthe other learner selected randomly of his or her correspond-ing neighborhood in learner phaseThis learningmethod canbe treated as the neighborhood search strategy Let 119899119890119908119883
119894
represent the interactive learning result of the learner119883119894This
neighborhood search strategy can be expressed as follows
where 1199031and 1199032are random vectors in which each element
is a random number in the range [0 1] 119873119879119890119886119888ℎ119890119903 is theteacher of the learner 119883
119894rsquos corresponding neighborhood
and the learner 119883119896is selected randomly from the learnerrsquos
corresponding neighborhoodIn BBTLBO each learner is probabilistically learning by
means of the global search strategy or the neighborhoodsearch strategy in learner phaseThat is about 50of learnersin the population execute the learning strategy of learnerphase in the standard TLBO (shown in (3)) while theremaining 50execute neighborhood search strategy (shownin (12)) This will be helpful to balance the global search andlocal search in learner phase
Moreover compared to the original TLBO BBTLBOonlymodifies the learning strategies Therefore both the originalTLBO and BBTLBO have the same time complexity 119874 (NP sdot119863 sdot Genmax) where NP is the number of the population 119863is the number of dimensions and Genmax is the maximumnumber of generations
As explained above the pseudocode for the implementa-tion of BBTLBO is summarized in Algorithm 2
5 Functions Optimization
In this section to illustrate the effectiveness of the proposedmethod 20 benchmark functions are used to test the effi-ciency of BBTLBO To compare the search performance ofBBTLBO with some other methods other different algo-rithms are also simulated in the paper
51 Benchmark Functions Thedetails of 20 benchmark func-tions are shown in Table 1 Among 20 benchmark functions1198651to 1198659are unimodal functions and 119865
10to 11986520
are multi-modal functions The searching range and theory optima forall functions are also shown in Table 1
52 Parameter Settings All the experiments are carried outon the same machine with a Celoron 226GHz CPU 2GBmemory andWindows XP operating system withMatlab 79
6 The Scientific World Journal
(1) Begin(2) Initialize119873 (number of learners)119863 (number of dimensions) and hybridization factor 119906(3) Initialize learners 119883 and evaluate all learners119883(4) while (stopping condition not met)(5) for each learner119883
119894of the class Teaching phase
(6) TF = round(1 + rand(0 1))(7) Donate the119873 119879119890119886119888ℎ119890119903 and the119873 119872119890119886119899 in its neighborhood for each learner(8) Updating each learner according (11)(9) Accept 119899119890119908119883
119894if 119891(119899119890119908119883
119894) is better than 119891(119883
119894)
(10) endfor(11) for each learner119883
119894of the class Learning phase
(12) Randomly select one learner119883119896 such that 119894 = 119896
(13) if rand(0 1) lt 05(14) Updating each learner according (3)(15) else(16) Donate the119873119879119890119886119888ℎ119890119903 in its neighborhood for each learner(17) Updating each learner according (12)(18) endif(19) Accept 119899119890119908119883
119894if 119891(119899119890119908119883119894) is better than 119891(119883
119894)
(20) endfor(21) endwhile(22) end
Algorithm 2 BBTLBO( )
For the purpose of reducing statistical errors each algorithmis independently simulated 50 times For all algorithms thepopulation size was set to 20 Population-based stochasticalgorithms use the same stopping criterion that is reachinga certain number of function evaluations (FEs)
53 Effect of Variation in Parameter 119906 The hybridizationfactor u is set to 00 01 03 05 07 09 10 Comparativetests have been performed using different 119906 In our exper-iment the maximal FEs are used as ended condition ofalgorithm namely 40000 for all test functions Table 2 showsthe mean optimum solutions and the standard deviation ofthe solutions obtained using different hybridization factor119906 in the 50 independent runs The best results amongthe algorithms are shown in bold Figure 3 presents therepresentative convergence graphs of different benchmarkfunctions in terms of the mean fitness values achieved byusing different hybridization factor 119906 on all test functionsDue to the tight space limitation some sample graphs areillustrated
The comparisons in Table 2 and Figure 3 show that whenthe hybridization factor 119906 is set to 09 BBTLBOoffers the bestperformance on 20 test functions Hence the hybridizationfactor 119906 is set to 09 in the following experiments
54 Comparison of BBTLBO with Some Similar Bare-BonesAlgorithms In this section we compare BBTLBO with fiveother recently proposed three bare-bones DE variants andtwo bare-bones PSO algorithms Our experiment includestwo series of comparisons in terms of the solution accuracyand the solution convergence (convergence speed and successrate) We compared the performance of BBTLBO with other
similar bare-bones algorithms including BBPSO [20] BBExp[20] BBDE [21] GBDE [22] and MGBDE [22]
541 Comparisons on the Solution Accuracy In our exper-iment the maximal FEs are used as ended condition ofalgorithm namely 40000 for all test functions The resultsare shown in Table 3 in terms of the mean optimum solutionand the standard deviation of the solutions obtained in the 50independent runs by each algorithm on 20 test functionsThebest results among the algorithms are shown in bold Figure 4presents the convergence graphs of different benchmarkfunctions in terms of the mean fitness values achieved by 7algorithms for 50 independent runs Due to the tight spacelimitation some sample graphs are illustrated
From Table 3 it can be observed that the mean optimumsolution and the standard deviation of all algorithms performwell for the functions 119865
15and 11986517 Although BBExp performs
better than BBTLBO on function 1198659and MGBDE performs
better than BBTLBO on function 11986520 our approach BBTLBO
achieves better results than other algorithms on the rest of testfunctions Table 3 and Figure 4 conclude that the BBTLBOhas a good performance of the solution accuracy for testfunctions in this paper
542 Comparison of the Convergence Speed and SR In orderto compare the convergence speed and successful rate (SR)of different algorithms we select a threshold value of theobjective function for each test function For other functionsthe threshold values are listed in Table 4 In our experimentthe stopping criterion is that each algorithm is terminatedwhen the best fitness value so far is below the predefinedthreshold value (119879 Value) or the number of FEs reaches to
The Scientific World Journal 7
Table 1 Details of numerical benchmarks used
Function Formula 119863 Range Optima
Sphere 1198651(119909) =
119863
sum
119894=1
1199092
11989430 [minus100 100] 0
Sum square 1198652(119909) =
119863
sum
119894=1
1198941199092
11989430 [minus100 100] 0
Quadric 1198653(119909) =
119863
sum
119894=1
1198941199094
119894+ random(0 1) 30 [minus128 128] 0
Step 1198654(119909) =
119863
sum
119894=1
(lfloor119909119894+ 05rfloor)
2 30 [minus100 100] 0
Schwefel 12 1198655(119909) =
119863
sum
119894=1
(
119894
sum
119895=1
119909119895)
2
30 [minus100 100] 0
Schwefel 221 1198656(119909) = max 1003816100381610038161003816119909119894
1003816100381610038161003816 1 le 119894 le 119863 30 [minus100 100] 0
Schwefel 222 1198657(119909) =
119863
sum
119894=1
1003816100381610038161003816119909119894
1003816100381610038161003816+
119863
prod
119894=1
1003816100381610038161003816119909119894
100381610038161003816100381630 [minus10 10] 0
Zakharov 1198658(119909) =
119863
sum
119894=1
1199092
119894+ (
119863
sum
119894=1
05119894119909119894)
2
+ (
119863
sum
119894=1
05119894119909119894)
4
30 [minus100 100] 0
Rosenbrock 1198659(119909) =
119863minus1
sum
119894=1
lfloor100(1199092
119894minus 119909119894+1)2
+ (119909119894minus 1)2
rfloor 30 [minus2048 2048] 0
Ackley 11986510(119909) = 20 minus 20 exp((minus1
5)radic(
1
119863)
119863
sum
119894=1
1199092
119894) minus exp(( 1
119863)
119863
sum
119894=1
cos (2120587119909119894)) + 119890 30 [minus32 32] 0
Rastrigin 11986511(119909) =
119863
sum
119894=1
(1199092
119894minus 10 cos (2120587119909
119894) + 10) 30 [minus512 512] 0
Weierstrass11986512(119909) =
119863
sum
119894=1
(
119896maxsum
119896=0
[119886119896 cos (2120587119887119896 (119909
119894+ 05))]) minus 119863
119896maxsum
119896=0
[119886119896 cos (2120587119887119896 times 05)]
119886 = 05 119887 = 3 119896max = 20
30 [minus05 05] 0
Griewank 11986513(119909) =
119863
sum
119894=1
(1199092
119894
4000) minus
119899
prod
119894=1
cos(119909119894
radic119894
) + 1 30 [minus600 600] 0
Schwefel 11986514(119909) = 4189829119863 +
119863
sum
119894=1
(minus119909119894sinradicabs(119909
119894)) 30 [minus500 500] 0
Bohachevsky1 11986515(119909) = 119909
2
1+ 21199092
2minus 03 cos (3120587119909
1) minus 04 cos (4120587119909
2) + 07 2 [minus100 100] 0
Bohachevsky2 11986516(119909) = 119909
2
1+ 21199092
2minus 03cos (3120587119909
1) lowast cos (4120587119909
2) + 03 2 [minus100 100] 0
Bohachevsky3 11986517(119909) = 119909
2
1+ 21199092
2minus 03cos((3120587119909
1) + (4120587119909
2)) + 03 2 [minus100 100] 0
Shekel5 11986518(119909) = minus
5
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus101532
Shekel7 11986519(119909) = minus
7
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus104029
Shekel10 11986520(119909) = minus
10
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus105364
the maximal FEs 40000 The results are shown in Table 4in terms of the mean number of FEs (MFEs) required toconverge to the threshold and successful rate (SR) in the50 independent runs ldquoNaNrdquo represents that no runs of thecorresponding algorithm converged below the predefinedthreshold before meeting the maximum number of FEs Thebest results among the six algorithms are shown in boldface
FromTable 5 it can be observed that all algorithms hardlyconverge to the threshold for unimodal functions 119865
3 1198655 1198656
and 1198658and multimodal functions 119865
11 11986512 and 119865
14 BBTLBO
converges to the threshold except for functions 1198653 1198659 and
11986514 From the results of total average FEs BBTLBO converges
faster than other algorithms on all unimodal functions andthe majority of multimodal functions except for functions119865151198651611986519 and119865
20The acceleration rates between BBTLBO
and other algorithms are mostly 10 for functions 1198651 1198652 1198654
1198657 1198659 11986510 and 119865
13 From the results of total average SR
BBTLBO achieves the highest SR for those test functions ofwhich BBTLBO successfully converges to the threshold valueIt can be concluded that the BBTLBOhas a good performanceof convergence speed and successful rate (SR) of the solutionsfor test functions in this paper
Figure 3 Comparison of the performance curves using different 119906
55 Comparison of BBTLBO with DE Variants PSO Variantsand Some TLBO Variants In this section we comparedthe performance of BBTLBO with other optimization algo-rithms including jDE [28] SaDE [29] PSOcfLocal [27]PSOwFIPS [30] and TLBO [8 9] In our experiment themaximal FEs are used as the stopping criterion of all algo-rithms namely 40000 for all test functions The results areshown in Table 5 in terms of the mean optimum solutionand the standard deviation of the solutions obtained in the50 independent runs by each algorithm on 20 test functions
where ldquo119908119905119897rdquo summarizes the competition results amongBBTLBO and other algorithms The best results among thealgorithms are shown in boldface
The comparisons in Table 5 show that that all algorithmsperform well for 119865
15 11986516 and 119865
17 Although SaDE outper-
forms BBTLBOon11986514 PSOcfLocal outperforms BBTLBOon
1198659and PSOwFIPS outperforms BBTLBO on 119865
19and 11986520 and
BBTLBO offers the highest accuracy on functions 1198653 1198654 1198655
1198657 1198658 11986510 11986511 and 119865
18 ldquo119908119905119897rdquo shows that BBTLBO offers
well accuracy for the majority of test functions in this paper
The Scientific World Journal 11
0 05 1 15 2 25 3 35 4
0
1
2
3
FEs
minus4
minus3
minus2
minus1
times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(a) 1198653 Quadric
0 05 1 15 2 25 3 35 41
15
2
25
3
35
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)(b) 1198659 Rosenbrock
0 05 1 15 2 25 3 35 433
34
35
36
37
38
39
4
41
42
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(c) 11986518 Shekel5
0
0
05 1 15 2 25 3 35 4FEs
Mea
n fit
ness
times104
minus10
minus9
minus8
minus7
minus6
minus5
minus4
minus3
minus2
minus1
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
(d) 11986514 Schwefel
Figure 4 Comparison of the performance curves using different algorithms
Table 5 concludes that BBTLBO has a good performance ofthe solution accuracy for all unimodal optimization problemsand most complex multimodal optimization problems
6 Two Real-World Optimization Problems
In this section to show the effectiveness of the proposedmethod the proposed BBTLBO algorithm is applied toestimate parameters of two real-world problems
61 Nonlinear Function Approximation The artificial neuralnetwork trained by our BBTLBO algorithm is a three-layer
Input x Output y
Desired output dBBTLBO algorithm
ANN
minus
Figure 5 BBTLBO-based ANN
feed-forward network and the basic structure of the proposedscheme is depicted in Figure 5 The inputs are connectedto all the hidden units which in turn all connected to all
12 The Scientific World Journal
Table 4 The mean number of FEs and SR with acceptable solutions using different algorithms
Fun 119905 value BBPSO BBExp BBDE GBDE MGBDE BBTLBOMFEs SR MFEs SR MFEs SR MFEs SR MFEs SR MFEs SR
the outputs The variables consist of neural network weightsand biases Suppose a three-layer forward neural networkarchitecture with 119872 input units 119873 hidden units and 119870
output units and the number of the variables is shown asfollows
For neural network training the aim is to find a set ofweights with the smallest error measure Here the objectivefunction is the mean sum of squared errors (MSE) over alltraining patterns which is shown as follows
MSE = 1
119876 lowast 119870
119876
sum
119894=1
119870
sum
119895
1
2(119889119894119895minus 119910119894119895)2
(14)
where 119876 is the number of training data set 119870 is the numberof output units 119889
119894119895is desired output and 119910
119894119895is output inferred
from neural networkIn this example a three-layer feed-forward ANN with
one input unit five hidden units and one output unit isconstructed tomodel the curve of a nonlinear functionwhichis described by the following equation [31]
119910 = sin (2119909) exp (minus2119909) (15)
In this case activation function used in the output layer isthe sigma function and activation function used in the outputlayer is linear The number (dimension) of the variables is16 for BBTLBO-based ANN In order to train the ANN
200 pairs of data are chosen from the real model For eachalgorithm 50 runs are performed The other parametersare the same as those of the previous investigations Theresults are shown in Table 6 in terms of the mean MSEand the standard deviation obtained in the 50 independentruns for three methods Figure 6 shows the predicted timeseries for training and test using different algorithms It canconclude that the approximation achieved by BBTLBO hasgood performance
62 Tuning of PID Controller The continuous form of adiscrete-type PID controller with a small sampling period Δ119905is described as follows [32]
119890 [119894] sdot Δ119905 + 119870119863 sdot119890 [119896] minus 119890 [119896 minus 1]
Δ119905
(16)
where 119906[119896] is the controlled output respectively 119890[119896] = 119903[119896]minus119910[119896] is the error signal 119903[119896] and 119910[119896] are the reference signaland the system output and 119870
119875 119870119868 and 119870
119863represent the
proportional integral and derivate gains respectivelyFor an unknown plant the goal of this problem is to
minimize the integral absolute error (IAE) which is given asfollow [32 33]
and 119910(119905) is the output of the controlled objectiveIn our simulation the formulas for the plant examined
are given as follows [34]
119866 (119904) =1958
1199043+ 1789119904
2+ 1033119904 + 1908
(19)
The system sampling time is Δ119905 = 005 second and thecontrol value 119906 is limited in the range of [minus10 10] Other rel-evant system variables are 119870
119875isin [0 1] 119870
119868isin [0 1] and 119870
119863isin
[0 1] The weight coefficients of the cost function are set as1205961= 0999 120596
2= 0001 120596
3= 2 and 120596 = 100 in this example
In the simulations the step response of PID controlsystem tuned by the proposed BBTLBO is compared withthat tuned by the standard genetic algorithm (GA) and thestandard PSO (PSO) The population sizes of GA PSO andBBTLBO are 50 and the corresponding maximum numbersof iterations are 50 50 and 50 respectively In addition thecrossover rate is set as 090 and the mutation rate is 010 forGA
The optimal parameters and the corresponding perfor-mance values of the PID controllers are listed in Table 7 andthe corresponding performance curves and step responsescurves are given in Figures 7 and 8 It can be seen fromFigure 7 and Table 7 that the PID controller tuned byBBTLBO has the minimum cost function and CPU time
0 05 1 15 2 25 3 35 4 45 50
02
04
06
08
1
Time (s)
Step response curves using different methods
GAPSOBBTLBO
r in
you
t
Figure 8 Step response curves using different methods
Although PID controllers tuned by PSO have a smaller peaktime and rise time their maximum overshoots are muchlarger than the overshoot tuned by BBTLBO It concludes thatthe PID controller tuned by the BBTLBO could perform thebest control performance in the simulations
7 Conclusion
In this paper TLBO has been extended to BBTLBO whichuses the hybridization of the learning strategy in the stan-dard TLBO and Gaussian sampling learning to balance theexploration and the exploitation in teacher phase and uses amodified mutation operation so as to eliminate the duplicatelearners in learner phase The proposed BBTLBO algorithmis utilized to optimize 20 benchmark functions and tworeal-world optimization problems From the analysis andexperiments the BBTLBO algorithm significantly improvesthe performance of the original TLBO although it needs tospend more CPU time than the standard TLBO algorithmin each generation From the results compared with otheralgorithms on the 20 chosen test problems it can be observedthat the BBTLBO algorithm has good performance by usingneighborhood search more effectively to generate betterquality solutions although the BBTLBO algorithm does notalways have the best performance in all experiments cases ofthis paper It can be also observed that the BBTLBOalgorithm
16 The Scientific World Journal
gives the best performance on two real-world optimizationproblems compared with other algorithms in the paper
Further work includes research into neighborhood searchbased on different topological structures Moreover thealgorithm may be further applied to constrained dynamicand noisy single-objective and multiobjective optimizationdomain It is expected that BBTLBOwill be used tomore real-world optimization problems
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was partially supported by the National NaturalScience Foundation of China (61100173 61100009 61272283and 61304082) This work is partially supported by theNatural Science Foundation of Anhui Province China (Grantno 1308085MF82) and the Doctoral Innovation Foundationof Xirsquoan University of Technology (207-002J1305)
References
[1] D E Goldberg Genetic Algorithms in Search Optimization andMachine Learning Addison-Wesley Reading Mass USA 1989
[2] L C Jiao and L Wang ldquoA novel genetic algorithm based onimmunityrdquo IEEE Transactions on SystemsMan and CyberneticsA Systems and Humans vol 30 no 5 pp 552ndash561 2000
[3] R Storn and K Price ldquoDifferential evolution a simple andefficient Heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997
[4] M Dorigo and T Stutzle Ant Colony Optimization MIT Press2004
[5] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 December 1995
[6] D Karaboga and B Basturk ldquoOn the performance of artificialbee colony (ABC) algorithmrdquo Applied Soft Computing Journalvol 8 no 1 pp 687ndash697 2008
[7] D Simon ldquoBiogeography-based optimizationrdquo IEEE Transac-tions on Evolutionary Computation vol 12 no 6 pp 702ndash7132008
[8] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization a novelmethod for constrainedmechanicaldesign optimization problemsrdquo CAD Computer Aided Designvol 43 no 3 pp 303ndash315 2011
[9] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization an optimization method for continuousnon-linear large scale problemsrdquo Information Sciences vol 183no 1 pp 1ndash15 2012
[10] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization algorithm for unconstrained and con-strained real-parameter optimization problemsrdquo EngineeringOptimization vol 44 no 12 pp 1447ndash1462 2011
[11] V Togan ldquoDesign of planar steel frames using teaching-learningbased optimizationrdquo Engineering Structures vol 34 pp 225ndash232 2012
[12] R V Rao and V Patel ldquoAn elitist teaching-learning-based opti-mization algorithm for solving complex constrained optimiza-tion problemsrdquo International Journal of Industrial EngineeringComputations vol 3 pp 535ndash560 2012
[13] S O Degertekin and M S Hayalioglu ldquoSizing truss structuresusing teaching-learning-based optimizationrdquo Computers andStructures vol 119 pp 177ndash188 2013
[14] R V Rao and V Patel ldquoAn improved teaching-learning-basedoptimization algorithm for solving unconstrained optimizationproblemsrdquo Scientia Iranica vol 20 no 3 pp 710ndash720 2013
[15] R V Rao and V Patel ldquoMulti-objective optimization of com-bined Brayton and inverse Brayton cycles using advancedoptimization algorithmsrdquo Engineering Optimization vol 44 no8 pp 965ndash983 2011
[16] T Niknam F Golestaneh and M S Sadeghi ldquoTheta-multi-objective teaching-learning-based optimization for dynamiceconomic emission dispatchrdquo IEEE Systems Journal vol 6 no2 pp 341ndash352 2012
[17] R V Rao and V Patel ldquoMulti-objective optimization of heatexchangers using a modified teaching-learning-based opti-mization algorithmrdquo Applied Mathematical Modelling vol 37no 3 pp 1147ndash1162 2013
[18] M Clerc and J Kennedy ldquoThe particle swarm-explosion sta-bility and convergence in a multidimensional complex spacerdquoIEEE Transactions on Evolutionary Computation vol 6 no 1pp 58ndash73 2002
[19] F van den Bergh and A P Engelbrecht ldquoA study of particleswarm optimization particle trajectoriesrdquo Information Sciencesvol 176 no 8 pp 937ndash971 2006
[20] J Kennedy ldquoBare bones particle swarmsrdquo in Proceedings of theSwarm Intelligence Symposium (SIS rsquo03) pp 80ndash87 2003
[21] M G H Omran A P Engelbrecht and A Salman ldquoBarebones differential evolutionrdquo European Journal of OperationalResearch vol 196 no 1 pp 128ndash139 2009
[22] H Wang S Rahnamayan H Sun and M G H OmranldquoGaussian bare-bones differential evolutionrdquo IEEE Transactionson Cybernetics vol 43 no 2 pp 634ndash647 2013
[23] X H Hu and R Eberhart ldquoMultiobjective optimization usingdynamic neighborhood particle swarm optimizationrdquo in Pro-ceedings of the Congress on Evolutionary Computation pp 677ndash1681 2002
[24] M G Omran A P Engelbrecht and A Salman ldquoUsingthe ring neighborhood topology with self-adaptive differentialevolutionrdquo in Advances in Natural Computation pp 976ndash979Springer Berlin Germany 2006
[25] X Li ldquoNiching without niching parameters particle swarmoptimization using a ring topologyrdquo IEEE Transactions onEvolutionary Computation vol 14 no 1 pp 150ndash169 2010
[26] I Maruta T H Kim D Song and T Sugie ldquoSynthesis of fixed-structure robust controllers using a constrained particle swarmoptimizer with cyclic neighborhood topologyrdquo Expert Systemswith Applications vol 40 no 9 pp 3595ndash3605 2013
[27] J Kennedy and R Mendes ldquoPopulation structure and particleswarm performancerdquo in Proceedings of the International Con-ference on Evolutionary Computation pp 1671ndash1676 HonoluluHawaii USA 2002
[28] J Brest S Greiner B Boskovic M Mernik and V ZumerldquoSelf-adapting control parameters in differential evolution acomparative study on numerical benchmark problemsrdquo IEEETransactions on Evolutionary Computation vol 10 no 6 pp646ndash657 2006
The Scientific World Journal 17
[29] A K Qin V L Huang and P N Suganthan ldquoDifferential evo-lution algorithm with strategy adaptation for global numericaloptimizationrdquo IEEE Transactions on Evolutionary Computationvol 13 no 2 pp 398ndash417 2009
[30] R Mendes J Kennedy and J Neves ldquoThe fully informedparticle swarm simpler maybe betterrdquo IEEE Transactions onEvolutionary Computation vol 8 no 3 pp 204ndash210 2004
[31] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash62 2000
[32] J Liu Advanced PID Control and MATLAB Simulation Elec-tronic Industry Press 2003
[33] J Zhang J Zhuang H Du and S Wang ldquoSelf-organizinggenetic algorithm based tuning of PID controllersrdquo InformationSciences vol 179 no 7 pp 1007ndash1017 2009
[34] R Haber-Haber R Haber M Schmittdiel and R M delToro ldquoA classic solution for the control of a high-performancedrilling processrdquo International Journal of Machine Tools andManufacture vol 47 no 15 pp 2290ndash2297 2007
(1) Begin(2) Initialize119873 (number of learners) and119863 (number of dimensions)(3) Initialize learners119883 and evaluate all learners119883(4) Donate the best learner as Teacher and the mean of all learners119883 asMean(5) while (stopping condition not met)(6) for each learner119883
(25) endfor(26) Update the Teacher and theMean(27) endwhile(28) end
Algorithm 1 TLBO( )
bare-bones algorithms Section 4 describes the improvedteaching-learning-based optimization algorithm usingneighborhood search (BBTLBO) Section 5 presents the testson several benchmark functions and the experiments areconducted along with statistical tests The applications fortraining artificial neural network are shown in Section 6Conclusions are given in Section 7
2 Teaching-Learning-Based Optimization
Rao et al [8 9] first proposed a novel teaching-learning-based optimization (TLBO) inspired from the philosophy ofteaching and learning The TLBO algorithm is based on theeffect of the influence of a teacher on the output of learners ina class which is considered in terms of results or grades Theprocess ofworking of TLBO is divided into twopartsThefirstpart consists of ldquoteacher phaserdquo and the second part consistsof ldquolearner phaserdquo The ldquoteacher phaserdquo means learning fromthe teacher and the ldquolearner phaserdquo means learning throughthe interaction between learners
A good teacher is one who brings his or her learners up tohis or her level in terms of knowledge But in practice this isnot possible and a teacher can only move the mean of a classup to some extent depending on the capability of the classThis follows a random process depending on many factorsLet119872 be the mean and let 119879 be the teacher at any iteration 119879will try tomovemean119872 toward its own level so now the new
mean will be 119879 designated as119872new The solution is updatedaccording to the difference between the existing and the newmean according to the following expression
119899119890119908119883 = 119883 + 119903 times (119872new minus TF times119872) (1)
where TF is a teaching factor that decides the value of meanto be changed and 119903 is a random vector in which each elementis a random number in the range [0 1] The value of TF canbe either 1 or 2 which is again a heuristic step and decidedrandomly with equal probability as
TF = round [1 + rand (0 1)] (2)
Learners increase their knowledge by two differentmeans one through input from the teacher and the otherthrough interaction between themselves A learner interactsrandomly with other learners with the help of group discus-sions presentations formal communications and so forthA learner learns something new if the other learner hasmore knowledge than him or her Learner modification isexpressed as
119899119890119908119883119894=
119883119894+ 119903 lowast (119883
119894minus 119883119895) if119891 (119883
119894) lt 119891 (119883
119895)
119883119894+ 119903 lowast (119883
119895minus 119883119894) otherwise
(3)
As explained above the pseudocode for the implementa-tion of TLBO is summarized in Algorithm 1
The Scientific World Journal 3
3 Bare-Bones Algorithm
In this section we only presented a brief overview of somerecently proposed bare-bones algorithms
31 BBPSO and BBExp PSO is a swarm intelligence-basedalgorithm which is inspired by the behavior of birds flocking[5] In PSO each particle is attracted by its personal bestposition (119901best) and the global best position (119892best) foundso far Theoretical studies [18 19] proved that each particleconverges to the weighted average of 119901best and 119892best
lim119905rarrinfin
119883119894 (119905) =
1198881sdot 119892best + 1198882 sdot 119901best
1198881+ 1198882
(4)
where 1198881and 1198882are two leaning factors in PSO
Based on the convergence characteristic of PSO Kennedy[20] proposed a new PSO variant called bare-bones PSO(BBPSO) Bare-bones PSO retains the standard PSO socialcommunication but replaces dynamical particle update withsampling from a probability distribution based on 119892best and119901best119894 as follows
119909119894119895 (119905 + 1) = 119873(
119892best + 119901best119894119895 (119905)
2
100381610038161003816100381610038161003816119892best minus 119901best119894119895 (119905)
100381610038161003816100381610038161003816) (5)
where 119909119894119895(119905 + 1) is the 119895th dimension of the 119894th particle in
the population and119873 represents a Gaussian distributionwithmean (119892best + 119901best119894119895(119905))2 and standard deviation |119892best minus119901best119894119895(119905)|
Kennedy [20] proposed also an alternative version of theBBPSO denoted by BBExp where (5) is replaced by119909119894119895 (119905 + 1)
=
119873(
119892best + 119901best119894119895 (119905)
2
100381610038161003816100381610038161003816119892best minus 119901best119894119895 (119905)
(6)where rand (01) is a random value within [0 1] for the 119895thdimension For the alternative mechanism there is a 50chance that the search process is focusing on the previous bestpositions
32 BBDE GBDE and MGBDE Inspired by the BBPSOand DE Omran et al [21] proposed a new and efficientDE variant called bare-bones differential evolution (BBDE)The BBDE is a new almost parameter-free optimizationalgorithm that is a hybrid of the bare-bones particle swarmoptimizer and differential evolution Differential evolution isused tomutate for each particle the attractor associated withthat particle defined as a weighted average of its personal andneighborhood best positions For the BBDE the individual isupdated as follows119909119894119895 (119905 + 1)
where 119901best and 119892best are personal best position and the globalbest position 119903
1119895 is a random value within [0 1] for the 119895th
dimensionBased on the idea that the Gaussian sampling is a fine
tuning procedure which starts during exploration and iscontinued to exploitation Wang et al [22] proposed a newparameter-freeDE algorithm calledGBDE In theGBDE themutation strategy uses a Gaussian sampling method which isdefined by
10038161003816100381610038161003816119883best119895 (119905) minus 119909119894119895 (119905)
10038161003816100381610038161003816)
119909119894119895 (119905) otherwise(9)
where 119873 represents a Gaussian distribution with mean(119883best119895(119905)+119909119894119895(119905))2 and standard deviation |119883best119895(119905)minus119909119894119895(119905)|and CR is the probability of crossover
To balance the global search ability and convergence rateWang et al [22] proposed amodifiedGBDE (calledMGBDE)The mutation strategy uses a hybridization of GBDE andDEbest1 as follows
The bare-bones PSO utilizes this information by samplingcandidate solutions normally distributed around the for-mally derived attractor point That is the new position isgenerated by a Gaussian distribution for sampling the searchspace based on the 119892best and the 119901best at the current iterationAs a result the new position will be centered around theweighted average of 119901best and 119892best Generally speaking atthe initial evolutionary stages the search process focuses onexploration due to the large deviation With an increasingnumber of generations the deviation becomes smaller andthe search process will focus on exploitation From the searchbehavior of BBPSO the Gaussian sampling is a fine tuningprocedure which starts during exploration and is continuedto exploitation This can be beneficial for the search of manyevolutionary optimization algorithms Additionally the bare-bones PSO has no parameters to be tuned
Based on a previous explanation a new bare-bones TLBO(BBTLBO) with neighborhood search is proposed in this
4 The Scientific World Journal
Begin
Initialize learners size (NP) dimension (D) and hybridization factor (u)
Calculate the NTeacher and NMean of each learner
Modify each learner Xi in the class= + r lowast (NTeacher minus TF lowast NMean)= N((NTeacher + Xi)2 (NTeacher minus Xi))
newXi = u lowast newX1 + (1 minus u) lowast newX2Teacher phase
newXi better Xi
Xi = newXi Xi = Xi
Yes
Yes
Yes
Yes
No
No
No
No
Denote the NTeacheri and randomly select a Xk for each Xi
Learner phase
rand(0 1) lt 05
The original TLBO learning Neighborhood search strategy
newXi better Xi
Xi = newXi Xi = Xi
Termination criteria satisfied
End
gen = gen + 1
XiV1V2
Figure 1 Flow chart showing the working of BBTLBO algorithm
paper In fact for TLBO if the new learner has a betterfunction value than that of the old learner it is replacedwith the old one in the memory Otherwise the old one isretained in the memory In other words a greedy selectionmechanism is employed as the selection operation betweenthe old and the candidate one Hence the new teacher and thenew learner are the global best (119892best) and learnerrsquos personalbest (119901best) found so far respectively The complete flowchartof the BBTLBO algorithm is shown in Figure 1
41 Neighborhood Search It is known that birds of a featherflock together and people of a mind fall into the samegroup Just like evolutionary algorithms themselves thenotion of neighborhood is inspired by nature Neighborhoodtechnique is an efficient method to maintain diversity of
the solutions It plays an important role in evolutionaryalgorithms and is often introduced by researchers in orderto allow maintenance of a population of diverse individualsand improve the exploration capability of population-basedheuristic algorithms [23ndash26] In fact learners with similarinterests form different learning groups Because of his orher favor characteristic the learner maybe learns from theexcellent individual in the learning group
For the implementation of grouping various types ofconnected distances may be used Here we have used aring topology [27] based on the indexes of learners for thesake of simplicity In a ring topology the first individualis the neighbor of the last individual and vice versa Basedon the ring topology a 119896-neighborhood radius is definedwhere 119896 is a predefined integer number For each individual
The Scientific World Journal 5
NeighborhoodiXi
Ximinus1
Xi+1
Figure 2 Ring neighborhood topology with three members
its 119896-neighborhood radius consists of 2119896 + 1 individuals(including oneself) which are 119883
119894minus119896 119883
119894 119883
119894+119896 That is
the neighborhood size is 2119896 + 1 for a 119896-neighborhood Forsimplicity 119896 is set to 1 (Figure 2) in our algorithmThismeansthat there are 3 individuals in each learning group Oncegroups are constructed we can utilize them for updating thelearners of the corresponding group
42 Teacher Phase To balance the global and local searchability a modified interactive learning strategy is proposed inteacher phase In this learning phase each learner employs aninteractive learning strategy (the hybridization of the learningstrategy of teacher phase in the standard TLBO and Gaussiansampling learning) based on neighborhood search
In BBTLBO the updating formula of the learning for alearner 119883
119894in teacher phase is proposed by the hybridization
of the learning strategy of teacher phase and the Gaussiansampling learning as follows
where 119906 called the hybridization factor is a random numberin the range [0 1] for the 119895th dimension 119873119879119890119886119888ℎ119890119903 and119873119872119890119886119899 are the existing neighborhood best solution and theneighborhood mean solution of each learner and TF is ateaching factor which can be either 1 or 2 randomly
In the BBTLBO there is a (119906 lowast 100) chance that the119895th dimension of the 119894th learner in the population follows thebehavior of the learning strategy of teacher phase while theremaining (100 minus 119906lowast 100) follow the search behavior of theGaussian sampling in teacher phase This will be helpful tobalance the advantages of fast convergence rate (the attraction
of the learning strategy of teacher phase) and exploration (theGaussian sampling) in BBTLBO
43 Learner Phase At the same time in the learner phase alearner interacts randomly with other learners for enhancinghis or her knowledge in the class This learning method canbe treated as the global search strategy (shown in (3))
In this paper we introduce a new learning strategy inwhich each learner learns from the neighborhood teacher andthe other learner selected randomly of his or her correspond-ing neighborhood in learner phaseThis learningmethod canbe treated as the neighborhood search strategy Let 119899119890119908119883
119894
represent the interactive learning result of the learner119883119894This
neighborhood search strategy can be expressed as follows
where 1199031and 1199032are random vectors in which each element
is a random number in the range [0 1] 119873119879119890119886119888ℎ119890119903 is theteacher of the learner 119883
119894rsquos corresponding neighborhood
and the learner 119883119896is selected randomly from the learnerrsquos
corresponding neighborhoodIn BBTLBO each learner is probabilistically learning by
means of the global search strategy or the neighborhoodsearch strategy in learner phaseThat is about 50of learnersin the population execute the learning strategy of learnerphase in the standard TLBO (shown in (3)) while theremaining 50execute neighborhood search strategy (shownin (12)) This will be helpful to balance the global search andlocal search in learner phase
Moreover compared to the original TLBO BBTLBOonlymodifies the learning strategies Therefore both the originalTLBO and BBTLBO have the same time complexity 119874 (NP sdot119863 sdot Genmax) where NP is the number of the population 119863is the number of dimensions and Genmax is the maximumnumber of generations
As explained above the pseudocode for the implementa-tion of BBTLBO is summarized in Algorithm 2
5 Functions Optimization
In this section to illustrate the effectiveness of the proposedmethod 20 benchmark functions are used to test the effi-ciency of BBTLBO To compare the search performance ofBBTLBO with some other methods other different algo-rithms are also simulated in the paper
51 Benchmark Functions Thedetails of 20 benchmark func-tions are shown in Table 1 Among 20 benchmark functions1198651to 1198659are unimodal functions and 119865
10to 11986520
are multi-modal functions The searching range and theory optima forall functions are also shown in Table 1
52 Parameter Settings All the experiments are carried outon the same machine with a Celoron 226GHz CPU 2GBmemory andWindows XP operating system withMatlab 79
6 The Scientific World Journal
(1) Begin(2) Initialize119873 (number of learners)119863 (number of dimensions) and hybridization factor 119906(3) Initialize learners 119883 and evaluate all learners119883(4) while (stopping condition not met)(5) for each learner119883
119894of the class Teaching phase
(6) TF = round(1 + rand(0 1))(7) Donate the119873 119879119890119886119888ℎ119890119903 and the119873 119872119890119886119899 in its neighborhood for each learner(8) Updating each learner according (11)(9) Accept 119899119890119908119883
119894if 119891(119899119890119908119883
119894) is better than 119891(119883
119894)
(10) endfor(11) for each learner119883
119894of the class Learning phase
(12) Randomly select one learner119883119896 such that 119894 = 119896
(13) if rand(0 1) lt 05(14) Updating each learner according (3)(15) else(16) Donate the119873119879119890119886119888ℎ119890119903 in its neighborhood for each learner(17) Updating each learner according (12)(18) endif(19) Accept 119899119890119908119883
119894if 119891(119899119890119908119883119894) is better than 119891(119883
119894)
(20) endfor(21) endwhile(22) end
Algorithm 2 BBTLBO( )
For the purpose of reducing statistical errors each algorithmis independently simulated 50 times For all algorithms thepopulation size was set to 20 Population-based stochasticalgorithms use the same stopping criterion that is reachinga certain number of function evaluations (FEs)
53 Effect of Variation in Parameter 119906 The hybridizationfactor u is set to 00 01 03 05 07 09 10 Comparativetests have been performed using different 119906 In our exper-iment the maximal FEs are used as ended condition ofalgorithm namely 40000 for all test functions Table 2 showsthe mean optimum solutions and the standard deviation ofthe solutions obtained using different hybridization factor119906 in the 50 independent runs The best results amongthe algorithms are shown in bold Figure 3 presents therepresentative convergence graphs of different benchmarkfunctions in terms of the mean fitness values achieved byusing different hybridization factor 119906 on all test functionsDue to the tight space limitation some sample graphs areillustrated
The comparisons in Table 2 and Figure 3 show that whenthe hybridization factor 119906 is set to 09 BBTLBOoffers the bestperformance on 20 test functions Hence the hybridizationfactor 119906 is set to 09 in the following experiments
54 Comparison of BBTLBO with Some Similar Bare-BonesAlgorithms In this section we compare BBTLBO with fiveother recently proposed three bare-bones DE variants andtwo bare-bones PSO algorithms Our experiment includestwo series of comparisons in terms of the solution accuracyand the solution convergence (convergence speed and successrate) We compared the performance of BBTLBO with other
similar bare-bones algorithms including BBPSO [20] BBExp[20] BBDE [21] GBDE [22] and MGBDE [22]
541 Comparisons on the Solution Accuracy In our exper-iment the maximal FEs are used as ended condition ofalgorithm namely 40000 for all test functions The resultsare shown in Table 3 in terms of the mean optimum solutionand the standard deviation of the solutions obtained in the 50independent runs by each algorithm on 20 test functionsThebest results among the algorithms are shown in bold Figure 4presents the convergence graphs of different benchmarkfunctions in terms of the mean fitness values achieved by 7algorithms for 50 independent runs Due to the tight spacelimitation some sample graphs are illustrated
From Table 3 it can be observed that the mean optimumsolution and the standard deviation of all algorithms performwell for the functions 119865
15and 11986517 Although BBExp performs
better than BBTLBO on function 1198659and MGBDE performs
better than BBTLBO on function 11986520 our approach BBTLBO
achieves better results than other algorithms on the rest of testfunctions Table 3 and Figure 4 conclude that the BBTLBOhas a good performance of the solution accuracy for testfunctions in this paper
542 Comparison of the Convergence Speed and SR In orderto compare the convergence speed and successful rate (SR)of different algorithms we select a threshold value of theobjective function for each test function For other functionsthe threshold values are listed in Table 4 In our experimentthe stopping criterion is that each algorithm is terminatedwhen the best fitness value so far is below the predefinedthreshold value (119879 Value) or the number of FEs reaches to
The Scientific World Journal 7
Table 1 Details of numerical benchmarks used
Function Formula 119863 Range Optima
Sphere 1198651(119909) =
119863
sum
119894=1
1199092
11989430 [minus100 100] 0
Sum square 1198652(119909) =
119863
sum
119894=1
1198941199092
11989430 [minus100 100] 0
Quadric 1198653(119909) =
119863
sum
119894=1
1198941199094
119894+ random(0 1) 30 [minus128 128] 0
Step 1198654(119909) =
119863
sum
119894=1
(lfloor119909119894+ 05rfloor)
2 30 [minus100 100] 0
Schwefel 12 1198655(119909) =
119863
sum
119894=1
(
119894
sum
119895=1
119909119895)
2
30 [minus100 100] 0
Schwefel 221 1198656(119909) = max 1003816100381610038161003816119909119894
1003816100381610038161003816 1 le 119894 le 119863 30 [minus100 100] 0
Schwefel 222 1198657(119909) =
119863
sum
119894=1
1003816100381610038161003816119909119894
1003816100381610038161003816+
119863
prod
119894=1
1003816100381610038161003816119909119894
100381610038161003816100381630 [minus10 10] 0
Zakharov 1198658(119909) =
119863
sum
119894=1
1199092
119894+ (
119863
sum
119894=1
05119894119909119894)
2
+ (
119863
sum
119894=1
05119894119909119894)
4
30 [minus100 100] 0
Rosenbrock 1198659(119909) =
119863minus1
sum
119894=1
lfloor100(1199092
119894minus 119909119894+1)2
+ (119909119894minus 1)2
rfloor 30 [minus2048 2048] 0
Ackley 11986510(119909) = 20 minus 20 exp((minus1
5)radic(
1
119863)
119863
sum
119894=1
1199092
119894) minus exp(( 1
119863)
119863
sum
119894=1
cos (2120587119909119894)) + 119890 30 [minus32 32] 0
Rastrigin 11986511(119909) =
119863
sum
119894=1
(1199092
119894minus 10 cos (2120587119909
119894) + 10) 30 [minus512 512] 0
Weierstrass11986512(119909) =
119863
sum
119894=1
(
119896maxsum
119896=0
[119886119896 cos (2120587119887119896 (119909
119894+ 05))]) minus 119863
119896maxsum
119896=0
[119886119896 cos (2120587119887119896 times 05)]
119886 = 05 119887 = 3 119896max = 20
30 [minus05 05] 0
Griewank 11986513(119909) =
119863
sum
119894=1
(1199092
119894
4000) minus
119899
prod
119894=1
cos(119909119894
radic119894
) + 1 30 [minus600 600] 0
Schwefel 11986514(119909) = 4189829119863 +
119863
sum
119894=1
(minus119909119894sinradicabs(119909
119894)) 30 [minus500 500] 0
Bohachevsky1 11986515(119909) = 119909
2
1+ 21199092
2minus 03 cos (3120587119909
1) minus 04 cos (4120587119909
2) + 07 2 [minus100 100] 0
Bohachevsky2 11986516(119909) = 119909
2
1+ 21199092
2minus 03cos (3120587119909
1) lowast cos (4120587119909
2) + 03 2 [minus100 100] 0
Bohachevsky3 11986517(119909) = 119909
2
1+ 21199092
2minus 03cos((3120587119909
1) + (4120587119909
2)) + 03 2 [minus100 100] 0
Shekel5 11986518(119909) = minus
5
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus101532
Shekel7 11986519(119909) = minus
7
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus104029
Shekel10 11986520(119909) = minus
10
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus105364
the maximal FEs 40000 The results are shown in Table 4in terms of the mean number of FEs (MFEs) required toconverge to the threshold and successful rate (SR) in the50 independent runs ldquoNaNrdquo represents that no runs of thecorresponding algorithm converged below the predefinedthreshold before meeting the maximum number of FEs Thebest results among the six algorithms are shown in boldface
FromTable 5 it can be observed that all algorithms hardlyconverge to the threshold for unimodal functions 119865
3 1198655 1198656
and 1198658and multimodal functions 119865
11 11986512 and 119865
14 BBTLBO
converges to the threshold except for functions 1198653 1198659 and
11986514 From the results of total average FEs BBTLBO converges
faster than other algorithms on all unimodal functions andthe majority of multimodal functions except for functions119865151198651611986519 and119865
20The acceleration rates between BBTLBO
and other algorithms are mostly 10 for functions 1198651 1198652 1198654
1198657 1198659 11986510 and 119865
13 From the results of total average SR
BBTLBO achieves the highest SR for those test functions ofwhich BBTLBO successfully converges to the threshold valueIt can be concluded that the BBTLBOhas a good performanceof convergence speed and successful rate (SR) of the solutionsfor test functions in this paper
Figure 3 Comparison of the performance curves using different 119906
55 Comparison of BBTLBO with DE Variants PSO Variantsand Some TLBO Variants In this section we comparedthe performance of BBTLBO with other optimization algo-rithms including jDE [28] SaDE [29] PSOcfLocal [27]PSOwFIPS [30] and TLBO [8 9] In our experiment themaximal FEs are used as the stopping criterion of all algo-rithms namely 40000 for all test functions The results areshown in Table 5 in terms of the mean optimum solutionand the standard deviation of the solutions obtained in the50 independent runs by each algorithm on 20 test functions
where ldquo119908119905119897rdquo summarizes the competition results amongBBTLBO and other algorithms The best results among thealgorithms are shown in boldface
The comparisons in Table 5 show that that all algorithmsperform well for 119865
15 11986516 and 119865
17 Although SaDE outper-
forms BBTLBOon11986514 PSOcfLocal outperforms BBTLBOon
1198659and PSOwFIPS outperforms BBTLBO on 119865
19and 11986520 and
BBTLBO offers the highest accuracy on functions 1198653 1198654 1198655
1198657 1198658 11986510 11986511 and 119865
18 ldquo119908119905119897rdquo shows that BBTLBO offers
well accuracy for the majority of test functions in this paper
The Scientific World Journal 11
0 05 1 15 2 25 3 35 4
0
1
2
3
FEs
minus4
minus3
minus2
minus1
times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(a) 1198653 Quadric
0 05 1 15 2 25 3 35 41
15
2
25
3
35
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)(b) 1198659 Rosenbrock
0 05 1 15 2 25 3 35 433
34
35
36
37
38
39
4
41
42
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(c) 11986518 Shekel5
0
0
05 1 15 2 25 3 35 4FEs
Mea
n fit
ness
times104
minus10
minus9
minus8
minus7
minus6
minus5
minus4
minus3
minus2
minus1
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
(d) 11986514 Schwefel
Figure 4 Comparison of the performance curves using different algorithms
Table 5 concludes that BBTLBO has a good performance ofthe solution accuracy for all unimodal optimization problemsand most complex multimodal optimization problems
6 Two Real-World Optimization Problems
In this section to show the effectiveness of the proposedmethod the proposed BBTLBO algorithm is applied toestimate parameters of two real-world problems
61 Nonlinear Function Approximation The artificial neuralnetwork trained by our BBTLBO algorithm is a three-layer
Input x Output y
Desired output dBBTLBO algorithm
ANN
minus
Figure 5 BBTLBO-based ANN
feed-forward network and the basic structure of the proposedscheme is depicted in Figure 5 The inputs are connectedto all the hidden units which in turn all connected to all
12 The Scientific World Journal
Table 4 The mean number of FEs and SR with acceptable solutions using different algorithms
Fun 119905 value BBPSO BBExp BBDE GBDE MGBDE BBTLBOMFEs SR MFEs SR MFEs SR MFEs SR MFEs SR MFEs SR
the outputs The variables consist of neural network weightsand biases Suppose a three-layer forward neural networkarchitecture with 119872 input units 119873 hidden units and 119870
output units and the number of the variables is shown asfollows
For neural network training the aim is to find a set ofweights with the smallest error measure Here the objectivefunction is the mean sum of squared errors (MSE) over alltraining patterns which is shown as follows
MSE = 1
119876 lowast 119870
119876
sum
119894=1
119870
sum
119895
1
2(119889119894119895minus 119910119894119895)2
(14)
where 119876 is the number of training data set 119870 is the numberof output units 119889
119894119895is desired output and 119910
119894119895is output inferred
from neural networkIn this example a three-layer feed-forward ANN with
one input unit five hidden units and one output unit isconstructed tomodel the curve of a nonlinear functionwhichis described by the following equation [31]
119910 = sin (2119909) exp (minus2119909) (15)
In this case activation function used in the output layer isthe sigma function and activation function used in the outputlayer is linear The number (dimension) of the variables is16 for BBTLBO-based ANN In order to train the ANN
200 pairs of data are chosen from the real model For eachalgorithm 50 runs are performed The other parametersare the same as those of the previous investigations Theresults are shown in Table 6 in terms of the mean MSEand the standard deviation obtained in the 50 independentruns for three methods Figure 6 shows the predicted timeseries for training and test using different algorithms It canconclude that the approximation achieved by BBTLBO hasgood performance
62 Tuning of PID Controller The continuous form of adiscrete-type PID controller with a small sampling period Δ119905is described as follows [32]
119890 [119894] sdot Δ119905 + 119870119863 sdot119890 [119896] minus 119890 [119896 minus 1]
Δ119905
(16)
where 119906[119896] is the controlled output respectively 119890[119896] = 119903[119896]minus119910[119896] is the error signal 119903[119896] and 119910[119896] are the reference signaland the system output and 119870
119875 119870119868 and 119870
119863represent the
proportional integral and derivate gains respectivelyFor an unknown plant the goal of this problem is to
minimize the integral absolute error (IAE) which is given asfollow [32 33]
and 119910(119905) is the output of the controlled objectiveIn our simulation the formulas for the plant examined
are given as follows [34]
119866 (119904) =1958
1199043+ 1789119904
2+ 1033119904 + 1908
(19)
The system sampling time is Δ119905 = 005 second and thecontrol value 119906 is limited in the range of [minus10 10] Other rel-evant system variables are 119870
119875isin [0 1] 119870
119868isin [0 1] and 119870
119863isin
[0 1] The weight coefficients of the cost function are set as1205961= 0999 120596
2= 0001 120596
3= 2 and 120596 = 100 in this example
In the simulations the step response of PID controlsystem tuned by the proposed BBTLBO is compared withthat tuned by the standard genetic algorithm (GA) and thestandard PSO (PSO) The population sizes of GA PSO andBBTLBO are 50 and the corresponding maximum numbersof iterations are 50 50 and 50 respectively In addition thecrossover rate is set as 090 and the mutation rate is 010 forGA
The optimal parameters and the corresponding perfor-mance values of the PID controllers are listed in Table 7 andthe corresponding performance curves and step responsescurves are given in Figures 7 and 8 It can be seen fromFigure 7 and Table 7 that the PID controller tuned byBBTLBO has the minimum cost function and CPU time
0 05 1 15 2 25 3 35 4 45 50
02
04
06
08
1
Time (s)
Step response curves using different methods
GAPSOBBTLBO
r in
you
t
Figure 8 Step response curves using different methods
Although PID controllers tuned by PSO have a smaller peaktime and rise time their maximum overshoots are muchlarger than the overshoot tuned by BBTLBO It concludes thatthe PID controller tuned by the BBTLBO could perform thebest control performance in the simulations
7 Conclusion
In this paper TLBO has been extended to BBTLBO whichuses the hybridization of the learning strategy in the stan-dard TLBO and Gaussian sampling learning to balance theexploration and the exploitation in teacher phase and uses amodified mutation operation so as to eliminate the duplicatelearners in learner phase The proposed BBTLBO algorithmis utilized to optimize 20 benchmark functions and tworeal-world optimization problems From the analysis andexperiments the BBTLBO algorithm significantly improvesthe performance of the original TLBO although it needs tospend more CPU time than the standard TLBO algorithmin each generation From the results compared with otheralgorithms on the 20 chosen test problems it can be observedthat the BBTLBO algorithm has good performance by usingneighborhood search more effectively to generate betterquality solutions although the BBTLBO algorithm does notalways have the best performance in all experiments cases ofthis paper It can be also observed that the BBTLBOalgorithm
16 The Scientific World Journal
gives the best performance on two real-world optimizationproblems compared with other algorithms in the paper
Further work includes research into neighborhood searchbased on different topological structures Moreover thealgorithm may be further applied to constrained dynamicand noisy single-objective and multiobjective optimizationdomain It is expected that BBTLBOwill be used tomore real-world optimization problems
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was partially supported by the National NaturalScience Foundation of China (61100173 61100009 61272283and 61304082) This work is partially supported by theNatural Science Foundation of Anhui Province China (Grantno 1308085MF82) and the Doctoral Innovation Foundationof Xirsquoan University of Technology (207-002J1305)
References
[1] D E Goldberg Genetic Algorithms in Search Optimization andMachine Learning Addison-Wesley Reading Mass USA 1989
[2] L C Jiao and L Wang ldquoA novel genetic algorithm based onimmunityrdquo IEEE Transactions on SystemsMan and CyberneticsA Systems and Humans vol 30 no 5 pp 552ndash561 2000
[3] R Storn and K Price ldquoDifferential evolution a simple andefficient Heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997
[4] M Dorigo and T Stutzle Ant Colony Optimization MIT Press2004
[5] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 December 1995
[6] D Karaboga and B Basturk ldquoOn the performance of artificialbee colony (ABC) algorithmrdquo Applied Soft Computing Journalvol 8 no 1 pp 687ndash697 2008
[7] D Simon ldquoBiogeography-based optimizationrdquo IEEE Transac-tions on Evolutionary Computation vol 12 no 6 pp 702ndash7132008
[8] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization a novelmethod for constrainedmechanicaldesign optimization problemsrdquo CAD Computer Aided Designvol 43 no 3 pp 303ndash315 2011
[9] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization an optimization method for continuousnon-linear large scale problemsrdquo Information Sciences vol 183no 1 pp 1ndash15 2012
[10] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization algorithm for unconstrained and con-strained real-parameter optimization problemsrdquo EngineeringOptimization vol 44 no 12 pp 1447ndash1462 2011
[11] V Togan ldquoDesign of planar steel frames using teaching-learningbased optimizationrdquo Engineering Structures vol 34 pp 225ndash232 2012
[12] R V Rao and V Patel ldquoAn elitist teaching-learning-based opti-mization algorithm for solving complex constrained optimiza-tion problemsrdquo International Journal of Industrial EngineeringComputations vol 3 pp 535ndash560 2012
[13] S O Degertekin and M S Hayalioglu ldquoSizing truss structuresusing teaching-learning-based optimizationrdquo Computers andStructures vol 119 pp 177ndash188 2013
[14] R V Rao and V Patel ldquoAn improved teaching-learning-basedoptimization algorithm for solving unconstrained optimizationproblemsrdquo Scientia Iranica vol 20 no 3 pp 710ndash720 2013
[15] R V Rao and V Patel ldquoMulti-objective optimization of com-bined Brayton and inverse Brayton cycles using advancedoptimization algorithmsrdquo Engineering Optimization vol 44 no8 pp 965ndash983 2011
[16] T Niknam F Golestaneh and M S Sadeghi ldquoTheta-multi-objective teaching-learning-based optimization for dynamiceconomic emission dispatchrdquo IEEE Systems Journal vol 6 no2 pp 341ndash352 2012
[17] R V Rao and V Patel ldquoMulti-objective optimization of heatexchangers using a modified teaching-learning-based opti-mization algorithmrdquo Applied Mathematical Modelling vol 37no 3 pp 1147ndash1162 2013
[18] M Clerc and J Kennedy ldquoThe particle swarm-explosion sta-bility and convergence in a multidimensional complex spacerdquoIEEE Transactions on Evolutionary Computation vol 6 no 1pp 58ndash73 2002
[19] F van den Bergh and A P Engelbrecht ldquoA study of particleswarm optimization particle trajectoriesrdquo Information Sciencesvol 176 no 8 pp 937ndash971 2006
[20] J Kennedy ldquoBare bones particle swarmsrdquo in Proceedings of theSwarm Intelligence Symposium (SIS rsquo03) pp 80ndash87 2003
[21] M G H Omran A P Engelbrecht and A Salman ldquoBarebones differential evolutionrdquo European Journal of OperationalResearch vol 196 no 1 pp 128ndash139 2009
[22] H Wang S Rahnamayan H Sun and M G H OmranldquoGaussian bare-bones differential evolutionrdquo IEEE Transactionson Cybernetics vol 43 no 2 pp 634ndash647 2013
[23] X H Hu and R Eberhart ldquoMultiobjective optimization usingdynamic neighborhood particle swarm optimizationrdquo in Pro-ceedings of the Congress on Evolutionary Computation pp 677ndash1681 2002
[24] M G Omran A P Engelbrecht and A Salman ldquoUsingthe ring neighborhood topology with self-adaptive differentialevolutionrdquo in Advances in Natural Computation pp 976ndash979Springer Berlin Germany 2006
[25] X Li ldquoNiching without niching parameters particle swarmoptimization using a ring topologyrdquo IEEE Transactions onEvolutionary Computation vol 14 no 1 pp 150ndash169 2010
[26] I Maruta T H Kim D Song and T Sugie ldquoSynthesis of fixed-structure robust controllers using a constrained particle swarmoptimizer with cyclic neighborhood topologyrdquo Expert Systemswith Applications vol 40 no 9 pp 3595ndash3605 2013
[27] J Kennedy and R Mendes ldquoPopulation structure and particleswarm performancerdquo in Proceedings of the International Con-ference on Evolutionary Computation pp 1671ndash1676 HonoluluHawaii USA 2002
[28] J Brest S Greiner B Boskovic M Mernik and V ZumerldquoSelf-adapting control parameters in differential evolution acomparative study on numerical benchmark problemsrdquo IEEETransactions on Evolutionary Computation vol 10 no 6 pp646ndash657 2006
The Scientific World Journal 17
[29] A K Qin V L Huang and P N Suganthan ldquoDifferential evo-lution algorithm with strategy adaptation for global numericaloptimizationrdquo IEEE Transactions on Evolutionary Computationvol 13 no 2 pp 398ndash417 2009
[30] R Mendes J Kennedy and J Neves ldquoThe fully informedparticle swarm simpler maybe betterrdquo IEEE Transactions onEvolutionary Computation vol 8 no 3 pp 204ndash210 2004
[31] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash62 2000
[32] J Liu Advanced PID Control and MATLAB Simulation Elec-tronic Industry Press 2003
[33] J Zhang J Zhuang H Du and S Wang ldquoSelf-organizinggenetic algorithm based tuning of PID controllersrdquo InformationSciences vol 179 no 7 pp 1007ndash1017 2009
[34] R Haber-Haber R Haber M Schmittdiel and R M delToro ldquoA classic solution for the control of a high-performancedrilling processrdquo International Journal of Machine Tools andManufacture vol 47 no 15 pp 2290ndash2297 2007
In this section we only presented a brief overview of somerecently proposed bare-bones algorithms
31 BBPSO and BBExp PSO is a swarm intelligence-basedalgorithm which is inspired by the behavior of birds flocking[5] In PSO each particle is attracted by its personal bestposition (119901best) and the global best position (119892best) foundso far Theoretical studies [18 19] proved that each particleconverges to the weighted average of 119901best and 119892best
lim119905rarrinfin
119883119894 (119905) =
1198881sdot 119892best + 1198882 sdot 119901best
1198881+ 1198882
(4)
where 1198881and 1198882are two leaning factors in PSO
Based on the convergence characteristic of PSO Kennedy[20] proposed a new PSO variant called bare-bones PSO(BBPSO) Bare-bones PSO retains the standard PSO socialcommunication but replaces dynamical particle update withsampling from a probability distribution based on 119892best and119901best119894 as follows
119909119894119895 (119905 + 1) = 119873(
119892best + 119901best119894119895 (119905)
2
100381610038161003816100381610038161003816119892best minus 119901best119894119895 (119905)
100381610038161003816100381610038161003816) (5)
where 119909119894119895(119905 + 1) is the 119895th dimension of the 119894th particle in
the population and119873 represents a Gaussian distributionwithmean (119892best + 119901best119894119895(119905))2 and standard deviation |119892best minus119901best119894119895(119905)|
Kennedy [20] proposed also an alternative version of theBBPSO denoted by BBExp where (5) is replaced by119909119894119895 (119905 + 1)
=
119873(
119892best + 119901best119894119895 (119905)
2
100381610038161003816100381610038161003816119892best minus 119901best119894119895 (119905)
(6)where rand (01) is a random value within [0 1] for the 119895thdimension For the alternative mechanism there is a 50chance that the search process is focusing on the previous bestpositions
32 BBDE GBDE and MGBDE Inspired by the BBPSOand DE Omran et al [21] proposed a new and efficientDE variant called bare-bones differential evolution (BBDE)The BBDE is a new almost parameter-free optimizationalgorithm that is a hybrid of the bare-bones particle swarmoptimizer and differential evolution Differential evolution isused tomutate for each particle the attractor associated withthat particle defined as a weighted average of its personal andneighborhood best positions For the BBDE the individual isupdated as follows119909119894119895 (119905 + 1)
where 119901best and 119892best are personal best position and the globalbest position 119903
1119895 is a random value within [0 1] for the 119895th
dimensionBased on the idea that the Gaussian sampling is a fine
tuning procedure which starts during exploration and iscontinued to exploitation Wang et al [22] proposed a newparameter-freeDE algorithm calledGBDE In theGBDE themutation strategy uses a Gaussian sampling method which isdefined by
10038161003816100381610038161003816119883best119895 (119905) minus 119909119894119895 (119905)
10038161003816100381610038161003816)
119909119894119895 (119905) otherwise(9)
where 119873 represents a Gaussian distribution with mean(119883best119895(119905)+119909119894119895(119905))2 and standard deviation |119883best119895(119905)minus119909119894119895(119905)|and CR is the probability of crossover
To balance the global search ability and convergence rateWang et al [22] proposed amodifiedGBDE (calledMGBDE)The mutation strategy uses a hybridization of GBDE andDEbest1 as follows
The bare-bones PSO utilizes this information by samplingcandidate solutions normally distributed around the for-mally derived attractor point That is the new position isgenerated by a Gaussian distribution for sampling the searchspace based on the 119892best and the 119901best at the current iterationAs a result the new position will be centered around theweighted average of 119901best and 119892best Generally speaking atthe initial evolutionary stages the search process focuses onexploration due to the large deviation With an increasingnumber of generations the deviation becomes smaller andthe search process will focus on exploitation From the searchbehavior of BBPSO the Gaussian sampling is a fine tuningprocedure which starts during exploration and is continuedto exploitation This can be beneficial for the search of manyevolutionary optimization algorithms Additionally the bare-bones PSO has no parameters to be tuned
Based on a previous explanation a new bare-bones TLBO(BBTLBO) with neighborhood search is proposed in this
4 The Scientific World Journal
Begin
Initialize learners size (NP) dimension (D) and hybridization factor (u)
Calculate the NTeacher and NMean of each learner
Modify each learner Xi in the class= + r lowast (NTeacher minus TF lowast NMean)= N((NTeacher + Xi)2 (NTeacher minus Xi))
newXi = u lowast newX1 + (1 minus u) lowast newX2Teacher phase
newXi better Xi
Xi = newXi Xi = Xi
Yes
Yes
Yes
Yes
No
No
No
No
Denote the NTeacheri and randomly select a Xk for each Xi
Learner phase
rand(0 1) lt 05
The original TLBO learning Neighborhood search strategy
newXi better Xi
Xi = newXi Xi = Xi
Termination criteria satisfied
End
gen = gen + 1
XiV1V2
Figure 1 Flow chart showing the working of BBTLBO algorithm
paper In fact for TLBO if the new learner has a betterfunction value than that of the old learner it is replacedwith the old one in the memory Otherwise the old one isretained in the memory In other words a greedy selectionmechanism is employed as the selection operation betweenthe old and the candidate one Hence the new teacher and thenew learner are the global best (119892best) and learnerrsquos personalbest (119901best) found so far respectively The complete flowchartof the BBTLBO algorithm is shown in Figure 1
41 Neighborhood Search It is known that birds of a featherflock together and people of a mind fall into the samegroup Just like evolutionary algorithms themselves thenotion of neighborhood is inspired by nature Neighborhoodtechnique is an efficient method to maintain diversity of
the solutions It plays an important role in evolutionaryalgorithms and is often introduced by researchers in orderto allow maintenance of a population of diverse individualsand improve the exploration capability of population-basedheuristic algorithms [23ndash26] In fact learners with similarinterests form different learning groups Because of his orher favor characteristic the learner maybe learns from theexcellent individual in the learning group
For the implementation of grouping various types ofconnected distances may be used Here we have used aring topology [27] based on the indexes of learners for thesake of simplicity In a ring topology the first individualis the neighbor of the last individual and vice versa Basedon the ring topology a 119896-neighborhood radius is definedwhere 119896 is a predefined integer number For each individual
The Scientific World Journal 5
NeighborhoodiXi
Ximinus1
Xi+1
Figure 2 Ring neighborhood topology with three members
its 119896-neighborhood radius consists of 2119896 + 1 individuals(including oneself) which are 119883
119894minus119896 119883
119894 119883
119894+119896 That is
the neighborhood size is 2119896 + 1 for a 119896-neighborhood Forsimplicity 119896 is set to 1 (Figure 2) in our algorithmThismeansthat there are 3 individuals in each learning group Oncegroups are constructed we can utilize them for updating thelearners of the corresponding group
42 Teacher Phase To balance the global and local searchability a modified interactive learning strategy is proposed inteacher phase In this learning phase each learner employs aninteractive learning strategy (the hybridization of the learningstrategy of teacher phase in the standard TLBO and Gaussiansampling learning) based on neighborhood search
In BBTLBO the updating formula of the learning for alearner 119883
119894in teacher phase is proposed by the hybridization
of the learning strategy of teacher phase and the Gaussiansampling learning as follows
where 119906 called the hybridization factor is a random numberin the range [0 1] for the 119895th dimension 119873119879119890119886119888ℎ119890119903 and119873119872119890119886119899 are the existing neighborhood best solution and theneighborhood mean solution of each learner and TF is ateaching factor which can be either 1 or 2 randomly
In the BBTLBO there is a (119906 lowast 100) chance that the119895th dimension of the 119894th learner in the population follows thebehavior of the learning strategy of teacher phase while theremaining (100 minus 119906lowast 100) follow the search behavior of theGaussian sampling in teacher phase This will be helpful tobalance the advantages of fast convergence rate (the attraction
of the learning strategy of teacher phase) and exploration (theGaussian sampling) in BBTLBO
43 Learner Phase At the same time in the learner phase alearner interacts randomly with other learners for enhancinghis or her knowledge in the class This learning method canbe treated as the global search strategy (shown in (3))
In this paper we introduce a new learning strategy inwhich each learner learns from the neighborhood teacher andthe other learner selected randomly of his or her correspond-ing neighborhood in learner phaseThis learningmethod canbe treated as the neighborhood search strategy Let 119899119890119908119883
119894
represent the interactive learning result of the learner119883119894This
neighborhood search strategy can be expressed as follows
where 1199031and 1199032are random vectors in which each element
is a random number in the range [0 1] 119873119879119890119886119888ℎ119890119903 is theteacher of the learner 119883
119894rsquos corresponding neighborhood
and the learner 119883119896is selected randomly from the learnerrsquos
corresponding neighborhoodIn BBTLBO each learner is probabilistically learning by
means of the global search strategy or the neighborhoodsearch strategy in learner phaseThat is about 50of learnersin the population execute the learning strategy of learnerphase in the standard TLBO (shown in (3)) while theremaining 50execute neighborhood search strategy (shownin (12)) This will be helpful to balance the global search andlocal search in learner phase
Moreover compared to the original TLBO BBTLBOonlymodifies the learning strategies Therefore both the originalTLBO and BBTLBO have the same time complexity 119874 (NP sdot119863 sdot Genmax) where NP is the number of the population 119863is the number of dimensions and Genmax is the maximumnumber of generations
As explained above the pseudocode for the implementa-tion of BBTLBO is summarized in Algorithm 2
5 Functions Optimization
In this section to illustrate the effectiveness of the proposedmethod 20 benchmark functions are used to test the effi-ciency of BBTLBO To compare the search performance ofBBTLBO with some other methods other different algo-rithms are also simulated in the paper
51 Benchmark Functions Thedetails of 20 benchmark func-tions are shown in Table 1 Among 20 benchmark functions1198651to 1198659are unimodal functions and 119865
10to 11986520
are multi-modal functions The searching range and theory optima forall functions are also shown in Table 1
52 Parameter Settings All the experiments are carried outon the same machine with a Celoron 226GHz CPU 2GBmemory andWindows XP operating system withMatlab 79
6 The Scientific World Journal
(1) Begin(2) Initialize119873 (number of learners)119863 (number of dimensions) and hybridization factor 119906(3) Initialize learners 119883 and evaluate all learners119883(4) while (stopping condition not met)(5) for each learner119883
119894of the class Teaching phase
(6) TF = round(1 + rand(0 1))(7) Donate the119873 119879119890119886119888ℎ119890119903 and the119873 119872119890119886119899 in its neighborhood for each learner(8) Updating each learner according (11)(9) Accept 119899119890119908119883
119894if 119891(119899119890119908119883
119894) is better than 119891(119883
119894)
(10) endfor(11) for each learner119883
119894of the class Learning phase
(12) Randomly select one learner119883119896 such that 119894 = 119896
(13) if rand(0 1) lt 05(14) Updating each learner according (3)(15) else(16) Donate the119873119879119890119886119888ℎ119890119903 in its neighborhood for each learner(17) Updating each learner according (12)(18) endif(19) Accept 119899119890119908119883
119894if 119891(119899119890119908119883119894) is better than 119891(119883
119894)
(20) endfor(21) endwhile(22) end
Algorithm 2 BBTLBO( )
For the purpose of reducing statistical errors each algorithmis independently simulated 50 times For all algorithms thepopulation size was set to 20 Population-based stochasticalgorithms use the same stopping criterion that is reachinga certain number of function evaluations (FEs)
53 Effect of Variation in Parameter 119906 The hybridizationfactor u is set to 00 01 03 05 07 09 10 Comparativetests have been performed using different 119906 In our exper-iment the maximal FEs are used as ended condition ofalgorithm namely 40000 for all test functions Table 2 showsthe mean optimum solutions and the standard deviation ofthe solutions obtained using different hybridization factor119906 in the 50 independent runs The best results amongthe algorithms are shown in bold Figure 3 presents therepresentative convergence graphs of different benchmarkfunctions in terms of the mean fitness values achieved byusing different hybridization factor 119906 on all test functionsDue to the tight space limitation some sample graphs areillustrated
The comparisons in Table 2 and Figure 3 show that whenthe hybridization factor 119906 is set to 09 BBTLBOoffers the bestperformance on 20 test functions Hence the hybridizationfactor 119906 is set to 09 in the following experiments
54 Comparison of BBTLBO with Some Similar Bare-BonesAlgorithms In this section we compare BBTLBO with fiveother recently proposed three bare-bones DE variants andtwo bare-bones PSO algorithms Our experiment includestwo series of comparisons in terms of the solution accuracyand the solution convergence (convergence speed and successrate) We compared the performance of BBTLBO with other
similar bare-bones algorithms including BBPSO [20] BBExp[20] BBDE [21] GBDE [22] and MGBDE [22]
541 Comparisons on the Solution Accuracy In our exper-iment the maximal FEs are used as ended condition ofalgorithm namely 40000 for all test functions The resultsare shown in Table 3 in terms of the mean optimum solutionand the standard deviation of the solutions obtained in the 50independent runs by each algorithm on 20 test functionsThebest results among the algorithms are shown in bold Figure 4presents the convergence graphs of different benchmarkfunctions in terms of the mean fitness values achieved by 7algorithms for 50 independent runs Due to the tight spacelimitation some sample graphs are illustrated
From Table 3 it can be observed that the mean optimumsolution and the standard deviation of all algorithms performwell for the functions 119865
15and 11986517 Although BBExp performs
better than BBTLBO on function 1198659and MGBDE performs
better than BBTLBO on function 11986520 our approach BBTLBO
achieves better results than other algorithms on the rest of testfunctions Table 3 and Figure 4 conclude that the BBTLBOhas a good performance of the solution accuracy for testfunctions in this paper
542 Comparison of the Convergence Speed and SR In orderto compare the convergence speed and successful rate (SR)of different algorithms we select a threshold value of theobjective function for each test function For other functionsthe threshold values are listed in Table 4 In our experimentthe stopping criterion is that each algorithm is terminatedwhen the best fitness value so far is below the predefinedthreshold value (119879 Value) or the number of FEs reaches to
The Scientific World Journal 7
Table 1 Details of numerical benchmarks used
Function Formula 119863 Range Optima
Sphere 1198651(119909) =
119863
sum
119894=1
1199092
11989430 [minus100 100] 0
Sum square 1198652(119909) =
119863
sum
119894=1
1198941199092
11989430 [minus100 100] 0
Quadric 1198653(119909) =
119863
sum
119894=1
1198941199094
119894+ random(0 1) 30 [minus128 128] 0
Step 1198654(119909) =
119863
sum
119894=1
(lfloor119909119894+ 05rfloor)
2 30 [minus100 100] 0
Schwefel 12 1198655(119909) =
119863
sum
119894=1
(
119894
sum
119895=1
119909119895)
2
30 [minus100 100] 0
Schwefel 221 1198656(119909) = max 1003816100381610038161003816119909119894
1003816100381610038161003816 1 le 119894 le 119863 30 [minus100 100] 0
Schwefel 222 1198657(119909) =
119863
sum
119894=1
1003816100381610038161003816119909119894
1003816100381610038161003816+
119863
prod
119894=1
1003816100381610038161003816119909119894
100381610038161003816100381630 [minus10 10] 0
Zakharov 1198658(119909) =
119863
sum
119894=1
1199092
119894+ (
119863
sum
119894=1
05119894119909119894)
2
+ (
119863
sum
119894=1
05119894119909119894)
4
30 [minus100 100] 0
Rosenbrock 1198659(119909) =
119863minus1
sum
119894=1
lfloor100(1199092
119894minus 119909119894+1)2
+ (119909119894minus 1)2
rfloor 30 [minus2048 2048] 0
Ackley 11986510(119909) = 20 minus 20 exp((minus1
5)radic(
1
119863)
119863
sum
119894=1
1199092
119894) minus exp(( 1
119863)
119863
sum
119894=1
cos (2120587119909119894)) + 119890 30 [minus32 32] 0
Rastrigin 11986511(119909) =
119863
sum
119894=1
(1199092
119894minus 10 cos (2120587119909
119894) + 10) 30 [minus512 512] 0
Weierstrass11986512(119909) =
119863
sum
119894=1
(
119896maxsum
119896=0
[119886119896 cos (2120587119887119896 (119909
119894+ 05))]) minus 119863
119896maxsum
119896=0
[119886119896 cos (2120587119887119896 times 05)]
119886 = 05 119887 = 3 119896max = 20
30 [minus05 05] 0
Griewank 11986513(119909) =
119863
sum
119894=1
(1199092
119894
4000) minus
119899
prod
119894=1
cos(119909119894
radic119894
) + 1 30 [minus600 600] 0
Schwefel 11986514(119909) = 4189829119863 +
119863
sum
119894=1
(minus119909119894sinradicabs(119909
119894)) 30 [minus500 500] 0
Bohachevsky1 11986515(119909) = 119909
2
1+ 21199092
2minus 03 cos (3120587119909
1) minus 04 cos (4120587119909
2) + 07 2 [minus100 100] 0
Bohachevsky2 11986516(119909) = 119909
2
1+ 21199092
2minus 03cos (3120587119909
1) lowast cos (4120587119909
2) + 03 2 [minus100 100] 0
Bohachevsky3 11986517(119909) = 119909
2
1+ 21199092
2minus 03cos((3120587119909
1) + (4120587119909
2)) + 03 2 [minus100 100] 0
Shekel5 11986518(119909) = minus
5
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus101532
Shekel7 11986519(119909) = minus
7
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus104029
Shekel10 11986520(119909) = minus
10
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus105364
the maximal FEs 40000 The results are shown in Table 4in terms of the mean number of FEs (MFEs) required toconverge to the threshold and successful rate (SR) in the50 independent runs ldquoNaNrdquo represents that no runs of thecorresponding algorithm converged below the predefinedthreshold before meeting the maximum number of FEs Thebest results among the six algorithms are shown in boldface
FromTable 5 it can be observed that all algorithms hardlyconverge to the threshold for unimodal functions 119865
3 1198655 1198656
and 1198658and multimodal functions 119865
11 11986512 and 119865
14 BBTLBO
converges to the threshold except for functions 1198653 1198659 and
11986514 From the results of total average FEs BBTLBO converges
faster than other algorithms on all unimodal functions andthe majority of multimodal functions except for functions119865151198651611986519 and119865
20The acceleration rates between BBTLBO
and other algorithms are mostly 10 for functions 1198651 1198652 1198654
1198657 1198659 11986510 and 119865
13 From the results of total average SR
BBTLBO achieves the highest SR for those test functions ofwhich BBTLBO successfully converges to the threshold valueIt can be concluded that the BBTLBOhas a good performanceof convergence speed and successful rate (SR) of the solutionsfor test functions in this paper
Figure 3 Comparison of the performance curves using different 119906
55 Comparison of BBTLBO with DE Variants PSO Variantsand Some TLBO Variants In this section we comparedthe performance of BBTLBO with other optimization algo-rithms including jDE [28] SaDE [29] PSOcfLocal [27]PSOwFIPS [30] and TLBO [8 9] In our experiment themaximal FEs are used as the stopping criterion of all algo-rithms namely 40000 for all test functions The results areshown in Table 5 in terms of the mean optimum solutionand the standard deviation of the solutions obtained in the50 independent runs by each algorithm on 20 test functions
where ldquo119908119905119897rdquo summarizes the competition results amongBBTLBO and other algorithms The best results among thealgorithms are shown in boldface
The comparisons in Table 5 show that that all algorithmsperform well for 119865
15 11986516 and 119865
17 Although SaDE outper-
forms BBTLBOon11986514 PSOcfLocal outperforms BBTLBOon
1198659and PSOwFIPS outperforms BBTLBO on 119865
19and 11986520 and
BBTLBO offers the highest accuracy on functions 1198653 1198654 1198655
1198657 1198658 11986510 11986511 and 119865
18 ldquo119908119905119897rdquo shows that BBTLBO offers
well accuracy for the majority of test functions in this paper
The Scientific World Journal 11
0 05 1 15 2 25 3 35 4
0
1
2
3
FEs
minus4
minus3
minus2
minus1
times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(a) 1198653 Quadric
0 05 1 15 2 25 3 35 41
15
2
25
3
35
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)(b) 1198659 Rosenbrock
0 05 1 15 2 25 3 35 433
34
35
36
37
38
39
4
41
42
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(c) 11986518 Shekel5
0
0
05 1 15 2 25 3 35 4FEs
Mea
n fit
ness
times104
minus10
minus9
minus8
minus7
minus6
minus5
minus4
minus3
minus2
minus1
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
(d) 11986514 Schwefel
Figure 4 Comparison of the performance curves using different algorithms
Table 5 concludes that BBTLBO has a good performance ofthe solution accuracy for all unimodal optimization problemsand most complex multimodal optimization problems
6 Two Real-World Optimization Problems
In this section to show the effectiveness of the proposedmethod the proposed BBTLBO algorithm is applied toestimate parameters of two real-world problems
61 Nonlinear Function Approximation The artificial neuralnetwork trained by our BBTLBO algorithm is a three-layer
Input x Output y
Desired output dBBTLBO algorithm
ANN
minus
Figure 5 BBTLBO-based ANN
feed-forward network and the basic structure of the proposedscheme is depicted in Figure 5 The inputs are connectedto all the hidden units which in turn all connected to all
12 The Scientific World Journal
Table 4 The mean number of FEs and SR with acceptable solutions using different algorithms
Fun 119905 value BBPSO BBExp BBDE GBDE MGBDE BBTLBOMFEs SR MFEs SR MFEs SR MFEs SR MFEs SR MFEs SR
the outputs The variables consist of neural network weightsand biases Suppose a three-layer forward neural networkarchitecture with 119872 input units 119873 hidden units and 119870
output units and the number of the variables is shown asfollows
For neural network training the aim is to find a set ofweights with the smallest error measure Here the objectivefunction is the mean sum of squared errors (MSE) over alltraining patterns which is shown as follows
MSE = 1
119876 lowast 119870
119876
sum
119894=1
119870
sum
119895
1
2(119889119894119895minus 119910119894119895)2
(14)
where 119876 is the number of training data set 119870 is the numberof output units 119889
119894119895is desired output and 119910
119894119895is output inferred
from neural networkIn this example a three-layer feed-forward ANN with
one input unit five hidden units and one output unit isconstructed tomodel the curve of a nonlinear functionwhichis described by the following equation [31]
119910 = sin (2119909) exp (minus2119909) (15)
In this case activation function used in the output layer isthe sigma function and activation function used in the outputlayer is linear The number (dimension) of the variables is16 for BBTLBO-based ANN In order to train the ANN
200 pairs of data are chosen from the real model For eachalgorithm 50 runs are performed The other parametersare the same as those of the previous investigations Theresults are shown in Table 6 in terms of the mean MSEand the standard deviation obtained in the 50 independentruns for three methods Figure 6 shows the predicted timeseries for training and test using different algorithms It canconclude that the approximation achieved by BBTLBO hasgood performance
62 Tuning of PID Controller The continuous form of adiscrete-type PID controller with a small sampling period Δ119905is described as follows [32]
119890 [119894] sdot Δ119905 + 119870119863 sdot119890 [119896] minus 119890 [119896 minus 1]
Δ119905
(16)
where 119906[119896] is the controlled output respectively 119890[119896] = 119903[119896]minus119910[119896] is the error signal 119903[119896] and 119910[119896] are the reference signaland the system output and 119870
119875 119870119868 and 119870
119863represent the
proportional integral and derivate gains respectivelyFor an unknown plant the goal of this problem is to
minimize the integral absolute error (IAE) which is given asfollow [32 33]
and 119910(119905) is the output of the controlled objectiveIn our simulation the formulas for the plant examined
are given as follows [34]
119866 (119904) =1958
1199043+ 1789119904
2+ 1033119904 + 1908
(19)
The system sampling time is Δ119905 = 005 second and thecontrol value 119906 is limited in the range of [minus10 10] Other rel-evant system variables are 119870
119875isin [0 1] 119870
119868isin [0 1] and 119870
119863isin
[0 1] The weight coefficients of the cost function are set as1205961= 0999 120596
2= 0001 120596
3= 2 and 120596 = 100 in this example
In the simulations the step response of PID controlsystem tuned by the proposed BBTLBO is compared withthat tuned by the standard genetic algorithm (GA) and thestandard PSO (PSO) The population sizes of GA PSO andBBTLBO are 50 and the corresponding maximum numbersof iterations are 50 50 and 50 respectively In addition thecrossover rate is set as 090 and the mutation rate is 010 forGA
The optimal parameters and the corresponding perfor-mance values of the PID controllers are listed in Table 7 andthe corresponding performance curves and step responsescurves are given in Figures 7 and 8 It can be seen fromFigure 7 and Table 7 that the PID controller tuned byBBTLBO has the minimum cost function and CPU time
0 05 1 15 2 25 3 35 4 45 50
02
04
06
08
1
Time (s)
Step response curves using different methods
GAPSOBBTLBO
r in
you
t
Figure 8 Step response curves using different methods
Although PID controllers tuned by PSO have a smaller peaktime and rise time their maximum overshoots are muchlarger than the overshoot tuned by BBTLBO It concludes thatthe PID controller tuned by the BBTLBO could perform thebest control performance in the simulations
7 Conclusion
In this paper TLBO has been extended to BBTLBO whichuses the hybridization of the learning strategy in the stan-dard TLBO and Gaussian sampling learning to balance theexploration and the exploitation in teacher phase and uses amodified mutation operation so as to eliminate the duplicatelearners in learner phase The proposed BBTLBO algorithmis utilized to optimize 20 benchmark functions and tworeal-world optimization problems From the analysis andexperiments the BBTLBO algorithm significantly improvesthe performance of the original TLBO although it needs tospend more CPU time than the standard TLBO algorithmin each generation From the results compared with otheralgorithms on the 20 chosen test problems it can be observedthat the BBTLBO algorithm has good performance by usingneighborhood search more effectively to generate betterquality solutions although the BBTLBO algorithm does notalways have the best performance in all experiments cases ofthis paper It can be also observed that the BBTLBOalgorithm
16 The Scientific World Journal
gives the best performance on two real-world optimizationproblems compared with other algorithms in the paper
Further work includes research into neighborhood searchbased on different topological structures Moreover thealgorithm may be further applied to constrained dynamicand noisy single-objective and multiobjective optimizationdomain It is expected that BBTLBOwill be used tomore real-world optimization problems
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was partially supported by the National NaturalScience Foundation of China (61100173 61100009 61272283and 61304082) This work is partially supported by theNatural Science Foundation of Anhui Province China (Grantno 1308085MF82) and the Doctoral Innovation Foundationof Xirsquoan University of Technology (207-002J1305)
References
[1] D E Goldberg Genetic Algorithms in Search Optimization andMachine Learning Addison-Wesley Reading Mass USA 1989
[2] L C Jiao and L Wang ldquoA novel genetic algorithm based onimmunityrdquo IEEE Transactions on SystemsMan and CyberneticsA Systems and Humans vol 30 no 5 pp 552ndash561 2000
[3] R Storn and K Price ldquoDifferential evolution a simple andefficient Heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997
[4] M Dorigo and T Stutzle Ant Colony Optimization MIT Press2004
[5] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 December 1995
[6] D Karaboga and B Basturk ldquoOn the performance of artificialbee colony (ABC) algorithmrdquo Applied Soft Computing Journalvol 8 no 1 pp 687ndash697 2008
[7] D Simon ldquoBiogeography-based optimizationrdquo IEEE Transac-tions on Evolutionary Computation vol 12 no 6 pp 702ndash7132008
[8] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization a novelmethod for constrainedmechanicaldesign optimization problemsrdquo CAD Computer Aided Designvol 43 no 3 pp 303ndash315 2011
[9] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization an optimization method for continuousnon-linear large scale problemsrdquo Information Sciences vol 183no 1 pp 1ndash15 2012
[10] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization algorithm for unconstrained and con-strained real-parameter optimization problemsrdquo EngineeringOptimization vol 44 no 12 pp 1447ndash1462 2011
[11] V Togan ldquoDesign of planar steel frames using teaching-learningbased optimizationrdquo Engineering Structures vol 34 pp 225ndash232 2012
[12] R V Rao and V Patel ldquoAn elitist teaching-learning-based opti-mization algorithm for solving complex constrained optimiza-tion problemsrdquo International Journal of Industrial EngineeringComputations vol 3 pp 535ndash560 2012
[13] S O Degertekin and M S Hayalioglu ldquoSizing truss structuresusing teaching-learning-based optimizationrdquo Computers andStructures vol 119 pp 177ndash188 2013
[14] R V Rao and V Patel ldquoAn improved teaching-learning-basedoptimization algorithm for solving unconstrained optimizationproblemsrdquo Scientia Iranica vol 20 no 3 pp 710ndash720 2013
[15] R V Rao and V Patel ldquoMulti-objective optimization of com-bined Brayton and inverse Brayton cycles using advancedoptimization algorithmsrdquo Engineering Optimization vol 44 no8 pp 965ndash983 2011
[16] T Niknam F Golestaneh and M S Sadeghi ldquoTheta-multi-objective teaching-learning-based optimization for dynamiceconomic emission dispatchrdquo IEEE Systems Journal vol 6 no2 pp 341ndash352 2012
[17] R V Rao and V Patel ldquoMulti-objective optimization of heatexchangers using a modified teaching-learning-based opti-mization algorithmrdquo Applied Mathematical Modelling vol 37no 3 pp 1147ndash1162 2013
[18] M Clerc and J Kennedy ldquoThe particle swarm-explosion sta-bility and convergence in a multidimensional complex spacerdquoIEEE Transactions on Evolutionary Computation vol 6 no 1pp 58ndash73 2002
[19] F van den Bergh and A P Engelbrecht ldquoA study of particleswarm optimization particle trajectoriesrdquo Information Sciencesvol 176 no 8 pp 937ndash971 2006
[20] J Kennedy ldquoBare bones particle swarmsrdquo in Proceedings of theSwarm Intelligence Symposium (SIS rsquo03) pp 80ndash87 2003
[21] M G H Omran A P Engelbrecht and A Salman ldquoBarebones differential evolutionrdquo European Journal of OperationalResearch vol 196 no 1 pp 128ndash139 2009
[22] H Wang S Rahnamayan H Sun and M G H OmranldquoGaussian bare-bones differential evolutionrdquo IEEE Transactionson Cybernetics vol 43 no 2 pp 634ndash647 2013
[23] X H Hu and R Eberhart ldquoMultiobjective optimization usingdynamic neighborhood particle swarm optimizationrdquo in Pro-ceedings of the Congress on Evolutionary Computation pp 677ndash1681 2002
[24] M G Omran A P Engelbrecht and A Salman ldquoUsingthe ring neighborhood topology with self-adaptive differentialevolutionrdquo in Advances in Natural Computation pp 976ndash979Springer Berlin Germany 2006
[25] X Li ldquoNiching without niching parameters particle swarmoptimization using a ring topologyrdquo IEEE Transactions onEvolutionary Computation vol 14 no 1 pp 150ndash169 2010
[26] I Maruta T H Kim D Song and T Sugie ldquoSynthesis of fixed-structure robust controllers using a constrained particle swarmoptimizer with cyclic neighborhood topologyrdquo Expert Systemswith Applications vol 40 no 9 pp 3595ndash3605 2013
[27] J Kennedy and R Mendes ldquoPopulation structure and particleswarm performancerdquo in Proceedings of the International Con-ference on Evolutionary Computation pp 1671ndash1676 HonoluluHawaii USA 2002
[28] J Brest S Greiner B Boskovic M Mernik and V ZumerldquoSelf-adapting control parameters in differential evolution acomparative study on numerical benchmark problemsrdquo IEEETransactions on Evolutionary Computation vol 10 no 6 pp646ndash657 2006
The Scientific World Journal 17
[29] A K Qin V L Huang and P N Suganthan ldquoDifferential evo-lution algorithm with strategy adaptation for global numericaloptimizationrdquo IEEE Transactions on Evolutionary Computationvol 13 no 2 pp 398ndash417 2009
[30] R Mendes J Kennedy and J Neves ldquoThe fully informedparticle swarm simpler maybe betterrdquo IEEE Transactions onEvolutionary Computation vol 8 no 3 pp 204ndash210 2004
[31] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash62 2000
[32] J Liu Advanced PID Control and MATLAB Simulation Elec-tronic Industry Press 2003
[33] J Zhang J Zhuang H Du and S Wang ldquoSelf-organizinggenetic algorithm based tuning of PID controllersrdquo InformationSciences vol 179 no 7 pp 1007ndash1017 2009
[34] R Haber-Haber R Haber M Schmittdiel and R M delToro ldquoA classic solution for the control of a high-performancedrilling processrdquo International Journal of Machine Tools andManufacture vol 47 no 15 pp 2290ndash2297 2007
Initialize learners size (NP) dimension (D) and hybridization factor (u)
Calculate the NTeacher and NMean of each learner
Modify each learner Xi in the class= + r lowast (NTeacher minus TF lowast NMean)= N((NTeacher + Xi)2 (NTeacher minus Xi))
newXi = u lowast newX1 + (1 minus u) lowast newX2Teacher phase
newXi better Xi
Xi = newXi Xi = Xi
Yes
Yes
Yes
Yes
No
No
No
No
Denote the NTeacheri and randomly select a Xk for each Xi
Learner phase
rand(0 1) lt 05
The original TLBO learning Neighborhood search strategy
newXi better Xi
Xi = newXi Xi = Xi
Termination criteria satisfied
End
gen = gen + 1
XiV1V2
Figure 1 Flow chart showing the working of BBTLBO algorithm
paper In fact for TLBO if the new learner has a betterfunction value than that of the old learner it is replacedwith the old one in the memory Otherwise the old one isretained in the memory In other words a greedy selectionmechanism is employed as the selection operation betweenthe old and the candidate one Hence the new teacher and thenew learner are the global best (119892best) and learnerrsquos personalbest (119901best) found so far respectively The complete flowchartof the BBTLBO algorithm is shown in Figure 1
41 Neighborhood Search It is known that birds of a featherflock together and people of a mind fall into the samegroup Just like evolutionary algorithms themselves thenotion of neighborhood is inspired by nature Neighborhoodtechnique is an efficient method to maintain diversity of
the solutions It plays an important role in evolutionaryalgorithms and is often introduced by researchers in orderto allow maintenance of a population of diverse individualsand improve the exploration capability of population-basedheuristic algorithms [23ndash26] In fact learners with similarinterests form different learning groups Because of his orher favor characteristic the learner maybe learns from theexcellent individual in the learning group
For the implementation of grouping various types ofconnected distances may be used Here we have used aring topology [27] based on the indexes of learners for thesake of simplicity In a ring topology the first individualis the neighbor of the last individual and vice versa Basedon the ring topology a 119896-neighborhood radius is definedwhere 119896 is a predefined integer number For each individual
The Scientific World Journal 5
NeighborhoodiXi
Ximinus1
Xi+1
Figure 2 Ring neighborhood topology with three members
its 119896-neighborhood radius consists of 2119896 + 1 individuals(including oneself) which are 119883
119894minus119896 119883
119894 119883
119894+119896 That is
the neighborhood size is 2119896 + 1 for a 119896-neighborhood Forsimplicity 119896 is set to 1 (Figure 2) in our algorithmThismeansthat there are 3 individuals in each learning group Oncegroups are constructed we can utilize them for updating thelearners of the corresponding group
42 Teacher Phase To balance the global and local searchability a modified interactive learning strategy is proposed inteacher phase In this learning phase each learner employs aninteractive learning strategy (the hybridization of the learningstrategy of teacher phase in the standard TLBO and Gaussiansampling learning) based on neighborhood search
In BBTLBO the updating formula of the learning for alearner 119883
119894in teacher phase is proposed by the hybridization
of the learning strategy of teacher phase and the Gaussiansampling learning as follows
where 119906 called the hybridization factor is a random numberin the range [0 1] for the 119895th dimension 119873119879119890119886119888ℎ119890119903 and119873119872119890119886119899 are the existing neighborhood best solution and theneighborhood mean solution of each learner and TF is ateaching factor which can be either 1 or 2 randomly
In the BBTLBO there is a (119906 lowast 100) chance that the119895th dimension of the 119894th learner in the population follows thebehavior of the learning strategy of teacher phase while theremaining (100 minus 119906lowast 100) follow the search behavior of theGaussian sampling in teacher phase This will be helpful tobalance the advantages of fast convergence rate (the attraction
of the learning strategy of teacher phase) and exploration (theGaussian sampling) in BBTLBO
43 Learner Phase At the same time in the learner phase alearner interacts randomly with other learners for enhancinghis or her knowledge in the class This learning method canbe treated as the global search strategy (shown in (3))
In this paper we introduce a new learning strategy inwhich each learner learns from the neighborhood teacher andthe other learner selected randomly of his or her correspond-ing neighborhood in learner phaseThis learningmethod canbe treated as the neighborhood search strategy Let 119899119890119908119883
119894
represent the interactive learning result of the learner119883119894This
neighborhood search strategy can be expressed as follows
where 1199031and 1199032are random vectors in which each element
is a random number in the range [0 1] 119873119879119890119886119888ℎ119890119903 is theteacher of the learner 119883
119894rsquos corresponding neighborhood
and the learner 119883119896is selected randomly from the learnerrsquos
corresponding neighborhoodIn BBTLBO each learner is probabilistically learning by
means of the global search strategy or the neighborhoodsearch strategy in learner phaseThat is about 50of learnersin the population execute the learning strategy of learnerphase in the standard TLBO (shown in (3)) while theremaining 50execute neighborhood search strategy (shownin (12)) This will be helpful to balance the global search andlocal search in learner phase
Moreover compared to the original TLBO BBTLBOonlymodifies the learning strategies Therefore both the originalTLBO and BBTLBO have the same time complexity 119874 (NP sdot119863 sdot Genmax) where NP is the number of the population 119863is the number of dimensions and Genmax is the maximumnumber of generations
As explained above the pseudocode for the implementa-tion of BBTLBO is summarized in Algorithm 2
5 Functions Optimization
In this section to illustrate the effectiveness of the proposedmethod 20 benchmark functions are used to test the effi-ciency of BBTLBO To compare the search performance ofBBTLBO with some other methods other different algo-rithms are also simulated in the paper
51 Benchmark Functions Thedetails of 20 benchmark func-tions are shown in Table 1 Among 20 benchmark functions1198651to 1198659are unimodal functions and 119865
10to 11986520
are multi-modal functions The searching range and theory optima forall functions are also shown in Table 1
52 Parameter Settings All the experiments are carried outon the same machine with a Celoron 226GHz CPU 2GBmemory andWindows XP operating system withMatlab 79
6 The Scientific World Journal
(1) Begin(2) Initialize119873 (number of learners)119863 (number of dimensions) and hybridization factor 119906(3) Initialize learners 119883 and evaluate all learners119883(4) while (stopping condition not met)(5) for each learner119883
119894of the class Teaching phase
(6) TF = round(1 + rand(0 1))(7) Donate the119873 119879119890119886119888ℎ119890119903 and the119873 119872119890119886119899 in its neighborhood for each learner(8) Updating each learner according (11)(9) Accept 119899119890119908119883
119894if 119891(119899119890119908119883
119894) is better than 119891(119883
119894)
(10) endfor(11) for each learner119883
119894of the class Learning phase
(12) Randomly select one learner119883119896 such that 119894 = 119896
(13) if rand(0 1) lt 05(14) Updating each learner according (3)(15) else(16) Donate the119873119879119890119886119888ℎ119890119903 in its neighborhood for each learner(17) Updating each learner according (12)(18) endif(19) Accept 119899119890119908119883
119894if 119891(119899119890119908119883119894) is better than 119891(119883
119894)
(20) endfor(21) endwhile(22) end
Algorithm 2 BBTLBO( )
For the purpose of reducing statistical errors each algorithmis independently simulated 50 times For all algorithms thepopulation size was set to 20 Population-based stochasticalgorithms use the same stopping criterion that is reachinga certain number of function evaluations (FEs)
53 Effect of Variation in Parameter 119906 The hybridizationfactor u is set to 00 01 03 05 07 09 10 Comparativetests have been performed using different 119906 In our exper-iment the maximal FEs are used as ended condition ofalgorithm namely 40000 for all test functions Table 2 showsthe mean optimum solutions and the standard deviation ofthe solutions obtained using different hybridization factor119906 in the 50 independent runs The best results amongthe algorithms are shown in bold Figure 3 presents therepresentative convergence graphs of different benchmarkfunctions in terms of the mean fitness values achieved byusing different hybridization factor 119906 on all test functionsDue to the tight space limitation some sample graphs areillustrated
The comparisons in Table 2 and Figure 3 show that whenthe hybridization factor 119906 is set to 09 BBTLBOoffers the bestperformance on 20 test functions Hence the hybridizationfactor 119906 is set to 09 in the following experiments
54 Comparison of BBTLBO with Some Similar Bare-BonesAlgorithms In this section we compare BBTLBO with fiveother recently proposed three bare-bones DE variants andtwo bare-bones PSO algorithms Our experiment includestwo series of comparisons in terms of the solution accuracyand the solution convergence (convergence speed and successrate) We compared the performance of BBTLBO with other
similar bare-bones algorithms including BBPSO [20] BBExp[20] BBDE [21] GBDE [22] and MGBDE [22]
541 Comparisons on the Solution Accuracy In our exper-iment the maximal FEs are used as ended condition ofalgorithm namely 40000 for all test functions The resultsare shown in Table 3 in terms of the mean optimum solutionand the standard deviation of the solutions obtained in the 50independent runs by each algorithm on 20 test functionsThebest results among the algorithms are shown in bold Figure 4presents the convergence graphs of different benchmarkfunctions in terms of the mean fitness values achieved by 7algorithms for 50 independent runs Due to the tight spacelimitation some sample graphs are illustrated
From Table 3 it can be observed that the mean optimumsolution and the standard deviation of all algorithms performwell for the functions 119865
15and 11986517 Although BBExp performs
better than BBTLBO on function 1198659and MGBDE performs
better than BBTLBO on function 11986520 our approach BBTLBO
achieves better results than other algorithms on the rest of testfunctions Table 3 and Figure 4 conclude that the BBTLBOhas a good performance of the solution accuracy for testfunctions in this paper
542 Comparison of the Convergence Speed and SR In orderto compare the convergence speed and successful rate (SR)of different algorithms we select a threshold value of theobjective function for each test function For other functionsthe threshold values are listed in Table 4 In our experimentthe stopping criterion is that each algorithm is terminatedwhen the best fitness value so far is below the predefinedthreshold value (119879 Value) or the number of FEs reaches to
The Scientific World Journal 7
Table 1 Details of numerical benchmarks used
Function Formula 119863 Range Optima
Sphere 1198651(119909) =
119863
sum
119894=1
1199092
11989430 [minus100 100] 0
Sum square 1198652(119909) =
119863
sum
119894=1
1198941199092
11989430 [minus100 100] 0
Quadric 1198653(119909) =
119863
sum
119894=1
1198941199094
119894+ random(0 1) 30 [minus128 128] 0
Step 1198654(119909) =
119863
sum
119894=1
(lfloor119909119894+ 05rfloor)
2 30 [minus100 100] 0
Schwefel 12 1198655(119909) =
119863
sum
119894=1
(
119894
sum
119895=1
119909119895)
2
30 [minus100 100] 0
Schwefel 221 1198656(119909) = max 1003816100381610038161003816119909119894
1003816100381610038161003816 1 le 119894 le 119863 30 [minus100 100] 0
Schwefel 222 1198657(119909) =
119863
sum
119894=1
1003816100381610038161003816119909119894
1003816100381610038161003816+
119863
prod
119894=1
1003816100381610038161003816119909119894
100381610038161003816100381630 [minus10 10] 0
Zakharov 1198658(119909) =
119863
sum
119894=1
1199092
119894+ (
119863
sum
119894=1
05119894119909119894)
2
+ (
119863
sum
119894=1
05119894119909119894)
4
30 [minus100 100] 0
Rosenbrock 1198659(119909) =
119863minus1
sum
119894=1
lfloor100(1199092
119894minus 119909119894+1)2
+ (119909119894minus 1)2
rfloor 30 [minus2048 2048] 0
Ackley 11986510(119909) = 20 minus 20 exp((minus1
5)radic(
1
119863)
119863
sum
119894=1
1199092
119894) minus exp(( 1
119863)
119863
sum
119894=1
cos (2120587119909119894)) + 119890 30 [minus32 32] 0
Rastrigin 11986511(119909) =
119863
sum
119894=1
(1199092
119894minus 10 cos (2120587119909
119894) + 10) 30 [minus512 512] 0
Weierstrass11986512(119909) =
119863
sum
119894=1
(
119896maxsum
119896=0
[119886119896 cos (2120587119887119896 (119909
119894+ 05))]) minus 119863
119896maxsum
119896=0
[119886119896 cos (2120587119887119896 times 05)]
119886 = 05 119887 = 3 119896max = 20
30 [minus05 05] 0
Griewank 11986513(119909) =
119863
sum
119894=1
(1199092
119894
4000) minus
119899
prod
119894=1
cos(119909119894
radic119894
) + 1 30 [minus600 600] 0
Schwefel 11986514(119909) = 4189829119863 +
119863
sum
119894=1
(minus119909119894sinradicabs(119909
119894)) 30 [minus500 500] 0
Bohachevsky1 11986515(119909) = 119909
2
1+ 21199092
2minus 03 cos (3120587119909
1) minus 04 cos (4120587119909
2) + 07 2 [minus100 100] 0
Bohachevsky2 11986516(119909) = 119909
2
1+ 21199092
2minus 03cos (3120587119909
1) lowast cos (4120587119909
2) + 03 2 [minus100 100] 0
Bohachevsky3 11986517(119909) = 119909
2
1+ 21199092
2minus 03cos((3120587119909
1) + (4120587119909
2)) + 03 2 [minus100 100] 0
Shekel5 11986518(119909) = minus
5
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus101532
Shekel7 11986519(119909) = minus
7
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus104029
Shekel10 11986520(119909) = minus
10
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus105364
the maximal FEs 40000 The results are shown in Table 4in terms of the mean number of FEs (MFEs) required toconverge to the threshold and successful rate (SR) in the50 independent runs ldquoNaNrdquo represents that no runs of thecorresponding algorithm converged below the predefinedthreshold before meeting the maximum number of FEs Thebest results among the six algorithms are shown in boldface
FromTable 5 it can be observed that all algorithms hardlyconverge to the threshold for unimodal functions 119865
3 1198655 1198656
and 1198658and multimodal functions 119865
11 11986512 and 119865
14 BBTLBO
converges to the threshold except for functions 1198653 1198659 and
11986514 From the results of total average FEs BBTLBO converges
faster than other algorithms on all unimodal functions andthe majority of multimodal functions except for functions119865151198651611986519 and119865
20The acceleration rates between BBTLBO
and other algorithms are mostly 10 for functions 1198651 1198652 1198654
1198657 1198659 11986510 and 119865
13 From the results of total average SR
BBTLBO achieves the highest SR for those test functions ofwhich BBTLBO successfully converges to the threshold valueIt can be concluded that the BBTLBOhas a good performanceof convergence speed and successful rate (SR) of the solutionsfor test functions in this paper
Figure 3 Comparison of the performance curves using different 119906
55 Comparison of BBTLBO with DE Variants PSO Variantsand Some TLBO Variants In this section we comparedthe performance of BBTLBO with other optimization algo-rithms including jDE [28] SaDE [29] PSOcfLocal [27]PSOwFIPS [30] and TLBO [8 9] In our experiment themaximal FEs are used as the stopping criterion of all algo-rithms namely 40000 for all test functions The results areshown in Table 5 in terms of the mean optimum solutionand the standard deviation of the solutions obtained in the50 independent runs by each algorithm on 20 test functions
where ldquo119908119905119897rdquo summarizes the competition results amongBBTLBO and other algorithms The best results among thealgorithms are shown in boldface
The comparisons in Table 5 show that that all algorithmsperform well for 119865
15 11986516 and 119865
17 Although SaDE outper-
forms BBTLBOon11986514 PSOcfLocal outperforms BBTLBOon
1198659and PSOwFIPS outperforms BBTLBO on 119865
19and 11986520 and
BBTLBO offers the highest accuracy on functions 1198653 1198654 1198655
1198657 1198658 11986510 11986511 and 119865
18 ldquo119908119905119897rdquo shows that BBTLBO offers
well accuracy for the majority of test functions in this paper
The Scientific World Journal 11
0 05 1 15 2 25 3 35 4
0
1
2
3
FEs
minus4
minus3
minus2
minus1
times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(a) 1198653 Quadric
0 05 1 15 2 25 3 35 41
15
2
25
3
35
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)(b) 1198659 Rosenbrock
0 05 1 15 2 25 3 35 433
34
35
36
37
38
39
4
41
42
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(c) 11986518 Shekel5
0
0
05 1 15 2 25 3 35 4FEs
Mea
n fit
ness
times104
minus10
minus9
minus8
minus7
minus6
minus5
minus4
minus3
minus2
minus1
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
(d) 11986514 Schwefel
Figure 4 Comparison of the performance curves using different algorithms
Table 5 concludes that BBTLBO has a good performance ofthe solution accuracy for all unimodal optimization problemsand most complex multimodal optimization problems
6 Two Real-World Optimization Problems
In this section to show the effectiveness of the proposedmethod the proposed BBTLBO algorithm is applied toestimate parameters of two real-world problems
61 Nonlinear Function Approximation The artificial neuralnetwork trained by our BBTLBO algorithm is a three-layer
Input x Output y
Desired output dBBTLBO algorithm
ANN
minus
Figure 5 BBTLBO-based ANN
feed-forward network and the basic structure of the proposedscheme is depicted in Figure 5 The inputs are connectedto all the hidden units which in turn all connected to all
12 The Scientific World Journal
Table 4 The mean number of FEs and SR with acceptable solutions using different algorithms
Fun 119905 value BBPSO BBExp BBDE GBDE MGBDE BBTLBOMFEs SR MFEs SR MFEs SR MFEs SR MFEs SR MFEs SR
the outputs The variables consist of neural network weightsand biases Suppose a three-layer forward neural networkarchitecture with 119872 input units 119873 hidden units and 119870
output units and the number of the variables is shown asfollows
For neural network training the aim is to find a set ofweights with the smallest error measure Here the objectivefunction is the mean sum of squared errors (MSE) over alltraining patterns which is shown as follows
MSE = 1
119876 lowast 119870
119876
sum
119894=1
119870
sum
119895
1
2(119889119894119895minus 119910119894119895)2
(14)
where 119876 is the number of training data set 119870 is the numberof output units 119889
119894119895is desired output and 119910
119894119895is output inferred
from neural networkIn this example a three-layer feed-forward ANN with
one input unit five hidden units and one output unit isconstructed tomodel the curve of a nonlinear functionwhichis described by the following equation [31]
119910 = sin (2119909) exp (minus2119909) (15)
In this case activation function used in the output layer isthe sigma function and activation function used in the outputlayer is linear The number (dimension) of the variables is16 for BBTLBO-based ANN In order to train the ANN
200 pairs of data are chosen from the real model For eachalgorithm 50 runs are performed The other parametersare the same as those of the previous investigations Theresults are shown in Table 6 in terms of the mean MSEand the standard deviation obtained in the 50 independentruns for three methods Figure 6 shows the predicted timeseries for training and test using different algorithms It canconclude that the approximation achieved by BBTLBO hasgood performance
62 Tuning of PID Controller The continuous form of adiscrete-type PID controller with a small sampling period Δ119905is described as follows [32]
119890 [119894] sdot Δ119905 + 119870119863 sdot119890 [119896] minus 119890 [119896 minus 1]
Δ119905
(16)
where 119906[119896] is the controlled output respectively 119890[119896] = 119903[119896]minus119910[119896] is the error signal 119903[119896] and 119910[119896] are the reference signaland the system output and 119870
119875 119870119868 and 119870
119863represent the
proportional integral and derivate gains respectivelyFor an unknown plant the goal of this problem is to
minimize the integral absolute error (IAE) which is given asfollow [32 33]
and 119910(119905) is the output of the controlled objectiveIn our simulation the formulas for the plant examined
are given as follows [34]
119866 (119904) =1958
1199043+ 1789119904
2+ 1033119904 + 1908
(19)
The system sampling time is Δ119905 = 005 second and thecontrol value 119906 is limited in the range of [minus10 10] Other rel-evant system variables are 119870
119875isin [0 1] 119870
119868isin [0 1] and 119870
119863isin
[0 1] The weight coefficients of the cost function are set as1205961= 0999 120596
2= 0001 120596
3= 2 and 120596 = 100 in this example
In the simulations the step response of PID controlsystem tuned by the proposed BBTLBO is compared withthat tuned by the standard genetic algorithm (GA) and thestandard PSO (PSO) The population sizes of GA PSO andBBTLBO are 50 and the corresponding maximum numbersof iterations are 50 50 and 50 respectively In addition thecrossover rate is set as 090 and the mutation rate is 010 forGA
The optimal parameters and the corresponding perfor-mance values of the PID controllers are listed in Table 7 andthe corresponding performance curves and step responsescurves are given in Figures 7 and 8 It can be seen fromFigure 7 and Table 7 that the PID controller tuned byBBTLBO has the minimum cost function and CPU time
0 05 1 15 2 25 3 35 4 45 50
02
04
06
08
1
Time (s)
Step response curves using different methods
GAPSOBBTLBO
r in
you
t
Figure 8 Step response curves using different methods
Although PID controllers tuned by PSO have a smaller peaktime and rise time their maximum overshoots are muchlarger than the overshoot tuned by BBTLBO It concludes thatthe PID controller tuned by the BBTLBO could perform thebest control performance in the simulations
7 Conclusion
In this paper TLBO has been extended to BBTLBO whichuses the hybridization of the learning strategy in the stan-dard TLBO and Gaussian sampling learning to balance theexploration and the exploitation in teacher phase and uses amodified mutation operation so as to eliminate the duplicatelearners in learner phase The proposed BBTLBO algorithmis utilized to optimize 20 benchmark functions and tworeal-world optimization problems From the analysis andexperiments the BBTLBO algorithm significantly improvesthe performance of the original TLBO although it needs tospend more CPU time than the standard TLBO algorithmin each generation From the results compared with otheralgorithms on the 20 chosen test problems it can be observedthat the BBTLBO algorithm has good performance by usingneighborhood search more effectively to generate betterquality solutions although the BBTLBO algorithm does notalways have the best performance in all experiments cases ofthis paper It can be also observed that the BBTLBOalgorithm
16 The Scientific World Journal
gives the best performance on two real-world optimizationproblems compared with other algorithms in the paper
Further work includes research into neighborhood searchbased on different topological structures Moreover thealgorithm may be further applied to constrained dynamicand noisy single-objective and multiobjective optimizationdomain It is expected that BBTLBOwill be used tomore real-world optimization problems
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was partially supported by the National NaturalScience Foundation of China (61100173 61100009 61272283and 61304082) This work is partially supported by theNatural Science Foundation of Anhui Province China (Grantno 1308085MF82) and the Doctoral Innovation Foundationof Xirsquoan University of Technology (207-002J1305)
References
[1] D E Goldberg Genetic Algorithms in Search Optimization andMachine Learning Addison-Wesley Reading Mass USA 1989
[2] L C Jiao and L Wang ldquoA novel genetic algorithm based onimmunityrdquo IEEE Transactions on SystemsMan and CyberneticsA Systems and Humans vol 30 no 5 pp 552ndash561 2000
[3] R Storn and K Price ldquoDifferential evolution a simple andefficient Heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997
[4] M Dorigo and T Stutzle Ant Colony Optimization MIT Press2004
[5] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 December 1995
[6] D Karaboga and B Basturk ldquoOn the performance of artificialbee colony (ABC) algorithmrdquo Applied Soft Computing Journalvol 8 no 1 pp 687ndash697 2008
[7] D Simon ldquoBiogeography-based optimizationrdquo IEEE Transac-tions on Evolutionary Computation vol 12 no 6 pp 702ndash7132008
[8] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization a novelmethod for constrainedmechanicaldesign optimization problemsrdquo CAD Computer Aided Designvol 43 no 3 pp 303ndash315 2011
[9] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization an optimization method for continuousnon-linear large scale problemsrdquo Information Sciences vol 183no 1 pp 1ndash15 2012
[10] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization algorithm for unconstrained and con-strained real-parameter optimization problemsrdquo EngineeringOptimization vol 44 no 12 pp 1447ndash1462 2011
[11] V Togan ldquoDesign of planar steel frames using teaching-learningbased optimizationrdquo Engineering Structures vol 34 pp 225ndash232 2012
[12] R V Rao and V Patel ldquoAn elitist teaching-learning-based opti-mization algorithm for solving complex constrained optimiza-tion problemsrdquo International Journal of Industrial EngineeringComputations vol 3 pp 535ndash560 2012
[13] S O Degertekin and M S Hayalioglu ldquoSizing truss structuresusing teaching-learning-based optimizationrdquo Computers andStructures vol 119 pp 177ndash188 2013
[14] R V Rao and V Patel ldquoAn improved teaching-learning-basedoptimization algorithm for solving unconstrained optimizationproblemsrdquo Scientia Iranica vol 20 no 3 pp 710ndash720 2013
[15] R V Rao and V Patel ldquoMulti-objective optimization of com-bined Brayton and inverse Brayton cycles using advancedoptimization algorithmsrdquo Engineering Optimization vol 44 no8 pp 965ndash983 2011
[16] T Niknam F Golestaneh and M S Sadeghi ldquoTheta-multi-objective teaching-learning-based optimization for dynamiceconomic emission dispatchrdquo IEEE Systems Journal vol 6 no2 pp 341ndash352 2012
[17] R V Rao and V Patel ldquoMulti-objective optimization of heatexchangers using a modified teaching-learning-based opti-mization algorithmrdquo Applied Mathematical Modelling vol 37no 3 pp 1147ndash1162 2013
[18] M Clerc and J Kennedy ldquoThe particle swarm-explosion sta-bility and convergence in a multidimensional complex spacerdquoIEEE Transactions on Evolutionary Computation vol 6 no 1pp 58ndash73 2002
[19] F van den Bergh and A P Engelbrecht ldquoA study of particleswarm optimization particle trajectoriesrdquo Information Sciencesvol 176 no 8 pp 937ndash971 2006
[20] J Kennedy ldquoBare bones particle swarmsrdquo in Proceedings of theSwarm Intelligence Symposium (SIS rsquo03) pp 80ndash87 2003
[21] M G H Omran A P Engelbrecht and A Salman ldquoBarebones differential evolutionrdquo European Journal of OperationalResearch vol 196 no 1 pp 128ndash139 2009
[22] H Wang S Rahnamayan H Sun and M G H OmranldquoGaussian bare-bones differential evolutionrdquo IEEE Transactionson Cybernetics vol 43 no 2 pp 634ndash647 2013
[23] X H Hu and R Eberhart ldquoMultiobjective optimization usingdynamic neighborhood particle swarm optimizationrdquo in Pro-ceedings of the Congress on Evolutionary Computation pp 677ndash1681 2002
[24] M G Omran A P Engelbrecht and A Salman ldquoUsingthe ring neighborhood topology with self-adaptive differentialevolutionrdquo in Advances in Natural Computation pp 976ndash979Springer Berlin Germany 2006
[25] X Li ldquoNiching without niching parameters particle swarmoptimization using a ring topologyrdquo IEEE Transactions onEvolutionary Computation vol 14 no 1 pp 150ndash169 2010
[26] I Maruta T H Kim D Song and T Sugie ldquoSynthesis of fixed-structure robust controllers using a constrained particle swarmoptimizer with cyclic neighborhood topologyrdquo Expert Systemswith Applications vol 40 no 9 pp 3595ndash3605 2013
[27] J Kennedy and R Mendes ldquoPopulation structure and particleswarm performancerdquo in Proceedings of the International Con-ference on Evolutionary Computation pp 1671ndash1676 HonoluluHawaii USA 2002
[28] J Brest S Greiner B Boskovic M Mernik and V ZumerldquoSelf-adapting control parameters in differential evolution acomparative study on numerical benchmark problemsrdquo IEEETransactions on Evolutionary Computation vol 10 no 6 pp646ndash657 2006
The Scientific World Journal 17
[29] A K Qin V L Huang and P N Suganthan ldquoDifferential evo-lution algorithm with strategy adaptation for global numericaloptimizationrdquo IEEE Transactions on Evolutionary Computationvol 13 no 2 pp 398ndash417 2009
[30] R Mendes J Kennedy and J Neves ldquoThe fully informedparticle swarm simpler maybe betterrdquo IEEE Transactions onEvolutionary Computation vol 8 no 3 pp 204ndash210 2004
[31] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash62 2000
[32] J Liu Advanced PID Control and MATLAB Simulation Elec-tronic Industry Press 2003
[33] J Zhang J Zhuang H Du and S Wang ldquoSelf-organizinggenetic algorithm based tuning of PID controllersrdquo InformationSciences vol 179 no 7 pp 1007ndash1017 2009
[34] R Haber-Haber R Haber M Schmittdiel and R M delToro ldquoA classic solution for the control of a high-performancedrilling processrdquo International Journal of Machine Tools andManufacture vol 47 no 15 pp 2290ndash2297 2007
Figure 2 Ring neighborhood topology with three members
its 119896-neighborhood radius consists of 2119896 + 1 individuals(including oneself) which are 119883
119894minus119896 119883
119894 119883
119894+119896 That is
the neighborhood size is 2119896 + 1 for a 119896-neighborhood Forsimplicity 119896 is set to 1 (Figure 2) in our algorithmThismeansthat there are 3 individuals in each learning group Oncegroups are constructed we can utilize them for updating thelearners of the corresponding group
42 Teacher Phase To balance the global and local searchability a modified interactive learning strategy is proposed inteacher phase In this learning phase each learner employs aninteractive learning strategy (the hybridization of the learningstrategy of teacher phase in the standard TLBO and Gaussiansampling learning) based on neighborhood search
In BBTLBO the updating formula of the learning for alearner 119883
119894in teacher phase is proposed by the hybridization
of the learning strategy of teacher phase and the Gaussiansampling learning as follows
where 119906 called the hybridization factor is a random numberin the range [0 1] for the 119895th dimension 119873119879119890119886119888ℎ119890119903 and119873119872119890119886119899 are the existing neighborhood best solution and theneighborhood mean solution of each learner and TF is ateaching factor which can be either 1 or 2 randomly
In the BBTLBO there is a (119906 lowast 100) chance that the119895th dimension of the 119894th learner in the population follows thebehavior of the learning strategy of teacher phase while theremaining (100 minus 119906lowast 100) follow the search behavior of theGaussian sampling in teacher phase This will be helpful tobalance the advantages of fast convergence rate (the attraction
of the learning strategy of teacher phase) and exploration (theGaussian sampling) in BBTLBO
43 Learner Phase At the same time in the learner phase alearner interacts randomly with other learners for enhancinghis or her knowledge in the class This learning method canbe treated as the global search strategy (shown in (3))
In this paper we introduce a new learning strategy inwhich each learner learns from the neighborhood teacher andthe other learner selected randomly of his or her correspond-ing neighborhood in learner phaseThis learningmethod canbe treated as the neighborhood search strategy Let 119899119890119908119883
119894
represent the interactive learning result of the learner119883119894This
neighborhood search strategy can be expressed as follows
where 1199031and 1199032are random vectors in which each element
is a random number in the range [0 1] 119873119879119890119886119888ℎ119890119903 is theteacher of the learner 119883
119894rsquos corresponding neighborhood
and the learner 119883119896is selected randomly from the learnerrsquos
corresponding neighborhoodIn BBTLBO each learner is probabilistically learning by
means of the global search strategy or the neighborhoodsearch strategy in learner phaseThat is about 50of learnersin the population execute the learning strategy of learnerphase in the standard TLBO (shown in (3)) while theremaining 50execute neighborhood search strategy (shownin (12)) This will be helpful to balance the global search andlocal search in learner phase
Moreover compared to the original TLBO BBTLBOonlymodifies the learning strategies Therefore both the originalTLBO and BBTLBO have the same time complexity 119874 (NP sdot119863 sdot Genmax) where NP is the number of the population 119863is the number of dimensions and Genmax is the maximumnumber of generations
As explained above the pseudocode for the implementa-tion of BBTLBO is summarized in Algorithm 2
5 Functions Optimization
In this section to illustrate the effectiveness of the proposedmethod 20 benchmark functions are used to test the effi-ciency of BBTLBO To compare the search performance ofBBTLBO with some other methods other different algo-rithms are also simulated in the paper
51 Benchmark Functions Thedetails of 20 benchmark func-tions are shown in Table 1 Among 20 benchmark functions1198651to 1198659are unimodal functions and 119865
10to 11986520
are multi-modal functions The searching range and theory optima forall functions are also shown in Table 1
52 Parameter Settings All the experiments are carried outon the same machine with a Celoron 226GHz CPU 2GBmemory andWindows XP operating system withMatlab 79
6 The Scientific World Journal
(1) Begin(2) Initialize119873 (number of learners)119863 (number of dimensions) and hybridization factor 119906(3) Initialize learners 119883 and evaluate all learners119883(4) while (stopping condition not met)(5) for each learner119883
119894of the class Teaching phase
(6) TF = round(1 + rand(0 1))(7) Donate the119873 119879119890119886119888ℎ119890119903 and the119873 119872119890119886119899 in its neighborhood for each learner(8) Updating each learner according (11)(9) Accept 119899119890119908119883
119894if 119891(119899119890119908119883
119894) is better than 119891(119883
119894)
(10) endfor(11) for each learner119883
119894of the class Learning phase
(12) Randomly select one learner119883119896 such that 119894 = 119896
(13) if rand(0 1) lt 05(14) Updating each learner according (3)(15) else(16) Donate the119873119879119890119886119888ℎ119890119903 in its neighborhood for each learner(17) Updating each learner according (12)(18) endif(19) Accept 119899119890119908119883
119894if 119891(119899119890119908119883119894) is better than 119891(119883
119894)
(20) endfor(21) endwhile(22) end
Algorithm 2 BBTLBO( )
For the purpose of reducing statistical errors each algorithmis independently simulated 50 times For all algorithms thepopulation size was set to 20 Population-based stochasticalgorithms use the same stopping criterion that is reachinga certain number of function evaluations (FEs)
53 Effect of Variation in Parameter 119906 The hybridizationfactor u is set to 00 01 03 05 07 09 10 Comparativetests have been performed using different 119906 In our exper-iment the maximal FEs are used as ended condition ofalgorithm namely 40000 for all test functions Table 2 showsthe mean optimum solutions and the standard deviation ofthe solutions obtained using different hybridization factor119906 in the 50 independent runs The best results amongthe algorithms are shown in bold Figure 3 presents therepresentative convergence graphs of different benchmarkfunctions in terms of the mean fitness values achieved byusing different hybridization factor 119906 on all test functionsDue to the tight space limitation some sample graphs areillustrated
The comparisons in Table 2 and Figure 3 show that whenthe hybridization factor 119906 is set to 09 BBTLBOoffers the bestperformance on 20 test functions Hence the hybridizationfactor 119906 is set to 09 in the following experiments
54 Comparison of BBTLBO with Some Similar Bare-BonesAlgorithms In this section we compare BBTLBO with fiveother recently proposed three bare-bones DE variants andtwo bare-bones PSO algorithms Our experiment includestwo series of comparisons in terms of the solution accuracyand the solution convergence (convergence speed and successrate) We compared the performance of BBTLBO with other
similar bare-bones algorithms including BBPSO [20] BBExp[20] BBDE [21] GBDE [22] and MGBDE [22]
541 Comparisons on the Solution Accuracy In our exper-iment the maximal FEs are used as ended condition ofalgorithm namely 40000 for all test functions The resultsare shown in Table 3 in terms of the mean optimum solutionand the standard deviation of the solutions obtained in the 50independent runs by each algorithm on 20 test functionsThebest results among the algorithms are shown in bold Figure 4presents the convergence graphs of different benchmarkfunctions in terms of the mean fitness values achieved by 7algorithms for 50 independent runs Due to the tight spacelimitation some sample graphs are illustrated
From Table 3 it can be observed that the mean optimumsolution and the standard deviation of all algorithms performwell for the functions 119865
15and 11986517 Although BBExp performs
better than BBTLBO on function 1198659and MGBDE performs
better than BBTLBO on function 11986520 our approach BBTLBO
achieves better results than other algorithms on the rest of testfunctions Table 3 and Figure 4 conclude that the BBTLBOhas a good performance of the solution accuracy for testfunctions in this paper
542 Comparison of the Convergence Speed and SR In orderto compare the convergence speed and successful rate (SR)of different algorithms we select a threshold value of theobjective function for each test function For other functionsthe threshold values are listed in Table 4 In our experimentthe stopping criterion is that each algorithm is terminatedwhen the best fitness value so far is below the predefinedthreshold value (119879 Value) or the number of FEs reaches to
The Scientific World Journal 7
Table 1 Details of numerical benchmarks used
Function Formula 119863 Range Optima
Sphere 1198651(119909) =
119863
sum
119894=1
1199092
11989430 [minus100 100] 0
Sum square 1198652(119909) =
119863
sum
119894=1
1198941199092
11989430 [minus100 100] 0
Quadric 1198653(119909) =
119863
sum
119894=1
1198941199094
119894+ random(0 1) 30 [minus128 128] 0
Step 1198654(119909) =
119863
sum
119894=1
(lfloor119909119894+ 05rfloor)
2 30 [minus100 100] 0
Schwefel 12 1198655(119909) =
119863
sum
119894=1
(
119894
sum
119895=1
119909119895)
2
30 [minus100 100] 0
Schwefel 221 1198656(119909) = max 1003816100381610038161003816119909119894
1003816100381610038161003816 1 le 119894 le 119863 30 [minus100 100] 0
Schwefel 222 1198657(119909) =
119863
sum
119894=1
1003816100381610038161003816119909119894
1003816100381610038161003816+
119863
prod
119894=1
1003816100381610038161003816119909119894
100381610038161003816100381630 [minus10 10] 0
Zakharov 1198658(119909) =
119863
sum
119894=1
1199092
119894+ (
119863
sum
119894=1
05119894119909119894)
2
+ (
119863
sum
119894=1
05119894119909119894)
4
30 [minus100 100] 0
Rosenbrock 1198659(119909) =
119863minus1
sum
119894=1
lfloor100(1199092
119894minus 119909119894+1)2
+ (119909119894minus 1)2
rfloor 30 [minus2048 2048] 0
Ackley 11986510(119909) = 20 minus 20 exp((minus1
5)radic(
1
119863)
119863
sum
119894=1
1199092
119894) minus exp(( 1
119863)
119863
sum
119894=1
cos (2120587119909119894)) + 119890 30 [minus32 32] 0
Rastrigin 11986511(119909) =
119863
sum
119894=1
(1199092
119894minus 10 cos (2120587119909
119894) + 10) 30 [minus512 512] 0
Weierstrass11986512(119909) =
119863
sum
119894=1
(
119896maxsum
119896=0
[119886119896 cos (2120587119887119896 (119909
119894+ 05))]) minus 119863
119896maxsum
119896=0
[119886119896 cos (2120587119887119896 times 05)]
119886 = 05 119887 = 3 119896max = 20
30 [minus05 05] 0
Griewank 11986513(119909) =
119863
sum
119894=1
(1199092
119894
4000) minus
119899
prod
119894=1
cos(119909119894
radic119894
) + 1 30 [minus600 600] 0
Schwefel 11986514(119909) = 4189829119863 +
119863
sum
119894=1
(minus119909119894sinradicabs(119909
119894)) 30 [minus500 500] 0
Bohachevsky1 11986515(119909) = 119909
2
1+ 21199092
2minus 03 cos (3120587119909
1) minus 04 cos (4120587119909
2) + 07 2 [minus100 100] 0
Bohachevsky2 11986516(119909) = 119909
2
1+ 21199092
2minus 03cos (3120587119909
1) lowast cos (4120587119909
2) + 03 2 [minus100 100] 0
Bohachevsky3 11986517(119909) = 119909
2
1+ 21199092
2minus 03cos((3120587119909
1) + (4120587119909
2)) + 03 2 [minus100 100] 0
Shekel5 11986518(119909) = minus
5
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus101532
Shekel7 11986519(119909) = minus
7
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus104029
Shekel10 11986520(119909) = minus
10
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus105364
the maximal FEs 40000 The results are shown in Table 4in terms of the mean number of FEs (MFEs) required toconverge to the threshold and successful rate (SR) in the50 independent runs ldquoNaNrdquo represents that no runs of thecorresponding algorithm converged below the predefinedthreshold before meeting the maximum number of FEs Thebest results among the six algorithms are shown in boldface
FromTable 5 it can be observed that all algorithms hardlyconverge to the threshold for unimodal functions 119865
3 1198655 1198656
and 1198658and multimodal functions 119865
11 11986512 and 119865
14 BBTLBO
converges to the threshold except for functions 1198653 1198659 and
11986514 From the results of total average FEs BBTLBO converges
faster than other algorithms on all unimodal functions andthe majority of multimodal functions except for functions119865151198651611986519 and119865
20The acceleration rates between BBTLBO
and other algorithms are mostly 10 for functions 1198651 1198652 1198654
1198657 1198659 11986510 and 119865
13 From the results of total average SR
BBTLBO achieves the highest SR for those test functions ofwhich BBTLBO successfully converges to the threshold valueIt can be concluded that the BBTLBOhas a good performanceof convergence speed and successful rate (SR) of the solutionsfor test functions in this paper
Figure 3 Comparison of the performance curves using different 119906
55 Comparison of BBTLBO with DE Variants PSO Variantsand Some TLBO Variants In this section we comparedthe performance of BBTLBO with other optimization algo-rithms including jDE [28] SaDE [29] PSOcfLocal [27]PSOwFIPS [30] and TLBO [8 9] In our experiment themaximal FEs are used as the stopping criterion of all algo-rithms namely 40000 for all test functions The results areshown in Table 5 in terms of the mean optimum solutionand the standard deviation of the solutions obtained in the50 independent runs by each algorithm on 20 test functions
where ldquo119908119905119897rdquo summarizes the competition results amongBBTLBO and other algorithms The best results among thealgorithms are shown in boldface
The comparisons in Table 5 show that that all algorithmsperform well for 119865
15 11986516 and 119865
17 Although SaDE outper-
forms BBTLBOon11986514 PSOcfLocal outperforms BBTLBOon
1198659and PSOwFIPS outperforms BBTLBO on 119865
19and 11986520 and
BBTLBO offers the highest accuracy on functions 1198653 1198654 1198655
1198657 1198658 11986510 11986511 and 119865
18 ldquo119908119905119897rdquo shows that BBTLBO offers
well accuracy for the majority of test functions in this paper
The Scientific World Journal 11
0 05 1 15 2 25 3 35 4
0
1
2
3
FEs
minus4
minus3
minus2
minus1
times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(a) 1198653 Quadric
0 05 1 15 2 25 3 35 41
15
2
25
3
35
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)(b) 1198659 Rosenbrock
0 05 1 15 2 25 3 35 433
34
35
36
37
38
39
4
41
42
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(c) 11986518 Shekel5
0
0
05 1 15 2 25 3 35 4FEs
Mea
n fit
ness
times104
minus10
minus9
minus8
minus7
minus6
minus5
minus4
minus3
minus2
minus1
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
(d) 11986514 Schwefel
Figure 4 Comparison of the performance curves using different algorithms
Table 5 concludes that BBTLBO has a good performance ofthe solution accuracy for all unimodal optimization problemsand most complex multimodal optimization problems
6 Two Real-World Optimization Problems
In this section to show the effectiveness of the proposedmethod the proposed BBTLBO algorithm is applied toestimate parameters of two real-world problems
61 Nonlinear Function Approximation The artificial neuralnetwork trained by our BBTLBO algorithm is a three-layer
Input x Output y
Desired output dBBTLBO algorithm
ANN
minus
Figure 5 BBTLBO-based ANN
feed-forward network and the basic structure of the proposedscheme is depicted in Figure 5 The inputs are connectedto all the hidden units which in turn all connected to all
12 The Scientific World Journal
Table 4 The mean number of FEs and SR with acceptable solutions using different algorithms
Fun 119905 value BBPSO BBExp BBDE GBDE MGBDE BBTLBOMFEs SR MFEs SR MFEs SR MFEs SR MFEs SR MFEs SR
the outputs The variables consist of neural network weightsand biases Suppose a three-layer forward neural networkarchitecture with 119872 input units 119873 hidden units and 119870
output units and the number of the variables is shown asfollows
For neural network training the aim is to find a set ofweights with the smallest error measure Here the objectivefunction is the mean sum of squared errors (MSE) over alltraining patterns which is shown as follows
MSE = 1
119876 lowast 119870
119876
sum
119894=1
119870
sum
119895
1
2(119889119894119895minus 119910119894119895)2
(14)
where 119876 is the number of training data set 119870 is the numberof output units 119889
119894119895is desired output and 119910
119894119895is output inferred
from neural networkIn this example a three-layer feed-forward ANN with
one input unit five hidden units and one output unit isconstructed tomodel the curve of a nonlinear functionwhichis described by the following equation [31]
119910 = sin (2119909) exp (minus2119909) (15)
In this case activation function used in the output layer isthe sigma function and activation function used in the outputlayer is linear The number (dimension) of the variables is16 for BBTLBO-based ANN In order to train the ANN
200 pairs of data are chosen from the real model For eachalgorithm 50 runs are performed The other parametersare the same as those of the previous investigations Theresults are shown in Table 6 in terms of the mean MSEand the standard deviation obtained in the 50 independentruns for three methods Figure 6 shows the predicted timeseries for training and test using different algorithms It canconclude that the approximation achieved by BBTLBO hasgood performance
62 Tuning of PID Controller The continuous form of adiscrete-type PID controller with a small sampling period Δ119905is described as follows [32]
119890 [119894] sdot Δ119905 + 119870119863 sdot119890 [119896] minus 119890 [119896 minus 1]
Δ119905
(16)
where 119906[119896] is the controlled output respectively 119890[119896] = 119903[119896]minus119910[119896] is the error signal 119903[119896] and 119910[119896] are the reference signaland the system output and 119870
119875 119870119868 and 119870
119863represent the
proportional integral and derivate gains respectivelyFor an unknown plant the goal of this problem is to
minimize the integral absolute error (IAE) which is given asfollow [32 33]
and 119910(119905) is the output of the controlled objectiveIn our simulation the formulas for the plant examined
are given as follows [34]
119866 (119904) =1958
1199043+ 1789119904
2+ 1033119904 + 1908
(19)
The system sampling time is Δ119905 = 005 second and thecontrol value 119906 is limited in the range of [minus10 10] Other rel-evant system variables are 119870
119875isin [0 1] 119870
119868isin [0 1] and 119870
119863isin
[0 1] The weight coefficients of the cost function are set as1205961= 0999 120596
2= 0001 120596
3= 2 and 120596 = 100 in this example
In the simulations the step response of PID controlsystem tuned by the proposed BBTLBO is compared withthat tuned by the standard genetic algorithm (GA) and thestandard PSO (PSO) The population sizes of GA PSO andBBTLBO are 50 and the corresponding maximum numbersof iterations are 50 50 and 50 respectively In addition thecrossover rate is set as 090 and the mutation rate is 010 forGA
The optimal parameters and the corresponding perfor-mance values of the PID controllers are listed in Table 7 andthe corresponding performance curves and step responsescurves are given in Figures 7 and 8 It can be seen fromFigure 7 and Table 7 that the PID controller tuned byBBTLBO has the minimum cost function and CPU time
0 05 1 15 2 25 3 35 4 45 50
02
04
06
08
1
Time (s)
Step response curves using different methods
GAPSOBBTLBO
r in
you
t
Figure 8 Step response curves using different methods
Although PID controllers tuned by PSO have a smaller peaktime and rise time their maximum overshoots are muchlarger than the overshoot tuned by BBTLBO It concludes thatthe PID controller tuned by the BBTLBO could perform thebest control performance in the simulations
7 Conclusion
In this paper TLBO has been extended to BBTLBO whichuses the hybridization of the learning strategy in the stan-dard TLBO and Gaussian sampling learning to balance theexploration and the exploitation in teacher phase and uses amodified mutation operation so as to eliminate the duplicatelearners in learner phase The proposed BBTLBO algorithmis utilized to optimize 20 benchmark functions and tworeal-world optimization problems From the analysis andexperiments the BBTLBO algorithm significantly improvesthe performance of the original TLBO although it needs tospend more CPU time than the standard TLBO algorithmin each generation From the results compared with otheralgorithms on the 20 chosen test problems it can be observedthat the BBTLBO algorithm has good performance by usingneighborhood search more effectively to generate betterquality solutions although the BBTLBO algorithm does notalways have the best performance in all experiments cases ofthis paper It can be also observed that the BBTLBOalgorithm
16 The Scientific World Journal
gives the best performance on two real-world optimizationproblems compared with other algorithms in the paper
Further work includes research into neighborhood searchbased on different topological structures Moreover thealgorithm may be further applied to constrained dynamicand noisy single-objective and multiobjective optimizationdomain It is expected that BBTLBOwill be used tomore real-world optimization problems
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was partially supported by the National NaturalScience Foundation of China (61100173 61100009 61272283and 61304082) This work is partially supported by theNatural Science Foundation of Anhui Province China (Grantno 1308085MF82) and the Doctoral Innovation Foundationof Xirsquoan University of Technology (207-002J1305)
References
[1] D E Goldberg Genetic Algorithms in Search Optimization andMachine Learning Addison-Wesley Reading Mass USA 1989
[2] L C Jiao and L Wang ldquoA novel genetic algorithm based onimmunityrdquo IEEE Transactions on SystemsMan and CyberneticsA Systems and Humans vol 30 no 5 pp 552ndash561 2000
[3] R Storn and K Price ldquoDifferential evolution a simple andefficient Heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997
[4] M Dorigo and T Stutzle Ant Colony Optimization MIT Press2004
[5] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 December 1995
[6] D Karaboga and B Basturk ldquoOn the performance of artificialbee colony (ABC) algorithmrdquo Applied Soft Computing Journalvol 8 no 1 pp 687ndash697 2008
[7] D Simon ldquoBiogeography-based optimizationrdquo IEEE Transac-tions on Evolutionary Computation vol 12 no 6 pp 702ndash7132008
[8] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization a novelmethod for constrainedmechanicaldesign optimization problemsrdquo CAD Computer Aided Designvol 43 no 3 pp 303ndash315 2011
[9] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization an optimization method for continuousnon-linear large scale problemsrdquo Information Sciences vol 183no 1 pp 1ndash15 2012
[10] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization algorithm for unconstrained and con-strained real-parameter optimization problemsrdquo EngineeringOptimization vol 44 no 12 pp 1447ndash1462 2011
[11] V Togan ldquoDesign of planar steel frames using teaching-learningbased optimizationrdquo Engineering Structures vol 34 pp 225ndash232 2012
[12] R V Rao and V Patel ldquoAn elitist teaching-learning-based opti-mization algorithm for solving complex constrained optimiza-tion problemsrdquo International Journal of Industrial EngineeringComputations vol 3 pp 535ndash560 2012
[13] S O Degertekin and M S Hayalioglu ldquoSizing truss structuresusing teaching-learning-based optimizationrdquo Computers andStructures vol 119 pp 177ndash188 2013
[14] R V Rao and V Patel ldquoAn improved teaching-learning-basedoptimization algorithm for solving unconstrained optimizationproblemsrdquo Scientia Iranica vol 20 no 3 pp 710ndash720 2013
[15] R V Rao and V Patel ldquoMulti-objective optimization of com-bined Brayton and inverse Brayton cycles using advancedoptimization algorithmsrdquo Engineering Optimization vol 44 no8 pp 965ndash983 2011
[16] T Niknam F Golestaneh and M S Sadeghi ldquoTheta-multi-objective teaching-learning-based optimization for dynamiceconomic emission dispatchrdquo IEEE Systems Journal vol 6 no2 pp 341ndash352 2012
[17] R V Rao and V Patel ldquoMulti-objective optimization of heatexchangers using a modified teaching-learning-based opti-mization algorithmrdquo Applied Mathematical Modelling vol 37no 3 pp 1147ndash1162 2013
[18] M Clerc and J Kennedy ldquoThe particle swarm-explosion sta-bility and convergence in a multidimensional complex spacerdquoIEEE Transactions on Evolutionary Computation vol 6 no 1pp 58ndash73 2002
[19] F van den Bergh and A P Engelbrecht ldquoA study of particleswarm optimization particle trajectoriesrdquo Information Sciencesvol 176 no 8 pp 937ndash971 2006
[20] J Kennedy ldquoBare bones particle swarmsrdquo in Proceedings of theSwarm Intelligence Symposium (SIS rsquo03) pp 80ndash87 2003
[21] M G H Omran A P Engelbrecht and A Salman ldquoBarebones differential evolutionrdquo European Journal of OperationalResearch vol 196 no 1 pp 128ndash139 2009
[22] H Wang S Rahnamayan H Sun and M G H OmranldquoGaussian bare-bones differential evolutionrdquo IEEE Transactionson Cybernetics vol 43 no 2 pp 634ndash647 2013
[23] X H Hu and R Eberhart ldquoMultiobjective optimization usingdynamic neighborhood particle swarm optimizationrdquo in Pro-ceedings of the Congress on Evolutionary Computation pp 677ndash1681 2002
[24] M G Omran A P Engelbrecht and A Salman ldquoUsingthe ring neighborhood topology with self-adaptive differentialevolutionrdquo in Advances in Natural Computation pp 976ndash979Springer Berlin Germany 2006
[25] X Li ldquoNiching without niching parameters particle swarmoptimization using a ring topologyrdquo IEEE Transactions onEvolutionary Computation vol 14 no 1 pp 150ndash169 2010
[26] I Maruta T H Kim D Song and T Sugie ldquoSynthesis of fixed-structure robust controllers using a constrained particle swarmoptimizer with cyclic neighborhood topologyrdquo Expert Systemswith Applications vol 40 no 9 pp 3595ndash3605 2013
[27] J Kennedy and R Mendes ldquoPopulation structure and particleswarm performancerdquo in Proceedings of the International Con-ference on Evolutionary Computation pp 1671ndash1676 HonoluluHawaii USA 2002
[28] J Brest S Greiner B Boskovic M Mernik and V ZumerldquoSelf-adapting control parameters in differential evolution acomparative study on numerical benchmark problemsrdquo IEEETransactions on Evolutionary Computation vol 10 no 6 pp646ndash657 2006
The Scientific World Journal 17
[29] A K Qin V L Huang and P N Suganthan ldquoDifferential evo-lution algorithm with strategy adaptation for global numericaloptimizationrdquo IEEE Transactions on Evolutionary Computationvol 13 no 2 pp 398ndash417 2009
[30] R Mendes J Kennedy and J Neves ldquoThe fully informedparticle swarm simpler maybe betterrdquo IEEE Transactions onEvolutionary Computation vol 8 no 3 pp 204ndash210 2004
[31] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash62 2000
[32] J Liu Advanced PID Control and MATLAB Simulation Elec-tronic Industry Press 2003
[33] J Zhang J Zhuang H Du and S Wang ldquoSelf-organizinggenetic algorithm based tuning of PID controllersrdquo InformationSciences vol 179 no 7 pp 1007ndash1017 2009
[34] R Haber-Haber R Haber M Schmittdiel and R M delToro ldquoA classic solution for the control of a high-performancedrilling processrdquo International Journal of Machine Tools andManufacture vol 47 no 15 pp 2290ndash2297 2007
(1) Begin(2) Initialize119873 (number of learners)119863 (number of dimensions) and hybridization factor 119906(3) Initialize learners 119883 and evaluate all learners119883(4) while (stopping condition not met)(5) for each learner119883
119894of the class Teaching phase
(6) TF = round(1 + rand(0 1))(7) Donate the119873 119879119890119886119888ℎ119890119903 and the119873 119872119890119886119899 in its neighborhood for each learner(8) Updating each learner according (11)(9) Accept 119899119890119908119883
119894if 119891(119899119890119908119883
119894) is better than 119891(119883
119894)
(10) endfor(11) for each learner119883
119894of the class Learning phase
(12) Randomly select one learner119883119896 such that 119894 = 119896
(13) if rand(0 1) lt 05(14) Updating each learner according (3)(15) else(16) Donate the119873119879119890119886119888ℎ119890119903 in its neighborhood for each learner(17) Updating each learner according (12)(18) endif(19) Accept 119899119890119908119883
119894if 119891(119899119890119908119883119894) is better than 119891(119883
119894)
(20) endfor(21) endwhile(22) end
Algorithm 2 BBTLBO( )
For the purpose of reducing statistical errors each algorithmis independently simulated 50 times For all algorithms thepopulation size was set to 20 Population-based stochasticalgorithms use the same stopping criterion that is reachinga certain number of function evaluations (FEs)
53 Effect of Variation in Parameter 119906 The hybridizationfactor u is set to 00 01 03 05 07 09 10 Comparativetests have been performed using different 119906 In our exper-iment the maximal FEs are used as ended condition ofalgorithm namely 40000 for all test functions Table 2 showsthe mean optimum solutions and the standard deviation ofthe solutions obtained using different hybridization factor119906 in the 50 independent runs The best results amongthe algorithms are shown in bold Figure 3 presents therepresentative convergence graphs of different benchmarkfunctions in terms of the mean fitness values achieved byusing different hybridization factor 119906 on all test functionsDue to the tight space limitation some sample graphs areillustrated
The comparisons in Table 2 and Figure 3 show that whenthe hybridization factor 119906 is set to 09 BBTLBOoffers the bestperformance on 20 test functions Hence the hybridizationfactor 119906 is set to 09 in the following experiments
54 Comparison of BBTLBO with Some Similar Bare-BonesAlgorithms In this section we compare BBTLBO with fiveother recently proposed three bare-bones DE variants andtwo bare-bones PSO algorithms Our experiment includestwo series of comparisons in terms of the solution accuracyand the solution convergence (convergence speed and successrate) We compared the performance of BBTLBO with other
similar bare-bones algorithms including BBPSO [20] BBExp[20] BBDE [21] GBDE [22] and MGBDE [22]
541 Comparisons on the Solution Accuracy In our exper-iment the maximal FEs are used as ended condition ofalgorithm namely 40000 for all test functions The resultsare shown in Table 3 in terms of the mean optimum solutionand the standard deviation of the solutions obtained in the 50independent runs by each algorithm on 20 test functionsThebest results among the algorithms are shown in bold Figure 4presents the convergence graphs of different benchmarkfunctions in terms of the mean fitness values achieved by 7algorithms for 50 independent runs Due to the tight spacelimitation some sample graphs are illustrated
From Table 3 it can be observed that the mean optimumsolution and the standard deviation of all algorithms performwell for the functions 119865
15and 11986517 Although BBExp performs
better than BBTLBO on function 1198659and MGBDE performs
better than BBTLBO on function 11986520 our approach BBTLBO
achieves better results than other algorithms on the rest of testfunctions Table 3 and Figure 4 conclude that the BBTLBOhas a good performance of the solution accuracy for testfunctions in this paper
542 Comparison of the Convergence Speed and SR In orderto compare the convergence speed and successful rate (SR)of different algorithms we select a threshold value of theobjective function for each test function For other functionsthe threshold values are listed in Table 4 In our experimentthe stopping criterion is that each algorithm is terminatedwhen the best fitness value so far is below the predefinedthreshold value (119879 Value) or the number of FEs reaches to
The Scientific World Journal 7
Table 1 Details of numerical benchmarks used
Function Formula 119863 Range Optima
Sphere 1198651(119909) =
119863
sum
119894=1
1199092
11989430 [minus100 100] 0
Sum square 1198652(119909) =
119863
sum
119894=1
1198941199092
11989430 [minus100 100] 0
Quadric 1198653(119909) =
119863
sum
119894=1
1198941199094
119894+ random(0 1) 30 [minus128 128] 0
Step 1198654(119909) =
119863
sum
119894=1
(lfloor119909119894+ 05rfloor)
2 30 [minus100 100] 0
Schwefel 12 1198655(119909) =
119863
sum
119894=1
(
119894
sum
119895=1
119909119895)
2
30 [minus100 100] 0
Schwefel 221 1198656(119909) = max 1003816100381610038161003816119909119894
1003816100381610038161003816 1 le 119894 le 119863 30 [minus100 100] 0
Schwefel 222 1198657(119909) =
119863
sum
119894=1
1003816100381610038161003816119909119894
1003816100381610038161003816+
119863
prod
119894=1
1003816100381610038161003816119909119894
100381610038161003816100381630 [minus10 10] 0
Zakharov 1198658(119909) =
119863
sum
119894=1
1199092
119894+ (
119863
sum
119894=1
05119894119909119894)
2
+ (
119863
sum
119894=1
05119894119909119894)
4
30 [minus100 100] 0
Rosenbrock 1198659(119909) =
119863minus1
sum
119894=1
lfloor100(1199092
119894minus 119909119894+1)2
+ (119909119894minus 1)2
rfloor 30 [minus2048 2048] 0
Ackley 11986510(119909) = 20 minus 20 exp((minus1
5)radic(
1
119863)
119863
sum
119894=1
1199092
119894) minus exp(( 1
119863)
119863
sum
119894=1
cos (2120587119909119894)) + 119890 30 [minus32 32] 0
Rastrigin 11986511(119909) =
119863
sum
119894=1
(1199092
119894minus 10 cos (2120587119909
119894) + 10) 30 [minus512 512] 0
Weierstrass11986512(119909) =
119863
sum
119894=1
(
119896maxsum
119896=0
[119886119896 cos (2120587119887119896 (119909
119894+ 05))]) minus 119863
119896maxsum
119896=0
[119886119896 cos (2120587119887119896 times 05)]
119886 = 05 119887 = 3 119896max = 20
30 [minus05 05] 0
Griewank 11986513(119909) =
119863
sum
119894=1
(1199092
119894
4000) minus
119899
prod
119894=1
cos(119909119894
radic119894
) + 1 30 [minus600 600] 0
Schwefel 11986514(119909) = 4189829119863 +
119863
sum
119894=1
(minus119909119894sinradicabs(119909
119894)) 30 [minus500 500] 0
Bohachevsky1 11986515(119909) = 119909
2
1+ 21199092
2minus 03 cos (3120587119909
1) minus 04 cos (4120587119909
2) + 07 2 [minus100 100] 0
Bohachevsky2 11986516(119909) = 119909
2
1+ 21199092
2minus 03cos (3120587119909
1) lowast cos (4120587119909
2) + 03 2 [minus100 100] 0
Bohachevsky3 11986517(119909) = 119909
2
1+ 21199092
2minus 03cos((3120587119909
1) + (4120587119909
2)) + 03 2 [minus100 100] 0
Shekel5 11986518(119909) = minus
5
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus101532
Shekel7 11986519(119909) = minus
7
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus104029
Shekel10 11986520(119909) = minus
10
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus105364
the maximal FEs 40000 The results are shown in Table 4in terms of the mean number of FEs (MFEs) required toconverge to the threshold and successful rate (SR) in the50 independent runs ldquoNaNrdquo represents that no runs of thecorresponding algorithm converged below the predefinedthreshold before meeting the maximum number of FEs Thebest results among the six algorithms are shown in boldface
FromTable 5 it can be observed that all algorithms hardlyconverge to the threshold for unimodal functions 119865
3 1198655 1198656
and 1198658and multimodal functions 119865
11 11986512 and 119865
14 BBTLBO
converges to the threshold except for functions 1198653 1198659 and
11986514 From the results of total average FEs BBTLBO converges
faster than other algorithms on all unimodal functions andthe majority of multimodal functions except for functions119865151198651611986519 and119865
20The acceleration rates between BBTLBO
and other algorithms are mostly 10 for functions 1198651 1198652 1198654
1198657 1198659 11986510 and 119865
13 From the results of total average SR
BBTLBO achieves the highest SR for those test functions ofwhich BBTLBO successfully converges to the threshold valueIt can be concluded that the BBTLBOhas a good performanceof convergence speed and successful rate (SR) of the solutionsfor test functions in this paper
Figure 3 Comparison of the performance curves using different 119906
55 Comparison of BBTLBO with DE Variants PSO Variantsand Some TLBO Variants In this section we comparedthe performance of BBTLBO with other optimization algo-rithms including jDE [28] SaDE [29] PSOcfLocal [27]PSOwFIPS [30] and TLBO [8 9] In our experiment themaximal FEs are used as the stopping criterion of all algo-rithms namely 40000 for all test functions The results areshown in Table 5 in terms of the mean optimum solutionand the standard deviation of the solutions obtained in the50 independent runs by each algorithm on 20 test functions
where ldquo119908119905119897rdquo summarizes the competition results amongBBTLBO and other algorithms The best results among thealgorithms are shown in boldface
The comparisons in Table 5 show that that all algorithmsperform well for 119865
15 11986516 and 119865
17 Although SaDE outper-
forms BBTLBOon11986514 PSOcfLocal outperforms BBTLBOon
1198659and PSOwFIPS outperforms BBTLBO on 119865
19and 11986520 and
BBTLBO offers the highest accuracy on functions 1198653 1198654 1198655
1198657 1198658 11986510 11986511 and 119865
18 ldquo119908119905119897rdquo shows that BBTLBO offers
well accuracy for the majority of test functions in this paper
The Scientific World Journal 11
0 05 1 15 2 25 3 35 4
0
1
2
3
FEs
minus4
minus3
minus2
minus1
times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(a) 1198653 Quadric
0 05 1 15 2 25 3 35 41
15
2
25
3
35
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)(b) 1198659 Rosenbrock
0 05 1 15 2 25 3 35 433
34
35
36
37
38
39
4
41
42
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(c) 11986518 Shekel5
0
0
05 1 15 2 25 3 35 4FEs
Mea
n fit
ness
times104
minus10
minus9
minus8
minus7
minus6
minus5
minus4
minus3
minus2
minus1
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
(d) 11986514 Schwefel
Figure 4 Comparison of the performance curves using different algorithms
Table 5 concludes that BBTLBO has a good performance ofthe solution accuracy for all unimodal optimization problemsand most complex multimodal optimization problems
6 Two Real-World Optimization Problems
In this section to show the effectiveness of the proposedmethod the proposed BBTLBO algorithm is applied toestimate parameters of two real-world problems
61 Nonlinear Function Approximation The artificial neuralnetwork trained by our BBTLBO algorithm is a three-layer
Input x Output y
Desired output dBBTLBO algorithm
ANN
minus
Figure 5 BBTLBO-based ANN
feed-forward network and the basic structure of the proposedscheme is depicted in Figure 5 The inputs are connectedto all the hidden units which in turn all connected to all
12 The Scientific World Journal
Table 4 The mean number of FEs and SR with acceptable solutions using different algorithms
Fun 119905 value BBPSO BBExp BBDE GBDE MGBDE BBTLBOMFEs SR MFEs SR MFEs SR MFEs SR MFEs SR MFEs SR
the outputs The variables consist of neural network weightsand biases Suppose a three-layer forward neural networkarchitecture with 119872 input units 119873 hidden units and 119870
output units and the number of the variables is shown asfollows
For neural network training the aim is to find a set ofweights with the smallest error measure Here the objectivefunction is the mean sum of squared errors (MSE) over alltraining patterns which is shown as follows
MSE = 1
119876 lowast 119870
119876
sum
119894=1
119870
sum
119895
1
2(119889119894119895minus 119910119894119895)2
(14)
where 119876 is the number of training data set 119870 is the numberof output units 119889
119894119895is desired output and 119910
119894119895is output inferred
from neural networkIn this example a three-layer feed-forward ANN with
one input unit five hidden units and one output unit isconstructed tomodel the curve of a nonlinear functionwhichis described by the following equation [31]
119910 = sin (2119909) exp (minus2119909) (15)
In this case activation function used in the output layer isthe sigma function and activation function used in the outputlayer is linear The number (dimension) of the variables is16 for BBTLBO-based ANN In order to train the ANN
200 pairs of data are chosen from the real model For eachalgorithm 50 runs are performed The other parametersare the same as those of the previous investigations Theresults are shown in Table 6 in terms of the mean MSEand the standard deviation obtained in the 50 independentruns for three methods Figure 6 shows the predicted timeseries for training and test using different algorithms It canconclude that the approximation achieved by BBTLBO hasgood performance
62 Tuning of PID Controller The continuous form of adiscrete-type PID controller with a small sampling period Δ119905is described as follows [32]
119890 [119894] sdot Δ119905 + 119870119863 sdot119890 [119896] minus 119890 [119896 minus 1]
Δ119905
(16)
where 119906[119896] is the controlled output respectively 119890[119896] = 119903[119896]minus119910[119896] is the error signal 119903[119896] and 119910[119896] are the reference signaland the system output and 119870
119875 119870119868 and 119870
119863represent the
proportional integral and derivate gains respectivelyFor an unknown plant the goal of this problem is to
minimize the integral absolute error (IAE) which is given asfollow [32 33]
and 119910(119905) is the output of the controlled objectiveIn our simulation the formulas for the plant examined
are given as follows [34]
119866 (119904) =1958
1199043+ 1789119904
2+ 1033119904 + 1908
(19)
The system sampling time is Δ119905 = 005 second and thecontrol value 119906 is limited in the range of [minus10 10] Other rel-evant system variables are 119870
119875isin [0 1] 119870
119868isin [0 1] and 119870
119863isin
[0 1] The weight coefficients of the cost function are set as1205961= 0999 120596
2= 0001 120596
3= 2 and 120596 = 100 in this example
In the simulations the step response of PID controlsystem tuned by the proposed BBTLBO is compared withthat tuned by the standard genetic algorithm (GA) and thestandard PSO (PSO) The population sizes of GA PSO andBBTLBO are 50 and the corresponding maximum numbersof iterations are 50 50 and 50 respectively In addition thecrossover rate is set as 090 and the mutation rate is 010 forGA
The optimal parameters and the corresponding perfor-mance values of the PID controllers are listed in Table 7 andthe corresponding performance curves and step responsescurves are given in Figures 7 and 8 It can be seen fromFigure 7 and Table 7 that the PID controller tuned byBBTLBO has the minimum cost function and CPU time
0 05 1 15 2 25 3 35 4 45 50
02
04
06
08
1
Time (s)
Step response curves using different methods
GAPSOBBTLBO
r in
you
t
Figure 8 Step response curves using different methods
Although PID controllers tuned by PSO have a smaller peaktime and rise time their maximum overshoots are muchlarger than the overshoot tuned by BBTLBO It concludes thatthe PID controller tuned by the BBTLBO could perform thebest control performance in the simulations
7 Conclusion
In this paper TLBO has been extended to BBTLBO whichuses the hybridization of the learning strategy in the stan-dard TLBO and Gaussian sampling learning to balance theexploration and the exploitation in teacher phase and uses amodified mutation operation so as to eliminate the duplicatelearners in learner phase The proposed BBTLBO algorithmis utilized to optimize 20 benchmark functions and tworeal-world optimization problems From the analysis andexperiments the BBTLBO algorithm significantly improvesthe performance of the original TLBO although it needs tospend more CPU time than the standard TLBO algorithmin each generation From the results compared with otheralgorithms on the 20 chosen test problems it can be observedthat the BBTLBO algorithm has good performance by usingneighborhood search more effectively to generate betterquality solutions although the BBTLBO algorithm does notalways have the best performance in all experiments cases ofthis paper It can be also observed that the BBTLBOalgorithm
16 The Scientific World Journal
gives the best performance on two real-world optimizationproblems compared with other algorithms in the paper
Further work includes research into neighborhood searchbased on different topological structures Moreover thealgorithm may be further applied to constrained dynamicand noisy single-objective and multiobjective optimizationdomain It is expected that BBTLBOwill be used tomore real-world optimization problems
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was partially supported by the National NaturalScience Foundation of China (61100173 61100009 61272283and 61304082) This work is partially supported by theNatural Science Foundation of Anhui Province China (Grantno 1308085MF82) and the Doctoral Innovation Foundationof Xirsquoan University of Technology (207-002J1305)
References
[1] D E Goldberg Genetic Algorithms in Search Optimization andMachine Learning Addison-Wesley Reading Mass USA 1989
[2] L C Jiao and L Wang ldquoA novel genetic algorithm based onimmunityrdquo IEEE Transactions on SystemsMan and CyberneticsA Systems and Humans vol 30 no 5 pp 552ndash561 2000
[3] R Storn and K Price ldquoDifferential evolution a simple andefficient Heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997
[4] M Dorigo and T Stutzle Ant Colony Optimization MIT Press2004
[5] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 December 1995
[6] D Karaboga and B Basturk ldquoOn the performance of artificialbee colony (ABC) algorithmrdquo Applied Soft Computing Journalvol 8 no 1 pp 687ndash697 2008
[7] D Simon ldquoBiogeography-based optimizationrdquo IEEE Transac-tions on Evolutionary Computation vol 12 no 6 pp 702ndash7132008
[8] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization a novelmethod for constrainedmechanicaldesign optimization problemsrdquo CAD Computer Aided Designvol 43 no 3 pp 303ndash315 2011
[9] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization an optimization method for continuousnon-linear large scale problemsrdquo Information Sciences vol 183no 1 pp 1ndash15 2012
[10] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization algorithm for unconstrained and con-strained real-parameter optimization problemsrdquo EngineeringOptimization vol 44 no 12 pp 1447ndash1462 2011
[11] V Togan ldquoDesign of planar steel frames using teaching-learningbased optimizationrdquo Engineering Structures vol 34 pp 225ndash232 2012
[12] R V Rao and V Patel ldquoAn elitist teaching-learning-based opti-mization algorithm for solving complex constrained optimiza-tion problemsrdquo International Journal of Industrial EngineeringComputations vol 3 pp 535ndash560 2012
[13] S O Degertekin and M S Hayalioglu ldquoSizing truss structuresusing teaching-learning-based optimizationrdquo Computers andStructures vol 119 pp 177ndash188 2013
[14] R V Rao and V Patel ldquoAn improved teaching-learning-basedoptimization algorithm for solving unconstrained optimizationproblemsrdquo Scientia Iranica vol 20 no 3 pp 710ndash720 2013
[15] R V Rao and V Patel ldquoMulti-objective optimization of com-bined Brayton and inverse Brayton cycles using advancedoptimization algorithmsrdquo Engineering Optimization vol 44 no8 pp 965ndash983 2011
[16] T Niknam F Golestaneh and M S Sadeghi ldquoTheta-multi-objective teaching-learning-based optimization for dynamiceconomic emission dispatchrdquo IEEE Systems Journal vol 6 no2 pp 341ndash352 2012
[17] R V Rao and V Patel ldquoMulti-objective optimization of heatexchangers using a modified teaching-learning-based opti-mization algorithmrdquo Applied Mathematical Modelling vol 37no 3 pp 1147ndash1162 2013
[18] M Clerc and J Kennedy ldquoThe particle swarm-explosion sta-bility and convergence in a multidimensional complex spacerdquoIEEE Transactions on Evolutionary Computation vol 6 no 1pp 58ndash73 2002
[19] F van den Bergh and A P Engelbrecht ldquoA study of particleswarm optimization particle trajectoriesrdquo Information Sciencesvol 176 no 8 pp 937ndash971 2006
[20] J Kennedy ldquoBare bones particle swarmsrdquo in Proceedings of theSwarm Intelligence Symposium (SIS rsquo03) pp 80ndash87 2003
[21] M G H Omran A P Engelbrecht and A Salman ldquoBarebones differential evolutionrdquo European Journal of OperationalResearch vol 196 no 1 pp 128ndash139 2009
[22] H Wang S Rahnamayan H Sun and M G H OmranldquoGaussian bare-bones differential evolutionrdquo IEEE Transactionson Cybernetics vol 43 no 2 pp 634ndash647 2013
[23] X H Hu and R Eberhart ldquoMultiobjective optimization usingdynamic neighborhood particle swarm optimizationrdquo in Pro-ceedings of the Congress on Evolutionary Computation pp 677ndash1681 2002
[24] M G Omran A P Engelbrecht and A Salman ldquoUsingthe ring neighborhood topology with self-adaptive differentialevolutionrdquo in Advances in Natural Computation pp 976ndash979Springer Berlin Germany 2006
[25] X Li ldquoNiching without niching parameters particle swarmoptimization using a ring topologyrdquo IEEE Transactions onEvolutionary Computation vol 14 no 1 pp 150ndash169 2010
[26] I Maruta T H Kim D Song and T Sugie ldquoSynthesis of fixed-structure robust controllers using a constrained particle swarmoptimizer with cyclic neighborhood topologyrdquo Expert Systemswith Applications vol 40 no 9 pp 3595ndash3605 2013
[27] J Kennedy and R Mendes ldquoPopulation structure and particleswarm performancerdquo in Proceedings of the International Con-ference on Evolutionary Computation pp 1671ndash1676 HonoluluHawaii USA 2002
[28] J Brest S Greiner B Boskovic M Mernik and V ZumerldquoSelf-adapting control parameters in differential evolution acomparative study on numerical benchmark problemsrdquo IEEETransactions on Evolutionary Computation vol 10 no 6 pp646ndash657 2006
The Scientific World Journal 17
[29] A K Qin V L Huang and P N Suganthan ldquoDifferential evo-lution algorithm with strategy adaptation for global numericaloptimizationrdquo IEEE Transactions on Evolutionary Computationvol 13 no 2 pp 398ndash417 2009
[30] R Mendes J Kennedy and J Neves ldquoThe fully informedparticle swarm simpler maybe betterrdquo IEEE Transactions onEvolutionary Computation vol 8 no 3 pp 204ndash210 2004
[31] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash62 2000
[32] J Liu Advanced PID Control and MATLAB Simulation Elec-tronic Industry Press 2003
[33] J Zhang J Zhuang H Du and S Wang ldquoSelf-organizinggenetic algorithm based tuning of PID controllersrdquo InformationSciences vol 179 no 7 pp 1007ndash1017 2009
[34] R Haber-Haber R Haber M Schmittdiel and R M delToro ldquoA classic solution for the control of a high-performancedrilling processrdquo International Journal of Machine Tools andManufacture vol 47 no 15 pp 2290ndash2297 2007
Schwefel 221 1198656(119909) = max 1003816100381610038161003816119909119894
1003816100381610038161003816 1 le 119894 le 119863 30 [minus100 100] 0
Schwefel 222 1198657(119909) =
119863
sum
119894=1
1003816100381610038161003816119909119894
1003816100381610038161003816+
119863
prod
119894=1
1003816100381610038161003816119909119894
100381610038161003816100381630 [minus10 10] 0
Zakharov 1198658(119909) =
119863
sum
119894=1
1199092
119894+ (
119863
sum
119894=1
05119894119909119894)
2
+ (
119863
sum
119894=1
05119894119909119894)
4
30 [minus100 100] 0
Rosenbrock 1198659(119909) =
119863minus1
sum
119894=1
lfloor100(1199092
119894minus 119909119894+1)2
+ (119909119894minus 1)2
rfloor 30 [minus2048 2048] 0
Ackley 11986510(119909) = 20 minus 20 exp((minus1
5)radic(
1
119863)
119863
sum
119894=1
1199092
119894) minus exp(( 1
119863)
119863
sum
119894=1
cos (2120587119909119894)) + 119890 30 [minus32 32] 0
Rastrigin 11986511(119909) =
119863
sum
119894=1
(1199092
119894minus 10 cos (2120587119909
119894) + 10) 30 [minus512 512] 0
Weierstrass11986512(119909) =
119863
sum
119894=1
(
119896maxsum
119896=0
[119886119896 cos (2120587119887119896 (119909
119894+ 05))]) minus 119863
119896maxsum
119896=0
[119886119896 cos (2120587119887119896 times 05)]
119886 = 05 119887 = 3 119896max = 20
30 [minus05 05] 0
Griewank 11986513(119909) =
119863
sum
119894=1
(1199092
119894
4000) minus
119899
prod
119894=1
cos(119909119894
radic119894
) + 1 30 [minus600 600] 0
Schwefel 11986514(119909) = 4189829119863 +
119863
sum
119894=1
(minus119909119894sinradicabs(119909
119894)) 30 [minus500 500] 0
Bohachevsky1 11986515(119909) = 119909
2
1+ 21199092
2minus 03 cos (3120587119909
1) minus 04 cos (4120587119909
2) + 07 2 [minus100 100] 0
Bohachevsky2 11986516(119909) = 119909
2
1+ 21199092
2minus 03cos (3120587119909
1) lowast cos (4120587119909
2) + 03 2 [minus100 100] 0
Bohachevsky3 11986517(119909) = 119909
2
1+ 21199092
2minus 03cos((3120587119909
1) + (4120587119909
2)) + 03 2 [minus100 100] 0
Shekel5 11986518(119909) = minus
5
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus101532
Shekel7 11986519(119909) = minus
7
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus104029
Shekel10 11986520(119909) = minus
10
sum
119894=1
[(119909 minus 119886119894)(119909 minus 119886
119894)119879+ 119888119894]
minus1
4 [0 10] minus105364
the maximal FEs 40000 The results are shown in Table 4in terms of the mean number of FEs (MFEs) required toconverge to the threshold and successful rate (SR) in the50 independent runs ldquoNaNrdquo represents that no runs of thecorresponding algorithm converged below the predefinedthreshold before meeting the maximum number of FEs Thebest results among the six algorithms are shown in boldface
FromTable 5 it can be observed that all algorithms hardlyconverge to the threshold for unimodal functions 119865
3 1198655 1198656
and 1198658and multimodal functions 119865
11 11986512 and 119865
14 BBTLBO
converges to the threshold except for functions 1198653 1198659 and
11986514 From the results of total average FEs BBTLBO converges
faster than other algorithms on all unimodal functions andthe majority of multimodal functions except for functions119865151198651611986519 and119865
20The acceleration rates between BBTLBO
and other algorithms are mostly 10 for functions 1198651 1198652 1198654
1198657 1198659 11986510 and 119865
13 From the results of total average SR
BBTLBO achieves the highest SR for those test functions ofwhich BBTLBO successfully converges to the threshold valueIt can be concluded that the BBTLBOhas a good performanceof convergence speed and successful rate (SR) of the solutionsfor test functions in this paper
Figure 3 Comparison of the performance curves using different 119906
55 Comparison of BBTLBO with DE Variants PSO Variantsand Some TLBO Variants In this section we comparedthe performance of BBTLBO with other optimization algo-rithms including jDE [28] SaDE [29] PSOcfLocal [27]PSOwFIPS [30] and TLBO [8 9] In our experiment themaximal FEs are used as the stopping criterion of all algo-rithms namely 40000 for all test functions The results areshown in Table 5 in terms of the mean optimum solutionand the standard deviation of the solutions obtained in the50 independent runs by each algorithm on 20 test functions
where ldquo119908119905119897rdquo summarizes the competition results amongBBTLBO and other algorithms The best results among thealgorithms are shown in boldface
The comparisons in Table 5 show that that all algorithmsperform well for 119865
15 11986516 and 119865
17 Although SaDE outper-
forms BBTLBOon11986514 PSOcfLocal outperforms BBTLBOon
1198659and PSOwFIPS outperforms BBTLBO on 119865
19and 11986520 and
BBTLBO offers the highest accuracy on functions 1198653 1198654 1198655
1198657 1198658 11986510 11986511 and 119865
18 ldquo119908119905119897rdquo shows that BBTLBO offers
well accuracy for the majority of test functions in this paper
The Scientific World Journal 11
0 05 1 15 2 25 3 35 4
0
1
2
3
FEs
minus4
minus3
minus2
minus1
times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(a) 1198653 Quadric
0 05 1 15 2 25 3 35 41
15
2
25
3
35
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)(b) 1198659 Rosenbrock
0 05 1 15 2 25 3 35 433
34
35
36
37
38
39
4
41
42
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(c) 11986518 Shekel5
0
0
05 1 15 2 25 3 35 4FEs
Mea
n fit
ness
times104
minus10
minus9
minus8
minus7
minus6
minus5
minus4
minus3
minus2
minus1
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
(d) 11986514 Schwefel
Figure 4 Comparison of the performance curves using different algorithms
Table 5 concludes that BBTLBO has a good performance ofthe solution accuracy for all unimodal optimization problemsand most complex multimodal optimization problems
6 Two Real-World Optimization Problems
In this section to show the effectiveness of the proposedmethod the proposed BBTLBO algorithm is applied toestimate parameters of two real-world problems
61 Nonlinear Function Approximation The artificial neuralnetwork trained by our BBTLBO algorithm is a three-layer
Input x Output y
Desired output dBBTLBO algorithm
ANN
minus
Figure 5 BBTLBO-based ANN
feed-forward network and the basic structure of the proposedscheme is depicted in Figure 5 The inputs are connectedto all the hidden units which in turn all connected to all
12 The Scientific World Journal
Table 4 The mean number of FEs and SR with acceptable solutions using different algorithms
Fun 119905 value BBPSO BBExp BBDE GBDE MGBDE BBTLBOMFEs SR MFEs SR MFEs SR MFEs SR MFEs SR MFEs SR
the outputs The variables consist of neural network weightsand biases Suppose a three-layer forward neural networkarchitecture with 119872 input units 119873 hidden units and 119870
output units and the number of the variables is shown asfollows
For neural network training the aim is to find a set ofweights with the smallest error measure Here the objectivefunction is the mean sum of squared errors (MSE) over alltraining patterns which is shown as follows
MSE = 1
119876 lowast 119870
119876
sum
119894=1
119870
sum
119895
1
2(119889119894119895minus 119910119894119895)2
(14)
where 119876 is the number of training data set 119870 is the numberof output units 119889
119894119895is desired output and 119910
119894119895is output inferred
from neural networkIn this example a three-layer feed-forward ANN with
one input unit five hidden units and one output unit isconstructed tomodel the curve of a nonlinear functionwhichis described by the following equation [31]
119910 = sin (2119909) exp (minus2119909) (15)
In this case activation function used in the output layer isthe sigma function and activation function used in the outputlayer is linear The number (dimension) of the variables is16 for BBTLBO-based ANN In order to train the ANN
200 pairs of data are chosen from the real model For eachalgorithm 50 runs are performed The other parametersare the same as those of the previous investigations Theresults are shown in Table 6 in terms of the mean MSEand the standard deviation obtained in the 50 independentruns for three methods Figure 6 shows the predicted timeseries for training and test using different algorithms It canconclude that the approximation achieved by BBTLBO hasgood performance
62 Tuning of PID Controller The continuous form of adiscrete-type PID controller with a small sampling period Δ119905is described as follows [32]
119890 [119894] sdot Δ119905 + 119870119863 sdot119890 [119896] minus 119890 [119896 minus 1]
Δ119905
(16)
where 119906[119896] is the controlled output respectively 119890[119896] = 119903[119896]minus119910[119896] is the error signal 119903[119896] and 119910[119896] are the reference signaland the system output and 119870
119875 119870119868 and 119870
119863represent the
proportional integral and derivate gains respectivelyFor an unknown plant the goal of this problem is to
minimize the integral absolute error (IAE) which is given asfollow [32 33]
and 119910(119905) is the output of the controlled objectiveIn our simulation the formulas for the plant examined
are given as follows [34]
119866 (119904) =1958
1199043+ 1789119904
2+ 1033119904 + 1908
(19)
The system sampling time is Δ119905 = 005 second and thecontrol value 119906 is limited in the range of [minus10 10] Other rel-evant system variables are 119870
119875isin [0 1] 119870
119868isin [0 1] and 119870
119863isin
[0 1] The weight coefficients of the cost function are set as1205961= 0999 120596
2= 0001 120596
3= 2 and 120596 = 100 in this example
In the simulations the step response of PID controlsystem tuned by the proposed BBTLBO is compared withthat tuned by the standard genetic algorithm (GA) and thestandard PSO (PSO) The population sizes of GA PSO andBBTLBO are 50 and the corresponding maximum numbersof iterations are 50 50 and 50 respectively In addition thecrossover rate is set as 090 and the mutation rate is 010 forGA
The optimal parameters and the corresponding perfor-mance values of the PID controllers are listed in Table 7 andthe corresponding performance curves and step responsescurves are given in Figures 7 and 8 It can be seen fromFigure 7 and Table 7 that the PID controller tuned byBBTLBO has the minimum cost function and CPU time
0 05 1 15 2 25 3 35 4 45 50
02
04
06
08
1
Time (s)
Step response curves using different methods
GAPSOBBTLBO
r in
you
t
Figure 8 Step response curves using different methods
Although PID controllers tuned by PSO have a smaller peaktime and rise time their maximum overshoots are muchlarger than the overshoot tuned by BBTLBO It concludes thatthe PID controller tuned by the BBTLBO could perform thebest control performance in the simulations
7 Conclusion
In this paper TLBO has been extended to BBTLBO whichuses the hybridization of the learning strategy in the stan-dard TLBO and Gaussian sampling learning to balance theexploration and the exploitation in teacher phase and uses amodified mutation operation so as to eliminate the duplicatelearners in learner phase The proposed BBTLBO algorithmis utilized to optimize 20 benchmark functions and tworeal-world optimization problems From the analysis andexperiments the BBTLBO algorithm significantly improvesthe performance of the original TLBO although it needs tospend more CPU time than the standard TLBO algorithmin each generation From the results compared with otheralgorithms on the 20 chosen test problems it can be observedthat the BBTLBO algorithm has good performance by usingneighborhood search more effectively to generate betterquality solutions although the BBTLBO algorithm does notalways have the best performance in all experiments cases ofthis paper It can be also observed that the BBTLBOalgorithm
16 The Scientific World Journal
gives the best performance on two real-world optimizationproblems compared with other algorithms in the paper
Further work includes research into neighborhood searchbased on different topological structures Moreover thealgorithm may be further applied to constrained dynamicand noisy single-objective and multiobjective optimizationdomain It is expected that BBTLBOwill be used tomore real-world optimization problems
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was partially supported by the National NaturalScience Foundation of China (61100173 61100009 61272283and 61304082) This work is partially supported by theNatural Science Foundation of Anhui Province China (Grantno 1308085MF82) and the Doctoral Innovation Foundationof Xirsquoan University of Technology (207-002J1305)
References
[1] D E Goldberg Genetic Algorithms in Search Optimization andMachine Learning Addison-Wesley Reading Mass USA 1989
[2] L C Jiao and L Wang ldquoA novel genetic algorithm based onimmunityrdquo IEEE Transactions on SystemsMan and CyberneticsA Systems and Humans vol 30 no 5 pp 552ndash561 2000
[3] R Storn and K Price ldquoDifferential evolution a simple andefficient Heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997
[4] M Dorigo and T Stutzle Ant Colony Optimization MIT Press2004
[5] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 December 1995
[6] D Karaboga and B Basturk ldquoOn the performance of artificialbee colony (ABC) algorithmrdquo Applied Soft Computing Journalvol 8 no 1 pp 687ndash697 2008
[7] D Simon ldquoBiogeography-based optimizationrdquo IEEE Transac-tions on Evolutionary Computation vol 12 no 6 pp 702ndash7132008
[8] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization a novelmethod for constrainedmechanicaldesign optimization problemsrdquo CAD Computer Aided Designvol 43 no 3 pp 303ndash315 2011
[9] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization an optimization method for continuousnon-linear large scale problemsrdquo Information Sciences vol 183no 1 pp 1ndash15 2012
[10] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization algorithm for unconstrained and con-strained real-parameter optimization problemsrdquo EngineeringOptimization vol 44 no 12 pp 1447ndash1462 2011
[11] V Togan ldquoDesign of planar steel frames using teaching-learningbased optimizationrdquo Engineering Structures vol 34 pp 225ndash232 2012
[12] R V Rao and V Patel ldquoAn elitist teaching-learning-based opti-mization algorithm for solving complex constrained optimiza-tion problemsrdquo International Journal of Industrial EngineeringComputations vol 3 pp 535ndash560 2012
[13] S O Degertekin and M S Hayalioglu ldquoSizing truss structuresusing teaching-learning-based optimizationrdquo Computers andStructures vol 119 pp 177ndash188 2013
[14] R V Rao and V Patel ldquoAn improved teaching-learning-basedoptimization algorithm for solving unconstrained optimizationproblemsrdquo Scientia Iranica vol 20 no 3 pp 710ndash720 2013
[15] R V Rao and V Patel ldquoMulti-objective optimization of com-bined Brayton and inverse Brayton cycles using advancedoptimization algorithmsrdquo Engineering Optimization vol 44 no8 pp 965ndash983 2011
[16] T Niknam F Golestaneh and M S Sadeghi ldquoTheta-multi-objective teaching-learning-based optimization for dynamiceconomic emission dispatchrdquo IEEE Systems Journal vol 6 no2 pp 341ndash352 2012
[17] R V Rao and V Patel ldquoMulti-objective optimization of heatexchangers using a modified teaching-learning-based opti-mization algorithmrdquo Applied Mathematical Modelling vol 37no 3 pp 1147ndash1162 2013
[18] M Clerc and J Kennedy ldquoThe particle swarm-explosion sta-bility and convergence in a multidimensional complex spacerdquoIEEE Transactions on Evolutionary Computation vol 6 no 1pp 58ndash73 2002
[19] F van den Bergh and A P Engelbrecht ldquoA study of particleswarm optimization particle trajectoriesrdquo Information Sciencesvol 176 no 8 pp 937ndash971 2006
[20] J Kennedy ldquoBare bones particle swarmsrdquo in Proceedings of theSwarm Intelligence Symposium (SIS rsquo03) pp 80ndash87 2003
[21] M G H Omran A P Engelbrecht and A Salman ldquoBarebones differential evolutionrdquo European Journal of OperationalResearch vol 196 no 1 pp 128ndash139 2009
[22] H Wang S Rahnamayan H Sun and M G H OmranldquoGaussian bare-bones differential evolutionrdquo IEEE Transactionson Cybernetics vol 43 no 2 pp 634ndash647 2013
[23] X H Hu and R Eberhart ldquoMultiobjective optimization usingdynamic neighborhood particle swarm optimizationrdquo in Pro-ceedings of the Congress on Evolutionary Computation pp 677ndash1681 2002
[24] M G Omran A P Engelbrecht and A Salman ldquoUsingthe ring neighborhood topology with self-adaptive differentialevolutionrdquo in Advances in Natural Computation pp 976ndash979Springer Berlin Germany 2006
[25] X Li ldquoNiching without niching parameters particle swarmoptimization using a ring topologyrdquo IEEE Transactions onEvolutionary Computation vol 14 no 1 pp 150ndash169 2010
[26] I Maruta T H Kim D Song and T Sugie ldquoSynthesis of fixed-structure robust controllers using a constrained particle swarmoptimizer with cyclic neighborhood topologyrdquo Expert Systemswith Applications vol 40 no 9 pp 3595ndash3605 2013
[27] J Kennedy and R Mendes ldquoPopulation structure and particleswarm performancerdquo in Proceedings of the International Con-ference on Evolutionary Computation pp 1671ndash1676 HonoluluHawaii USA 2002
[28] J Brest S Greiner B Boskovic M Mernik and V ZumerldquoSelf-adapting control parameters in differential evolution acomparative study on numerical benchmark problemsrdquo IEEETransactions on Evolutionary Computation vol 10 no 6 pp646ndash657 2006
The Scientific World Journal 17
[29] A K Qin V L Huang and P N Suganthan ldquoDifferential evo-lution algorithm with strategy adaptation for global numericaloptimizationrdquo IEEE Transactions on Evolutionary Computationvol 13 no 2 pp 398ndash417 2009
[30] R Mendes J Kennedy and J Neves ldquoThe fully informedparticle swarm simpler maybe betterrdquo IEEE Transactions onEvolutionary Computation vol 8 no 3 pp 204ndash210 2004
[31] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash62 2000
[32] J Liu Advanced PID Control and MATLAB Simulation Elec-tronic Industry Press 2003
[33] J Zhang J Zhuang H Du and S Wang ldquoSelf-organizinggenetic algorithm based tuning of PID controllersrdquo InformationSciences vol 179 no 7 pp 1007ndash1017 2009
[34] R Haber-Haber R Haber M Schmittdiel and R M delToro ldquoA classic solution for the control of a high-performancedrilling processrdquo International Journal of Machine Tools andManufacture vol 47 no 15 pp 2290ndash2297 2007
Figure 3 Comparison of the performance curves using different 119906
55 Comparison of BBTLBO with DE Variants PSO Variantsand Some TLBO Variants In this section we comparedthe performance of BBTLBO with other optimization algo-rithms including jDE [28] SaDE [29] PSOcfLocal [27]PSOwFIPS [30] and TLBO [8 9] In our experiment themaximal FEs are used as the stopping criterion of all algo-rithms namely 40000 for all test functions The results areshown in Table 5 in terms of the mean optimum solutionand the standard deviation of the solutions obtained in the50 independent runs by each algorithm on 20 test functions
where ldquo119908119905119897rdquo summarizes the competition results amongBBTLBO and other algorithms The best results among thealgorithms are shown in boldface
The comparisons in Table 5 show that that all algorithmsperform well for 119865
15 11986516 and 119865
17 Although SaDE outper-
forms BBTLBOon11986514 PSOcfLocal outperforms BBTLBOon
1198659and PSOwFIPS outperforms BBTLBO on 119865
19and 11986520 and
BBTLBO offers the highest accuracy on functions 1198653 1198654 1198655
1198657 1198658 11986510 11986511 and 119865
18 ldquo119908119905119897rdquo shows that BBTLBO offers
well accuracy for the majority of test functions in this paper
The Scientific World Journal 11
0 05 1 15 2 25 3 35 4
0
1
2
3
FEs
minus4
minus3
minus2
minus1
times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(a) 1198653 Quadric
0 05 1 15 2 25 3 35 41
15
2
25
3
35
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)(b) 1198659 Rosenbrock
0 05 1 15 2 25 3 35 433
34
35
36
37
38
39
4
41
42
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(c) 11986518 Shekel5
0
0
05 1 15 2 25 3 35 4FEs
Mea
n fit
ness
times104
minus10
minus9
minus8
minus7
minus6
minus5
minus4
minus3
minus2
minus1
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
(d) 11986514 Schwefel
Figure 4 Comparison of the performance curves using different algorithms
Table 5 concludes that BBTLBO has a good performance ofthe solution accuracy for all unimodal optimization problemsand most complex multimodal optimization problems
6 Two Real-World Optimization Problems
In this section to show the effectiveness of the proposedmethod the proposed BBTLBO algorithm is applied toestimate parameters of two real-world problems
61 Nonlinear Function Approximation The artificial neuralnetwork trained by our BBTLBO algorithm is a three-layer
Input x Output y
Desired output dBBTLBO algorithm
ANN
minus
Figure 5 BBTLBO-based ANN
feed-forward network and the basic structure of the proposedscheme is depicted in Figure 5 The inputs are connectedto all the hidden units which in turn all connected to all
12 The Scientific World Journal
Table 4 The mean number of FEs and SR with acceptable solutions using different algorithms
Fun 119905 value BBPSO BBExp BBDE GBDE MGBDE BBTLBOMFEs SR MFEs SR MFEs SR MFEs SR MFEs SR MFEs SR
the outputs The variables consist of neural network weightsand biases Suppose a three-layer forward neural networkarchitecture with 119872 input units 119873 hidden units and 119870
output units and the number of the variables is shown asfollows
For neural network training the aim is to find a set ofweights with the smallest error measure Here the objectivefunction is the mean sum of squared errors (MSE) over alltraining patterns which is shown as follows
MSE = 1
119876 lowast 119870
119876
sum
119894=1
119870
sum
119895
1
2(119889119894119895minus 119910119894119895)2
(14)
where 119876 is the number of training data set 119870 is the numberof output units 119889
119894119895is desired output and 119910
119894119895is output inferred
from neural networkIn this example a three-layer feed-forward ANN with
one input unit five hidden units and one output unit isconstructed tomodel the curve of a nonlinear functionwhichis described by the following equation [31]
119910 = sin (2119909) exp (minus2119909) (15)
In this case activation function used in the output layer isthe sigma function and activation function used in the outputlayer is linear The number (dimension) of the variables is16 for BBTLBO-based ANN In order to train the ANN
200 pairs of data are chosen from the real model For eachalgorithm 50 runs are performed The other parametersare the same as those of the previous investigations Theresults are shown in Table 6 in terms of the mean MSEand the standard deviation obtained in the 50 independentruns for three methods Figure 6 shows the predicted timeseries for training and test using different algorithms It canconclude that the approximation achieved by BBTLBO hasgood performance
62 Tuning of PID Controller The continuous form of adiscrete-type PID controller with a small sampling period Δ119905is described as follows [32]
119890 [119894] sdot Δ119905 + 119870119863 sdot119890 [119896] minus 119890 [119896 minus 1]
Δ119905
(16)
where 119906[119896] is the controlled output respectively 119890[119896] = 119903[119896]minus119910[119896] is the error signal 119903[119896] and 119910[119896] are the reference signaland the system output and 119870
119875 119870119868 and 119870
119863represent the
proportional integral and derivate gains respectivelyFor an unknown plant the goal of this problem is to
minimize the integral absolute error (IAE) which is given asfollow [32 33]
and 119910(119905) is the output of the controlled objectiveIn our simulation the formulas for the plant examined
are given as follows [34]
119866 (119904) =1958
1199043+ 1789119904
2+ 1033119904 + 1908
(19)
The system sampling time is Δ119905 = 005 second and thecontrol value 119906 is limited in the range of [minus10 10] Other rel-evant system variables are 119870
119875isin [0 1] 119870
119868isin [0 1] and 119870
119863isin
[0 1] The weight coefficients of the cost function are set as1205961= 0999 120596
2= 0001 120596
3= 2 and 120596 = 100 in this example
In the simulations the step response of PID controlsystem tuned by the proposed BBTLBO is compared withthat tuned by the standard genetic algorithm (GA) and thestandard PSO (PSO) The population sizes of GA PSO andBBTLBO are 50 and the corresponding maximum numbersof iterations are 50 50 and 50 respectively In addition thecrossover rate is set as 090 and the mutation rate is 010 forGA
The optimal parameters and the corresponding perfor-mance values of the PID controllers are listed in Table 7 andthe corresponding performance curves and step responsescurves are given in Figures 7 and 8 It can be seen fromFigure 7 and Table 7 that the PID controller tuned byBBTLBO has the minimum cost function and CPU time
0 05 1 15 2 25 3 35 4 45 50
02
04
06
08
1
Time (s)
Step response curves using different methods
GAPSOBBTLBO
r in
you
t
Figure 8 Step response curves using different methods
Although PID controllers tuned by PSO have a smaller peaktime and rise time their maximum overshoots are muchlarger than the overshoot tuned by BBTLBO It concludes thatthe PID controller tuned by the BBTLBO could perform thebest control performance in the simulations
7 Conclusion
In this paper TLBO has been extended to BBTLBO whichuses the hybridization of the learning strategy in the stan-dard TLBO and Gaussian sampling learning to balance theexploration and the exploitation in teacher phase and uses amodified mutation operation so as to eliminate the duplicatelearners in learner phase The proposed BBTLBO algorithmis utilized to optimize 20 benchmark functions and tworeal-world optimization problems From the analysis andexperiments the BBTLBO algorithm significantly improvesthe performance of the original TLBO although it needs tospend more CPU time than the standard TLBO algorithmin each generation From the results compared with otheralgorithms on the 20 chosen test problems it can be observedthat the BBTLBO algorithm has good performance by usingneighborhood search more effectively to generate betterquality solutions although the BBTLBO algorithm does notalways have the best performance in all experiments cases ofthis paper It can be also observed that the BBTLBOalgorithm
16 The Scientific World Journal
gives the best performance on two real-world optimizationproblems compared with other algorithms in the paper
Further work includes research into neighborhood searchbased on different topological structures Moreover thealgorithm may be further applied to constrained dynamicand noisy single-objective and multiobjective optimizationdomain It is expected that BBTLBOwill be used tomore real-world optimization problems
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was partially supported by the National NaturalScience Foundation of China (61100173 61100009 61272283and 61304082) This work is partially supported by theNatural Science Foundation of Anhui Province China (Grantno 1308085MF82) and the Doctoral Innovation Foundationof Xirsquoan University of Technology (207-002J1305)
References
[1] D E Goldberg Genetic Algorithms in Search Optimization andMachine Learning Addison-Wesley Reading Mass USA 1989
[2] L C Jiao and L Wang ldquoA novel genetic algorithm based onimmunityrdquo IEEE Transactions on SystemsMan and CyberneticsA Systems and Humans vol 30 no 5 pp 552ndash561 2000
[3] R Storn and K Price ldquoDifferential evolution a simple andefficient Heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997
[4] M Dorigo and T Stutzle Ant Colony Optimization MIT Press2004
[5] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 December 1995
[6] D Karaboga and B Basturk ldquoOn the performance of artificialbee colony (ABC) algorithmrdquo Applied Soft Computing Journalvol 8 no 1 pp 687ndash697 2008
[7] D Simon ldquoBiogeography-based optimizationrdquo IEEE Transac-tions on Evolutionary Computation vol 12 no 6 pp 702ndash7132008
[8] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization a novelmethod for constrainedmechanicaldesign optimization problemsrdquo CAD Computer Aided Designvol 43 no 3 pp 303ndash315 2011
[9] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization an optimization method for continuousnon-linear large scale problemsrdquo Information Sciences vol 183no 1 pp 1ndash15 2012
[10] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization algorithm for unconstrained and con-strained real-parameter optimization problemsrdquo EngineeringOptimization vol 44 no 12 pp 1447ndash1462 2011
[11] V Togan ldquoDesign of planar steel frames using teaching-learningbased optimizationrdquo Engineering Structures vol 34 pp 225ndash232 2012
[12] R V Rao and V Patel ldquoAn elitist teaching-learning-based opti-mization algorithm for solving complex constrained optimiza-tion problemsrdquo International Journal of Industrial EngineeringComputations vol 3 pp 535ndash560 2012
[13] S O Degertekin and M S Hayalioglu ldquoSizing truss structuresusing teaching-learning-based optimizationrdquo Computers andStructures vol 119 pp 177ndash188 2013
[14] R V Rao and V Patel ldquoAn improved teaching-learning-basedoptimization algorithm for solving unconstrained optimizationproblemsrdquo Scientia Iranica vol 20 no 3 pp 710ndash720 2013
[15] R V Rao and V Patel ldquoMulti-objective optimization of com-bined Brayton and inverse Brayton cycles using advancedoptimization algorithmsrdquo Engineering Optimization vol 44 no8 pp 965ndash983 2011
[16] T Niknam F Golestaneh and M S Sadeghi ldquoTheta-multi-objective teaching-learning-based optimization for dynamiceconomic emission dispatchrdquo IEEE Systems Journal vol 6 no2 pp 341ndash352 2012
[17] R V Rao and V Patel ldquoMulti-objective optimization of heatexchangers using a modified teaching-learning-based opti-mization algorithmrdquo Applied Mathematical Modelling vol 37no 3 pp 1147ndash1162 2013
[18] M Clerc and J Kennedy ldquoThe particle swarm-explosion sta-bility and convergence in a multidimensional complex spacerdquoIEEE Transactions on Evolutionary Computation vol 6 no 1pp 58ndash73 2002
[19] F van den Bergh and A P Engelbrecht ldquoA study of particleswarm optimization particle trajectoriesrdquo Information Sciencesvol 176 no 8 pp 937ndash971 2006
[20] J Kennedy ldquoBare bones particle swarmsrdquo in Proceedings of theSwarm Intelligence Symposium (SIS rsquo03) pp 80ndash87 2003
[21] M G H Omran A P Engelbrecht and A Salman ldquoBarebones differential evolutionrdquo European Journal of OperationalResearch vol 196 no 1 pp 128ndash139 2009
[22] H Wang S Rahnamayan H Sun and M G H OmranldquoGaussian bare-bones differential evolutionrdquo IEEE Transactionson Cybernetics vol 43 no 2 pp 634ndash647 2013
[23] X H Hu and R Eberhart ldquoMultiobjective optimization usingdynamic neighborhood particle swarm optimizationrdquo in Pro-ceedings of the Congress on Evolutionary Computation pp 677ndash1681 2002
[24] M G Omran A P Engelbrecht and A Salman ldquoUsingthe ring neighborhood topology with self-adaptive differentialevolutionrdquo in Advances in Natural Computation pp 976ndash979Springer Berlin Germany 2006
[25] X Li ldquoNiching without niching parameters particle swarmoptimization using a ring topologyrdquo IEEE Transactions onEvolutionary Computation vol 14 no 1 pp 150ndash169 2010
[26] I Maruta T H Kim D Song and T Sugie ldquoSynthesis of fixed-structure robust controllers using a constrained particle swarmoptimizer with cyclic neighborhood topologyrdquo Expert Systemswith Applications vol 40 no 9 pp 3595ndash3605 2013
[27] J Kennedy and R Mendes ldquoPopulation structure and particleswarm performancerdquo in Proceedings of the International Con-ference on Evolutionary Computation pp 1671ndash1676 HonoluluHawaii USA 2002
[28] J Brest S Greiner B Boskovic M Mernik and V ZumerldquoSelf-adapting control parameters in differential evolution acomparative study on numerical benchmark problemsrdquo IEEETransactions on Evolutionary Computation vol 10 no 6 pp646ndash657 2006
The Scientific World Journal 17
[29] A K Qin V L Huang and P N Suganthan ldquoDifferential evo-lution algorithm with strategy adaptation for global numericaloptimizationrdquo IEEE Transactions on Evolutionary Computationvol 13 no 2 pp 398ndash417 2009
[30] R Mendes J Kennedy and J Neves ldquoThe fully informedparticle swarm simpler maybe betterrdquo IEEE Transactions onEvolutionary Computation vol 8 no 3 pp 204ndash210 2004
[31] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash62 2000
[32] J Liu Advanced PID Control and MATLAB Simulation Elec-tronic Industry Press 2003
[33] J Zhang J Zhuang H Du and S Wang ldquoSelf-organizinggenetic algorithm based tuning of PID controllersrdquo InformationSciences vol 179 no 7 pp 1007ndash1017 2009
[34] R Haber-Haber R Haber M Schmittdiel and R M delToro ldquoA classic solution for the control of a high-performancedrilling processrdquo International Journal of Machine Tools andManufacture vol 47 no 15 pp 2290ndash2297 2007
Figure 3 Comparison of the performance curves using different 119906
55 Comparison of BBTLBO with DE Variants PSO Variantsand Some TLBO Variants In this section we comparedthe performance of BBTLBO with other optimization algo-rithms including jDE [28] SaDE [29] PSOcfLocal [27]PSOwFIPS [30] and TLBO [8 9] In our experiment themaximal FEs are used as the stopping criterion of all algo-rithms namely 40000 for all test functions The results areshown in Table 5 in terms of the mean optimum solutionand the standard deviation of the solutions obtained in the50 independent runs by each algorithm on 20 test functions
where ldquo119908119905119897rdquo summarizes the competition results amongBBTLBO and other algorithms The best results among thealgorithms are shown in boldface
The comparisons in Table 5 show that that all algorithmsperform well for 119865
15 11986516 and 119865
17 Although SaDE outper-
forms BBTLBOon11986514 PSOcfLocal outperforms BBTLBOon
1198659and PSOwFIPS outperforms BBTLBO on 119865
19and 11986520 and
BBTLBO offers the highest accuracy on functions 1198653 1198654 1198655
1198657 1198658 11986510 11986511 and 119865
18 ldquo119908119905119897rdquo shows that BBTLBO offers
well accuracy for the majority of test functions in this paper
The Scientific World Journal 11
0 05 1 15 2 25 3 35 4
0
1
2
3
FEs
minus4
minus3
minus2
minus1
times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(a) 1198653 Quadric
0 05 1 15 2 25 3 35 41
15
2
25
3
35
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)(b) 1198659 Rosenbrock
0 05 1 15 2 25 3 35 433
34
35
36
37
38
39
4
41
42
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(c) 11986518 Shekel5
0
0
05 1 15 2 25 3 35 4FEs
Mea
n fit
ness
times104
minus10
minus9
minus8
minus7
minus6
minus5
minus4
minus3
minus2
minus1
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
(d) 11986514 Schwefel
Figure 4 Comparison of the performance curves using different algorithms
Table 5 concludes that BBTLBO has a good performance ofthe solution accuracy for all unimodal optimization problemsand most complex multimodal optimization problems
6 Two Real-World Optimization Problems
In this section to show the effectiveness of the proposedmethod the proposed BBTLBO algorithm is applied toestimate parameters of two real-world problems
61 Nonlinear Function Approximation The artificial neuralnetwork trained by our BBTLBO algorithm is a three-layer
Input x Output y
Desired output dBBTLBO algorithm
ANN
minus
Figure 5 BBTLBO-based ANN
feed-forward network and the basic structure of the proposedscheme is depicted in Figure 5 The inputs are connectedto all the hidden units which in turn all connected to all
12 The Scientific World Journal
Table 4 The mean number of FEs and SR with acceptable solutions using different algorithms
Fun 119905 value BBPSO BBExp BBDE GBDE MGBDE BBTLBOMFEs SR MFEs SR MFEs SR MFEs SR MFEs SR MFEs SR
the outputs The variables consist of neural network weightsand biases Suppose a three-layer forward neural networkarchitecture with 119872 input units 119873 hidden units and 119870
output units and the number of the variables is shown asfollows
For neural network training the aim is to find a set ofweights with the smallest error measure Here the objectivefunction is the mean sum of squared errors (MSE) over alltraining patterns which is shown as follows
MSE = 1
119876 lowast 119870
119876
sum
119894=1
119870
sum
119895
1
2(119889119894119895minus 119910119894119895)2
(14)
where 119876 is the number of training data set 119870 is the numberof output units 119889
119894119895is desired output and 119910
119894119895is output inferred
from neural networkIn this example a three-layer feed-forward ANN with
one input unit five hidden units and one output unit isconstructed tomodel the curve of a nonlinear functionwhichis described by the following equation [31]
119910 = sin (2119909) exp (minus2119909) (15)
In this case activation function used in the output layer isthe sigma function and activation function used in the outputlayer is linear The number (dimension) of the variables is16 for BBTLBO-based ANN In order to train the ANN
200 pairs of data are chosen from the real model For eachalgorithm 50 runs are performed The other parametersare the same as those of the previous investigations Theresults are shown in Table 6 in terms of the mean MSEand the standard deviation obtained in the 50 independentruns for three methods Figure 6 shows the predicted timeseries for training and test using different algorithms It canconclude that the approximation achieved by BBTLBO hasgood performance
62 Tuning of PID Controller The continuous form of adiscrete-type PID controller with a small sampling period Δ119905is described as follows [32]
119890 [119894] sdot Δ119905 + 119870119863 sdot119890 [119896] minus 119890 [119896 minus 1]
Δ119905
(16)
where 119906[119896] is the controlled output respectively 119890[119896] = 119903[119896]minus119910[119896] is the error signal 119903[119896] and 119910[119896] are the reference signaland the system output and 119870
119875 119870119868 and 119870
119863represent the
proportional integral and derivate gains respectivelyFor an unknown plant the goal of this problem is to
minimize the integral absolute error (IAE) which is given asfollow [32 33]
and 119910(119905) is the output of the controlled objectiveIn our simulation the formulas for the plant examined
are given as follows [34]
119866 (119904) =1958
1199043+ 1789119904
2+ 1033119904 + 1908
(19)
The system sampling time is Δ119905 = 005 second and thecontrol value 119906 is limited in the range of [minus10 10] Other rel-evant system variables are 119870
119875isin [0 1] 119870
119868isin [0 1] and 119870
119863isin
[0 1] The weight coefficients of the cost function are set as1205961= 0999 120596
2= 0001 120596
3= 2 and 120596 = 100 in this example
In the simulations the step response of PID controlsystem tuned by the proposed BBTLBO is compared withthat tuned by the standard genetic algorithm (GA) and thestandard PSO (PSO) The population sizes of GA PSO andBBTLBO are 50 and the corresponding maximum numbersof iterations are 50 50 and 50 respectively In addition thecrossover rate is set as 090 and the mutation rate is 010 forGA
The optimal parameters and the corresponding perfor-mance values of the PID controllers are listed in Table 7 andthe corresponding performance curves and step responsescurves are given in Figures 7 and 8 It can be seen fromFigure 7 and Table 7 that the PID controller tuned byBBTLBO has the minimum cost function and CPU time
0 05 1 15 2 25 3 35 4 45 50
02
04
06
08
1
Time (s)
Step response curves using different methods
GAPSOBBTLBO
r in
you
t
Figure 8 Step response curves using different methods
Although PID controllers tuned by PSO have a smaller peaktime and rise time their maximum overshoots are muchlarger than the overshoot tuned by BBTLBO It concludes thatthe PID controller tuned by the BBTLBO could perform thebest control performance in the simulations
7 Conclusion
In this paper TLBO has been extended to BBTLBO whichuses the hybridization of the learning strategy in the stan-dard TLBO and Gaussian sampling learning to balance theexploration and the exploitation in teacher phase and uses amodified mutation operation so as to eliminate the duplicatelearners in learner phase The proposed BBTLBO algorithmis utilized to optimize 20 benchmark functions and tworeal-world optimization problems From the analysis andexperiments the BBTLBO algorithm significantly improvesthe performance of the original TLBO although it needs tospend more CPU time than the standard TLBO algorithmin each generation From the results compared with otheralgorithms on the 20 chosen test problems it can be observedthat the BBTLBO algorithm has good performance by usingneighborhood search more effectively to generate betterquality solutions although the BBTLBO algorithm does notalways have the best performance in all experiments cases ofthis paper It can be also observed that the BBTLBOalgorithm
16 The Scientific World Journal
gives the best performance on two real-world optimizationproblems compared with other algorithms in the paper
Further work includes research into neighborhood searchbased on different topological structures Moreover thealgorithm may be further applied to constrained dynamicand noisy single-objective and multiobjective optimizationdomain It is expected that BBTLBOwill be used tomore real-world optimization problems
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was partially supported by the National NaturalScience Foundation of China (61100173 61100009 61272283and 61304082) This work is partially supported by theNatural Science Foundation of Anhui Province China (Grantno 1308085MF82) and the Doctoral Innovation Foundationof Xirsquoan University of Technology (207-002J1305)
References
[1] D E Goldberg Genetic Algorithms in Search Optimization andMachine Learning Addison-Wesley Reading Mass USA 1989
[2] L C Jiao and L Wang ldquoA novel genetic algorithm based onimmunityrdquo IEEE Transactions on SystemsMan and CyberneticsA Systems and Humans vol 30 no 5 pp 552ndash561 2000
[3] R Storn and K Price ldquoDifferential evolution a simple andefficient Heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997
[4] M Dorigo and T Stutzle Ant Colony Optimization MIT Press2004
[5] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 December 1995
[6] D Karaboga and B Basturk ldquoOn the performance of artificialbee colony (ABC) algorithmrdquo Applied Soft Computing Journalvol 8 no 1 pp 687ndash697 2008
[7] D Simon ldquoBiogeography-based optimizationrdquo IEEE Transac-tions on Evolutionary Computation vol 12 no 6 pp 702ndash7132008
[8] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization a novelmethod for constrainedmechanicaldesign optimization problemsrdquo CAD Computer Aided Designvol 43 no 3 pp 303ndash315 2011
[9] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization an optimization method for continuousnon-linear large scale problemsrdquo Information Sciences vol 183no 1 pp 1ndash15 2012
[10] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization algorithm for unconstrained and con-strained real-parameter optimization problemsrdquo EngineeringOptimization vol 44 no 12 pp 1447ndash1462 2011
[11] V Togan ldquoDesign of planar steel frames using teaching-learningbased optimizationrdquo Engineering Structures vol 34 pp 225ndash232 2012
[12] R V Rao and V Patel ldquoAn elitist teaching-learning-based opti-mization algorithm for solving complex constrained optimiza-tion problemsrdquo International Journal of Industrial EngineeringComputations vol 3 pp 535ndash560 2012
[13] S O Degertekin and M S Hayalioglu ldquoSizing truss structuresusing teaching-learning-based optimizationrdquo Computers andStructures vol 119 pp 177ndash188 2013
[14] R V Rao and V Patel ldquoAn improved teaching-learning-basedoptimization algorithm for solving unconstrained optimizationproblemsrdquo Scientia Iranica vol 20 no 3 pp 710ndash720 2013
[15] R V Rao and V Patel ldquoMulti-objective optimization of com-bined Brayton and inverse Brayton cycles using advancedoptimization algorithmsrdquo Engineering Optimization vol 44 no8 pp 965ndash983 2011
[16] T Niknam F Golestaneh and M S Sadeghi ldquoTheta-multi-objective teaching-learning-based optimization for dynamiceconomic emission dispatchrdquo IEEE Systems Journal vol 6 no2 pp 341ndash352 2012
[17] R V Rao and V Patel ldquoMulti-objective optimization of heatexchangers using a modified teaching-learning-based opti-mization algorithmrdquo Applied Mathematical Modelling vol 37no 3 pp 1147ndash1162 2013
[18] M Clerc and J Kennedy ldquoThe particle swarm-explosion sta-bility and convergence in a multidimensional complex spacerdquoIEEE Transactions on Evolutionary Computation vol 6 no 1pp 58ndash73 2002
[19] F van den Bergh and A P Engelbrecht ldquoA study of particleswarm optimization particle trajectoriesrdquo Information Sciencesvol 176 no 8 pp 937ndash971 2006
[20] J Kennedy ldquoBare bones particle swarmsrdquo in Proceedings of theSwarm Intelligence Symposium (SIS rsquo03) pp 80ndash87 2003
[21] M G H Omran A P Engelbrecht and A Salman ldquoBarebones differential evolutionrdquo European Journal of OperationalResearch vol 196 no 1 pp 128ndash139 2009
[22] H Wang S Rahnamayan H Sun and M G H OmranldquoGaussian bare-bones differential evolutionrdquo IEEE Transactionson Cybernetics vol 43 no 2 pp 634ndash647 2013
[23] X H Hu and R Eberhart ldquoMultiobjective optimization usingdynamic neighborhood particle swarm optimizationrdquo in Pro-ceedings of the Congress on Evolutionary Computation pp 677ndash1681 2002
[24] M G Omran A P Engelbrecht and A Salman ldquoUsingthe ring neighborhood topology with self-adaptive differentialevolutionrdquo in Advances in Natural Computation pp 976ndash979Springer Berlin Germany 2006
[25] X Li ldquoNiching without niching parameters particle swarmoptimization using a ring topologyrdquo IEEE Transactions onEvolutionary Computation vol 14 no 1 pp 150ndash169 2010
[26] I Maruta T H Kim D Song and T Sugie ldquoSynthesis of fixed-structure robust controllers using a constrained particle swarmoptimizer with cyclic neighborhood topologyrdquo Expert Systemswith Applications vol 40 no 9 pp 3595ndash3605 2013
[27] J Kennedy and R Mendes ldquoPopulation structure and particleswarm performancerdquo in Proceedings of the International Con-ference on Evolutionary Computation pp 1671ndash1676 HonoluluHawaii USA 2002
[28] J Brest S Greiner B Boskovic M Mernik and V ZumerldquoSelf-adapting control parameters in differential evolution acomparative study on numerical benchmark problemsrdquo IEEETransactions on Evolutionary Computation vol 10 no 6 pp646ndash657 2006
The Scientific World Journal 17
[29] A K Qin V L Huang and P N Suganthan ldquoDifferential evo-lution algorithm with strategy adaptation for global numericaloptimizationrdquo IEEE Transactions on Evolutionary Computationvol 13 no 2 pp 398ndash417 2009
[30] R Mendes J Kennedy and J Neves ldquoThe fully informedparticle swarm simpler maybe betterrdquo IEEE Transactions onEvolutionary Computation vol 8 no 3 pp 204ndash210 2004
[31] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash62 2000
[32] J Liu Advanced PID Control and MATLAB Simulation Elec-tronic Industry Press 2003
[33] J Zhang J Zhuang H Du and S Wang ldquoSelf-organizinggenetic algorithm based tuning of PID controllersrdquo InformationSciences vol 179 no 7 pp 1007ndash1017 2009
[34] R Haber-Haber R Haber M Schmittdiel and R M delToro ldquoA classic solution for the control of a high-performancedrilling processrdquo International Journal of Machine Tools andManufacture vol 47 no 15 pp 2290ndash2297 2007
Figure 3 Comparison of the performance curves using different 119906
55 Comparison of BBTLBO with DE Variants PSO Variantsand Some TLBO Variants In this section we comparedthe performance of BBTLBO with other optimization algo-rithms including jDE [28] SaDE [29] PSOcfLocal [27]PSOwFIPS [30] and TLBO [8 9] In our experiment themaximal FEs are used as the stopping criterion of all algo-rithms namely 40000 for all test functions The results areshown in Table 5 in terms of the mean optimum solutionand the standard deviation of the solutions obtained in the50 independent runs by each algorithm on 20 test functions
where ldquo119908119905119897rdquo summarizes the competition results amongBBTLBO and other algorithms The best results among thealgorithms are shown in boldface
The comparisons in Table 5 show that that all algorithmsperform well for 119865
15 11986516 and 119865
17 Although SaDE outper-
forms BBTLBOon11986514 PSOcfLocal outperforms BBTLBOon
1198659and PSOwFIPS outperforms BBTLBO on 119865
19and 11986520 and
BBTLBO offers the highest accuracy on functions 1198653 1198654 1198655
1198657 1198658 11986510 11986511 and 119865
18 ldquo119908119905119897rdquo shows that BBTLBO offers
well accuracy for the majority of test functions in this paper
The Scientific World Journal 11
0 05 1 15 2 25 3 35 4
0
1
2
3
FEs
minus4
minus3
minus2
minus1
times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(a) 1198653 Quadric
0 05 1 15 2 25 3 35 41
15
2
25
3
35
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)(b) 1198659 Rosenbrock
0 05 1 15 2 25 3 35 433
34
35
36
37
38
39
4
41
42
FEs times104
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
log10
(mea
n fit
ness
)
(c) 11986518 Shekel5
0
0
05 1 15 2 25 3 35 4FEs
Mea
n fit
ness
times104
minus10
minus9
minus8
minus7
minus6
minus5
minus4
minus3
minus2
minus1
BBPSOBBExpBBDE
GBDEMGBDEBBTLBO
(d) 11986514 Schwefel
Figure 4 Comparison of the performance curves using different algorithms
Table 5 concludes that BBTLBO has a good performance ofthe solution accuracy for all unimodal optimization problemsand most complex multimodal optimization problems
6 Two Real-World Optimization Problems
In this section to show the effectiveness of the proposedmethod the proposed BBTLBO algorithm is applied toestimate parameters of two real-world problems
61 Nonlinear Function Approximation The artificial neuralnetwork trained by our BBTLBO algorithm is a three-layer
Input x Output y
Desired output dBBTLBO algorithm
ANN
minus
Figure 5 BBTLBO-based ANN
feed-forward network and the basic structure of the proposedscheme is depicted in Figure 5 The inputs are connectedto all the hidden units which in turn all connected to all
12 The Scientific World Journal
Table 4 The mean number of FEs and SR with acceptable solutions using different algorithms
Fun 119905 value BBPSO BBExp BBDE GBDE MGBDE BBTLBOMFEs SR MFEs SR MFEs SR MFEs SR MFEs SR MFEs SR
the outputs The variables consist of neural network weightsand biases Suppose a three-layer forward neural networkarchitecture with 119872 input units 119873 hidden units and 119870
output units and the number of the variables is shown asfollows
For neural network training the aim is to find a set ofweights with the smallest error measure Here the objectivefunction is the mean sum of squared errors (MSE) over alltraining patterns which is shown as follows
MSE = 1
119876 lowast 119870
119876
sum
119894=1
119870
sum
119895
1
2(119889119894119895minus 119910119894119895)2
(14)
where 119876 is the number of training data set 119870 is the numberof output units 119889
119894119895is desired output and 119910
119894119895is output inferred
from neural networkIn this example a three-layer feed-forward ANN with
one input unit five hidden units and one output unit isconstructed tomodel the curve of a nonlinear functionwhichis described by the following equation [31]
119910 = sin (2119909) exp (minus2119909) (15)
In this case activation function used in the output layer isthe sigma function and activation function used in the outputlayer is linear The number (dimension) of the variables is16 for BBTLBO-based ANN In order to train the ANN
200 pairs of data are chosen from the real model For eachalgorithm 50 runs are performed The other parametersare the same as those of the previous investigations Theresults are shown in Table 6 in terms of the mean MSEand the standard deviation obtained in the 50 independentruns for three methods Figure 6 shows the predicted timeseries for training and test using different algorithms It canconclude that the approximation achieved by BBTLBO hasgood performance
62 Tuning of PID Controller The continuous form of adiscrete-type PID controller with a small sampling period Δ119905is described as follows [32]
119890 [119894] sdot Δ119905 + 119870119863 sdot119890 [119896] minus 119890 [119896 minus 1]
Δ119905
(16)
where 119906[119896] is the controlled output respectively 119890[119896] = 119903[119896]minus119910[119896] is the error signal 119903[119896] and 119910[119896] are the reference signaland the system output and 119870
119875 119870119868 and 119870
119863represent the
proportional integral and derivate gains respectivelyFor an unknown plant the goal of this problem is to
minimize the integral absolute error (IAE) which is given asfollow [32 33]
and 119910(119905) is the output of the controlled objectiveIn our simulation the formulas for the plant examined
are given as follows [34]
119866 (119904) =1958
1199043+ 1789119904
2+ 1033119904 + 1908
(19)
The system sampling time is Δ119905 = 005 second and thecontrol value 119906 is limited in the range of [minus10 10] Other rel-evant system variables are 119870
119875isin [0 1] 119870
119868isin [0 1] and 119870
119863isin
[0 1] The weight coefficients of the cost function are set as1205961= 0999 120596
2= 0001 120596
3= 2 and 120596 = 100 in this example
In the simulations the step response of PID controlsystem tuned by the proposed BBTLBO is compared withthat tuned by the standard genetic algorithm (GA) and thestandard PSO (PSO) The population sizes of GA PSO andBBTLBO are 50 and the corresponding maximum numbersof iterations are 50 50 and 50 respectively In addition thecrossover rate is set as 090 and the mutation rate is 010 forGA
The optimal parameters and the corresponding perfor-mance values of the PID controllers are listed in Table 7 andthe corresponding performance curves and step responsescurves are given in Figures 7 and 8 It can be seen fromFigure 7 and Table 7 that the PID controller tuned byBBTLBO has the minimum cost function and CPU time
0 05 1 15 2 25 3 35 4 45 50
02
04
06
08
1
Time (s)
Step response curves using different methods
GAPSOBBTLBO
r in
you
t
Figure 8 Step response curves using different methods
Although PID controllers tuned by PSO have a smaller peaktime and rise time their maximum overshoots are muchlarger than the overshoot tuned by BBTLBO It concludes thatthe PID controller tuned by the BBTLBO could perform thebest control performance in the simulations
7 Conclusion
In this paper TLBO has been extended to BBTLBO whichuses the hybridization of the learning strategy in the stan-dard TLBO and Gaussian sampling learning to balance theexploration and the exploitation in teacher phase and uses amodified mutation operation so as to eliminate the duplicatelearners in learner phase The proposed BBTLBO algorithmis utilized to optimize 20 benchmark functions and tworeal-world optimization problems From the analysis andexperiments the BBTLBO algorithm significantly improvesthe performance of the original TLBO although it needs tospend more CPU time than the standard TLBO algorithmin each generation From the results compared with otheralgorithms on the 20 chosen test problems it can be observedthat the BBTLBO algorithm has good performance by usingneighborhood search more effectively to generate betterquality solutions although the BBTLBO algorithm does notalways have the best performance in all experiments cases ofthis paper It can be also observed that the BBTLBOalgorithm
16 The Scientific World Journal
gives the best performance on two real-world optimizationproblems compared with other algorithms in the paper
Further work includes research into neighborhood searchbased on different topological structures Moreover thealgorithm may be further applied to constrained dynamicand noisy single-objective and multiobjective optimizationdomain It is expected that BBTLBOwill be used tomore real-world optimization problems
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was partially supported by the National NaturalScience Foundation of China (61100173 61100009 61272283and 61304082) This work is partially supported by theNatural Science Foundation of Anhui Province China (Grantno 1308085MF82) and the Doctoral Innovation Foundationof Xirsquoan University of Technology (207-002J1305)
References
[1] D E Goldberg Genetic Algorithms in Search Optimization andMachine Learning Addison-Wesley Reading Mass USA 1989
[2] L C Jiao and L Wang ldquoA novel genetic algorithm based onimmunityrdquo IEEE Transactions on SystemsMan and CyberneticsA Systems and Humans vol 30 no 5 pp 552ndash561 2000
[3] R Storn and K Price ldquoDifferential evolution a simple andefficient Heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997
[4] M Dorigo and T Stutzle Ant Colony Optimization MIT Press2004
[5] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 December 1995
[6] D Karaboga and B Basturk ldquoOn the performance of artificialbee colony (ABC) algorithmrdquo Applied Soft Computing Journalvol 8 no 1 pp 687ndash697 2008
[7] D Simon ldquoBiogeography-based optimizationrdquo IEEE Transac-tions on Evolutionary Computation vol 12 no 6 pp 702ndash7132008
[8] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization a novelmethod for constrainedmechanicaldesign optimization problemsrdquo CAD Computer Aided Designvol 43 no 3 pp 303ndash315 2011
[9] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization an optimization method for continuousnon-linear large scale problemsrdquo Information Sciences vol 183no 1 pp 1ndash15 2012
[10] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization algorithm for unconstrained and con-strained real-parameter optimization problemsrdquo EngineeringOptimization vol 44 no 12 pp 1447ndash1462 2011
[11] V Togan ldquoDesign of planar steel frames using teaching-learningbased optimizationrdquo Engineering Structures vol 34 pp 225ndash232 2012
[12] R V Rao and V Patel ldquoAn elitist teaching-learning-based opti-mization algorithm for solving complex constrained optimiza-tion problemsrdquo International Journal of Industrial EngineeringComputations vol 3 pp 535ndash560 2012
[13] S O Degertekin and M S Hayalioglu ldquoSizing truss structuresusing teaching-learning-based optimizationrdquo Computers andStructures vol 119 pp 177ndash188 2013
[14] R V Rao and V Patel ldquoAn improved teaching-learning-basedoptimization algorithm for solving unconstrained optimizationproblemsrdquo Scientia Iranica vol 20 no 3 pp 710ndash720 2013
[15] R V Rao and V Patel ldquoMulti-objective optimization of com-bined Brayton and inverse Brayton cycles using advancedoptimization algorithmsrdquo Engineering Optimization vol 44 no8 pp 965ndash983 2011
[16] T Niknam F Golestaneh and M S Sadeghi ldquoTheta-multi-objective teaching-learning-based optimization for dynamiceconomic emission dispatchrdquo IEEE Systems Journal vol 6 no2 pp 341ndash352 2012
[17] R V Rao and V Patel ldquoMulti-objective optimization of heatexchangers using a modified teaching-learning-based opti-mization algorithmrdquo Applied Mathematical Modelling vol 37no 3 pp 1147ndash1162 2013
[18] M Clerc and J Kennedy ldquoThe particle swarm-explosion sta-bility and convergence in a multidimensional complex spacerdquoIEEE Transactions on Evolutionary Computation vol 6 no 1pp 58ndash73 2002
[19] F van den Bergh and A P Engelbrecht ldquoA study of particleswarm optimization particle trajectoriesrdquo Information Sciencesvol 176 no 8 pp 937ndash971 2006
[20] J Kennedy ldquoBare bones particle swarmsrdquo in Proceedings of theSwarm Intelligence Symposium (SIS rsquo03) pp 80ndash87 2003
[21] M G H Omran A P Engelbrecht and A Salman ldquoBarebones differential evolutionrdquo European Journal of OperationalResearch vol 196 no 1 pp 128ndash139 2009
[22] H Wang S Rahnamayan H Sun and M G H OmranldquoGaussian bare-bones differential evolutionrdquo IEEE Transactionson Cybernetics vol 43 no 2 pp 634ndash647 2013
[23] X H Hu and R Eberhart ldquoMultiobjective optimization usingdynamic neighborhood particle swarm optimizationrdquo in Pro-ceedings of the Congress on Evolutionary Computation pp 677ndash1681 2002
[24] M G Omran A P Engelbrecht and A Salman ldquoUsingthe ring neighborhood topology with self-adaptive differentialevolutionrdquo in Advances in Natural Computation pp 976ndash979Springer Berlin Germany 2006
[25] X Li ldquoNiching without niching parameters particle swarmoptimization using a ring topologyrdquo IEEE Transactions onEvolutionary Computation vol 14 no 1 pp 150ndash169 2010
[26] I Maruta T H Kim D Song and T Sugie ldquoSynthesis of fixed-structure robust controllers using a constrained particle swarmoptimizer with cyclic neighborhood topologyrdquo Expert Systemswith Applications vol 40 no 9 pp 3595ndash3605 2013
[27] J Kennedy and R Mendes ldquoPopulation structure and particleswarm performancerdquo in Proceedings of the International Con-ference on Evolutionary Computation pp 1671ndash1676 HonoluluHawaii USA 2002
[28] J Brest S Greiner B Boskovic M Mernik and V ZumerldquoSelf-adapting control parameters in differential evolution acomparative study on numerical benchmark problemsrdquo IEEETransactions on Evolutionary Computation vol 10 no 6 pp646ndash657 2006
The Scientific World Journal 17
[29] A K Qin V L Huang and P N Suganthan ldquoDifferential evo-lution algorithm with strategy adaptation for global numericaloptimizationrdquo IEEE Transactions on Evolutionary Computationvol 13 no 2 pp 398ndash417 2009
[30] R Mendes J Kennedy and J Neves ldquoThe fully informedparticle swarm simpler maybe betterrdquo IEEE Transactions onEvolutionary Computation vol 8 no 3 pp 204ndash210 2004
[31] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash62 2000
[32] J Liu Advanced PID Control and MATLAB Simulation Elec-tronic Industry Press 2003
[33] J Zhang J Zhuang H Du and S Wang ldquoSelf-organizinggenetic algorithm based tuning of PID controllersrdquo InformationSciences vol 179 no 7 pp 1007ndash1017 2009
[34] R Haber-Haber R Haber M Schmittdiel and R M delToro ldquoA classic solution for the control of a high-performancedrilling processrdquo International Journal of Machine Tools andManufacture vol 47 no 15 pp 2290ndash2297 2007
Figure 4 Comparison of the performance curves using different algorithms
Table 5 concludes that BBTLBO has a good performance ofthe solution accuracy for all unimodal optimization problemsand most complex multimodal optimization problems
6 Two Real-World Optimization Problems
In this section to show the effectiveness of the proposedmethod the proposed BBTLBO algorithm is applied toestimate parameters of two real-world problems
61 Nonlinear Function Approximation The artificial neuralnetwork trained by our BBTLBO algorithm is a three-layer
Input x Output y
Desired output dBBTLBO algorithm
ANN
minus
Figure 5 BBTLBO-based ANN
feed-forward network and the basic structure of the proposedscheme is depicted in Figure 5 The inputs are connectedto all the hidden units which in turn all connected to all
12 The Scientific World Journal
Table 4 The mean number of FEs and SR with acceptable solutions using different algorithms
Fun 119905 value BBPSO BBExp BBDE GBDE MGBDE BBTLBOMFEs SR MFEs SR MFEs SR MFEs SR MFEs SR MFEs SR
the outputs The variables consist of neural network weightsand biases Suppose a three-layer forward neural networkarchitecture with 119872 input units 119873 hidden units and 119870
output units and the number of the variables is shown asfollows
For neural network training the aim is to find a set ofweights with the smallest error measure Here the objectivefunction is the mean sum of squared errors (MSE) over alltraining patterns which is shown as follows
MSE = 1
119876 lowast 119870
119876
sum
119894=1
119870
sum
119895
1
2(119889119894119895minus 119910119894119895)2
(14)
where 119876 is the number of training data set 119870 is the numberof output units 119889
119894119895is desired output and 119910
119894119895is output inferred
from neural networkIn this example a three-layer feed-forward ANN with
one input unit five hidden units and one output unit isconstructed tomodel the curve of a nonlinear functionwhichis described by the following equation [31]
119910 = sin (2119909) exp (minus2119909) (15)
In this case activation function used in the output layer isthe sigma function and activation function used in the outputlayer is linear The number (dimension) of the variables is16 for BBTLBO-based ANN In order to train the ANN
200 pairs of data are chosen from the real model For eachalgorithm 50 runs are performed The other parametersare the same as those of the previous investigations Theresults are shown in Table 6 in terms of the mean MSEand the standard deviation obtained in the 50 independentruns for three methods Figure 6 shows the predicted timeseries for training and test using different algorithms It canconclude that the approximation achieved by BBTLBO hasgood performance
62 Tuning of PID Controller The continuous form of adiscrete-type PID controller with a small sampling period Δ119905is described as follows [32]
119890 [119894] sdot Δ119905 + 119870119863 sdot119890 [119896] minus 119890 [119896 minus 1]
Δ119905
(16)
where 119906[119896] is the controlled output respectively 119890[119896] = 119903[119896]minus119910[119896] is the error signal 119903[119896] and 119910[119896] are the reference signaland the system output and 119870
119875 119870119868 and 119870
119863represent the
proportional integral and derivate gains respectivelyFor an unknown plant the goal of this problem is to
minimize the integral absolute error (IAE) which is given asfollow [32 33]
and 119910(119905) is the output of the controlled objectiveIn our simulation the formulas for the plant examined
are given as follows [34]
119866 (119904) =1958
1199043+ 1789119904
2+ 1033119904 + 1908
(19)
The system sampling time is Δ119905 = 005 second and thecontrol value 119906 is limited in the range of [minus10 10] Other rel-evant system variables are 119870
119875isin [0 1] 119870
119868isin [0 1] and 119870
119863isin
[0 1] The weight coefficients of the cost function are set as1205961= 0999 120596
2= 0001 120596
3= 2 and 120596 = 100 in this example
In the simulations the step response of PID controlsystem tuned by the proposed BBTLBO is compared withthat tuned by the standard genetic algorithm (GA) and thestandard PSO (PSO) The population sizes of GA PSO andBBTLBO are 50 and the corresponding maximum numbersof iterations are 50 50 and 50 respectively In addition thecrossover rate is set as 090 and the mutation rate is 010 forGA
The optimal parameters and the corresponding perfor-mance values of the PID controllers are listed in Table 7 andthe corresponding performance curves and step responsescurves are given in Figures 7 and 8 It can be seen fromFigure 7 and Table 7 that the PID controller tuned byBBTLBO has the minimum cost function and CPU time
0 05 1 15 2 25 3 35 4 45 50
02
04
06
08
1
Time (s)
Step response curves using different methods
GAPSOBBTLBO
r in
you
t
Figure 8 Step response curves using different methods
Although PID controllers tuned by PSO have a smaller peaktime and rise time their maximum overshoots are muchlarger than the overshoot tuned by BBTLBO It concludes thatthe PID controller tuned by the BBTLBO could perform thebest control performance in the simulations
7 Conclusion
In this paper TLBO has been extended to BBTLBO whichuses the hybridization of the learning strategy in the stan-dard TLBO and Gaussian sampling learning to balance theexploration and the exploitation in teacher phase and uses amodified mutation operation so as to eliminate the duplicatelearners in learner phase The proposed BBTLBO algorithmis utilized to optimize 20 benchmark functions and tworeal-world optimization problems From the analysis andexperiments the BBTLBO algorithm significantly improvesthe performance of the original TLBO although it needs tospend more CPU time than the standard TLBO algorithmin each generation From the results compared with otheralgorithms on the 20 chosen test problems it can be observedthat the BBTLBO algorithm has good performance by usingneighborhood search more effectively to generate betterquality solutions although the BBTLBO algorithm does notalways have the best performance in all experiments cases ofthis paper It can be also observed that the BBTLBOalgorithm
16 The Scientific World Journal
gives the best performance on two real-world optimizationproblems compared with other algorithms in the paper
Further work includes research into neighborhood searchbased on different topological structures Moreover thealgorithm may be further applied to constrained dynamicand noisy single-objective and multiobjective optimizationdomain It is expected that BBTLBOwill be used tomore real-world optimization problems
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was partially supported by the National NaturalScience Foundation of China (61100173 61100009 61272283and 61304082) This work is partially supported by theNatural Science Foundation of Anhui Province China (Grantno 1308085MF82) and the Doctoral Innovation Foundationof Xirsquoan University of Technology (207-002J1305)
References
[1] D E Goldberg Genetic Algorithms in Search Optimization andMachine Learning Addison-Wesley Reading Mass USA 1989
[2] L C Jiao and L Wang ldquoA novel genetic algorithm based onimmunityrdquo IEEE Transactions on SystemsMan and CyberneticsA Systems and Humans vol 30 no 5 pp 552ndash561 2000
[3] R Storn and K Price ldquoDifferential evolution a simple andefficient Heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997
[4] M Dorigo and T Stutzle Ant Colony Optimization MIT Press2004
[5] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 December 1995
[6] D Karaboga and B Basturk ldquoOn the performance of artificialbee colony (ABC) algorithmrdquo Applied Soft Computing Journalvol 8 no 1 pp 687ndash697 2008
[7] D Simon ldquoBiogeography-based optimizationrdquo IEEE Transac-tions on Evolutionary Computation vol 12 no 6 pp 702ndash7132008
[8] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization a novelmethod for constrainedmechanicaldesign optimization problemsrdquo CAD Computer Aided Designvol 43 no 3 pp 303ndash315 2011
[9] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization an optimization method for continuousnon-linear large scale problemsrdquo Information Sciences vol 183no 1 pp 1ndash15 2012
[10] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization algorithm for unconstrained and con-strained real-parameter optimization problemsrdquo EngineeringOptimization vol 44 no 12 pp 1447ndash1462 2011
[11] V Togan ldquoDesign of planar steel frames using teaching-learningbased optimizationrdquo Engineering Structures vol 34 pp 225ndash232 2012
[12] R V Rao and V Patel ldquoAn elitist teaching-learning-based opti-mization algorithm for solving complex constrained optimiza-tion problemsrdquo International Journal of Industrial EngineeringComputations vol 3 pp 535ndash560 2012
[13] S O Degertekin and M S Hayalioglu ldquoSizing truss structuresusing teaching-learning-based optimizationrdquo Computers andStructures vol 119 pp 177ndash188 2013
[14] R V Rao and V Patel ldquoAn improved teaching-learning-basedoptimization algorithm for solving unconstrained optimizationproblemsrdquo Scientia Iranica vol 20 no 3 pp 710ndash720 2013
[15] R V Rao and V Patel ldquoMulti-objective optimization of com-bined Brayton and inverse Brayton cycles using advancedoptimization algorithmsrdquo Engineering Optimization vol 44 no8 pp 965ndash983 2011
[16] T Niknam F Golestaneh and M S Sadeghi ldquoTheta-multi-objective teaching-learning-based optimization for dynamiceconomic emission dispatchrdquo IEEE Systems Journal vol 6 no2 pp 341ndash352 2012
[17] R V Rao and V Patel ldquoMulti-objective optimization of heatexchangers using a modified teaching-learning-based opti-mization algorithmrdquo Applied Mathematical Modelling vol 37no 3 pp 1147ndash1162 2013
[18] M Clerc and J Kennedy ldquoThe particle swarm-explosion sta-bility and convergence in a multidimensional complex spacerdquoIEEE Transactions on Evolutionary Computation vol 6 no 1pp 58ndash73 2002
[19] F van den Bergh and A P Engelbrecht ldquoA study of particleswarm optimization particle trajectoriesrdquo Information Sciencesvol 176 no 8 pp 937ndash971 2006
[20] J Kennedy ldquoBare bones particle swarmsrdquo in Proceedings of theSwarm Intelligence Symposium (SIS rsquo03) pp 80ndash87 2003
[21] M G H Omran A P Engelbrecht and A Salman ldquoBarebones differential evolutionrdquo European Journal of OperationalResearch vol 196 no 1 pp 128ndash139 2009
[22] H Wang S Rahnamayan H Sun and M G H OmranldquoGaussian bare-bones differential evolutionrdquo IEEE Transactionson Cybernetics vol 43 no 2 pp 634ndash647 2013
[23] X H Hu and R Eberhart ldquoMultiobjective optimization usingdynamic neighborhood particle swarm optimizationrdquo in Pro-ceedings of the Congress on Evolutionary Computation pp 677ndash1681 2002
[24] M G Omran A P Engelbrecht and A Salman ldquoUsingthe ring neighborhood topology with self-adaptive differentialevolutionrdquo in Advances in Natural Computation pp 976ndash979Springer Berlin Germany 2006
[25] X Li ldquoNiching without niching parameters particle swarmoptimization using a ring topologyrdquo IEEE Transactions onEvolutionary Computation vol 14 no 1 pp 150ndash169 2010
[26] I Maruta T H Kim D Song and T Sugie ldquoSynthesis of fixed-structure robust controllers using a constrained particle swarmoptimizer with cyclic neighborhood topologyrdquo Expert Systemswith Applications vol 40 no 9 pp 3595ndash3605 2013
[27] J Kennedy and R Mendes ldquoPopulation structure and particleswarm performancerdquo in Proceedings of the International Con-ference on Evolutionary Computation pp 1671ndash1676 HonoluluHawaii USA 2002
[28] J Brest S Greiner B Boskovic M Mernik and V ZumerldquoSelf-adapting control parameters in differential evolution acomparative study on numerical benchmark problemsrdquo IEEETransactions on Evolutionary Computation vol 10 no 6 pp646ndash657 2006
The Scientific World Journal 17
[29] A K Qin V L Huang and P N Suganthan ldquoDifferential evo-lution algorithm with strategy adaptation for global numericaloptimizationrdquo IEEE Transactions on Evolutionary Computationvol 13 no 2 pp 398ndash417 2009
[30] R Mendes J Kennedy and J Neves ldquoThe fully informedparticle swarm simpler maybe betterrdquo IEEE Transactions onEvolutionary Computation vol 8 no 3 pp 204ndash210 2004
[31] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash62 2000
[32] J Liu Advanced PID Control and MATLAB Simulation Elec-tronic Industry Press 2003
[33] J Zhang J Zhuang H Du and S Wang ldquoSelf-organizinggenetic algorithm based tuning of PID controllersrdquo InformationSciences vol 179 no 7 pp 1007ndash1017 2009
[34] R Haber-Haber R Haber M Schmittdiel and R M delToro ldquoA classic solution for the control of a high-performancedrilling processrdquo International Journal of Machine Tools andManufacture vol 47 no 15 pp 2290ndash2297 2007
the outputs The variables consist of neural network weightsand biases Suppose a three-layer forward neural networkarchitecture with 119872 input units 119873 hidden units and 119870
output units and the number of the variables is shown asfollows
For neural network training the aim is to find a set ofweights with the smallest error measure Here the objectivefunction is the mean sum of squared errors (MSE) over alltraining patterns which is shown as follows
MSE = 1
119876 lowast 119870
119876
sum
119894=1
119870
sum
119895
1
2(119889119894119895minus 119910119894119895)2
(14)
where 119876 is the number of training data set 119870 is the numberof output units 119889
119894119895is desired output and 119910
119894119895is output inferred
from neural networkIn this example a three-layer feed-forward ANN with
one input unit five hidden units and one output unit isconstructed tomodel the curve of a nonlinear functionwhichis described by the following equation [31]
119910 = sin (2119909) exp (minus2119909) (15)
In this case activation function used in the output layer isthe sigma function and activation function used in the outputlayer is linear The number (dimension) of the variables is16 for BBTLBO-based ANN In order to train the ANN
200 pairs of data are chosen from the real model For eachalgorithm 50 runs are performed The other parametersare the same as those of the previous investigations Theresults are shown in Table 6 in terms of the mean MSEand the standard deviation obtained in the 50 independentruns for three methods Figure 6 shows the predicted timeseries for training and test using different algorithms It canconclude that the approximation achieved by BBTLBO hasgood performance
62 Tuning of PID Controller The continuous form of adiscrete-type PID controller with a small sampling period Δ119905is described as follows [32]
119890 [119894] sdot Δ119905 + 119870119863 sdot119890 [119896] minus 119890 [119896 minus 1]
Δ119905
(16)
where 119906[119896] is the controlled output respectively 119890[119896] = 119903[119896]minus119910[119896] is the error signal 119903[119896] and 119910[119896] are the reference signaland the system output and 119870
119875 119870119868 and 119870
119863represent the
proportional integral and derivate gains respectivelyFor an unknown plant the goal of this problem is to
minimize the integral absolute error (IAE) which is given asfollow [32 33]
and 119910(119905) is the output of the controlled objectiveIn our simulation the formulas for the plant examined
are given as follows [34]
119866 (119904) =1958
1199043+ 1789119904
2+ 1033119904 + 1908
(19)
The system sampling time is Δ119905 = 005 second and thecontrol value 119906 is limited in the range of [minus10 10] Other rel-evant system variables are 119870
119875isin [0 1] 119870
119868isin [0 1] and 119870
119863isin
[0 1] The weight coefficients of the cost function are set as1205961= 0999 120596
2= 0001 120596
3= 2 and 120596 = 100 in this example
In the simulations the step response of PID controlsystem tuned by the proposed BBTLBO is compared withthat tuned by the standard genetic algorithm (GA) and thestandard PSO (PSO) The population sizes of GA PSO andBBTLBO are 50 and the corresponding maximum numbersof iterations are 50 50 and 50 respectively In addition thecrossover rate is set as 090 and the mutation rate is 010 forGA
The optimal parameters and the corresponding perfor-mance values of the PID controllers are listed in Table 7 andthe corresponding performance curves and step responsescurves are given in Figures 7 and 8 It can be seen fromFigure 7 and Table 7 that the PID controller tuned byBBTLBO has the minimum cost function and CPU time
0 05 1 15 2 25 3 35 4 45 50
02
04
06
08
1
Time (s)
Step response curves using different methods
GAPSOBBTLBO
r in
you
t
Figure 8 Step response curves using different methods
Although PID controllers tuned by PSO have a smaller peaktime and rise time their maximum overshoots are muchlarger than the overshoot tuned by BBTLBO It concludes thatthe PID controller tuned by the BBTLBO could perform thebest control performance in the simulations
7 Conclusion
In this paper TLBO has been extended to BBTLBO whichuses the hybridization of the learning strategy in the stan-dard TLBO and Gaussian sampling learning to balance theexploration and the exploitation in teacher phase and uses amodified mutation operation so as to eliminate the duplicatelearners in learner phase The proposed BBTLBO algorithmis utilized to optimize 20 benchmark functions and tworeal-world optimization problems From the analysis andexperiments the BBTLBO algorithm significantly improvesthe performance of the original TLBO although it needs tospend more CPU time than the standard TLBO algorithmin each generation From the results compared with otheralgorithms on the 20 chosen test problems it can be observedthat the BBTLBO algorithm has good performance by usingneighborhood search more effectively to generate betterquality solutions although the BBTLBO algorithm does notalways have the best performance in all experiments cases ofthis paper It can be also observed that the BBTLBOalgorithm
16 The Scientific World Journal
gives the best performance on two real-world optimizationproblems compared with other algorithms in the paper
Further work includes research into neighborhood searchbased on different topological structures Moreover thealgorithm may be further applied to constrained dynamicand noisy single-objective and multiobjective optimizationdomain It is expected that BBTLBOwill be used tomore real-world optimization problems
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was partially supported by the National NaturalScience Foundation of China (61100173 61100009 61272283and 61304082) This work is partially supported by theNatural Science Foundation of Anhui Province China (Grantno 1308085MF82) and the Doctoral Innovation Foundationof Xirsquoan University of Technology (207-002J1305)
References
[1] D E Goldberg Genetic Algorithms in Search Optimization andMachine Learning Addison-Wesley Reading Mass USA 1989
[2] L C Jiao and L Wang ldquoA novel genetic algorithm based onimmunityrdquo IEEE Transactions on SystemsMan and CyberneticsA Systems and Humans vol 30 no 5 pp 552ndash561 2000
[3] R Storn and K Price ldquoDifferential evolution a simple andefficient Heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997
[4] M Dorigo and T Stutzle Ant Colony Optimization MIT Press2004
[5] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 December 1995
[6] D Karaboga and B Basturk ldquoOn the performance of artificialbee colony (ABC) algorithmrdquo Applied Soft Computing Journalvol 8 no 1 pp 687ndash697 2008
[7] D Simon ldquoBiogeography-based optimizationrdquo IEEE Transac-tions on Evolutionary Computation vol 12 no 6 pp 702ndash7132008
[8] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization a novelmethod for constrainedmechanicaldesign optimization problemsrdquo CAD Computer Aided Designvol 43 no 3 pp 303ndash315 2011
[9] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization an optimization method for continuousnon-linear large scale problemsrdquo Information Sciences vol 183no 1 pp 1ndash15 2012
[10] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization algorithm for unconstrained and con-strained real-parameter optimization problemsrdquo EngineeringOptimization vol 44 no 12 pp 1447ndash1462 2011
[11] V Togan ldquoDesign of planar steel frames using teaching-learningbased optimizationrdquo Engineering Structures vol 34 pp 225ndash232 2012
[12] R V Rao and V Patel ldquoAn elitist teaching-learning-based opti-mization algorithm for solving complex constrained optimiza-tion problemsrdquo International Journal of Industrial EngineeringComputations vol 3 pp 535ndash560 2012
[13] S O Degertekin and M S Hayalioglu ldquoSizing truss structuresusing teaching-learning-based optimizationrdquo Computers andStructures vol 119 pp 177ndash188 2013
[14] R V Rao and V Patel ldquoAn improved teaching-learning-basedoptimization algorithm for solving unconstrained optimizationproblemsrdquo Scientia Iranica vol 20 no 3 pp 710ndash720 2013
[15] R V Rao and V Patel ldquoMulti-objective optimization of com-bined Brayton and inverse Brayton cycles using advancedoptimization algorithmsrdquo Engineering Optimization vol 44 no8 pp 965ndash983 2011
[16] T Niknam F Golestaneh and M S Sadeghi ldquoTheta-multi-objective teaching-learning-based optimization for dynamiceconomic emission dispatchrdquo IEEE Systems Journal vol 6 no2 pp 341ndash352 2012
[17] R V Rao and V Patel ldquoMulti-objective optimization of heatexchangers using a modified teaching-learning-based opti-mization algorithmrdquo Applied Mathematical Modelling vol 37no 3 pp 1147ndash1162 2013
[18] M Clerc and J Kennedy ldquoThe particle swarm-explosion sta-bility and convergence in a multidimensional complex spacerdquoIEEE Transactions on Evolutionary Computation vol 6 no 1pp 58ndash73 2002
[19] F van den Bergh and A P Engelbrecht ldquoA study of particleswarm optimization particle trajectoriesrdquo Information Sciencesvol 176 no 8 pp 937ndash971 2006
[20] J Kennedy ldquoBare bones particle swarmsrdquo in Proceedings of theSwarm Intelligence Symposium (SIS rsquo03) pp 80ndash87 2003
[21] M G H Omran A P Engelbrecht and A Salman ldquoBarebones differential evolutionrdquo European Journal of OperationalResearch vol 196 no 1 pp 128ndash139 2009
[22] H Wang S Rahnamayan H Sun and M G H OmranldquoGaussian bare-bones differential evolutionrdquo IEEE Transactionson Cybernetics vol 43 no 2 pp 634ndash647 2013
[23] X H Hu and R Eberhart ldquoMultiobjective optimization usingdynamic neighborhood particle swarm optimizationrdquo in Pro-ceedings of the Congress on Evolutionary Computation pp 677ndash1681 2002
[24] M G Omran A P Engelbrecht and A Salman ldquoUsingthe ring neighborhood topology with self-adaptive differentialevolutionrdquo in Advances in Natural Computation pp 976ndash979Springer Berlin Germany 2006
[25] X Li ldquoNiching without niching parameters particle swarmoptimization using a ring topologyrdquo IEEE Transactions onEvolutionary Computation vol 14 no 1 pp 150ndash169 2010
[26] I Maruta T H Kim D Song and T Sugie ldquoSynthesis of fixed-structure robust controllers using a constrained particle swarmoptimizer with cyclic neighborhood topologyrdquo Expert Systemswith Applications vol 40 no 9 pp 3595ndash3605 2013
[27] J Kennedy and R Mendes ldquoPopulation structure and particleswarm performancerdquo in Proceedings of the International Con-ference on Evolutionary Computation pp 1671ndash1676 HonoluluHawaii USA 2002
[28] J Brest S Greiner B Boskovic M Mernik and V ZumerldquoSelf-adapting control parameters in differential evolution acomparative study on numerical benchmark problemsrdquo IEEETransactions on Evolutionary Computation vol 10 no 6 pp646ndash657 2006
The Scientific World Journal 17
[29] A K Qin V L Huang and P N Suganthan ldquoDifferential evo-lution algorithm with strategy adaptation for global numericaloptimizationrdquo IEEE Transactions on Evolutionary Computationvol 13 no 2 pp 398ndash417 2009
[30] R Mendes J Kennedy and J Neves ldquoThe fully informedparticle swarm simpler maybe betterrdquo IEEE Transactions onEvolutionary Computation vol 8 no 3 pp 204ndash210 2004
[31] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash62 2000
[32] J Liu Advanced PID Control and MATLAB Simulation Elec-tronic Industry Press 2003
[33] J Zhang J Zhuang H Du and S Wang ldquoSelf-organizinggenetic algorithm based tuning of PID controllersrdquo InformationSciences vol 179 no 7 pp 1007ndash1017 2009
[34] R Haber-Haber R Haber M Schmittdiel and R M delToro ldquoA classic solution for the control of a high-performancedrilling processrdquo International Journal of Machine Tools andManufacture vol 47 no 15 pp 2290ndash2297 2007
and 119910(119905) is the output of the controlled objectiveIn our simulation the formulas for the plant examined
are given as follows [34]
119866 (119904) =1958
1199043+ 1789119904
2+ 1033119904 + 1908
(19)
The system sampling time is Δ119905 = 005 second and thecontrol value 119906 is limited in the range of [minus10 10] Other rel-evant system variables are 119870
119875isin [0 1] 119870
119868isin [0 1] and 119870
119863isin
[0 1] The weight coefficients of the cost function are set as1205961= 0999 120596
2= 0001 120596
3= 2 and 120596 = 100 in this example
In the simulations the step response of PID controlsystem tuned by the proposed BBTLBO is compared withthat tuned by the standard genetic algorithm (GA) and thestandard PSO (PSO) The population sizes of GA PSO andBBTLBO are 50 and the corresponding maximum numbersof iterations are 50 50 and 50 respectively In addition thecrossover rate is set as 090 and the mutation rate is 010 forGA
The optimal parameters and the corresponding perfor-mance values of the PID controllers are listed in Table 7 andthe corresponding performance curves and step responsescurves are given in Figures 7 and 8 It can be seen fromFigure 7 and Table 7 that the PID controller tuned byBBTLBO has the minimum cost function and CPU time
0 05 1 15 2 25 3 35 4 45 50
02
04
06
08
1
Time (s)
Step response curves using different methods
GAPSOBBTLBO
r in
you
t
Figure 8 Step response curves using different methods
Although PID controllers tuned by PSO have a smaller peaktime and rise time their maximum overshoots are muchlarger than the overshoot tuned by BBTLBO It concludes thatthe PID controller tuned by the BBTLBO could perform thebest control performance in the simulations
7 Conclusion
In this paper TLBO has been extended to BBTLBO whichuses the hybridization of the learning strategy in the stan-dard TLBO and Gaussian sampling learning to balance theexploration and the exploitation in teacher phase and uses amodified mutation operation so as to eliminate the duplicatelearners in learner phase The proposed BBTLBO algorithmis utilized to optimize 20 benchmark functions and tworeal-world optimization problems From the analysis andexperiments the BBTLBO algorithm significantly improvesthe performance of the original TLBO although it needs tospend more CPU time than the standard TLBO algorithmin each generation From the results compared with otheralgorithms on the 20 chosen test problems it can be observedthat the BBTLBO algorithm has good performance by usingneighborhood search more effectively to generate betterquality solutions although the BBTLBO algorithm does notalways have the best performance in all experiments cases ofthis paper It can be also observed that the BBTLBOalgorithm
16 The Scientific World Journal
gives the best performance on two real-world optimizationproblems compared with other algorithms in the paper
Further work includes research into neighborhood searchbased on different topological structures Moreover thealgorithm may be further applied to constrained dynamicand noisy single-objective and multiobjective optimizationdomain It is expected that BBTLBOwill be used tomore real-world optimization problems
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was partially supported by the National NaturalScience Foundation of China (61100173 61100009 61272283and 61304082) This work is partially supported by theNatural Science Foundation of Anhui Province China (Grantno 1308085MF82) and the Doctoral Innovation Foundationof Xirsquoan University of Technology (207-002J1305)
References
[1] D E Goldberg Genetic Algorithms in Search Optimization andMachine Learning Addison-Wesley Reading Mass USA 1989
[2] L C Jiao and L Wang ldquoA novel genetic algorithm based onimmunityrdquo IEEE Transactions on SystemsMan and CyberneticsA Systems and Humans vol 30 no 5 pp 552ndash561 2000
[3] R Storn and K Price ldquoDifferential evolution a simple andefficient Heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997
[4] M Dorigo and T Stutzle Ant Colony Optimization MIT Press2004
[5] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 December 1995
[6] D Karaboga and B Basturk ldquoOn the performance of artificialbee colony (ABC) algorithmrdquo Applied Soft Computing Journalvol 8 no 1 pp 687ndash697 2008
[7] D Simon ldquoBiogeography-based optimizationrdquo IEEE Transac-tions on Evolutionary Computation vol 12 no 6 pp 702ndash7132008
[8] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization a novelmethod for constrainedmechanicaldesign optimization problemsrdquo CAD Computer Aided Designvol 43 no 3 pp 303ndash315 2011
[9] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization an optimization method for continuousnon-linear large scale problemsrdquo Information Sciences vol 183no 1 pp 1ndash15 2012
[10] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization algorithm for unconstrained and con-strained real-parameter optimization problemsrdquo EngineeringOptimization vol 44 no 12 pp 1447ndash1462 2011
[11] V Togan ldquoDesign of planar steel frames using teaching-learningbased optimizationrdquo Engineering Structures vol 34 pp 225ndash232 2012
[12] R V Rao and V Patel ldquoAn elitist teaching-learning-based opti-mization algorithm for solving complex constrained optimiza-tion problemsrdquo International Journal of Industrial EngineeringComputations vol 3 pp 535ndash560 2012
[13] S O Degertekin and M S Hayalioglu ldquoSizing truss structuresusing teaching-learning-based optimizationrdquo Computers andStructures vol 119 pp 177ndash188 2013
[14] R V Rao and V Patel ldquoAn improved teaching-learning-basedoptimization algorithm for solving unconstrained optimizationproblemsrdquo Scientia Iranica vol 20 no 3 pp 710ndash720 2013
[15] R V Rao and V Patel ldquoMulti-objective optimization of com-bined Brayton and inverse Brayton cycles using advancedoptimization algorithmsrdquo Engineering Optimization vol 44 no8 pp 965ndash983 2011
[16] T Niknam F Golestaneh and M S Sadeghi ldquoTheta-multi-objective teaching-learning-based optimization for dynamiceconomic emission dispatchrdquo IEEE Systems Journal vol 6 no2 pp 341ndash352 2012
[17] R V Rao and V Patel ldquoMulti-objective optimization of heatexchangers using a modified teaching-learning-based opti-mization algorithmrdquo Applied Mathematical Modelling vol 37no 3 pp 1147ndash1162 2013
[18] M Clerc and J Kennedy ldquoThe particle swarm-explosion sta-bility and convergence in a multidimensional complex spacerdquoIEEE Transactions on Evolutionary Computation vol 6 no 1pp 58ndash73 2002
[19] F van den Bergh and A P Engelbrecht ldquoA study of particleswarm optimization particle trajectoriesrdquo Information Sciencesvol 176 no 8 pp 937ndash971 2006
[20] J Kennedy ldquoBare bones particle swarmsrdquo in Proceedings of theSwarm Intelligence Symposium (SIS rsquo03) pp 80ndash87 2003
[21] M G H Omran A P Engelbrecht and A Salman ldquoBarebones differential evolutionrdquo European Journal of OperationalResearch vol 196 no 1 pp 128ndash139 2009
[22] H Wang S Rahnamayan H Sun and M G H OmranldquoGaussian bare-bones differential evolutionrdquo IEEE Transactionson Cybernetics vol 43 no 2 pp 634ndash647 2013
[23] X H Hu and R Eberhart ldquoMultiobjective optimization usingdynamic neighborhood particle swarm optimizationrdquo in Pro-ceedings of the Congress on Evolutionary Computation pp 677ndash1681 2002
[24] M G Omran A P Engelbrecht and A Salman ldquoUsingthe ring neighborhood topology with self-adaptive differentialevolutionrdquo in Advances in Natural Computation pp 976ndash979Springer Berlin Germany 2006
[25] X Li ldquoNiching without niching parameters particle swarmoptimization using a ring topologyrdquo IEEE Transactions onEvolutionary Computation vol 14 no 1 pp 150ndash169 2010
[26] I Maruta T H Kim D Song and T Sugie ldquoSynthesis of fixed-structure robust controllers using a constrained particle swarmoptimizer with cyclic neighborhood topologyrdquo Expert Systemswith Applications vol 40 no 9 pp 3595ndash3605 2013
[27] J Kennedy and R Mendes ldquoPopulation structure and particleswarm performancerdquo in Proceedings of the International Con-ference on Evolutionary Computation pp 1671ndash1676 HonoluluHawaii USA 2002
[28] J Brest S Greiner B Boskovic M Mernik and V ZumerldquoSelf-adapting control parameters in differential evolution acomparative study on numerical benchmark problemsrdquo IEEETransactions on Evolutionary Computation vol 10 no 6 pp646ndash657 2006
The Scientific World Journal 17
[29] A K Qin V L Huang and P N Suganthan ldquoDifferential evo-lution algorithm with strategy adaptation for global numericaloptimizationrdquo IEEE Transactions on Evolutionary Computationvol 13 no 2 pp 398ndash417 2009
[30] R Mendes J Kennedy and J Neves ldquoThe fully informedparticle swarm simpler maybe betterrdquo IEEE Transactions onEvolutionary Computation vol 8 no 3 pp 204ndash210 2004
[31] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash62 2000
[32] J Liu Advanced PID Control and MATLAB Simulation Elec-tronic Industry Press 2003
[33] J Zhang J Zhuang H Du and S Wang ldquoSelf-organizinggenetic algorithm based tuning of PID controllersrdquo InformationSciences vol 179 no 7 pp 1007ndash1017 2009
[34] R Haber-Haber R Haber M Schmittdiel and R M delToro ldquoA classic solution for the control of a high-performancedrilling processrdquo International Journal of Machine Tools andManufacture vol 47 no 15 pp 2290ndash2297 2007
and 119910(119905) is the output of the controlled objectiveIn our simulation the formulas for the plant examined
are given as follows [34]
119866 (119904) =1958
1199043+ 1789119904
2+ 1033119904 + 1908
(19)
The system sampling time is Δ119905 = 005 second and thecontrol value 119906 is limited in the range of [minus10 10] Other rel-evant system variables are 119870
119875isin [0 1] 119870
119868isin [0 1] and 119870
119863isin
[0 1] The weight coefficients of the cost function are set as1205961= 0999 120596
2= 0001 120596
3= 2 and 120596 = 100 in this example
In the simulations the step response of PID controlsystem tuned by the proposed BBTLBO is compared withthat tuned by the standard genetic algorithm (GA) and thestandard PSO (PSO) The population sizes of GA PSO andBBTLBO are 50 and the corresponding maximum numbersof iterations are 50 50 and 50 respectively In addition thecrossover rate is set as 090 and the mutation rate is 010 forGA
The optimal parameters and the corresponding perfor-mance values of the PID controllers are listed in Table 7 andthe corresponding performance curves and step responsescurves are given in Figures 7 and 8 It can be seen fromFigure 7 and Table 7 that the PID controller tuned byBBTLBO has the minimum cost function and CPU time
0 05 1 15 2 25 3 35 4 45 50
02
04
06
08
1
Time (s)
Step response curves using different methods
GAPSOBBTLBO
r in
you
t
Figure 8 Step response curves using different methods
Although PID controllers tuned by PSO have a smaller peaktime and rise time their maximum overshoots are muchlarger than the overshoot tuned by BBTLBO It concludes thatthe PID controller tuned by the BBTLBO could perform thebest control performance in the simulations
7 Conclusion
In this paper TLBO has been extended to BBTLBO whichuses the hybridization of the learning strategy in the stan-dard TLBO and Gaussian sampling learning to balance theexploration and the exploitation in teacher phase and uses amodified mutation operation so as to eliminate the duplicatelearners in learner phase The proposed BBTLBO algorithmis utilized to optimize 20 benchmark functions and tworeal-world optimization problems From the analysis andexperiments the BBTLBO algorithm significantly improvesthe performance of the original TLBO although it needs tospend more CPU time than the standard TLBO algorithmin each generation From the results compared with otheralgorithms on the 20 chosen test problems it can be observedthat the BBTLBO algorithm has good performance by usingneighborhood search more effectively to generate betterquality solutions although the BBTLBO algorithm does notalways have the best performance in all experiments cases ofthis paper It can be also observed that the BBTLBOalgorithm
16 The Scientific World Journal
gives the best performance on two real-world optimizationproblems compared with other algorithms in the paper
Further work includes research into neighborhood searchbased on different topological structures Moreover thealgorithm may be further applied to constrained dynamicand noisy single-objective and multiobjective optimizationdomain It is expected that BBTLBOwill be used tomore real-world optimization problems
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was partially supported by the National NaturalScience Foundation of China (61100173 61100009 61272283and 61304082) This work is partially supported by theNatural Science Foundation of Anhui Province China (Grantno 1308085MF82) and the Doctoral Innovation Foundationof Xirsquoan University of Technology (207-002J1305)
References
[1] D E Goldberg Genetic Algorithms in Search Optimization andMachine Learning Addison-Wesley Reading Mass USA 1989
[2] L C Jiao and L Wang ldquoA novel genetic algorithm based onimmunityrdquo IEEE Transactions on SystemsMan and CyberneticsA Systems and Humans vol 30 no 5 pp 552ndash561 2000
[3] R Storn and K Price ldquoDifferential evolution a simple andefficient Heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997
[4] M Dorigo and T Stutzle Ant Colony Optimization MIT Press2004
[5] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 December 1995
[6] D Karaboga and B Basturk ldquoOn the performance of artificialbee colony (ABC) algorithmrdquo Applied Soft Computing Journalvol 8 no 1 pp 687ndash697 2008
[7] D Simon ldquoBiogeography-based optimizationrdquo IEEE Transac-tions on Evolutionary Computation vol 12 no 6 pp 702ndash7132008
[8] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization a novelmethod for constrainedmechanicaldesign optimization problemsrdquo CAD Computer Aided Designvol 43 no 3 pp 303ndash315 2011
[9] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization an optimization method for continuousnon-linear large scale problemsrdquo Information Sciences vol 183no 1 pp 1ndash15 2012
[10] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization algorithm for unconstrained and con-strained real-parameter optimization problemsrdquo EngineeringOptimization vol 44 no 12 pp 1447ndash1462 2011
[11] V Togan ldquoDesign of planar steel frames using teaching-learningbased optimizationrdquo Engineering Structures vol 34 pp 225ndash232 2012
[12] R V Rao and V Patel ldquoAn elitist teaching-learning-based opti-mization algorithm for solving complex constrained optimiza-tion problemsrdquo International Journal of Industrial EngineeringComputations vol 3 pp 535ndash560 2012
[13] S O Degertekin and M S Hayalioglu ldquoSizing truss structuresusing teaching-learning-based optimizationrdquo Computers andStructures vol 119 pp 177ndash188 2013
[14] R V Rao and V Patel ldquoAn improved teaching-learning-basedoptimization algorithm for solving unconstrained optimizationproblemsrdquo Scientia Iranica vol 20 no 3 pp 710ndash720 2013
[15] R V Rao and V Patel ldquoMulti-objective optimization of com-bined Brayton and inverse Brayton cycles using advancedoptimization algorithmsrdquo Engineering Optimization vol 44 no8 pp 965ndash983 2011
[16] T Niknam F Golestaneh and M S Sadeghi ldquoTheta-multi-objective teaching-learning-based optimization for dynamiceconomic emission dispatchrdquo IEEE Systems Journal vol 6 no2 pp 341ndash352 2012
[17] R V Rao and V Patel ldquoMulti-objective optimization of heatexchangers using a modified teaching-learning-based opti-mization algorithmrdquo Applied Mathematical Modelling vol 37no 3 pp 1147ndash1162 2013
[18] M Clerc and J Kennedy ldquoThe particle swarm-explosion sta-bility and convergence in a multidimensional complex spacerdquoIEEE Transactions on Evolutionary Computation vol 6 no 1pp 58ndash73 2002
[19] F van den Bergh and A P Engelbrecht ldquoA study of particleswarm optimization particle trajectoriesrdquo Information Sciencesvol 176 no 8 pp 937ndash971 2006
[20] J Kennedy ldquoBare bones particle swarmsrdquo in Proceedings of theSwarm Intelligence Symposium (SIS rsquo03) pp 80ndash87 2003
[21] M G H Omran A P Engelbrecht and A Salman ldquoBarebones differential evolutionrdquo European Journal of OperationalResearch vol 196 no 1 pp 128ndash139 2009
[22] H Wang S Rahnamayan H Sun and M G H OmranldquoGaussian bare-bones differential evolutionrdquo IEEE Transactionson Cybernetics vol 43 no 2 pp 634ndash647 2013
[23] X H Hu and R Eberhart ldquoMultiobjective optimization usingdynamic neighborhood particle swarm optimizationrdquo in Pro-ceedings of the Congress on Evolutionary Computation pp 677ndash1681 2002
[24] M G Omran A P Engelbrecht and A Salman ldquoUsingthe ring neighborhood topology with self-adaptive differentialevolutionrdquo in Advances in Natural Computation pp 976ndash979Springer Berlin Germany 2006
[25] X Li ldquoNiching without niching parameters particle swarmoptimization using a ring topologyrdquo IEEE Transactions onEvolutionary Computation vol 14 no 1 pp 150ndash169 2010
[26] I Maruta T H Kim D Song and T Sugie ldquoSynthesis of fixed-structure robust controllers using a constrained particle swarmoptimizer with cyclic neighborhood topologyrdquo Expert Systemswith Applications vol 40 no 9 pp 3595ndash3605 2013
[27] J Kennedy and R Mendes ldquoPopulation structure and particleswarm performancerdquo in Proceedings of the International Con-ference on Evolutionary Computation pp 1671ndash1676 HonoluluHawaii USA 2002
[28] J Brest S Greiner B Boskovic M Mernik and V ZumerldquoSelf-adapting control parameters in differential evolution acomparative study on numerical benchmark problemsrdquo IEEETransactions on Evolutionary Computation vol 10 no 6 pp646ndash657 2006
The Scientific World Journal 17
[29] A K Qin V L Huang and P N Suganthan ldquoDifferential evo-lution algorithm with strategy adaptation for global numericaloptimizationrdquo IEEE Transactions on Evolutionary Computationvol 13 no 2 pp 398ndash417 2009
[30] R Mendes J Kennedy and J Neves ldquoThe fully informedparticle swarm simpler maybe betterrdquo IEEE Transactions onEvolutionary Computation vol 8 no 3 pp 204ndash210 2004
[31] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash62 2000
[32] J Liu Advanced PID Control and MATLAB Simulation Elec-tronic Industry Press 2003
[33] J Zhang J Zhuang H Du and S Wang ldquoSelf-organizinggenetic algorithm based tuning of PID controllersrdquo InformationSciences vol 179 no 7 pp 1007ndash1017 2009
[34] R Haber-Haber R Haber M Schmittdiel and R M delToro ldquoA classic solution for the control of a high-performancedrilling processrdquo International Journal of Machine Tools andManufacture vol 47 no 15 pp 2290ndash2297 2007
and 119910(119905) is the output of the controlled objectiveIn our simulation the formulas for the plant examined
are given as follows [34]
119866 (119904) =1958
1199043+ 1789119904
2+ 1033119904 + 1908
(19)
The system sampling time is Δ119905 = 005 second and thecontrol value 119906 is limited in the range of [minus10 10] Other rel-evant system variables are 119870
119875isin [0 1] 119870
119868isin [0 1] and 119870
119863isin
[0 1] The weight coefficients of the cost function are set as1205961= 0999 120596
2= 0001 120596
3= 2 and 120596 = 100 in this example
In the simulations the step response of PID controlsystem tuned by the proposed BBTLBO is compared withthat tuned by the standard genetic algorithm (GA) and thestandard PSO (PSO) The population sizes of GA PSO andBBTLBO are 50 and the corresponding maximum numbersof iterations are 50 50 and 50 respectively In addition thecrossover rate is set as 090 and the mutation rate is 010 forGA
The optimal parameters and the corresponding perfor-mance values of the PID controllers are listed in Table 7 andthe corresponding performance curves and step responsescurves are given in Figures 7 and 8 It can be seen fromFigure 7 and Table 7 that the PID controller tuned byBBTLBO has the minimum cost function and CPU time
0 05 1 15 2 25 3 35 4 45 50
02
04
06
08
1
Time (s)
Step response curves using different methods
GAPSOBBTLBO
r in
you
t
Figure 8 Step response curves using different methods
Although PID controllers tuned by PSO have a smaller peaktime and rise time their maximum overshoots are muchlarger than the overshoot tuned by BBTLBO It concludes thatthe PID controller tuned by the BBTLBO could perform thebest control performance in the simulations
7 Conclusion
In this paper TLBO has been extended to BBTLBO whichuses the hybridization of the learning strategy in the stan-dard TLBO and Gaussian sampling learning to balance theexploration and the exploitation in teacher phase and uses amodified mutation operation so as to eliminate the duplicatelearners in learner phase The proposed BBTLBO algorithmis utilized to optimize 20 benchmark functions and tworeal-world optimization problems From the analysis andexperiments the BBTLBO algorithm significantly improvesthe performance of the original TLBO although it needs tospend more CPU time than the standard TLBO algorithmin each generation From the results compared with otheralgorithms on the 20 chosen test problems it can be observedthat the BBTLBO algorithm has good performance by usingneighborhood search more effectively to generate betterquality solutions although the BBTLBO algorithm does notalways have the best performance in all experiments cases ofthis paper It can be also observed that the BBTLBOalgorithm
16 The Scientific World Journal
gives the best performance on two real-world optimizationproblems compared with other algorithms in the paper
Further work includes research into neighborhood searchbased on different topological structures Moreover thealgorithm may be further applied to constrained dynamicand noisy single-objective and multiobjective optimizationdomain It is expected that BBTLBOwill be used tomore real-world optimization problems
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was partially supported by the National NaturalScience Foundation of China (61100173 61100009 61272283and 61304082) This work is partially supported by theNatural Science Foundation of Anhui Province China (Grantno 1308085MF82) and the Doctoral Innovation Foundationof Xirsquoan University of Technology (207-002J1305)
References
[1] D E Goldberg Genetic Algorithms in Search Optimization andMachine Learning Addison-Wesley Reading Mass USA 1989
[2] L C Jiao and L Wang ldquoA novel genetic algorithm based onimmunityrdquo IEEE Transactions on SystemsMan and CyberneticsA Systems and Humans vol 30 no 5 pp 552ndash561 2000
[3] R Storn and K Price ldquoDifferential evolution a simple andefficient Heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997
[4] M Dorigo and T Stutzle Ant Colony Optimization MIT Press2004
[5] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 December 1995
[6] D Karaboga and B Basturk ldquoOn the performance of artificialbee colony (ABC) algorithmrdquo Applied Soft Computing Journalvol 8 no 1 pp 687ndash697 2008
[7] D Simon ldquoBiogeography-based optimizationrdquo IEEE Transac-tions on Evolutionary Computation vol 12 no 6 pp 702ndash7132008
[8] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization a novelmethod for constrainedmechanicaldesign optimization problemsrdquo CAD Computer Aided Designvol 43 no 3 pp 303ndash315 2011
[9] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization an optimization method for continuousnon-linear large scale problemsrdquo Information Sciences vol 183no 1 pp 1ndash15 2012
[10] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization algorithm for unconstrained and con-strained real-parameter optimization problemsrdquo EngineeringOptimization vol 44 no 12 pp 1447ndash1462 2011
[11] V Togan ldquoDesign of planar steel frames using teaching-learningbased optimizationrdquo Engineering Structures vol 34 pp 225ndash232 2012
[12] R V Rao and V Patel ldquoAn elitist teaching-learning-based opti-mization algorithm for solving complex constrained optimiza-tion problemsrdquo International Journal of Industrial EngineeringComputations vol 3 pp 535ndash560 2012
[13] S O Degertekin and M S Hayalioglu ldquoSizing truss structuresusing teaching-learning-based optimizationrdquo Computers andStructures vol 119 pp 177ndash188 2013
[14] R V Rao and V Patel ldquoAn improved teaching-learning-basedoptimization algorithm for solving unconstrained optimizationproblemsrdquo Scientia Iranica vol 20 no 3 pp 710ndash720 2013
[15] R V Rao and V Patel ldquoMulti-objective optimization of com-bined Brayton and inverse Brayton cycles using advancedoptimization algorithmsrdquo Engineering Optimization vol 44 no8 pp 965ndash983 2011
[16] T Niknam F Golestaneh and M S Sadeghi ldquoTheta-multi-objective teaching-learning-based optimization for dynamiceconomic emission dispatchrdquo IEEE Systems Journal vol 6 no2 pp 341ndash352 2012
[17] R V Rao and V Patel ldquoMulti-objective optimization of heatexchangers using a modified teaching-learning-based opti-mization algorithmrdquo Applied Mathematical Modelling vol 37no 3 pp 1147ndash1162 2013
[18] M Clerc and J Kennedy ldquoThe particle swarm-explosion sta-bility and convergence in a multidimensional complex spacerdquoIEEE Transactions on Evolutionary Computation vol 6 no 1pp 58ndash73 2002
[19] F van den Bergh and A P Engelbrecht ldquoA study of particleswarm optimization particle trajectoriesrdquo Information Sciencesvol 176 no 8 pp 937ndash971 2006
[20] J Kennedy ldquoBare bones particle swarmsrdquo in Proceedings of theSwarm Intelligence Symposium (SIS rsquo03) pp 80ndash87 2003
[21] M G H Omran A P Engelbrecht and A Salman ldquoBarebones differential evolutionrdquo European Journal of OperationalResearch vol 196 no 1 pp 128ndash139 2009
[22] H Wang S Rahnamayan H Sun and M G H OmranldquoGaussian bare-bones differential evolutionrdquo IEEE Transactionson Cybernetics vol 43 no 2 pp 634ndash647 2013
[23] X H Hu and R Eberhart ldquoMultiobjective optimization usingdynamic neighborhood particle swarm optimizationrdquo in Pro-ceedings of the Congress on Evolutionary Computation pp 677ndash1681 2002
[24] M G Omran A P Engelbrecht and A Salman ldquoUsingthe ring neighborhood topology with self-adaptive differentialevolutionrdquo in Advances in Natural Computation pp 976ndash979Springer Berlin Germany 2006
[25] X Li ldquoNiching without niching parameters particle swarmoptimization using a ring topologyrdquo IEEE Transactions onEvolutionary Computation vol 14 no 1 pp 150ndash169 2010
[26] I Maruta T H Kim D Song and T Sugie ldquoSynthesis of fixed-structure robust controllers using a constrained particle swarmoptimizer with cyclic neighborhood topologyrdquo Expert Systemswith Applications vol 40 no 9 pp 3595ndash3605 2013
[27] J Kennedy and R Mendes ldquoPopulation structure and particleswarm performancerdquo in Proceedings of the International Con-ference on Evolutionary Computation pp 1671ndash1676 HonoluluHawaii USA 2002
[28] J Brest S Greiner B Boskovic M Mernik and V ZumerldquoSelf-adapting control parameters in differential evolution acomparative study on numerical benchmark problemsrdquo IEEETransactions on Evolutionary Computation vol 10 no 6 pp646ndash657 2006
The Scientific World Journal 17
[29] A K Qin V L Huang and P N Suganthan ldquoDifferential evo-lution algorithm with strategy adaptation for global numericaloptimizationrdquo IEEE Transactions on Evolutionary Computationvol 13 no 2 pp 398ndash417 2009
[30] R Mendes J Kennedy and J Neves ldquoThe fully informedparticle swarm simpler maybe betterrdquo IEEE Transactions onEvolutionary Computation vol 8 no 3 pp 204ndash210 2004
[31] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash62 2000
[32] J Liu Advanced PID Control and MATLAB Simulation Elec-tronic Industry Press 2003
[33] J Zhang J Zhuang H Du and S Wang ldquoSelf-organizinggenetic algorithm based tuning of PID controllersrdquo InformationSciences vol 179 no 7 pp 1007ndash1017 2009
[34] R Haber-Haber R Haber M Schmittdiel and R M delToro ldquoA classic solution for the control of a high-performancedrilling processrdquo International Journal of Machine Tools andManufacture vol 47 no 15 pp 2290ndash2297 2007
gives the best performance on two real-world optimizationproblems compared with other algorithms in the paper
Further work includes research into neighborhood searchbased on different topological structures Moreover thealgorithm may be further applied to constrained dynamicand noisy single-objective and multiobjective optimizationdomain It is expected that BBTLBOwill be used tomore real-world optimization problems
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was partially supported by the National NaturalScience Foundation of China (61100173 61100009 61272283and 61304082) This work is partially supported by theNatural Science Foundation of Anhui Province China (Grantno 1308085MF82) and the Doctoral Innovation Foundationof Xirsquoan University of Technology (207-002J1305)
References
[1] D E Goldberg Genetic Algorithms in Search Optimization andMachine Learning Addison-Wesley Reading Mass USA 1989
[2] L C Jiao and L Wang ldquoA novel genetic algorithm based onimmunityrdquo IEEE Transactions on SystemsMan and CyberneticsA Systems and Humans vol 30 no 5 pp 552ndash561 2000
[3] R Storn and K Price ldquoDifferential evolution a simple andefficient Heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997
[4] M Dorigo and T Stutzle Ant Colony Optimization MIT Press2004
[5] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 December 1995
[6] D Karaboga and B Basturk ldquoOn the performance of artificialbee colony (ABC) algorithmrdquo Applied Soft Computing Journalvol 8 no 1 pp 687ndash697 2008
[7] D Simon ldquoBiogeography-based optimizationrdquo IEEE Transac-tions on Evolutionary Computation vol 12 no 6 pp 702ndash7132008
[8] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization a novelmethod for constrainedmechanicaldesign optimization problemsrdquo CAD Computer Aided Designvol 43 no 3 pp 303ndash315 2011
[9] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization an optimization method for continuousnon-linear large scale problemsrdquo Information Sciences vol 183no 1 pp 1ndash15 2012
[10] R V Rao V J Savsani and D P Vakharia ldquoTeaching-learning-based optimization algorithm for unconstrained and con-strained real-parameter optimization problemsrdquo EngineeringOptimization vol 44 no 12 pp 1447ndash1462 2011
[11] V Togan ldquoDesign of planar steel frames using teaching-learningbased optimizationrdquo Engineering Structures vol 34 pp 225ndash232 2012
[12] R V Rao and V Patel ldquoAn elitist teaching-learning-based opti-mization algorithm for solving complex constrained optimiza-tion problemsrdquo International Journal of Industrial EngineeringComputations vol 3 pp 535ndash560 2012
[13] S O Degertekin and M S Hayalioglu ldquoSizing truss structuresusing teaching-learning-based optimizationrdquo Computers andStructures vol 119 pp 177ndash188 2013
[14] R V Rao and V Patel ldquoAn improved teaching-learning-basedoptimization algorithm for solving unconstrained optimizationproblemsrdquo Scientia Iranica vol 20 no 3 pp 710ndash720 2013
[15] R V Rao and V Patel ldquoMulti-objective optimization of com-bined Brayton and inverse Brayton cycles using advancedoptimization algorithmsrdquo Engineering Optimization vol 44 no8 pp 965ndash983 2011
[16] T Niknam F Golestaneh and M S Sadeghi ldquoTheta-multi-objective teaching-learning-based optimization for dynamiceconomic emission dispatchrdquo IEEE Systems Journal vol 6 no2 pp 341ndash352 2012
[17] R V Rao and V Patel ldquoMulti-objective optimization of heatexchangers using a modified teaching-learning-based opti-mization algorithmrdquo Applied Mathematical Modelling vol 37no 3 pp 1147ndash1162 2013
[18] M Clerc and J Kennedy ldquoThe particle swarm-explosion sta-bility and convergence in a multidimensional complex spacerdquoIEEE Transactions on Evolutionary Computation vol 6 no 1pp 58ndash73 2002
[19] F van den Bergh and A P Engelbrecht ldquoA study of particleswarm optimization particle trajectoriesrdquo Information Sciencesvol 176 no 8 pp 937ndash971 2006
[20] J Kennedy ldquoBare bones particle swarmsrdquo in Proceedings of theSwarm Intelligence Symposium (SIS rsquo03) pp 80ndash87 2003
[21] M G H Omran A P Engelbrecht and A Salman ldquoBarebones differential evolutionrdquo European Journal of OperationalResearch vol 196 no 1 pp 128ndash139 2009
[22] H Wang S Rahnamayan H Sun and M G H OmranldquoGaussian bare-bones differential evolutionrdquo IEEE Transactionson Cybernetics vol 43 no 2 pp 634ndash647 2013
[23] X H Hu and R Eberhart ldquoMultiobjective optimization usingdynamic neighborhood particle swarm optimizationrdquo in Pro-ceedings of the Congress on Evolutionary Computation pp 677ndash1681 2002
[24] M G Omran A P Engelbrecht and A Salman ldquoUsingthe ring neighborhood topology with self-adaptive differentialevolutionrdquo in Advances in Natural Computation pp 976ndash979Springer Berlin Germany 2006
[25] X Li ldquoNiching without niching parameters particle swarmoptimization using a ring topologyrdquo IEEE Transactions onEvolutionary Computation vol 14 no 1 pp 150ndash169 2010
[26] I Maruta T H Kim D Song and T Sugie ldquoSynthesis of fixed-structure robust controllers using a constrained particle swarmoptimizer with cyclic neighborhood topologyrdquo Expert Systemswith Applications vol 40 no 9 pp 3595ndash3605 2013
[27] J Kennedy and R Mendes ldquoPopulation structure and particleswarm performancerdquo in Proceedings of the International Con-ference on Evolutionary Computation pp 1671ndash1676 HonoluluHawaii USA 2002
[28] J Brest S Greiner B Boskovic M Mernik and V ZumerldquoSelf-adapting control parameters in differential evolution acomparative study on numerical benchmark problemsrdquo IEEETransactions on Evolutionary Computation vol 10 no 6 pp646ndash657 2006
The Scientific World Journal 17
[29] A K Qin V L Huang and P N Suganthan ldquoDifferential evo-lution algorithm with strategy adaptation for global numericaloptimizationrdquo IEEE Transactions on Evolutionary Computationvol 13 no 2 pp 398ndash417 2009
[30] R Mendes J Kennedy and J Neves ldquoThe fully informedparticle swarm simpler maybe betterrdquo IEEE Transactions onEvolutionary Computation vol 8 no 3 pp 204ndash210 2004
[31] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash62 2000
[32] J Liu Advanced PID Control and MATLAB Simulation Elec-tronic Industry Press 2003
[33] J Zhang J Zhuang H Du and S Wang ldquoSelf-organizinggenetic algorithm based tuning of PID controllersrdquo InformationSciences vol 179 no 7 pp 1007ndash1017 2009
[34] R Haber-Haber R Haber M Schmittdiel and R M delToro ldquoA classic solution for the control of a high-performancedrilling processrdquo International Journal of Machine Tools andManufacture vol 47 no 15 pp 2290ndash2297 2007
[29] A K Qin V L Huang and P N Suganthan ldquoDifferential evo-lution algorithm with strategy adaptation for global numericaloptimizationrdquo IEEE Transactions on Evolutionary Computationvol 13 no 2 pp 398ndash417 2009
[30] R Mendes J Kennedy and J Neves ldquoThe fully informedparticle swarm simpler maybe betterrdquo IEEE Transactions onEvolutionary Computation vol 8 no 3 pp 204ndash210 2004
[31] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash62 2000
[32] J Liu Advanced PID Control and MATLAB Simulation Elec-tronic Industry Press 2003
[33] J Zhang J Zhuang H Du and S Wang ldquoSelf-organizinggenetic algorithm based tuning of PID controllersrdquo InformationSciences vol 179 no 7 pp 1007ndash1017 2009
[34] R Haber-Haber R Haber M Schmittdiel and R M delToro ldquoA classic solution for the control of a high-performancedrilling processrdquo International Journal of Machine Tools andManufacture vol 47 no 15 pp 2290ndash2297 2007