Dynamic Multimodal Optimization: A Preliminary Study
Benchmark Functions and Brain Storm Optimization

Shi Cheng ([email protected])
School of Computer Science
March 29, 2019
A dynamic multimodal optimization (DMO) problem is defined as an optimization problem with multiple global optima whose characteristics change during the search process.
Benchmark problems play a fundamental role in verifying an algorithm's search ability. Two cases are used to illustrate the application scenarios of DMO.
A set of benchmark functions for DMO, containing eight problems, is proposed to show the difficulty of DMO. The properties of the proposed benchmark problems, such as the distribution of solutions, the scalability, and the number of global/local optima, are discussed.
The brain storm optimization (BSO) algorithm was used to solve the DMO problem. The effectiveness of the BSO algorithm was validated on a test function.
An optimization problem in R^n, or simply an optimization problem, is a mapping f : R^n → R^m, where R^n is termed the solution space (or parameter space, problem space, decision space) and R^m is termed the objective space.
The optimization problem is to find

\[
\arg\min_{\mathbf{x} \in S} f(\mathbf{x}) \quad \text{or} \quad \arg\max_{\mathbf{x} \in S} f(\mathbf{x})
\]
Many optimization algorithms are designed to locate a single global solution. Nevertheless, many real-world problems may have multiple satisfactory solutions.
A multimodal optimization problem is a function with multiple global/local optima.
For multimodal optimization, the objective is to locate multiple peaks/optima in a single run and to keep these found optima until the end of the run.
An algorithm for solving multimodal optimization problems should have two abilities: find as many global/local optima as possible, and preserve these found solutions until the end of the search.
Several targets are followed by multiple trackers. The assignment between targets and trackers is not fixed: the position of each tracker changes as the targets move.
1. Figure (a) shows the initial state of all targets and trackers.
2. The trackers surround all targets at time ta in Figure (b).
3. Figure (c) shows that all targets have moved to new positions at time tb.
4. Figure (d) shows that all targets are surrounded again at time tc.
To verify an algorithm's search ability, various benchmark functions have been proposed and compared for different types of optimization problems, such as multimodal multi-objective optimization problems and dynamic multi-objective optimization problems.
In multimodal optimization, for example, 20 multimodal functions are formulated as maximization problems in [1], and 15 scalable multimodal functions are formulated as minimization problems in [2].
For simplicity and clarity, only simple functions are considered here, and rotation is not used.
1 X. Li, A. Engelbrecht, and M. G. Epitropakis, “Benchmark functions for CEC’2013 special session and competition on niching methods for multimodal function optimization,” Evolutionary Computation and Machine Learning Group, RMIT University, Australia, Tech. Rep., 2013.
2 B. Qu, J. Liang, Z. Wang, Q. Chen, and P. Suganthan, “Novel benchmark functions for continuous multimodal optimization with comparative results,” Swarm and Evolutionary Computation, vol. 26, pp. 23–34, 2016.
The difference between x_i and x_{i,t} is a dynamically changing value shift_{i,t}, which constructs a mapping from x_i to x_{i,t} while keeping the two solutions in the same search space. An example of the shift value is as follows:
\[
\text{shift}_{i,t} =
\begin{cases}
-\dfrac{t}{T} \times R & \text{if } x_i \le U_{\text{bound}} \\[4pt]
-\dfrac{t}{T} \times R - R & \text{if } x_i > U_{\text{bound}}
\end{cases}
\tag{2}
\]
where t is the current iteration number, T is the total number of iterations, R = Ubound − Lbound is the search range, and Ubound and Lbound are the upper and lower bounds of the search range, respectively.
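The shift of Eq. (2) can be sketched in Python. The function name and the default bounds of [−5, 35] (taken from the f2 slide below) are illustrative assumptions; each benchmark function defines its own Lbound and Ubound.

```python
def shift(t, T, x_i, lbound=-5.0, ubound=35.0):
    """Dynamic shift for dimension i at iteration t, following Eq. (2)."""
    R = ubound - lbound          # search range R = Ubound - Lbound
    s = -(t / T) * R             # first case: shift grows with the iteration count
    if x_i > ubound:             # second case of Eq. (2)
        s -= R
    return s
```

At t = 0 the shift is zero, so the dynamic problem starts from the static one; halfway through the run (t = T/2), the optima have moved by half the search range.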
Shi Cheng ([email protected]) Dynamic Multimodal Optimization March 29, 2019 21 / 62
The function f2 is defined per dimension as:

\[
\begin{cases}
-80(2.5 - x_i) & \text{if } 0 \le x_i < 2.5 \\
-64(x_i - 2.5) & \text{if } 2.5 \le x_i < 5 \\
-64(7.5 - x_i) & \text{if } 5 \le x_i < 7.5 \\
-28(x_i - 7.5) & \text{if } 7.5 \le x_i < 12.5 \\
-28(17.5 - x_i) & \text{if } 12.5 \le x_i < 17.5 \\
-32(x_i - 17.5) & \text{if } 17.5 \le x_i < 22.5 \\
-32(27.5 - x_i) & \text{if } 22.5 \le x_i < 27.5 \\
-80(x_i - 27.5) & \text{if } 27.5 \le x_i \le 30 \\
-200 + (x_i - 30)^2 & \text{if } x_i > 30
\end{cases}
\]
where x_i = x_{i,t} + shift_{i,t}, and x_i ∈ [−5, 35].

The original global optima are x_i = 0 or 30 for i = 1, 2, ..., D; thus for f2(x_t), the global optima are x*_{i,t} = 0 − shift_{i,t} or 30 − shift_{i,t}, and f2(x*) = 0.
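A per-dimension evaluation of the f2 cases can be sketched as follows. Two assumptions are flagged in the comments: the slide does not show a case for x_i < 0, so a quadratic penalty mirroring the x_i > 30 case is assumed, and the aggregation over dimensions (a plain sum here) is not stated on the slide.

```python
def f2_dim(x):
    """Per-dimension value of f2, following the cases on the slide."""
    if x < 0:
        # Not shown on the slide; a penalty mirroring the
        # x > 30 case is assumed here for illustration only.
        return -200 + x ** 2
    elif x < 2.5:
        return -80 * (2.5 - x)
    elif x < 5:
        return -64 * (x - 2.5)
    elif x < 7.5:
        return -64 * (7.5 - x)
    elif x < 12.5:
        return -28 * (x - 7.5)
    elif x < 17.5:
        return -28 * (17.5 - x)
    elif x < 22.5:
        return -32 * (x - 17.5)
    elif x < 27.5:
        return -32 * (27.5 - x)
    elif x <= 30:
        return -80 * (x - 27.5)
    else:
        return -200 + (x - 30) ** 2

def f2(x_t, shift_t):
    """Evaluate f2 on a shifted solution; summing over dimensions
    is an assumption, the slide only gives the per-dimension cases."""
    return sum(f2_dim(xi + si) for xi, si in zip(x_t, shift_t))
```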
The function f3 is defined per dimension as:

\[
\begin{cases}
x_i^2 & \text{if } x_i < 0 \\
-\sin^6(5\pi x_i) & \text{if } 0 \le x_i \le 1 \\
(x_i - 1)^2 & \text{if } x_i > 1
\end{cases}
\]
where x_i = x_{i,t} + shift_{i,t}, and x_i ∈ [−0.5, 1.5].

The original global optima are x_i = 0.1, 0.3, 0.5, 0.7, or 0.9 for i = 1, 2, ..., D; thus for f3(x_t), the global optima are x*_{i,t} = 0.1 − shift_{i,t}, 0.3 − shift_{i,t}, 0.5 − shift_{i,t}, 0.7 − shift_{i,t}, or 0.9 − shift_{i,t}, and f3(x*) = 0.
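The f3 cases translate directly to code; the function name is hypothetical, and only the per-dimension value shown on the slide is implemented.

```python
import math

def f3_dim(x):
    """Per-dimension value of f3, following the cases on the slide."""
    if x < 0:
        return x ** 2                             # quadratic penalty below 0
    elif x <= 1:
        return -math.sin(5 * math.pi * x) ** 6    # five equal valleys in [0, 1]
    else:
        return (x - 1) ** 2                       # quadratic penalty above 1
```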
where x_i = x_{i,t} + shift_{i,t}, and x_i ∈ [−0.5, 1.5].

The original global optima are x_i = 0.07969939, 0.24665545, 0.45062669, 0.68142022, or 0.93389520 for i = 1, 2, ..., D; thus for f5(x_t), the global optima are x*_{i,t} = x_i − shift_{i,t}, and f5(x*) = 0.
where D must be an even number, x_i = x_{i,t} + shift_{i,t}, and x_i ∈ [−10, 10].

The original global optima are the pairs (x_i, x_{i+1}) = (0, 0), (−6.779310, −5.283185), (0.584428, −3.848126), or (−5.805118, 1.131312) for each of the D/2 pairs of dimensions; thus for f6(x_t), the global optima are x*_{i,t} = x_i − shift_{i,t}, and f6(x*) = 0.
\[
\begin{cases}
x_i - 0.089842 & \text{if } i \bmod 2 = 1 \\
x_i + 0.712656 & \text{if } i \bmod 2 = 0
\end{cases}
\]
where D must be an even number, x_i = x_{i,t} + shift_{i,t}, and x_i ∈ [−2, 2].

The original global optima are the pairs (x_i, x_{i+1}) = (0, 0) or (0.17968401, −1.4253124) for each of the D/2 pairs of dimensions; thus for f7(x_t), the global optima are x*_{i,t} = x_i − shift_{i,t}, and f7(x*) = 0. The difference between x_i and x_{i,t} is the dynamically changing value shift_{i,t}.
The function f8 is defined per dimension as:

\[
\begin{cases}
(x_i - 0.25)^2 - \sin(10 \log(0.25)) & \text{if } x_i < 0.25 \\
-\sin(10 \log(x_i)) & \text{if } 0.25 \le x_i \le 10 \\
(x_i - 10)^2 - \sin(10 \log(10)) & \text{if } x_i > 10
\end{cases}
\]
where x_i = x_{i,t} + shift_{i,t}, x_i ∈ [−0.5, 11], and f8(x*) = 0.

The original global optima are x_i = 0.333018, 0.624228, 1.170088, 2.193280, 4.111207, or 7.706277 for i = 1, 2, ..., D; thus for f8(x_t), the global optima are x*_{i,t} = x_i − shift_{i,t}, and f8(x*) = 0.
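A per-dimension sketch of f8, assuming the base expression of the first case mirrors the quadratic of the third case (the slide's printed form is ambiguous there); the function name is hypothetical.

```python
import math

def f8_dim(x):
    """Per-dimension value of f8: -sin(10*log(x)) inside [0.25, 10],
    with quadratic penalties outside that interval."""
    if x < 0.25:
        return (x - 0.25) ** 2 - math.sin(10 * math.log(0.25))
    elif x <= 10:
        return -math.sin(10 * math.log(x))
    else:
        return (x - 10) ** 2 - math.sin(10 * math.log(10))
```

The log spacing makes the valleys unevenly distributed: they crowd near 0.25 and spread out toward 10.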
The value and the number of found global optima can be used as performance criteria.
Two criteria are used to measure the number of found global optima. One is the total number of global optima found over all runs. The other indicator is the quality of the global optima found, i.e., the precision of the solutions, over multiple runs [1]. The peak ratio (PR) is calculated as in Eq. (4).
\[
\text{PR} = \frac{\sum_{\text{run}=1}^{NR} \text{NPF}_i}{\text{NKP} \times NR} = \frac{\text{NPF}}{\text{NKP} \times NR}
\tag{4}
\]
where NPF_i denotes the number of global optima found at the end of the i-th run, and NKP is the number of known global optima [1].
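Eq. (4) is a one-liner in code; the function name and argument layout are illustrative.

```python
def peak_ratio(found_per_run, n_known):
    """Peak ratio (PR) of Eq. (4): NPF / (NKP * NR), where NPF is the
    total number of global optima found over NR runs and NKP is the
    number of known global optima."""
    return sum(found_per_run) / (n_known * len(found_per_run))
```

For example, if 3 runs on a problem with 5 known optima find 5, 4, and 5 optima respectively, PR = 14/15.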
The positions and the number of optima do not change for a specific problem in multimodal optimization.
One calculation of the performance criteria is enough for static problems. However, this calculation should be performed at least several times for dynamic problems.
Calculating the performance criteria several times and comparing the mean values may be a good way to verify an algorithm's effectiveness in dynamic multimodal optimization.
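The repeated evaluation suggested above can be sketched as follows: PR is computed after each environment change and the mean over changes is compared. The `found_counts` layout (runs by environment changes) is a hypothetical data structure, not one defined on the slides.

```python
def mean_peak_ratio(found_counts, n_known):
    """found_counts[run][change] = number of global optima found at the
    end of each environment change in each run (hypothetical layout).
    Returns the mean PR over changes and the per-change PR values."""
    n_runs = len(found_counts)
    n_changes = len(found_counts[0])
    prs = [
        sum(run[c] for run in found_counts) / (n_known * n_runs)
        for c in range(n_changes)
    ]
    return sum(prs) / n_changes, prs
```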
A swarm intelligence algorithm runs for many iterations, which indicates that massive numbers of solutions are evaluated during the search. For reasons of computation and storage efficiency, no algorithm stores all the evaluated solutions.

Several representative solutions, such as the local best or personal best solutions in the particle swarm optimization algorithm, are used as a memory system for the search algorithm.

The distribution of solutions is used in BSO algorithms, and there is no explicit memory system for previous iterations.

This memory system is beneficial when an algorithm solves problems in a static environment. However, for problems in a dynamic environment, this memory may mislead the search direction.
The DMO problem is defined as an optimization problem with multiple global optima whose characteristics change during the search process.
Benchmark problems play an important role in verifying an algorithm's search ability. A set of DMO benchmark functions was designed and discussed.
In the future, different swarm intelligence algorithms could be used for solving DMO problems.
Shi Cheng and Yuhui Shi. Brain Storm Optimization Algorithms: Concepts, Principles and Applications. Adaptation, Learning, and Optimization. Springer International Publishing AG, 2019.
Shi Cheng ([email protected]) Dynamic Multimodal Optimization March 29, 2019 57 / 62
Y. Shi, “Brain storm optimization algorithm,” in Advances in Swarm Intelligence (Y. Tan, Y. Shi, Y. Chai, and G. Wang, eds.), vol. 6728 of Lecture Notes in Computer Science, pp. 303–309, Springer Berlin/Heidelberg, 2011.

Y. Shi, “An optimization algorithm based on brainstorming process,” International Journal of Swarm Intelligence Research (IJSIR), vol. 2, pp. 35–62, October–December 2011.

Y. Shi, “Brain storm optimization algorithm in objective space,” in Proceedings of 2015 IEEE Congress on Evolutionary Computation (CEC 2015), (Sendai, Japan), pp. 1227–1234, IEEE, 2015.

S. Cheng, Q. Qin, J. Chen, and Y. Shi, “Brain storm optimization algorithm: A review,” Artificial Intelligence Review, vol. 46, no. 4, pp. 445–458, 2016.
S. Cheng, Y. Sun, J. Chen, Q. Qin, X. Chu, X. Lei, and Y. Shi, “A comprehensive survey of brain storm optimization algorithms,” in Proceedings of 2017 IEEE Congress on Evolutionary Computation (CEC 2017), (Donostia, San Sebastian, Spain), pp. 1637–1644, IEEE, 2017.

S. Cheng, J. Chen, Y. Sun, and Y. Shi, “Developmental brain storm optimization algorithms: From a data-driven perspective,” Journal of Zhengzhou University (Engineering Science), vol. 39, no. 3, pp. 22–28, 2018.

S. Cheng, H. Lu, W. Song, J. Chen, and Y. Shi, “Dynamic multimodal optimization using brain storm optimization algorithms,” in Bio-inspired Computing: Theories and Applications (BIC-TA 2018), pp. 236–245, Springer Singapore, 2018.
S. Cheng, Y.-n. Guo, X. Lei, Y. Zhang, J. Liang, and Y. Shi, “Dynamic multimodal optimization: A preliminary study,” in Proceedings of 2019 IEEE Congress on Evolutionary Computation (CEC 2019), (Wellington, New Zealand), IEEE, 2019.