DCMOGA: Distributed Cooperation model of Multi-Objective Genetic Algorithm
Tamaki Okuda ● Tomoyuki Hiroyasu
Mitsunori Miki ● Shinya Watanabe
Doshisha University, Kyoto Japan
Multi-objective Optimization Problems (MOPs)
When an optimization problem has several objective functions, it is called a multi-objective or multi-criterion problem.
Design variables
X = { x1, x2, … , xn }
Objective function
F = { f1(x), f2(x), … , fm(x) }
Constraints
Gi(x) < 0 ( i = 1, 2, … , k )
Multi-objective Optimization Problems
Feasible region
Pareto-optimal front
f1(x): Maximum, f2(x): Maximum
Non-Dominated solutions
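The dominance relation behind this figure can be written out directly. The sketch below is illustrative only (the function names are ours, not from the paper), assuming maximization as in the figure:

```python
# Sketch: Pareto dominance and non-dominated filtering for maximized
# objectives, as in the figure.  Function names are ours, for illustration.

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return (all(ai >= bi for ai, bi in zip(a, b))
            and any(ai > bi for ai, bi in zip(a, b)))

def non_dominated(points):
    """Keep the points that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

Applied to a set of objective vectors, `non_dominated` returns exactly the points on the non-dominated front of that set.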
EMOs: Evolutionary Multi-criterion Optimization
Typical methods in EMO:
– VEGA: Schaffer (1985)
– MOGA: Fonseca (1993)
– SPEA2: Zitzler (2001)
– NPGA2: Erickson, Mayer, Horn (2001)
– NSGA-II: Deb (2001)
Good non-dominated solutions:
– Minimal distance to the Pareto-optimal front
– Uniform distribution
– Maximum spread
>> We propose a new model for EMO.
This model searches for non-dominated solutions that are widespread and close to the Pareto-optimal front, and it can make existing algorithms more efficient.
DCMOGA
DCMOGA: Distributed Cooperation model
of Multi-Objective GA
DC-scheme for MOGA
The features of the DC-scheme:
– N+1 sub-populations (for N objectives)
  1 group searches for the Pareto optimum by MOGA
  N groups search for a single-objective optimum by SOGA (Single-Objective GA)
– Cooperative search
  The best solutions are exchanged between the groups
  Each sub-population size is adjusted
N+1 groups (sub populations)
MOGA group: The Pareto-optimal solutions are searched by MOGA.
SOGA groups: The optimum of the i-th objective function is searched by SOGA.
Cooperative search (1)
Exchange of the solutions:
The best solutions are exchanged between the MOGA group and each SOGA group.
Cooperative search (2)
Adjustment of each sub-population size:
– Some individuals move to another group.
– The adjustment is based on the objective values of the best solution in each group.
– Each sub-population size changes, but the whole population size does not change.
>> The difference in search ability between the groups is reduced
The algorithm of the DC-scheme (N objective functions):
1. All individuals are initialized.
2. All individuals are divided into N+1 groups.
3. In the MOGA group, the Pareto-optimal solutions are searched; in each SOGA group, the optimum of one objective is searched.
4. After some iterations, the solutions are exchanged between the MOGA group and each SOGA group.
5. The exchanged solutions are compared, and each sub-population size is adjusted.
6. The terminal condition is checked; if it is not satisfied, the search returns to step 3.
>> Example with 2 objective functions
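The six steps above can be sketched as a loop. This is a minimal structural illustration only: the MOGA/SOGA operators are replaced by trivial accept-if-better mutation stand-ins on a toy 2-objective problem, and the exchange and size-adjustment rules below are simplified assumptions, not the paper's exact formulas.

```python
import random

random.seed(0)

f = [lambda x: x * x, lambda x: (x - 2.0) ** 2]   # toy objectives (minimize)
N = len(f)

# Steps 1-2: initialize, then divide into N+1 groups (index 0 = MOGA group)
groups = [[random.uniform(-4.0, 4.0) for _ in range(10)] for _ in range(N + 1)]

def mutate(x):
    return x + random.gauss(0.0, 0.3)

for generation in range(50):
    # Step 3: each SOGA group i hill-climbs objective f[i-1]; the MOGA group
    # uses a crude scalarized stand-in for a real multi-objective search
    for i in range(1, N + 1):
        groups[i] = [min(x, mutate(x), key=f[i - 1]) for x in groups[i]]
    groups[0] = [min(x, mutate(x), key=lambda v: sum(g(v) for g in f))
                 for x in groups[0]]

    if generation % 10 == 9:                 # Step 4: periodic exchange
        for i in range(1, N + 1):
            best_soga = min(groups[i], key=f[i - 1])
            best_moga = min(groups[0], key=f[i - 1])
            # copies of each group's best replace the other group's worst
            groups[0][groups[0].index(max(groups[0], key=f[i - 1]))] = best_soga
            groups[i][groups[i].index(max(groups[i], key=f[i - 1]))] = best_moga
            # Step 5 (assumed rule): move one individual toward the group
            # whose exchanged best was worse, keeping the total size fixed
            src, dst = (i, 0) if f[i - 1](best_moga) < f[i - 1](best_soga) else (0, i)
            if len(groups[src]) > 2:
                groups[dst].append(groups[src].pop())
    # Step 6: terminal condition is the fixed generation count

best = [min(fi(x) for grp in groups for x in grp) for fi in f]
```

The whole-population size stays constant because the exchange overwrites in place and the adjustment moves (rather than copies) an individual.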
Combined MOGA and SOGA method
DCMOGA: Distributed Cooperation model of MOGA
DC-scheme
>> Combines a MOGA and SOGAs
The DC-scheme can combine any MOGA with any SOGA.
Used algorithms:
SOGA: DGA
MOGA: MOGA (Fonseca)
SPEA2 (Zitzler)
NSGA-II (Deb)
Test Problem: KP750-m
0/1 Knapsack Problem (750 items, m knapsacks)
- Combinatorial problem
Objectives:
  max f_i(x) = Σ_{j=1}^{750} p_{i,j} · x_j   (i = 1, 2, 3)
Constraints:
  Σ_{j=1}^{750} w_{i,j} · x_j ≤ c_i   (i = 1, 2, 3)
  x = (x_1, x_2, …, x_750) ∈ {0, 1}^750
where
  p_{i,j}: profit of item j according to knapsack i
  w_{i,j}: weight of item j according to knapsack i
  c_i: capacity of knapsack i
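Evaluating one 0/1 string against these definitions can be sketched as follows; the tiny 2-knapsack, 4-item instance below is made up for illustration, not taken from KP750-m.

```python
# Sketch: evaluating one 0/1 string for an m-knapsack instance such as KP750-m.
# The tiny 2-knapsack, 4-item instance below is made up for illustration;
# p[i][j], w[i][j], c[i] follow the definitions on the slide.

def evaluate(x, p, w, c):
    """Return the m profit values f_i(x), or None if any capacity is exceeded."""
    for i in range(len(c)):
        if sum(w[i][j] * x[j] for j in range(len(x))) > c[i]:
            return None                      # infeasible: sum of w_ij * x_j > c_i
    return [sum(p[i][j] * x[j] for j in range(len(x))) for i in range(len(c))]

p = [[10, 5, 8, 3], [4, 9, 2, 7]]            # made-up profits
w = [[3, 2, 4, 1], [2, 3, 1, 4]]             # made-up weights
c = [6, 6]                                   # made-up capacities
```

With this instance, x = [1, 0, 0, 1] is feasible with profits [13, 11], while x = [1, 1, 0, 1] violates the second capacity.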
Test Problem: KUR
KUR (2 objective functions, 100 design variables)
- Continuous
- Multi-modal
  min f_1(x) = Σ_{i=1}^{99} ( −10 · exp( −0.2 · √(x_i² + x_{i+1}²) ) )
  min f_2(x) = Σ_{i=1}^{100} ( |x_i|^0.8 + 5 · sin(x_i³) )
  x_i ∈ [−5, 5]   (i = 1, …, n;  n = 100)
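The two KUR objectives transcribe directly into code (n = 100 variables, each in [−5, 5]):

```python
# The two KUR objectives above, transcribed directly.
import math

def kur_f1(x):
    return sum(-10.0 * math.exp(-0.2 * math.sqrt(x[i] ** 2 + x[i + 1] ** 2))
               for i in range(len(x) - 1))

def kur_f2(x):
    return sum(abs(xi) ** 0.8 + 5.0 * math.sin(xi ** 3) for xi in x)
```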
Test Problem: ZDT4
ZDT4 (2 objective functions, 10 design variables)
- Continuous
- Multi-modal
  min f_1(x) = x_1
  min f_2(x) = g(x) · ( 1 − √( x_1 / g(x) ) )
  g(x) = 1 + 10 · 9 + Σ_{i=2}^{10} ( x_i² − 10 · cos(4π · x_i) )
  x_1 ∈ [0, 1],  x_i ∈ [−5, 5]   (i = 2, …, 10)
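The ZDT4 objectives also transcribe directly:

```python
# The ZDT4 objectives above, transcribed directly: n = 10 design variables,
# x[0] in [0, 1], x[1:] in [-5, 5].
import math

def zdt4(x):
    g = (1.0 + 10.0 * (len(x) - 1)
         + sum(xi ** 2 - 10.0 * math.cos(4.0 * math.pi * xi) for xi in x[1:]))
    return x[0], g * (1.0 - math.sqrt(x[0] / g))
```

On the Pareto-optimal front of ZDT4 (x_i = 0 for i ≥ 2), g(x) = 1 and f_2 = 1 − √f_1.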
Performance Metrics: Coverage (C)
Coverage of two sets: C
  C(A, B) = |{ b ∈ B | ∃ a ∈ A : a ⪰ b }| / |B|
(the fraction of solutions in B that are weakly dominated by at least one solution in A)
A: front1
B: front2
C(A,B) = 1/5 = 0.2
C(B,A) = 2/4 = 0.5
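The coverage metric can be computed in a few lines. Minimization is assumed in this sketch; for the maximized knapsack objectives the inequality would flip to `>=`.

```python
# The coverage metric C(A, B): the fraction of front B weakly dominated by at
# least one member of front A (minimization assumed).

def weakly_dominates(a, b):
    """a covers b: a is no worse than b in every objective (minimization)."""
    return all(ai <= bi for ai, bi in zip(a, b))

def coverage(A, B):
    return sum(1 for b in B if any(weakly_dominates(a, b) for a in A)) / len(B)
```

Note that C is not symmetric: C(A, B) and C(B, A) must both be reported, as in the figure.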
Applied models and Parameters
Applied models:
– MOGA / DCMOGA
– SPEA2 / DCSPEA2
– NSGA-II / DCNSGA-II
Parameters
GA operators
Crossover: 2-point crossover
Mutation: bit flip
                     KP750-2    KUR        ZDT4
Chromosome length    750        2000       200
Population size      250        100        100
Crossover rate       1.0
Mutation rate        1/L (L: chromosome length)
Terminal condition   5 × 10^5   10 × 10^6  2.5 × 10^4
Number of trials     30
Results: KP750-3 (MOGA / DCMOGA)
<< With the DC-scheme, the solutions are more widespread
<< Without the DC-scheme, the solutions are closer to the Pareto optimum
Results: KP750-3 (SPEA2 / DCSPEA2)
<< With the DC-scheme, the solutions are more widespread
Results: KP750-3 (NSGA-II / DCNSGA-II)
<< With the DC-scheme, the solutions are more widespread
<< Without the DC-scheme, the solutions are closer to the Pareto optimum
Results: KP750-3
The coverage of two sets:
>> Both results are almost the same.
<< Without DC superior to with DC
<< With DC superior to without DC
<< Without DC superior to with DC
Results: KUR
MOGA / DCMOGA
SPEA2 / DCSPEA2
Results: KUR
<< With DC superior to without DC
<< With DC superior to without DC
<< With DC superior to without DC
>> The DC-scheme is superior
NSGA-II / DCNSGA-II
Results: ZDT4
<< With DC superior to without DC
<< With DC superior to without DC
<< With DC superior to without DC
>> The DC-scheme is superior
Conclusion
We proposed a new model for EMO.
– DCMOGA (DC-scheme): Distributed Cooperation model of MOGA
We compared algorithms combined with the DC-scheme against the same algorithms without it.
– The algorithms combined with the DC-scheme derive efficient results.
– The DC-scheme can be combined with any other EMO algorithm, and the combined algorithm is more efficient than the algorithm without it.
>> The DC-scheme is an efficient scheme for EMO.
Performance Metrics
RNI: Ratio of Non-dominated Individuals, derived from 2 sets of non-dominated solutions
Set A: front1
Set B: front2
Set A: 3/5 = 0.6,  Set B: 2/5 = 0.4
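A sketch of RNI, under two assumptions of ours (minimization, and disjoint fronts): merge the two fronts, keep only the non-dominated points of the union, and report each front's share of that merged front.

```python
# Sketch of the RNI metric (minimization and disjoint fronts assumed;
# function names are ours, not from the paper).

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def rni(A, B):
    union = A + B
    merged = [p for p in union if not any(dominates(q, p) for q in union)]
    return (sum(1 for p in merged if p in A) / len(merged),
            sum(1 for p in merged if p in B) / len(merged))
```

The two shares always sum to 1, matching the 0.6 / 0.4 split in the figure.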
Results: ZDT6
Results: KP750-2
Performance Metrics: Coverage(D)
Coverage difference of two sets: D
  D(A, B) := S(A ∪ B) − S(B)
where S(X) is the size of the objective-space region covered (dominated) by the set X.
(Figure: S(A), S(B), S(A ∪ B) → D(A, B), D(B, A))
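For two minimized objectives, S(X) can be computed as the area dominated by a front up to a reference point, by sweeping the points in order of the first objective. The reference point and the minimization direction in this sketch are our assumptions for illustration.

```python
# Sketch of the D metric for two minimized objectives.  S(X) is taken as the
# area dominated by X inside the box bounded by a reference point `ref`.

def s_area(front, ref):
    """Area of the region dominated by `front` inside the box bounded by `ref`."""
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(set(front)):        # sweep points by increasing f1
        if f2 < prev_f2 and f1 < ref[0]:     # skip dominated points in the sweep
            area += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return area

def d_metric(A, B, ref):
    """D(A, B) = S(A union B) - S(B)."""
    return s_area(A + B, ref) - s_area(B, ref)
```

Unlike C, the D metric rewards A for covering objective-space regions that B does not, even when neither front dominates the other.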
The improved MOGA
MOGA was proposed by Fonseca.
Improved MOGA:
MOGA with sharing added and a Pareto archive used.
SOGA and MOGA
The DC-scheme is compared with running SOGA and MOGA separately:
>> SOGA and MOGA
SOGA: 300 generations × 2
MOGA: 400 generations