Two New Robust Genetic Algorithms for the Flowshop
Scheduling Problem
Rubén Ruiz∗, Concepción Maroto, Javier Alcaraz
Dpto. de Estadística e Investigación Operativa Aplicadas y Calidad.
Universidad Politécnica de Valencia
Edificio I-3. Camino de Vera S/N, 46021, Valencia, SPAIN
22nd January 2005
Abstract
The flowshop scheduling problem has been widely studied in the literature and many techniques for its solution have been proposed. Some authors have concluded that genetic algorithms are not suitable for this hard combinatorial problem unless hybridization is used. This work proposes new genetic algorithms for solving the permutation flowshop scheduling problem that prove to be competitive when compared with many other well known algorithms. The optimization criterion considered is the minimization of the total completion time or makespan (Cmax). We present a robust genetic algorithm and a fast hybrid implementation. These algorithms use new genetic operators, advanced techniques like hybridization with local search and an efficient population initialization, as well as a new generational scheme. A complete evaluation of the different parameters and operators of the algorithms by means of a Design of Experiments approach is also given. The algorithms' effectiveness is compared against that of 11 other methods, including genetic algorithms, tabu search, simulated annealing and other advanced and recent techniques. For the evaluations we use Taillard's well known standard benchmark. The results show that the proposed algorithms are very effective and at the same time easy to implement.
Keywords: flowshop, genetic algorithms, local search
1 Introduction
In a flowshop scheduling problem (FSP) we have n independent jobs {J1, . . . , Jn} that have to be processed on m different machines {M1, . . . , Mm}. Every job is composed of m operations, and every operation requires a different machine. Oij denotes the operation of job j on machine i. Every operation
∗Corresponding author: Rubén Ruiz García. Universidad Politécnica de Valencia. Departamento de Estadística e Investigación Operativa Aplicadas y Calidad. Camino de Vera S/N, 46021, Valencia, SPAIN. Tel: +34 96 387 70 07, ext: 74946. Fax: +34 96 387 74 99. e-mail: [email protected]
requires a processing time pij. In a flowshop all jobs have the same processing order on the machines, since there is a technological order of the machines that the different jobs must follow. The objective is to find an ordering of the jobs on the machines, or sequence, that optimizes some given criterion. By far the most common criterion is the minimization of the total completion time of the schedule, often referred to as makespan (Cmax). There are some other interesting variations of this problem (see Baker [6] or Pinedo [33]).
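The makespan of a permutation schedule follows the standard flowshop recurrence C(i, j) = max(C(i−1, j), C(i, j−1)) + pij. As an illustration (a minimal sketch in Python; the function name and the toy instance are ours, not from the paper):

```python
def makespan(p, seq):
    """Cmax of permutation seq; p[i][j] = processing time of job j on machine i."""
    m = len(p)
    comp = [0] * m                  # completion time of the last job on each machine
    for j in seq:
        comp[0] += p[0][j]          # the first machine processes jobs back to back
        for i in range(1, m):
            # O_ij starts once machine i is free and the job has left machine i-1
            comp[i] = max(comp[i], comp[i - 1]) + p[i][j]
    return comp[-1]                 # Cmax: completion time on the last machine

# e.g. two machines and three jobs:
# makespan([[3, 2, 1], [1, 4, 3]], [0, 1, 2]) evaluates to 12
```

Evaluating one sequence thus costs O(nm) time, which is why the number of candidate sequences dominates the difficulty of the problem.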
It is important to note that there are several additional conditions to this problem [6]:
• All operations are independent and available for processing at time 0.
• All m machines are continuously available.
• Each machine i can process at most one job j at a time.
• Each job j can be processed on only one machine i at a time.
• The processing of a given operation Oij cannot be interrupted, that is, no preemption is allowed.
• Setup and removal times are sequence independent and are either included in the processing times or negligible and hence ignored.
• In-process inventory is allowed. If a given operation needs an unavailable machine, the operation joins a queue of unlimited size at that machine.
Following the four-parameter notation A/B/C/D of Conway et al. [9], the problem is classified as n/m/F/Fmax. A more recent three-parameter notation (α/β/γ) was proposed by Graham et al. [17], in which the problem is denoted as F//Cmax.
The FSP is known to be NP-Complete in the strong sense when m ≥ 3 (see Garey et al. [13]); if m = 2, Johnson's algorithm [21] obtains an optimal solution in polynomial time. In general, (n!)^m schedules have to be considered ((n!)^(m−2) for the Cmax criterion).
This paper deals with a simplification of the FSP: the permutation flowshop scheduling problem or PFSP. In the PFSP, job passing is not permitted, i.e. the processing sequence of the jobs is the same for all machines. Under this restriction n! schedules are possible and the problem is denoted as n/m/P/Fmax or F/prmu/Cmax (see Pinedo [33]).
Some authors have claimed that genetic algorithms (GAs) show inferior performance on the PFSP compared to simulated annealing or tabu search (see for example Murata et al. [27]) and that, in order to obtain better results, the genetic algorithm has to be hybridized with some form of local search. In this paper we show an advanced genetic algorithm that does not need hybridization to outperform other well known methods. The proposed algorithm includes some original procedures and operators that have not been applied to the PFSP before. These include four new types of crossover operators that have proved to be superior to many other crossover operators tested, a new generational scheme and a common restarting method. In a second step, the proposed genetic algorithm is hybridized with a local search based on an insertion neighbourhood. Both proposed algorithms are compared against some of the best techniques known so far, including genetic algorithms, simulated annealing, tabu search and other recent
metaheuristic techniques. For the tests we use the standard benchmark of Taillard [43], which is composed of 120 different problem instances ranging from 20 jobs and 5 machines to 500 jobs and 20 machines. This benchmark contains some instances that have proved to be very difficult to solve over the past 10 years.
The rest of the paper is organized as follows: in Section 2 we present an overview of the existing methods for the PFSP. Section 3 describes in detail the new proposed genetic algorithm. The proposed hybrid GA is explained in Section 4. In Section 5, the proposed GAs are calibrated by means of the Design of Experiments (DOE) approach. An extensive comparison of both algorithms with other existing methods is given in Section 6. Finally, in Section 7 we provide some conclusions about the study, along with some future research directions.
2 Existing methods for the PFSP
Generally, the PFSP can be solved with either exact or heuristic methods. Exact methods are only practicable for small instances (fewer than 15-20 jobs), and even in that case solution times tend to be high.
However, some forms of exact techniques can be used for obtaining optimal solutions for large instances
when starting them from high quality, near-optimal solutions obtained by advanced metaheuristics. Re-
searchers have mainly focused their efforts towards heuristic approaches. Heuristics for the PFSP can
be divided into constructive and improvement methods. The former are techniques that build a feasible
schedule from scratch, and the latter are algorithms that seek to improve a previously generated schedule.
There are many constructive heuristics available (for a comprehensive review see Ruiz and Maroto [39]).
One of the earliest methods is the well known Johnson’s algorithm [21]. The heuristics by Palmer [32],
Gupta [18], Campbell et al. [7] (CDS) and Dannenbring’s Rapid Access (RA) procedure ([10]), are also
good examples of constructive methods. It is commonly agreed that the Nawaz et al. heuristic ([28])
(NEH) is the most effective constructive heuristic for the PFSP. However, there are more recent methods
published, for example the heuristics by Koulamas [22] and Davoud Pour [12].
For the improvement heuristics, Dannenbring proposed two different methods, called RACS and RAES.
The algorithm by Ho and Chang [19] and the more recent improvement heuristic by Suliman [41] are
also good examples.
Metaheuristics can also be considered improvement heuristics. Among these techniques we find GAs, simulated annealing, tabu search and other procedures or hybrid methods.
The first proposed metaheuristics for the PFSP are the simulated annealing algorithms by Osman and
Potts [31] and Ogbu and Smith [30]. Widmer and Hertz [46], Taillard [42], Reeves [37] and Nowicki
and Smutnicki [29] demonstrated different tabu search approaches. Other algorithms are the path-based
method of Werner [45] or the iterated local search of Stützle [40]. Recently, Rajendran and Ziegler [35]
have proposed two very effective ant-colony optimization algorithms and Grabowski and Wodecki [16]
a very fast tabu search approach.
We focus now on work dealing with GAs and the PFSP. One of the earliest algorithms was proposed
by Chen et al. [8]. The initial population in this algorithm is generated by using several heuristic rules.
The first m − 1 population members are generated from the m − 1 sequences obtained after applying the CDS heuristic of Campbell et al. [7]; the mth member is obtained from the Rapid Access (RA) heuristic of Dannenbring. The remaining members are generated from simple job exchanges of the already generated sequences. Only crossover is applied; there is no mutation. The crossover operator used is the Partially Mapped Crossover (PMX) by Goldberg and Lingle [14]. Reeves [38] also proposed a GA. This algorithm uses a different generational scheme, called "termination with prejudice", in which the offspring generated after each mating do not replace the parents but members of the population with a fitness value below average. The algorithm uses a C1 crossover, essentially similar to the One Point Order Crossover. Another remarkable feature of the algorithm is the adaptive mutation used. Reeves' algorithm also initializes the population by using heuristics. In this case, one of the population members is generated by applying the NEH heuristic. Murata et al. [27] proposed a hybrid GA with a two point order crossover, a shift mutation and an elitism strategy. The algorithm is hybridized with local search, which resulted in a clear performance gain over a non-hybrid version. Another hybrid GA is that of Reeves and Yamada [36]. In this case a special crossover, called Multi-Step Crossover Fusion or MSXF, is used, coalescing a typical crossover operator with local search. Ponnambalam et al. [34] evaluate a genetic algorithm with a Generalized Position Crossover or GPX, a shift mutation and random population initialization. More recently, Aldowaisan and Allahverdi [3] have proposed a simple yet effective genetic algorithm for the permutation flowshop with no-wait constraints.
All the previously cited work on genetic algorithms lacks a methodological approach to the correct choice of operators and parameters. Usually, authors make use of short computer simulations or preliminary experiments to set parameters on a "one factor at a time" basis, i.e. changing one parameter while keeping the remaining factors unaltered. While this approach might be useful when setting some of the algorithm's parameters, it has several shortcomings when setting some of the important operators and parameters (like the crossover operator or the population size). In this paper we use a more comprehensive approach for calibrating the operators and parameters.
3 Proposed Genetic Algorithm
In a genetic algorithm every individual or chromosome is encoded into a structure that represents its properties. The set of initial individuals forms the population. The population is evaluated and all individuals are assigned a fitness value; the higher this value, the better the individual. Then the population undergoes a series of operations and the individuals in it evolve until some stopping criterion is met. A complete iteration of a genetic algorithm is called a generation and can be briefly described as follows: a selection mechanism picks individuals of the current population in such a way that an individual's chance of being selected increases with its fitness value. The selected individuals mate and generate new individuals, called the offspring. After the mating process (called crossover), some offspring might suffer a mutation. Afterwards, the new population is evaluated again and the whole process is repeated (see Goldberg [15] and Michalewicz [24]).
The effectiveness of GAs greatly depends on the correct choice of the encoding, selection, crossover and
mutation operators, as well as the probabilities with which they are applied (see Alcaraz and Maroto [1]).
In this work we will use the Design of Experiments (DOE) approach (see Montgomery [25]) to set the parameters and operators of the GA. DOE is a structured, organized method for determining the relationship between the factors affecting the output of a process. In our case, the factors will be the operators and parameters of the GA and the output will be the relative effectiveness of the GA. The application of the DOE approach to GAs was proposed by Bagchi and Deb [5], and for GAs applied to the PFSP by Jain and Bagchi [20]. However, the experiments shown there are rather limited (8 runs at most) and much more information can be gathered from a more comprehensive experiment. In the following sections we describe all parameters and operators used in the proposed genetic algorithm.
3.1 Solutions encoding and population initialization
The most frequently used encoding for the PFSP is a simple permutation of the jobs. The relative order
of the jobs in the permutation indicates the processing order of the jobs by the machines in the shop.
Traditionally, in a GA, the initial population is generated randomly. However, a direct conclusion from the reviewed papers dealing with the PFSP and GAs is that no good results are obtained with a random initialization of the population. The initialization procedure for such a hard combinatorial problem has to be designed with great care, to ensure convergence to desirable, better makespans in a reasonable amount of time. The initialization proposed by Chen et al. [8] and Jain and Bagchi [20] creates a population with few differences between individuals (Jain and Bagchi show some GAs where 100% of the population is generated from Palmer's heuristic). With such a homogeneous population, premature convergence problems are likely to occur.
The initialization proposed by Reeves [38] (using a NEH-generated member) results in a much more effective algorithm. In order to better understand this initialization procedure we briefly describe the NEH procedure, which is based on the idea that jobs with high processing times on all the machines should be scheduled as early as possible. This heuristic can be divided into three simple steps:
1. Calculate the total processing time of every job on the m machines:
∀ job j, j = 1, . . . , n: Pj = ∑_{i=1}^{m} pij
2. Sort the jobs in descending order of Pj. Then take the first two jobs (the two with the highest Pj) and evaluate the two possible schedules containing them.
3. Take job j, j = 3, . . . , n, and find the best schedule by placing it in each of the j possible positions in the sequence of jobs already scheduled. For example, if j = 4, the already built sequence contains the first three jobs of the sorted list calculated in step 2, so the fourth job can be placed either in the first, the second, the third or the last position of the sequence. The best of the four sequences is selected for the next iteration.
Our initialization procedure is based on this NEH heuristic and on a modification of it that can be explained as follows: after having ordered the jobs in step 2, we simply pick two random jobs from the ordered list and exchange them with the first two jobs. Then we proceed with the rest of step 2 and step 3. Depending on the choice of the two initial jobs considered in step 2, we obtain a different final schedule. In this way, we have an almost limitless supply of very good initial sequences.
We propose the following initialization procedure: the first member of the population is generated by the standard NEH heuristic (just as in Reeves' GA); then up to Bi% of the initial population is filled with individuals generated by the modified NEH heuristic. The remaining (100 − Bi)% of the population is filled with completely random sequences. In this way, we ensure that Bi% of the population is formed by fit members.
In preliminary tests, this initialization proved to be superior to the initializations of Reeves and Chen et al. The difference was so clear that no statistical test was needed to ascertain the best initialization method.
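The whole initialization can be sketched as follows (a simplified illustration, not the authors' code; the helper names and the default value of Bi are our assumptions, since Bi is a parameter calibrated later in the paper):

```python
import random

def _cmax(p, seq):
    comp = [0] * len(p)
    for j in seq:
        comp[0] += p[0][j]
        for i in range(1, len(p)):
            comp[i] = max(comp[i], comp[i - 1]) + p[i][j]
    return comp[-1]

def _neh_from(p, order):
    # NEH insertion phase (step 3) applied to a given job ordering
    seq = [order[0]]
    for j in order[1:]:
        seq = min((seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)),
                  key=lambda s: _cmax(p, s))
    return seq

def init_population(p, pop_size, bi=0.25, rng=random):
    n, m = len(p[0]), len(p)
    order = sorted(range(n), key=lambda j: -sum(p[i][j] for i in range(m)))
    pop = [_neh_from(p, order)]                 # first member: standard NEH
    while len(pop) < round(bi * pop_size):      # Bi%: modified-NEH members
        mod = list(order)
        a, b = rng.sample(range(n), 2)
        mod[0], mod[a] = mod[a], mod[0]         # exchange two random jobs
        mod[1], mod[b] = mod[b], mod[1]         # with the first two of the list
        pop.append(_neh_from(p, mod))
    while len(pop) < pop_size:                  # the rest: random sequences
        pop.append(rng.sample(range(n), n))
    return pop
```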
3.2 Selection mechanism and generational scheme
For the selection of parents we have chosen two classical selection schemes, namely ranking and tournament selection (see Goldberg [15] and Michalewicz [24]).
We call a generational scheme the process by which new individuals in a new generation replace old members from the previous generation. Preliminary experiments showed that a steady state GA, where the offspring replace the worst individuals in the population, was clearly superior to a regular elitist generational GA, where the offspring directly replace their parents.
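Of the two selection schemes mentioned, tournament selection is the simpler to sketch (a minimal illustration; the binary tournament size and the representation of individuals as (sequence, makespan) pairs are our assumptions):

```python
import random

def tournament_select(population, rng=random):
    """Binary tournament: draw two distinct individuals, keep the fitter one.
    Individuals are (sequence, makespan) pairs; a lower Cmax means fitter."""
    a, b = rng.sample(population, 2)
    return a if a[1] <= b[1] else b
```

Ranking selection would instead sort the population by makespan and assign selection probabilities according to rank rather than raw fitness.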
Let the worst individual in the population be denoted as pworst and its makespan as cworst. We impose several constraints on when an offspring can replace pworst. First, a given offspring can only replace pworst if its makespan is lower than cworst. This initial approach, together with the high selection pressure of the two selection mechanisms considered, resulted in premature convergence of the population, the main reason being that the fittest members are often selected and the offspring generated replace unfit members. After just a few generations, all the population individuals were essentially similar. To overcome this problem, we devised a new mechanism for determining when an offspring can replace pworst. A new individual will only replace pworst if its makespan is better than cworst and its sequence is unique, i.e. the sequence is not repeated in the population. Note that there can be many different sequences with the same makespan.
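The replacement rule just described can be sketched as follows (names are ours; a minimal illustration of the acceptance test, not the authors' implementation):

```python
def try_insert(population, offspring, c_offspring):
    """Steady-state replacement: the offspring enters only if it beats the
    worst member AND its sequence is not already present in the population.
    population is a list of (sequence, makespan) pairs, sequences as tuples."""
    worst = max(range(len(population)), key=lambda k: population[k][1])
    c_worst = population[worst][1]
    is_unique = all(seq != tuple(offspring) for seq, _ in population)
    if c_offspring < c_worst and is_unique:
        population[worst] = (tuple(offspring), c_offspring)
        return True
    return False
```

The uniqueness test compares whole sequences, not makespans, which is exactly the distinction the next paragraph motivates.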
One could think that the same outcome can be obtained by examining the makespan value of the offspring and not allowing repeated makespans in the population (this approach is used in Reeves and Yamada [36]). After all, how many different solutions with equal makespan are there? To answer this we have examined the set of 8 instances of Carlier (car1-car8) taken from the OR-library (http://mscmga.ms.ic.ac.uk/jeb/orlib/flowshopinfo.html). For every instance, all the possible solutions (n!) are generated and their corresponding makespans calculated and stored. We chose Carlier's benchmark because it can be solved to optimality, allowing us to show the relevance of our proposed generational scheme. Table 1 shows, from the total possible solutions, the
number of different makespans and the number of optimal makespans as well as other data.
[Insert Table 1 about here]
We can observe that for larger instances there are many different optimum sequences. Also, there are far fewer distinct makespan values than possible solutions. For example, instance car1 represents an 11-job, five-machine problem and therefore has 11! = 39,916,800 different permutation schedules. Among that many schedules only 4,150 distinct makespans can be found. This means that all schedules are clustered into 4,150 groups with roughly 9,619 different permutations sharing an identical Cmax value. Even more relevant, for this instance there are as many as 8,106 different optimum sequences. Disallowing individuals by judging only the makespan and not the entire sequence is not advisable, since we would be losing diversity in the genetic process.
3.3 Crossover
The genetic crossover operator generates new sequences or offspring by combining two other sequences
or parents. The goal is to generate “better” offspring, i.e. to create better sequences after crossing
the parents. Many different general and specific crossover operators have been proposed for the PFSP.
In the offspring there should be neither repeated nor missing elements; otherwise the sequence would be infeasible. There
are several available operators suitable for permutation encodings that are commonly used in published
works:
• PMX or Partially Mapped Crossover, from Goldberg and Lingle [14].
• OX or Order Crossover, proposed by Davis [11].
• UOB or Uniform Order Based crossover, a mixture of uniform crossover and order-based crossover (see Michalewicz [24]).
• OP or One Point Order Crossover, which comes from mixing the ideas of the one point crossover and the order crossover (see Michalewicz [24]).
• GPX or Generalized Position Crossover, by Mattfeld [23].
• TP or Two Point Order Crossover. Again, it comes from the original two point crossover and the order crossover (see Michalewicz [24]).
Initial simulations showed that crossover often resulted in offspring with worse makespan values than their parents. A closer study revealed that this behavior could be explained by the fact that crossover tends to disrupt building blocks, especially in the later stages of the algorithm. As the proposed generational scheme avoids the occurrence of identical solutions in the population, we can study the diversity of the population and how similar the different individuals are. We found that, after a few generations, there were many similar "blocks" of jobs within the individuals' sequences. After crossover, these similar blocks or "building blocks" were frequently broken apart in the offspring, resulting in worse makespan values. This outcome could be observed with all the aforementioned crossover operators.
We propose four new crossover operators for the PFSP that try to overcome this problem. These new operators are based on the idea of identifying and maintaining those building blocks during crossover. In this way, similar blocks or occurrences of jobs in both parents are passed on to the offspring unaltered. If there are no similar blocks in the parents (as in the initial stage of the algorithm), the proposed crossover operators behave like the one point order crossover or the two point order crossover, depending on the case.
The first operator is called "Similar Job Order Crossover" or SJOX and can be explained as follows: first, both parents are examined on a position-by-position basis. Identical jobs at the same positions are copied over to both offspring (Figure 1(a)). Then, each offspring directly inherits all jobs from one of the parents up to a randomly chosen cut point; that is, Offspring 1 inherits directly from Parent 1 and Offspring 2 from Parent 2 (Figure 1(b)). Lastly, the missing elements of each offspring are copied in the relative order of the other parent (Figure 1(c)).
Figure 1 shows an example taken from instance ta001 of Taillard's benchmark. In this case the parents' permutations yield makespan values of 1,301 and 1,305 respectively. These two parents show four identical blocks that are directly copied to the offspring. After crossover, the generated offspring have a Cmax of 1,293 and 1,286 respectively.
[Insert Figure 1 about here]
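The three SJOX steps can be sketched as follows (a simplified illustration, not the authors' code; both offspring share one random cut point, and the helper names are ours):

```python
import random

def sjox(p1, p2, rng=random):
    n = len(p1)
    cut = rng.randrange(1, n)                    # single random cut point

    def build(main, other):
        child = [None] * n
        # (a) identical jobs at identical positions go to both offspring
        for i in range(n):
            if p1[i] == p2[i]:
                child[i] = p1[i]
        # (b) inherit directly from the "main" parent up to the cut point
        for i in range(cut):
            child[i] = main[i]
        # (c) fill the remaining holes in the relative order of the other parent
        used = set(j for j in child if j is not None)
        rest = (j for j in other if j not in used)
        for i in range(n):
            if child[i] is None:
                child[i] = next(rest)
        return child

    return build(p1, p2), build(p2, p1)
```

SBOX, described next, differs only in step (a): isolated identical positions are ignored and only runs of at least two consecutive identical jobs are copied.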
Another alternative arises from a careful examination of the SJOX operator. There can be situations with several non-consecutive identical occurrences in both parents, and it is arguable whether these isolated similar positions form a block. Thus, we propose another crossover operator, called "Similar Block Order Crossover" or SBOX. In this case, the first step of the SJOX crossover is modified in the following way: we consider blocks of at least two consecutive identical jobs, and only those identical blocks that occupy the same positions in both parents are directly copied to the offspring. The whole operator is depicted in Figure 2, showing another example on the same ta001 instance.
[Insert Figure 2 about here]
Notice how the three similar "isolated" occurrences of jobs 13, 18 and 2 in both parents, located at positions 7, 12 and 14 respectively, are not copied to the offspring (Figure 2(a)). Notice also how the two different offspring generated do not have jobs 18 and 2 in the same positions as the parents do (Figure 2(c)).
The third and fourth proposed crossover operators are similar to the two operators just described. The third operator is called "Similar Job 2-Point Order Crossover" or SJ2OX. The main difference from the SJOX crossover is that in the second step two random cut points are taken and the section between them is directly copied to the offspring. Therefore, instead of copying from a parent up to a single cut point, the whole part between the two cut points is copied. The last proposed operator joins the ideas of the SJ2OX and SBOX crossovers. This crossover is referred to as "Similar Block 2-Point Order Crossover" or SB2OX. In this case, only blocks of at least two consecutive identical jobs are directly copied to the offspring in the first step, as in the SBOX crossover, and two cut points are considered in the second step
as in the SJ2OX crossover.
3.4 Mutation
Genetic algorithms incorporate a mutation operator mainly to avoid convergence to local optima and to reintroduce lost genetic material and variability into the population. This operator can also be seen as a simple form of local search.
By mutating an individual we slightly change its sequence, thus obtaining a new but similar permutation. Mainly three different mutation operators are proposed in the literature for permutation encodings:
• SWAP mutation. Two randomly selected positions are chosen and their corresponding jobs swapped.
• POSITION mutation. It is a specific case of the SWAP mutation where two adjacent jobs are
swapped.
• SHIFT mutation. In this case, the job at a randomly picked position in the sequence is relocated to another randomly picked position. The jobs between these two positions shift along accordingly.
In this paper we apply the mutation probability to each position in the sequence. The insertion points and SWAP partners are chosen randomly according to a uniform distribution over all possible positions. More detailed information about these mutation operators can be found in Michalewicz [24]. In preliminary tests, SHIFT mutation proved to be clearly superior to the other two mutation operators. This, together with the fact that other authors reached the same conclusion (see Reeves [38] and Murata et al. [27]), motivated the use of SHIFT mutation over the other two methods.
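A single SHIFT move can be sketched as follows (an illustrative sketch; in the algorithm this move would be applied per position with probability Pm, as noted above):

```python
import random

def shift_mutation(seq, rng=random):
    """Remove the job at one random position and reinsert it at another;
    the jobs between the two positions shift along."""
    s = list(seq)
    j, k = rng.sample(range(len(s)), 2)   # two distinct positions
    s.insert(k, s.pop(j))                 # relocate the job from j to k
    return s
```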
3.5 Restart scheme
In a genetic algorithm the population evolves over generations. At some point the population can reach a sufficiently low diversity for the process to stall around a local optimum (this is not to be confused with the premature convergence problem). To overcome this problem it is common to use some restart mechanism in genetic algorithms. We apply a restart mechanism based on the ideas of a similar scheme used by the authors in related research (Alcaraz et al. [2]). It works as follows:
1. At each generation i, store the minimum makespan maki.
2. If maki = maki−1 then set countmak = countmak + 1; otherwise set countmak = 0.
3. If countmak > Gr then apply the following procedure:
• Sort the population in ascending order of Cmax.
• Skip the first 20% of individuals in the sorted list (the best individuals).
• From the remaining 80% of individuals, 50% are replaced by simple SHIFT mutations of the 20% best individuals (one single mutation each), 25% are replaced by sequences newly generated with the modified NEH procedure, and the remaining 25% are replaced by newly generated random schedules.
• countmak = 0;
With this procedure, every time the lowest makespan in the population does not change for more than Gr generations, the restart procedure replaces all but the 20% best individuals in the population, hoping to reintroduce diversity and to escape from local optima.
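The restart procedure (step 3 above) can be sketched as follows (a simplified illustration, not the authors' code; here plain random sequences stand in for both the modified-NEH and the random members, and replaced members are marked for re-evaluation):

```python
import random

def restart(population, rng=random):
    """population: list of (sequence, makespan) pairs."""
    pop = sorted(population, key=lambda ind: ind[1])   # ascending Cmax
    n_keep = max(1, len(pop) // 5)                     # best 20% survive untouched
    elite = pop[:n_keep]
    rest = len(pop) - n_keep
    n_mut = rest // 2                                  # 50%: SHIFT mutations of elite
    new_pop = list(elite)
    for _ in range(n_mut):
        seq = list(rng.choice(elite)[0])
        j, k = rng.sample(range(len(seq)), 2)
        seq.insert(k, seq.pop(j))                      # one single SHIFT mutation
        new_pop.append((tuple(seq), None))             # makespan re-evaluated later
    for _ in range(rest - n_mut):
        # stand-in for the 25% modified-NEH and 25% random replacements
        seq = list(elite[0][0])
        rng.shuffle(seq)
        new_pop.append((tuple(seq), None))
    return new_pop
```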
4 Hybrid Genetic Algorithm
The idea of adding an improvement phase to a genetic algorithm has been widely exploited before. For example, Murata et al. [27] apply a local search step before selection and crossover in a genetic algorithm. The drawback of this approach is that applying local search to all individuals in every generation results in a very slow genetic algorithm. Our proposal is to apply local search after selection, crossover and mutation, but not to all individuals in the population. We define an "enhancement probability" or Penh as follows: we draw a uniformly distributed random number between 0 and 1, and if this number is less than or equal to Penh the individual undergoes local search.
For the improvement phase we have several alternatives. We can apply the improvement heuristics of Dannenbring [10], i.e. Rapid Access with Close Order Search (RACS) or Rapid Access with Extensive Search (RAES). We can also apply Ho and Chang's [19] gap improvement heuristic or some other form of local search.
A simpler and more interesting form of local search follows the ideas of the NEH heuristic. In the last iteration of step 3 of the NEH schedule-building process, the remaining job of the ordered job list is inserted in all possible n positions of a sequence that is n − 1 jobs long, and the best schedule among the n candidates is given as the final result. This is no more than the insertion neighborhood, which has been regarded as well suited to the PFSP (see Osman and Potts [31], Taillard [42] or Nowicki and Smutnicki [29]). Given a permutation π of jobs, the insertion neighborhood of π is obtained by considering all possible pairs of positions j, k ∈ {1, . . . , n} of π, where the job at position j is removed from π and reinserted at position k (j ≠ k). The resulting neighbor from such a move is then evaluated.
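The insertion neighborhood and the Penh-controlled improvement phase can be sketched together as follows (an illustration only; the single best-improvement pass and all function names are our assumptions, and the paper's exact search strategy may differ):

```python
import random

def makespan(p, seq):
    comp = [0] * len(p)
    for j in seq:
        comp[0] += p[0][j]
        for i in range(1, len(p)):
            comp[i] = max(comp[i], comp[i - 1]) + p[i][j]
    return comp[-1]

def insertion_local_search(p, seq):
    """One pass over the insertion neighborhood: try every (j, k) pair,
    keep the best move found."""
    n, best, best_c = len(seq), list(seq), makespan(p, seq)
    for j in range(n):                     # remove the job at position j ...
        for k in range(n):
            if k == j:
                continue
            cand = list(seq)
            cand.insert(k, cand.pop(j))    # ... and reinsert it at position k
            c = makespan(p, cand)
            if c < best_c:
                best, best_c = cand, c
    return best, best_c

def maybe_enhance(p, seq, penh, rng=random):
    if rng.random() <= penh:               # enhancement probability Penh
        return insertion_local_search(p, seq)
    return list(seq), makespan(p, seq)
```

A full pass examines O(n^2) neighbors at O(nm) each, which is why local search is applied only to a fraction Penh of the individuals.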
// Controlling the number of generations without improvement
if MinM = Min_Makespan(population) then countmak := countmak + 1;
else begin
    countmak := 0;
    MinM := Min_Makespan(population);
end;
popcount := 0;
// Restart scheme
if countmak > Gr then begin
    countmak := 0;
    population := Restart(population);
    Evaluate_individual(population);
end;
Return Min_Makespan(population);
Figure 3: Pseudocode for the proposed genetic algorithms.
Figure 4: Means plot for Penh or probability of enhancement factor (means and 95.0 percent LSD intervals of the Relative Percentage Deviation, RPD).
Figure 5: Means plot for the type of selection factor (means and 95.0 percent LSD intervals of the RPD; levels: Ranking, Tournament).
Figure 6: Means plot for Pm or probability of mutation factor (means and 95.0 percent LSD intervals of the RPD).
Figure 7: Means plot for the type of crossover operator factor (means and 95.0 percent LSD intervals of the RPD; levels: OP, OX, PMX, SB2OX, SBOX, SJ2OX, SJOX, TP).
Figure 8: Means plot of the Relative Percentage Deviation (RPD) for Taillard's benchmark and the best algorithms tested (GA_RMA, HGA_RMA, ILS, M-MMAS, PACO). Stopping criterion set at n · (n/2) · t milliseconds of CPU time with t = 90.
Table 2: Average Relative Percentage Deviation (RPD) over the optimum solution or lowest known upper bound for Taillard's instances, obtained by the methods evaluated. Stopping criterion set at n · (n/2) · t milliseconds of CPU time with t = 30, 60 and 90.