Computers & Operations Research 36 (2009) 1249–1267
www.elsevier.com/locate/cor

Minimizing makespan in permutation flow shop scheduling problems using a hybrid metaheuristic algorithm

G.I. Zobolas*, C.D. Tarantilis, G. Ioannou
Management Science Laboratory, Department of Management Science and Technology, Athens University of Economics and Business, Evelpidon 47A and Leukados 33, 11369 Athens, Greece

Available online 8 February 2008

Abstract

This paper proposes a hybrid metaheuristic for the minimization of makespan in permutation flow shop scheduling problems. The solution approach is robust, fast, and simply structured, and comprises three components: an initial population generation method based on a greedy randomized constructive heuristic, a genetic algorithm (GA) for solution evolution, and a variable neighbourhood search (VNS) to improve the population. The hybridization of a GA with VNS, combining the advantages of these two individual components, is the key innovative aspect of the approach. Computational experiments on benchmark data sets demonstrate that the proposed hybrid metaheuristic reaches high-quality solutions in short computational times. Furthermore, it requires very few user-defined parameters, rendering it applicable to real-life flow shop scheduling problems.
© 2008 Elsevier Ltd. All rights reserved.

Keywords: Production scheduling; Permutation flow shop; Variable neighbourhood search

1. Introduction

The flow shop scheduling problem (FSSP) is a well-known and complex combinatorial optimization problem with many variations. In the permutation FSSP (PFSP), all jobs must enter the machines in the same order and the goal is to find a job permutation that minimizes a specific performance measure (usually makespan or total flowtime). The makespan minimization PFSP is commonly referred to as Fm|prmu|Cmax, where m is the number of machines, prmu denotes that only permutation schedules are allowed, and Cmax denotes makespan minimization as the optimization criterion. In the general FSSP, for n jobs and m machines, there are (n!)^m different alternatives for sequencing jobs on machines, while in permutation problems the search space is reduced to n!.

For the FSSP, Johnson [1] proposed an O(n log n) algorithm, which optimally solves the F2||Cmax problem. Under the special circumstance that the middle machine is dominated by the other two, Johnson's algorithm solves the F3||Cmax problem to optimality as well. In the general case, though, Garey et al. [2] proved that F3|prmu|Cmax is strongly NP-hard. Due to the complexity of the problem, exact algorithms for the general FSSP failed to achieve high-quality solutions for problems of large size in reasonable computational time and, thus, academic research focused on heuristic methods [3–9]. However, even the NEH heuristic developed by Nawaz et al. [6], which is the most powerful construction heuristic to date [10], fails to reach solutions even within 7% of the optimum in some difficult problem

* Corresponding author. E-mail address: [email protected] (G.I. Zobolas).
doi:10.1016/j.cor.2008.01.007
instances. Thus, it became evident that new solution approaches should be followed for larger instances, and academic interest switched to artificial intelligence optimization methods known as metaheuristics; these include simulated annealing [11,12], tabu search [13,14], genetic algorithms (GAs) [15–18], ant colony optimization [19,20], particle swarm optimization [21], iterated local search (ILS) [22], and differential evolution [23].
The wealth of metaheuristics offered in the literature recently generated a new trend, i.e., the development of hybrid approaches, which combine different concepts or components of more than one metaheuristic [24]. Hybridization, when properly applied, may further enhance the effectiveness of the solution space search and may overcome inherent limitations of single metaheuristic algorithms. Therefore, new opportunities emerge, which may lead to even more powerful and flexible solution methods for combinatorial optimization problems.
This paper presents an efficient and effective hybrid metaheuristic algorithm for the PFSP, which innovatively combines four construction heuristics and two metaheuristic algorithms: the NEH heuristic of Nawaz et al. [6], the CDS heuristic of Campbell et al. [4], Palmer's [3] heuristic, Gupta's [5] heuristic, and the well-established GA [25] and variable neighbourhood search (VNS) [26] metaheuristics. GAs have proved to be effective for a variety of scheduling and combinatorial optimization problems, including flow shop [16–18], job shop [27,28] and open shop [29]. Moreover, a typical VNS algorithm performs a systematic change of neighbourhood in conjunction with a set of typical local search moves, and it has been successfully applied to scheduling problems either as part of a hybrid method or as a standalone component [30–34]. The proposed approach, referred to as NEGAVNS, is robust, fast, and simply structured, and manages to find high-quality solutions in short computational times by efficiently alternating search diversification and intensification. To our knowledge, NEGAVNS is the first to hybridize a GA with VNS for the PFSP, combining the strong characteristics of these two individual metaheuristic components. Furthermore, NEGAVNS requires a very low number of user-defined parameters, an extremely important feature, especially for potential implementation in real-life Decision Support Tools and Enterprise Resource Planning Systems.
The remainder of the paper is organized as follows: Section 2 defines and formulates the PFSP. Section 3 develops the proposed hybrid algorithm. Section 4 presents the computational results acquired and, finally, Section 5 provides conclusions and suggestions for further research. Additional experimental results are displayed in the Appendix.
2. PFSP definition and formulation
The PFSP consists of a finite set J of n jobs {J_1, J_2, ..., J_n} to be processed on a finite set M of m machines {M_1, M_2, ..., M_m}. Each job J_i consists of m operations (O_i1, O_i2, ..., O_im) and all jobs must be processed on every machine in the same sequence, given by the indexing of the machines. O_ik is the kth operation of job J_i, which has to be processed on machine M_k for an uninterrupted and fixed processing time period p_ik; no operation can be pre-empted. In our study, the goal is to find the permutation of jobs that minimizes the makespan. In the general PFSP, each machine can process only one job and each job can be processed by only one machine at a time (capacity constraints). According to the number of jobs (n) and machines (m), the dimensionality of a flow shop instance is designated as n × m; the total number of operations is, therefore, N = n × m.
Let C(k, J_i) denote the completion time of job J_i on machine k and let π = {π_1, π_2, ..., π_n} denote a permutation of jobs. The completion time for an n-job, m-machine flow shop problem is calculated as follows:

C(1, π_1) = p_{π_1,1},  (1)
C(1, π_j) = C(1, π_{j−1}) + p_{π_j,1},  j = 2, ..., n,  (2)
C(k, π_1) = C(k−1, π_1) + p_{π_1,k},  k = 2, ..., m,  (3)
C(k, π_j) = max{C(k−1, π_j), C(k, π_{j−1})} + p_{π_j,k},  j = 2, ..., n; k = 2, ..., m,  (4)
C_max(π) = C(m, π_n).  (5)

So, the PFSP with the makespan criterion is to find a permutation π* in the set of all permutations Π such that

C_max(π*) ≤ C(m, π_n)  ∀π ∈ Π.  (6)
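The completion-time recursion and the makespan criterion can be sketched in Python. This is an illustrative implementation, not code from the paper; the array layout p[i][k] (processing time of job i on machine k, 0-indexed) is our own assumption.

```python
def makespan(perm, p):
    """C_max of a job permutation via the recursion (1)-(5).

    perm: job indices in processing order (a permutation of 0..n-1)
    p:    p[i][k] = processing time of job i on machine k
    """
    n, m = len(perm), len(p[0])
    # C[j][k] = completion time of the j-th job of the permutation on machine k
    C = [[0] * m for _ in range(n)]
    for j, job in enumerate(perm):
        for k in range(m):
            ready_job = C[j][k - 1] if k > 0 else 0      # previous operation of the same job
            ready_machine = C[j - 1][k] if j > 0 else 0  # previous job on the same machine
            C[j][k] = max(ready_job, ready_machine) + p[job][k]
    return C[n - 1][m - 1]                               # C_max = C(m, pi_n)
```

For instance, with p = [[3, 2], [1, 4]], the permutation (1, 0) yields a makespan of 7 versus 9 for (0, 1), illustrating why the choice of permutation matters.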
3. The proposed hybrid metaheuristic
3.1. Overview
To effectively solve the problem of (1)–(6), it is imperative that, apart from an efficient solution scheme, a proper solution representation is selected. The reason is that the solution representation of a combinatorial optimization problem greatly influences the final outcome of any solution scheme, and should depend not only on the special characteristics of the problem itself but on the solution method as well. The PFSP representation is much simpler than that of the job shop scheduling problem [35], since a simple job permutation always represents a feasible and legal solution. Furthermore, in evolutionary-based solution approaches, this job permutation representation has the full Lamarckian property [36]; i.e., the 'merits' of the chromosomes can be inherited from one generation to another. Some alternative representations have been proposed (see, e.g., [23]), mainly to cope with the illegality that some crossover methods induce. In our work, the genetic crossover methods are chosen in such a way as to ensure the feasibility and legality of all solutions encoded with the job-based representation (see Section 3.3).
Denoting the population size as i_pop, the solution index as y (y = 1, 2, ..., i_pop), and the maximum allowed computational time as t_max, the steps of the proposed hybrid metaheuristic algorithm (referred to as NEGAVNS) are as follows:
Step 1: Initial population generation. Generate the initial population, which includes the job permutations of four well-known constructive heuristics for the PFSP: (a) NEH [6], (b) CDS [4], (c) Palmer [3] and (d) Gupta [5]. Apart from these four initial solutions, the initial population includes: (a) (i_pop ∗ a − 4) solutions provided by a greedy randomized procedure based on the NEH heuristic (referred to as GRNEH) and (b) i_pop ∗ (1 − a) solutions generated randomly. All initial solutions are denoted as Sy_INI. Note that parameter a defines the relative contribution of GRNEH to the initial population; to set this parameter, a series of tests was conducted (see Section 4.1.2).
do {
Step 2: Population improvement via a GA. A GA is applied to improve the population. The GA parents are chosen based on tournament selection [25]. The two-point crossover (version I) of Murata et al. [15] and the shift mutation operator [18] are used. Solutions Sy_INI are replaced if better solutions are found (denoted as Sy). At this step, i_pop/2 couples are created, which in turn mate to generate i_pop offspring. Thereafter, Steps 3 and 4 are executed and the loop restarts (i_pop/2 couples generating i_pop new individuals) until the termination criterion is met (t = t_max).
Step 3: Intensification phase using VNS. The intensification phase is applied to a subset of the population. Each time a solution is selected for intensification, a different neighbourhood is chosen (four neighbourhoods in total). If no better, or better but non-unique, solutions are found in these neighbourhoods, the whole procedure restarts with the addition of a shaking function, inherent to generic VNS implementations, in an attempt to escape from local optima; i.e., for each solution Sy, a solution S′y in another neighbourhood is found (through shaking), and Sy is replaced with the best solution within the neighbourhood of S′y (Sy_VNS) if f(Sy_VNS) ≤ f(Sy) and if Sy_VNS represents a unique solution, not included in the current population.
Step 4: Population renewal. Update the population to avoid entrapment in local optima and algorithm stall. Under specific conditions (see Section 3.5), 'aged' solutions are replaced by new ones, which are generated similarly to Step 1's procedure, i.e., with a% probability by the GRNEH component and with (100 − a)% probability randomly.
} while (t < t_max) (where t is the total elapsed computational time).
The mechanics of the proposed hybrid metaheuristic are schematically presented in Fig. 1.
3.2. Initial population generation
When generating initial solutions, the goal is to acquire a population of diversified, yet adequately good, solutions. In our implementation, four well-known heuristics (NEH, Gupta, CDS, and Palmer) are initially employed to generate
Fig. 1. The NEGAVNS mechanics: initialization, genetic algorithm, intensification phase, and population renewal.
solutions. A GRNEH is then employed to generate (i_pop ∗ a − 4) solutions. According to the original NEH heuristic, the job sequence is determined by initially arranging jobs in descending order of their total processing time. Then, based on this order, an increasingly larger partial sequence is generated at each step by introducing one job from the unscheduled ones into the partial sequence (until all jobs are scheduled). At each step, a new job is scheduled in all possible positions (k + 1 positions, where k is the size of the previous step's partial sequence), and after choosing the best position for this job with respect to the makespan, the new partial sequence is fixed for the remaining procedure [6]. Within GRNEH, at each step of the NEH method, instead of keeping the best partial solution as the original NEH dictates, one of the best five partial solutions is randomly chosen with equal probabilities. This way, high-quality individuals, yet different from each other, can be generated. The remaining i_pop ∗ (1 − a) solutions of the initial population are generated randomly. If y denotes the solution index, k the size of the partial sequence, and n the number of jobs of a particular problem instance, the pseudo-code of the GRNEH procedure is
y = 5 (the first four solutions are acquired by the classic constructive heuristic methods)
Rank jobs in descending order of total processing time
Do {
    Remove the first job from the ranked list and insert it as the first element of the partial sequence
    k = 1
    Do {
        Evaluate k + 1 partial schedules by choosing the first unscheduled job of the ranked list and placing it in the k + 1 possible positions
        Store the best five (5) partial schedules
        Select randomly a partial schedule out of the five with equal probabilities
        k ← k + 1
    } while (k < n)
    Store complete solution
    y ← y + 1
} while (y < i_pop ∗ a).
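The construction step above can be sketched in Python. This is our reconstruction, not the authors' code: `makespan` is a standard C_max evaluator, the candidate-list size of five follows the text, and one call builds a single solution (the outer y-loop of the pseudo-code would call it repeatedly).

```python
import random

def makespan(perm, p):
    """C_max evaluator: done[k] tracks the latest completion time on machine k."""
    done = [0] * len(p[0])
    for job in perm:
        t = 0
        for k, dur in enumerate(p[job]):
            t = max(t, done[k]) + dur
            done[k] = t
    return done[-1]

def grneh(p, rng, cand=5):
    """Greedy randomized NEH: at each insertion step, keep the best `cand`
    partial schedules and pick one at random with equal probabilities."""
    n = len(p)
    # NEH ordering: jobs in descending order of total processing time
    ranked = sorted(range(n), key=lambda i: -sum(p[i]))
    seq = [ranked[0]]
    for job in ranked[1:]:
        # evaluate the k+1 possible insertion positions for the next job
        trials = [seq[:pos] + [job] + seq[pos:] for pos in range(len(seq) + 1)]
        trials.sort(key=lambda s: makespan(s, p))
        seq = rng.choice(trials[:cand])
    return seq
```

Setting cand=1 recovers the deterministic NEH insertion step, so the randomization is confined to the choice among near-best partial schedules, which is what keeps the generated individuals both diverse and of high quality.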
3.3. Population improvement via a GA
In Step 2 of NEGAVNS, a GA is employed to improve the initial population. In our implementation, the selection of parents is based on a classic tournament selection scheme where i_pop/2 random couples of parents are selected. For the selection of each couple, a random number of solutions is selected at first, ranging from 3 to 10. Thereafter, the two fittest parents are selected and they mate to generate two new offspring. This procedure is repeated i_pop/2 times at each step (i.e., before every execution of the VNS and population renewal schemes). Chromosome recombination is performed on a two-point crossover basis. It should be mentioned that directly applying a two-point crossover operator most likely leads to illegal chromosomes, where some jobs are repeated more than once within a permutation. To overcome the chromosome illegality, a special variation of the operator is applied herein, which, according to Murata et al. [15], achieves the best performance for PFSP problems; this is the two-point crossover version I, in which genes before the first and after the second crossover points are directly copied from one parent to the offspring, while the rest of the offspring's chromosome is filled up by legitimate genes from the second parent. The typical and the modified two-point crossover operators are illustrated in Fig. 2.
Murata et al. [15] and Ruiz et al. [18] also showed that the best performing mutation operator for PFSP problems is the shift mutation operation, i.e., a simple insert move (see Fig. 3 for the insert operator). In the final steps of our GA, the offspring are evaluated (makespan calculation) and the best performing ones are included in the new population, while the worst performing solutions are deleted. According to Ruiz et al. [18], this selection mechanism performs better than the direct replacement of parents by their offspring. To avoid premature convergence, each newly found fit offspring is accepted not only if it yields a better makespan than the worst solution of the current population, but also if it represents a unique sequence, not repeated in the population. The pseudo-code of the GA (for one generation, i.e., one loop of Steps 2–4) is:
Fig. 2 contrasts the two operators on the parents P1 = (1 3 5 6 4 2 7 8) and P2 = (3 4 1 7 5 8 6 2). In the typical two-point crossover, the segment between the two cut points is copied from each parent to its offspring (Step 1) and the remaining genes are copied from the other parent (Step 2), which may repeat genes in the sequence and thus produce illegal offspring (e.g., O1 = (3 4 5 6 4 2 6 2)). In version I, the genes before the first and after the second cut point are copied from one parent (Step 1); then the genes of the other parent that already appear in the inherited segment are deleted and the remaining genes are copied, in order, to the available positions of the offspring (Step 2), yielding legal offspring such as O1 = (1 3 4 5 6 2 7 8) and O2 = (3 4 1 5 7 8 6 2).

Fig. 2. Typical and modified two-point crossover operators.
Select i_pop/2 couples of parents for new solution generation based on tournament selection
Perform the two-point crossover (version I) operator for each couple
Generate two offspring for each couple
Mutate offspring by a random shift (insert) move
Evaluate offspring
Replace the worst individual with the newly found offspring if the latter represents a unique permutation AND is fitter than the solution it replaces.
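The three GA operators can be sketched as follows; this is our illustrative code, not the authors'. The tournament pool size (3–10) and the version-I crossover follow the text, while the cut points a and b are passed explicitly here, with the caller expected to draw them at random.

```python
import random

def tournament(pop, fitness, rng):
    """Select the two fittest members of a random pool of 3-10 solutions
    (lower makespan = fitter)."""
    pool = rng.sample(pop, rng.randint(3, min(10, len(pop))))
    pool.sort(key=fitness)
    return pool[0], pool[1]

def crossover_v1(p1, p2, a, b):
    """Two-point crossover, version I: genes before cut a and after cut b are
    copied from parent 1; the middle is filled with the missing genes in the
    order in which they appear in parent 2, so the offspring stays legal."""
    fixed = set(p1[:a]) | set(p1[b:])
    middle = [g for g in p2 if g not in fixed]
    return p1[:a] + middle + p1[b:]

def shift_mutation(seq, rng):
    """Shift (insert) mutation: remove one random job, reinsert it elsewhere."""
    s = list(seq)
    job = s.pop(rng.randrange(len(s)))
    s.insert(rng.randrange(len(s) + 1), job)
    return s
```

Applying crossover_v1 to the Fig. 2 parents with cuts a = 2, b = 6 reproduces offspring O1 = (1 3 4 5 6 2 7 8), confirming that no gene is duplicated.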
3.4. Intensification phase using VNS
Ishibuchi et al. [37] underlined the importance of balancing genetic search and local search for finding high-quality PFSP solutions in short computational times. In Step 3 of our implementation, a VNS scheme is applied to explore the neighbourhood of a selected population subset, complementing the GA. The subset changes generation after generation and is composed of the n/2 best found solutions and n/2 randomly chosen ones. This intensification phase operates in two stages: initially, all solutions Sy are linked to a counter (csy) that is initialized to zero, indicating that the next generation in which the solutions are chosen for the intensification phase will be the first in which they compete and reproduce. As prevailing solutions enter the intensification phase, four different neighbourhoods are explored based on the value of counter csy. More specifically, the following actions take place (the four neighbourhood operators are schematically presented in Fig. 3):
• If csy = 0, the interchange neighbourhood of solution Sy is explored.
• If csy = 1, the insert neighbourhood of solution Sy is explored.
• If csy = 2, the 2-opt neighbourhood of solution Sy is explored.
• If csy = 3, the or-opt neighbourhood of solution Sy is explored.
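The four moves can be sketched as operations on a job permutation (our own minimal implementations; positions w < z are 0-indexed):

```python
def interchange(s, w, z):
    """Swap the jobs at positions w and z."""
    s = list(s)
    s[w], s[z] = s[z], s[w]
    return s

def insert(s, w, z):
    """Remove the job at position w and reinsert it at position z."""
    s = list(s)
    s.insert(z, s.pop(w))
    return s

def two_opt(s, w, z):
    """Reverse the order of the jobs between positions w and z (inclusive)."""
    return s[:w] + s[w:z + 1][::-1] + s[z + 1:]

def or_opt(s, w, z):
    """Relocate the pair of jobs at positions w, w+1 to after position z (z > w+1)."""
    pair, rest = s[w:w + 2], s[:w] + s[w + 2:]
    return rest[:z - 1] + pair + rest[z - 1:]
```

Each function returns a new permutation and leaves its input untouched, which suits the exhaustive neighbourhood scans described in the text.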
The main idea behind this scheme is that simpler neighbourhood structures should be examined first (i.e., interchange and insert) before proceeding to more complex ones (i.e., 2-opt and or-opt). A special module employed to measure population diversification (see Section 4.1.2) was used to measure solution distance. As expected, the interchange
Fig. 3. Neighbourhood operators: the interchange operator swaps jobs i and j; the insert (relocate) operator inserts job i after j; the 2-opt operator interchanges jobs i + 1 and j and reverses the order of execution of the jobs between them; the or-opt operator inserts jobs i and i + 1 between jobs j and j + 1.
and insert moves produced solutions 'closer', or less distant, to the initial ones in comparison to the 2-opt and or-opt moves. The neighbourhood of a solution is exhaustively searched and the best performing solution that is also unique (i.e., it represents a permutation not previously found) replaces the initial one. If no better, or a better but not unique, solution is found, the initial solution is kept and its counter, csy, is increased by one in order to explore a different neighbourhood in the forthcoming intensification phase (whenever the specific solution is reselected). If a better and unique solution is found, it is included in the population and its counter (csy) is reset to zero.
If all four neighbourhoods are examined and no better solutions emerge, the procedure restarts with the addition of a shaking function. More specifically, for csy ≥ 4, before exploring the neighbourhood of a solution, a shaking move is performed in an attempt to escape from a previously explored local sub-region. Similarly to the initial procedure:
• If csy = 4, a random insert and interchange move (shaking) leads to solution S′y and, thereafter, the interchange neighbourhood of solution S′y is explored.
• If csy = 5, a random insert and interchange move (shaking) leads to solution S′y and, thereafter, the insert neighbourhood of solution S′y is explored.
• If csy = 6, a random insert and interchange move (shaking) leads to solution S′y and, thereafter, the 2-opt neighbourhood of solution S′y is explored.
• If csy = 7, a random insert and interchange move (shaking) leads to solution S′y and, thereafter, the or-opt neighbourhood of solution S′y is explored.
• If csy = 8, no intensification is performed. The solution is considered 'aged' and will be replaced by a new solution, even one of lower quality (see Section 3.5).
The pseudo-code of the intensification phase of NEGAVNS is provided below. Note that w, w′, z, and z′ are random integers between 1 and N. The operation of performing an interchange, insert, 2-opt, or or-opt move on individual Sy at positions w and z, thus obtaining individual S′y, is represented as S′y = interchange(Sy, w, z), S′y = insert(Sy, w, z), S′y = 2-opt(Sy, w, z), and S′y = or-opt(Sy, w, z), respectively. Finally, f(Sy) is the makespan of individual Sy, B is a temporary vector storing a local optimum, and the VNS loop is conducted n(n − 1) times, where n is the number of jobs of the specific problem (the chromosome size).
For all individuals Sy, y = 1, 2, ..., n, where 1 ≤ y ≤ n/2 corresponds to the n/2 best solutions and n/2 < y ≤ n corresponds to the n/2 random solutions
    w = rnd(1, N); z = rnd(1, N); w′ = rnd(1, N); z′ = rnd(1, N), w < z, w ≠ w′, z ≠ z′
    if csy ≥ 8 then break and move to next Sy
    if 8 > csy ≥ 4 then S′y = interchange(Sy, w, z); S′y = insert(S′y, w′, z′)
    if csy < 4 then S′y ← Sy
    loop = 0; B ← S′y
    Do {
        w = rnd(1, N); z = rnd(1, N), w < z
        if (csy = 0 or csy = 4) then S′′y = interchange(B, w, z)
        if (csy = 1 or csy = 5) then S′′y = insert(B, w, z)
        if (csy = 2 or csy = 6) then S′′y = 2-opt(B, w, z)
        if (csy = 3 or csy = 7) then S′′y = or-opt(B, w, z)
        if (f(S′′y) ≤ f(B)) then B ← S′′y
        loop++
    } while (loop < n(n − 1))
    if (f(B) ≤ f(Sy) and B represents a unique sequence) then { csy = 0; Sy ← B }
    else csy ← csy + 1
EndFor.
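One pass of the intensification phase for a single solution might look like the following Python sketch. This is our own reconstruction under stated assumptions, not the authors' code: the four moves are inlined as helpers, f is any makespan evaluator, and the population-uniqueness check is omitted for brevity.

```python
import random

def _interchange(s, w, z):
    s = list(s); s[w], s[z] = s[z], s[w]; return s

def _insert(s, w, z):
    s = list(s); s.insert(z, s.pop(w)); return s

def _two_opt(s, w, z):
    return s[:w] + s[w:z + 1][::-1] + s[z + 1:]

def _or_opt(s, w, z):
    pair, rest = s[w:w + 2], s[:w] + s[w + 2:]
    return rest[:max(z - 1, 0)] + pair + rest[max(z - 1, 0):]

MOVES = [_interchange, _insert, _two_opt, _or_opt]

def vns_step(sol, cs, f, rng, iters=None):
    """One intensification pass for solution `sol` with counter csy = cs.
    Returns (possibly improved solution, updated counter)."""
    n = len(sol)
    if cs >= 8:
        return sol, cs                    # 'aged': left for population renewal
    s = list(sol)
    if cs >= 4:                           # shaking: random interchange + insert
        w, z = sorted(rng.sample(range(n), 2))
        s = _interchange(s, w, z)
        w, z = sorted(rng.sample(range(n), 2))
        s = _insert(s, w, z)
    move = MOVES[cs % 4]                  # neighbourhood chosen by the counter
    best = s
    for _ in range(iters if iters is not None else n * (n - 1)):
        w, z = sorted(rng.sample(range(n), 2))
        cand = move(best, w, z)
        if f(cand) <= f(best):
            best = cand
    if f(best) <= f(sol):                 # accept; uniqueness test omitted here
        return best, 0
    return sol, cs + 1
```

The counter thus drives both the neighbourhood choice (cs mod 4) and the decision to shake (cs ≥ 4) or retire the solution (cs ≥ 8), mirroring the bullet rules above.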
3.5. Population renewal
It was observed that NEGAVNS would stall after a number of generations, with most solutions having their csy index equal to or higher than eight. As a result, a module was developed (Step 4) to renew the population. More specifically, every solution with csy ≥ 8 is considered 'aged' and is consequently replaced by a new one that is generated similarly to Step 1's procedure, i.e., with a% probability by the GRNEH component and with (100 − a)% probability completely randomly. The only exception is the best solution found so far, which is not replaced by the population renewal module even if it belongs to the 'aged' solutions subgroup.
4. Computational results
The NEGAVNS algorithm was coded in C++ and all tests were conducted on a Pentium IV PC at 2.4 GHz with 1.0 GB of RAM. The proposed hybrid metaheuristic was tested on 120 benchmark problems developed by Taillard [38], which were downloaded from the OR bibliography [39,40]. More specifically, the algorithm was tested on 12 sets of benchmark problems (each set consisting of 10 instances) with sizes (n × m) ranging from 20 × 5 to 500 × 20, denoted as Ta001 to Ta120.
4.1. Parameter settings and sensitivity analysis
Typically, optimization algorithms are measured against two criteria: accuracy (i.e., solution quality) and speed. However, simplicity and flexibility are also essential attributes of good metaheuristics [41]. Limiting the number of parameters and using fixed parameter settings enhance both the simplicity and flexibility of an algorithm. In our implementation, a total of three parameters are required to set up the hybrid algorithm: the initial population size (i_pop), the GRNEH initial population percentage (a), and the maximum running time (t_max). The values of these parameters greatly affect the algorithm's performance and thus an extensive series of tests was conducted to investigate their impact. The particularly difficult instances Ta051–Ta060, Ta081–Ta090, and Ta101–Ta110 were mainly used for parameter sensitivity analysis. The results of these tests led to the adoption of fixed or problem size-dependent parameter values, which are presented in Table 1.
Table 1
Parameter settings for NEGAVNS

Initial population size (i_pop): 10n
GRNEH initial solutions (a): 30%
Max. running time (t_max): n × m/10 s
Fig. 4. Analysis of the initial population size effect (Ta051): best and average makespan versus computational time (s) for i_pop = 50, 100, 250, 500, and 1000.
4.1.1. Initial population size

The initial population size should depend on the problem size, and specifically on the number of jobs, as the problem complexity is equal to n!. More specifically, the larger the problem size, the more solutions are necessary to ensure coverage of as many regions of the (larger) solution space as possible. Generally, the larger the population size for a given instance, the more computational time is required for algorithmic convergence, and the better the solutions acquired after a long period of time. Thus, at this stage it is critical to balance population size and computational time in order to get the best possible solutions. A reduced version of NEGAVNS was employed at this stage, where all initial solutions were randomly generated (no GRNEH) and the VNS module was deactivated (only the GA was run), to amplify the effects of population size on the algorithm's performance. Moreover, the maximum computational time allowed was t_max/3 s (see Section 4.1.3). Initially, a very small population size (equal to n) was used, which was gradually increased in later tests. As expected, computational results revealed superior algorithmic performance as the population size was increased up to a critical value, after which performance deteriorated. Fig. 4 displays the results acquired for the Ta051 instance (n = 50, t_max = 100 s).
The best makespan for t = t_max/3 is achieved for i_pop = 5n = 250 solutions. However, as seen from both graphs of best and average makespan, the GA converges prematurely with this setting and does not manage to perform better in terms of average makespan in comparison to the case where i_pop = 10n = 500. On the other hand, using i_pop = 20n = 1000 results in both higher average and best solution makespan in the limited time frame allowed. The same tests were conducted for other instances (Ta052–Ta060, Ta081–Ta090, Ta101–Ta110), where the aforementioned findings were validated. Table 2 demonstrates the relative performance of other initial population sizes in comparison to 10n in all 30 instances (a negative value indicates better performance).
Table 2
Relative performance of i_pop values in comparison to 10n
Fig. 5. Parameter a’s effect on initial solution quality and initial population diversity (Ta051).
It should be mentioned that increasing the available computational time should lead to an increase of the population size, so as to avoid premature convergence; conversely, decreasing the available computational time should lead to a decrease of the population size, to speed up convergence within the imposed time limit.
4.1.2. Percentage of GRNEH initial solutions

During the initial population generation, it is critical to balance solution quality and solution diversification. The higher the quality of the initial solutions and the more diversified these solutions are, the better the NEGAVNS performance. In order to determine the best blend of random and GRNEH-produced solutions, a module measuring solution diversification was developed: for each possible pair of solutions, the relative distance of each job was measured. For example, if solution A had job five in the second position and solution B had job five in the fifth position, then their distance concerning job five would be three. By adding all distances for all jobs between two solutions, a metric of solution (A to B) distance can be calculated. Then, by adding the distances of all possible pairs of solutions (i_pop ∗ [i_pop − 1]/2 pairs), a measure of total population diversification can be defined. A series of tests for the Ta051 instance was conducted to determine the optimal value of a. The results acquired are demonstrated in Fig. 5 (parameter a's effect on the average makespan of the initial population and its diversity).
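The distance metric just described can be sketched directly (our illustrative code; jobs are identified by their index and positions are 0-based):

```python
from itertools import combinations

def pair_distance(a, b):
    """Sum over all jobs of the absolute difference between the job's
    position in solution a and its position in solution b."""
    pos_b = {job: i for i, job in enumerate(b)}
    return sum(abs(i - pos_b[job]) for i, job in enumerate(a))

def population_diversity(pop):
    """Total diversification: pairwise distances over all
    i_pop * (i_pop - 1) / 2 pairs of solutions."""
    return sum(pair_distance(a, b) for a, b in combinations(pop, 2))
```

For example, a population of identical permutations has diversity 0, while the pair [0, 1, 2] versus [2, 1, 0] contributes a distance of 4.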
As expected, the higher the percentage of GRNEH-generated solutions, the better the average quality of solutions but the lower the population diversification. An almost linear relation between a and average makespan was observed. Another series of tests was conducted to determine parameter a's effect on algorithm performance. Table 3 presents the results acquired for various parameter a values. The lower the population diversity (high a value), the faster the algorithm converges, but to inferior solutions. On the other hand, generating solutions randomly (low a value and high diversity) generally increases solution quality at the expense of computational time, as significantly more sub-regions of the solution space are explored. The value a = 30% was finally chosen as the best compromise between average solution quality and population diversification. More specifically, the value a = 30% resulted in the best solution quality for instances Ta051 and Ta081 in reasonable computational time. A 0.94% average makespan reduction was achieved with less than 0.36% decrease in the population diversification index in comparison to the a = 0 case (where all solutions are randomly chosen). It should be mentioned that similar results were acquired for other benchmark instances and that the trend of the graphs was generally independent of the population size.
Table 3. Effect of parameter a on NEGAVNS performance
^a NEGAVNS′ does not include the four heuristic solutions in the initial population.
Fig. 6. The contribution of each VNS move to finding a new best solution.
4.1.3. Maximum running time

Extensive testing revealed that using the 'number of iterations with no recorded improvement' for convergence control was not suitable for NEGAVNS, as in many cases the algorithm managed to escape from a local optimum after a quite large number of iterations. As a result, the 'maximum running time' convergence control was adopted. Generally, the larger the number of jobs and machines, the more difficult the instance, and the computational time needed to achieve high-quality solutions increases. Thus, the maximum allowed computational time should depend on each instance's dimensionality and was given the value t_max = n × m/10 s, which results in practical computational times for all Taillard instances.
4.2. Performance analysis of individual components
In this section, the effect of specific NEGAVNS components is analysed. Firstly, the contribution of the four construction heuristics (initial solution generation) is evaluated, followed by an analysis of the performance increase resulting from the 2-opt and or-opt moves. Finally, the strong wrapping of GA and VNS is tested by a series of tests with three different levels of GA and VNS interconnection.

Table 5. Computational results with (NEGAVNS) and without (NEGAVNS′′) the 2-opt and or-opt moves
4.2.1. The contribution of construction heuristics

As mentioned, the initial population is generated in three stages: (a) the application of GRNEH, which generates i_pop × a − 4 solutions, (b) the generation of i_pop × (1 − a) random solutions, and (c) the generation of four solutions by four well-known construction heuristics. It would be interesting to see whether the inclusion of the four heuristically found solutions makes any difference to NEGAVNS performance, especially when initial solution generation is also performed by the highly efficient greedy procedure. Table 4 summarizes the results obtained for Ta051, Ta081, and Ta101 with (NEGAVNS) and without (NEGAVNS′) the four heuristic solutions in the initial population. As seen, computational results are almost identical (a performance improvement is recorded in three repetitions of Ta081 after the inclusion of the four heuristic solutions), but the benefits are much more significant in terms of the computational time required to achieve these solutions when the heuristics are used.
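The three-stage split of the initial population can be sketched as follows (the function name and the rounding of fractional counts are our assumptions; the paper does not specify them):

```python
def initial_population_counts(i_pop, a):
    """Three-stage split of the initial population (illustrative sketch):
    (a) GRNEH solutions, (b) random permutations, and (c) four solutions
    from well-known construction heuristics. Assumes i_pop * a >= 4."""
    n_grneh = round(i_pop * a) - 4       # greedy randomized NEH solutions
    n_random = round(i_pop * (1 - a))    # purely random permutations
    n_heuristic = 4                      # construction-heuristic solutions
    return n_grneh, n_random, n_heuristic
```

With the values chosen in the sensitivity analysis (i_pop = 500, a = 30%), this yields 146 GRNEH solutions, 350 random solutions, and 4 heuristic solutions, totalling 500.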
4.2.2. The contribution of additional neighbourhood moves

In order to demonstrate the effect of the 2-opt and or-opt moves in the specific scheduling problem, two groups of experimental tests were designed: (a) the frequency of finding a new best solution by applying these moves
Table 8. Comparison of NEGAVNS with other metaheuristic algorithms for the PFSP

Problem group   NEH     SAOP    Spirit  ILS     M-MMAS  PACO    HGA_RMA  PSOVNS   NEGAVNS
Average         3.332   2.263   4.516   0.983   0.792   0.668   0.448    0.472^a  0.466

^a The authors do not provide results for the 500 × 20 instance group.
Table 9. Computational efficiency of various computer platforms

Algorithm   Processor               Mflop/s   Scaling factor
NEGAVNS     Pentium IV at 2.4 GHz   1128      1.00
HGA_RMA     Pentium IV at 2.6 GHz   1222      1.08
PSOVNS      Pentium IV at 2.8 GHz   1317      1.16
ILS         Pentium IV at 2.6 GHz   1222      1.08
SAOP        Pentium IV at 2.6 GHz   1222      1.08
Spirit      Pentium IV at 2.6 GHz   1222      1.08
M-MMAS      Pentium IV at 2.6 GHz   1222      1.08
PACO        Pentium IV at 2.6 GHz   1222      1.08
NEGAVNS: the hybrid metaheuristic proposed in this paper; HGA_RMA: the hybrid genetic algorithm of Ruiz et al. [18]; PSOVNS: the particle swarm optimization algorithm of Tasgetiren et al. [34]; ILS: the iterated local search of Stützle [22]; SAOP: the simulated annealing algorithm of Osman and Potts [11]; Spirit: the tabu search algorithm of Widmer and Hertz [42]; M-MMAS: the first ant colony algorithm of Rajendran and Ziegler [20]; PACO: the second ant colony algorithm of Rajendran and Ziegler [20]; scaling factor: the relative performance of each processor in comparison to the Pentium IV at 2.4 GHz.
Table 10. Comparison of maximum allowed running times
Problem: the instance tested; PS: the problem size, (number of jobs) × (number of machines); BKS: the best known solution of the instance; LB: the lower bound of the instance; NEH: the result of the NEH heuristic algorithm [6]; BS: the best makespan achieved by NEGAVNS over 10 runs; AS: the average makespan achieved by NEGAVNS for the same instance (in 10 runs); REmin: the deviation of the best makespan achieved from the best known makespan; REavg: the deviation of the average makespan achieved from the best known makespan; tmax: the maximum computational time allowed in seconds (problem size dependent); t: the minimum computational time required to achieve the best solution (in seconds).
Table 13. Computational results on 50-job Taillard instances
Problem PS BKS LB NEH BS AS REmin (%) REavg (%) tmax t
in comparison to other moves, and (b) the performance comparison of NEGAVNS with and without these two moves.
In the first set of tests, one representative of each problem group was run 10 times. Each time a new best solution was found by the algorithm, the move resulting in this improvement was recorded. Fig. 6 demonstrates the average aggregate results of all tests conducted. As seen, the most effective move was the 'insert' move, followed by 'or-opt', '2-opt', and 'interchange', respectively.
In the second set of tests, the 2-opt and or-opt moves were removed from NEGAVNS and the algorithm was rerun for one representative of the difficult problem groups. Table 5 summarizes the results acquired in terms of the average deviation from the best known solutions after 10 runs and the average computational time required to achieve these results (the algorithm without the 2-opt and or-opt moves is denoted as NEGAVNS′′). As seen, the addition of the 2-opt and or-opt moves is beneficial for the performance of NEGAVNS, especially in terms of the computational time required to achieve high-quality solutions.
4.2.3. Performance analysis of GA and VNS components

In our implementation, the VNS procedure is strongly wrapped within the GA framework; i.e., individual neighbourhoods are searched before a new genetic generation (instead of searching all neighbourhoods before shifting generations). Our decision was based on an additional set of tests where different levels of GA–VNS interconnection
Table 14. Computational results on 100-job Taillard instances
Problem PS BKS LB NEH BS AS REmin (%) REavg (%) tmax t
were evaluated. More specifically, three major and difficult instances were used to evaluate three cases: (a) the complete neighbourhood search before proceeding to the next generation (denoted as NEGA−VNS), (b) the individual neighbourhood search as in the final NEGAVNS implementation, and (c) the VNS procedure without a GA function (denoted as NEVNS). Table 6 summarizes the results acquired in terms of the average deviation from the best known solutions after 10 runs and the average computational time required to achieve these results. As seen, the standard NEGAVNS performs best, followed by the NEGA−VNS and NEVNS implementations.
4.3. Computational results on the Taillard benchmark suite
In this section, a presentation and thorough analysis of the results acquired on the Taillard [38] benchmark suite are provided (the detailed results acquired for each of the 120 Taillard instances are presented in Tables 12–15 in the Appendix). Solution quality is mainly measured in terms of the relative increase in makespan with respect to the best known solutions. Each instance was run 10 consecutive times, and the deviations of both the best and average makespan achieved are calculated (REmin and REavg, respectively). It should be mentioned that the best known solutions for difficult instances are usually found with branch and bound techniques or other exact methods on powerful workstations run for extended time periods, and are thus not directly comparable to metaheuristic methods designed or intended to run on single-processor PCs and provide high-quality solutions in short computational times.
Table 15. Computational results on 200-job and 500-job Taillard instances
Problem PS BKS LB NEH BS AS REmin (%) REavg (%) tmax t
In Table 7, the results acquired for all groups of instances are summarized. More specifically, for each of the 12 test instance groups of different sizes, three performance measures are calculated:
\Delta_{\min} = \sum_{r=1}^{10} \left( \frac{(BS_r - BKS_r) \times 100}{BKS_r} \right) \Big/ 10, \qquad (7)

\Delta_{\max} = \sum_{r=1}^{10} \left( \frac{(WS_r - BKS_r) \times 100}{BKS_r} \right) \Big/ 10, \qquad (8)

\Delta_{\mathrm{avg}} = \sum_{r=1}^{10} \left( \frac{(AS_r - BKS_r) \times 100}{BKS_r} \right) \Big/ 10, \qquad (9)
where BS_r denotes the best makespan achieved by NEGAVNS for instance r (the number of instances in each test group is 10), WS_r denotes the worst makespan achieved by NEGAVNS for instance r, AS_r the average makespan achieved by NEGAVNS for instance r, and BKS_r is the best known solution for instance r.
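Equations (7)–(9) amount to mean percentage deviations from the best known solutions over the 10 instances of a group; a minimal sketch (the function name is ours):

```python
def deviations(best, worst, avg, bks):
    """Delta_min, Delta_max, Delta_avg of Eqs. (7)-(9): mean percentage
    deviation of the best/worst/average makespans from the best known
    solutions, taken over the instances of one test group."""
    n = len(bks)
    d_min = sum((b - k) * 100 / k for b, k in zip(best, bks)) / n
    d_max = sum((w - k) * 100 / k for w, k in zip(worst, bks)) / n
    d_avg = sum((a - k) * 100 / k for a, k in zip(avg, bks)) / n
    return d_min, d_max, d_avg
```

For example, a single instance with best known makespan 100 and best/worst/average makespans of 101/103/102 yields deviations of 1%, 3%, and 2%.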
From Table 7, it is apparent that NEGAVNS was able to find the best solutions for all the instances with n = 20, and for the 50 × 5 and 100 × 5 instances. Most importantly, NEGAVNS produced consistent results in the multiple runs per instance, as Δmax − Δmin was on average 0.09% for all instance groups, with a highest value of 0.20% on the Ta051–Ta060 group. In addition, Table 7 also allows the comparison of the NEGAVNS average deviation with the one (ΔNEH) produced by the NEH heuristic.
4.4. Comparison with other metaheuristic algorithms
In this section, a comparison of NEGAVNS with state-of-the-art metaheuristics for the PFSP with makespan criterion is performed. More specifically, NEGAVNS is compared to the simulated annealing algorithm (SAOP) of Osman and Potts [11], the tabu search algorithm (Spirit) of Widmer and Hertz [42], the ILS of Stützle [22], the two ant colony algorithms of Rajendran and Ziegler [20], named M-MMAS and PACO, the HGA_RMA of Ruiz et al. [18], and the particle swarm optimization algorithm (PSOVNS) of Tasgetiren et al. [34]. It should be mentioned that Ruiz et al. [18] recoded all implementations (except PSOVNS and NEGAVNS) to compare them more faithfully. The comparison results are provided in Table 8 (average deviation from best known solutions after 10 runs for each instance).
As seen in Table 8, HGA_RMA, PSOVNS, and NEGAVNS perform very well on the Taillard instances, with mostly minor differences between them. These three algorithms manage to generate much better results than their rivals, with only the PACO algorithm of Rajendran and Ziegler [20] coming close. More specifically, HGA_RMA has the lowest average deviation from the best known solutions and produces the best results in the 50 × 20, 100 × 20, 200 × 10, and 500 × 20 instances. PSOVNS demonstrates the best results in the 50 × 10 group, while NEGAVNS performs better than all its rivals in the 20 × 5, 20 × 10, 20 × 20, 100 × 10, and 200 × 20 instances. PSOVNS and NEGAVNS find the optimum solutions of the 100 × 5 instances in all iterations, and all three algorithms find the optimum solutions in all iterations of the 50 × 5 instances.
Computational time comparison when different computer platforms are used is rather difficult, as processor speed in modern PCs is not the sole indicator of computational power (e.g., dual-core processors, CPU architecture, etc.). However, Dongarra [43] has conducted thorough research concerning the computational power of various computer configurations ranging from home PCs to modern supercomputers. Although computational times are also affected by other parameters, such as the programming skills of the developer, the compiler, the programming language, and the precision used during the execution of the runs, a scaling factor is calculated (see Table 9) to conduct a fairer comparison. Maximum computational times are then compared in Table 10, with and without the scaling factor. It should be mentioned that average computational times for individual instances are not reported by Ruiz et al. [18]. Tasgetiren et al. [34], on the other hand, report average computational times for groups of instances, which are compared to NEGAVNS in Table 11.
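One way such a scaling factor could be applied is to multiply a reported running time by the ratio of the platform's Mflop/s figure to that of the reference machine (the Pentium IV at 2.4 GHz in Table 9). The sketch below is our illustration of this normalization, not code from the paper:

```python
def scaled_time(raw_seconds, mflops, reference_mflops=1128):
    """Normalize a running time reported on another platform to the
    reference machine (1128 Mflop/s): a faster platform's seconds count
    for proportionally more reference-machine seconds."""
    return raw_seconds * mflops / reference_mflops
```

For instance, 100 s on a 1222 Mflop/s machine corresponds to roughly 108.3 s of equivalent work on the 1128 Mflop/s reference platform.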
5. Conclusions
This paper presented a hybrid method for the PFSP. The algorithm comprises three main components: an initial population generation scheme based on construction heuristics, a GA, and VNS. To our knowledge, this is the first hybrid implementation of GAs and VNS for the PFSP with the makespan criterion. Throughout the algorithm execution, all solutions are encoded according to the job-based representation (job permutation). NEGAVNS was tested on a set of 120 benchmark instances, and the results demonstrate the efficiency and effectiveness of the method, which provides competitive results in short computational times. More importantly, the proposed scheme has a simple structure and few user-defined tuning parameters, which are given fixed or problem-size-dependent values after a thorough sensitivity analysis procedure.
It is suggested that a more sophisticated diversification scheme be used in future research. More specifically, worse offspring could be accepted to enter the population and replace fitter parents under some conditions. This decision could be based on their 'distance' from current solutions, in an attempt to further diversify the population. Moreover, the incorporation of memory in the intensification phase (to form a hybrid VNS–tabu search algorithm) would decrease the number of unnecessary VNS moves, thus saving computational time in difficult instances and increasing the chances of obtaining higher-quality solutions.
Acknowledgements
This work was supported by the General Secretariat for Research and Technology under contract GSRT-05-�AB-71. The authors would also like to thank the anonymous referee for his/her constructive comments and contribution to the completion of this work.
6. Appendix
Computational results on 20-, 50-, 100-, 200-, and 500-job Taillard instances are given in Tables 12–15, respectively.
References
[1] Johnson SM. Optimal two- and three-stage production schedules with setup times included. Naval Research Logistics Quarterly 1954;1:61–8.
[2] Garey MR, Johnson DS, Sethi R. The complexity of flowshop and jobshop scheduling. Mathematics of Operations Research 1976;1:117–29.
[3] Palmer DS. Sequencing jobs through a multistage process in the minimum total time: a quick method of obtaining a near optimum. Operational Research Quarterly 1965;16:101–7.
[4] Campbell HG, Dudek RA, Smith ML. A heuristic algorithm for the n-job, m-machine problem. Management Science 1970;16:B630–7.
[5] Gupta JND. A functional heuristic algorithm for the flow-shop scheduling problem. Operational Research Quarterly 1971;22:39–47.
[6] Nawaz M, Enscore Jr E, Ham I. A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem. Omega—International Journal of Management Science 1983;11:91–5.
[7] Rajendran C. Heuristic for scheduling in flowshop with multiple objectives. European Journal of Operational Research 1995;82:540–55.
[8] Framinan JM, Ruiz-Usano R, Leisten R. Sequencing CONWIP flow-shops: analysis and heuristic. International Journal of Production Research 2001;39:2735–49.
[9] Lee GC, Kim YD, Choi SW. Bottleneck-focused scheduling for a hybrid flow-shop. International Journal of Production Research 2004;42:165–81.
[10] Taillard E. Some efficient heuristic methods for the flow shop sequencing problem. European Journal of Operational Research 1990;47:65–74.
[11] Osman I, Potts C. Simulated annealing for permutation flow shop scheduling. Omega—International Journal of Management Science 1989;17:551–7.
[12] Ogbu FA, Smith DK. The application of the simulated annealing algorithm to the solution of the n/m/Cmax flowshop problem. Computers and Operations Research 1990;17:243–53.
[13] Nowicki E, Smutnicki C. A fast tabu search algorithm for the permutation flowshop problem. European Journal of Operational Research 1996;91:160–75.
[14] Grabowski J, Wodecki M. A very fast tabu search algorithm for the permutation flowshop problem with makespan criterion. Computers and Operations Research 2004;31:1891–909.
[15] Murata T, Ishibuchi H, Tanaka H. Genetic algorithms for flowshop scheduling problems. Computers and Industrial Engineering 1996;30:1061–71.
[16] Reeves C, Yamada T. Genetic algorithms, path relinking and the flowshop sequencing problem. Evolutionary Computation 1998;6:45–60.
[17] Etiler O, Toklu B, Atak M, Wilson J. A genetic algorithm for flow shop scheduling problems. Journal of the Operational Research Society 2004;55:830–5.
[18] Ruiz R, Maroto C, Alcaraz J. Two new robust genetic algorithms for the flowshop scheduling problem. Omega—International Journal of Management Science 2006;34:461–76.
[19] Ying KC, Liao CJ. An ant colony system for permutation flow-shop sequencing. Computers and Operations Research 2004;31:791–801.
[20] Rajendran C, Ziegler H. Ant-colony algorithms for permutation flowshop scheduling to minimize makespan/total flowtime of jobs. European Journal of Operational Research 2004;155:426–38.
[21] Liu B, Wang L, Jin Y-H. An effective PSO-based memetic algorithm for flow shop scheduling. IEEE Transactions on Systems, Man, and Cybernetics, Part B 2007;37:18–27.
[22] Stützle T. Applying iterated local search to the permutation flowshop problem. Technical Report AIDA-98-04, TU Darmstadt, FG Intellektik; 1998.
[23] Onwubolu GC, Davendra D. Scheduling flow-shops using differential evolution algorithm. European Journal of Operational Research 2006;171:674–92.
[24] Blum C, Roli A. Metaheuristics in combinatorial optimization: overview and conceptual comparison. ACM Computing Surveys 2003;35:268–308.
[25] Holland JH. Genetic algorithms. Scientific American; 1992.
[26] Hansen P, Mladenovic N. Variable neighbourhood search: principles and applications. European Journal of Operational Research 2001;130:449–67.
[27] Della Croce F, Tadei R, Volta G. A genetic algorithm for the job shop problem. Computers and Operations Research 1995;22:15–24.
[28] Gonçalves JF, Mendes JJDM, Resende MGC. A hybrid genetic algorithm for the job shop scheduling problem. European Journal of Operational Research 2005;167:77–95.
[29] Liaw C-F. A hybrid genetic algorithm for the open shop scheduling problem. European Journal of Operational Research 2000;124:28–42.
[30] Hindi KS, Fleszar K, Charalambous C. An effective heuristic for the CLSP with set-up times. Journal of the Operational Research Society 2003;54:490–8.
[31] Fleszar K, Hindi KS. Solving the resource-constrained project scheduling problem by a variable neighbourhood search. European Journal of Operational Research 2004;155:402–13.
[32] Gagné C, Gravel M, Price WL. Using metaheuristic compromise programming for the solution of multiple-objective scheduling problems. Journal of the Operational Research Society 2005;56:687–98.
[33] Anghinolfi D, Paolucci M. Parallel machine total tardiness scheduling with a new hybrid metaheuristic approach. Computers and Operations Research 2007;34:3471–90.
[34] Tasgetiren MF, Liang YC, Sevkli M, Gencyilmaz G. A particle swarm optimization algorithm for makespan and total flowtime minimization in the permutation flowshop sequencing problem. European Journal of Operational Research 2007;177:1930–47.
[35] Tarantilis CD, Kiranoudis CT. A list-based threshold accepting method for job shop scheduling problems. International Journal of Production Economics 2002;77(2):159–71.
[36] Cheng R, Gen M, Tsujimura Y. A tutorial survey of job-shop scheduling problems using genetic algorithms, part I: representation. Computers and Industrial Engineering 1996;30:983–97.
[37] Ishibuchi H, Yoshida T, Murata T. Balance between genetic search and local search in memetic algorithms for multiobjective permutation flowshop scheduling. IEEE Transactions on Evolutionary Computation 2003;7:204–23.
[38] Taillard E. Benchmarks for basic scheduling problems. European Journal of Operational Research 1993;64:278–85.
[39] Beasley JE. 〈http://ina2.eivd.ch/Collaborateurs/etd/problemes.dir/ordonnancement.dir/flowshop.dir/best_lb_up.txt〉, accessed 10 December 2006.
[40] Beasley JE. 〈http://people.brunel.ac.uk/∼mastjjb/jeb/orlib/jobshopinfo.html〉, accessed 10 December 2006.
[41] Cordeau J-F, Gendreau M, Laporte G, Potvin J-Y, Semet F. A guide to vehicle routing heuristics. Journal of the Operational Research Society 2002;53:512–22.
[42] Widmer M, Hertz A. A new heuristic method for the flow shop sequencing problem. European Journal of Operational Research 1989;41:186–93.
[43] Dongarra JJ. Performance of various computers using standard linear equations software. Report CS-89-85, Computer Science Department,