
    An Additive Algorithm for Solving Linear Programs with Zero-One Variables

Author(s): Egon Balas, Fred Glover, Stanley Zionts
Source: Operations Research, Vol. 13, No. 4 (Jul.-Aug., 1965), pp. 517-549
Published by: INFORMS
Stable URL: http://www.jstor.org/stable/167850


AN ADDITIVE ALGORITHM FOR SOLVING LINEAR PROGRAMS WITH ZERO-ONE VARIABLES†

Egon Balas

Centre of Mathematical Statistics, Rumanian Academy, Bucharest

(Received March 2, 1964)

An algorithm is proposed for solving linear programs with variables constrained to take only one of the values 0 or 1. It starts by setting all the n variables equal to 0, and consists of a systematic procedure of successively assigning to certain variables the value 1, in such a way that after trying a (small) part of all the 2^n possible combinations, one obtains either an optimal solution, or evidence of the fact that no feasible solution exists. The only operations required under the algorithm are additions and subtractions; thus round-off errors are excluded. Problems involving up to 15 variables can be solved with this algorithm by hand in not more than 3-4 hours. An extension of the algorithm to integer linear programming and to nonlinear programming is available, but not dealt with in this article.

IT IS well known that important classes of economic (and not only economic) problems find their mathematical models in linear programs with integer variables. Prominent among these problems are those that correspond to linear programs with variables taking only one of the values 0 or 1. A rapidly growing literature on the subject describes many practical instances of such problems in a large variety of fields.‡

At present, several methods are available for solving linear programs of the type discussed here.§ Best known among them are R. E. GOMORY's algorithms[13,14] for solving linear programs with integer variables. They use the dual simplex method and impose the integer conditions by adding automatically generated new constraints to the original constraint set. These are satisfied by any integer solution to the latter, but not by the solution reached at the stage of their introduction. The cutting-plane approach has also been used by E. M. L. BEALE[17] and R. E. Gomory[14] to develop algorithms for solving the mixed case, when some but not all of the variables are bound to be integers. The procedures of references 6 and 24 also belong to this family.

Another type of algorithm for integer (and mixed-integer) linear programs, developed by A. H. LAND AND A. G. DOIG,[18] also starts with a noninteger optimal solution and then finds the optimal integer (or mixed-integer) solution through systematic parallel shifts of the objective-function hyperplane. The methods of references 21, 22, 25, and 26 also come under this heading.

A different approach to the problem was initiated by R. FORTET[29] on the lines of Boolean algebra, and continued by P. CAMION[30] with the introduction of Galois fields. On these lines, P. L. IVANESCU[23] developed an algorithm for solving discrete polynomial programs.

The algorithm proposed in this paper represents a combinatorial approach to the problem of solving discrete-variable linear programs in general, and linear programs with zero-one variables in particular. As an abbreviated enumeration procedure, it is kindred in conception to combinatorial methods developed in related areas (see, for instance, references 33 and 34). This algorithm is first of all a direct method for solving linear programs with zero-one variables, and for this particular type of problem it seems to work very efficiently. It has also been extended[32] to linear programs with integer variables, and, as a method of approximation, to nonlinear programs of a more general type than those usually dealt with.

† Paper presented at the International Symposium on Mathematical Programming, July 1964, London.
‡ See references 1-4; reference 5 (pp. 650-656, 695-700); reference 6; reference 7 (pp. 194-202); reference 8 (pp. 535-550); references 9-11.
§ See references 12-27; reference 5 (pp. 700-712); reference 7 (pp. 160-194); reference 8 (pp. 514-535); reference 28 (pp. 190-205).

BASIC IDEAS AND OUTLINE OF THE ADDITIVE ALGORITHM

THE GENERAL form of a linear program with zero-one variables may be stated as follows: Find x' minimizing (maximizing)

    z' = c'x',    (1')

subject to

    A'x' ≥ b',    (2')

    x_j' = 0 or 1,    (j ∈ N)    (3')


where x' = (x_j') is an n-component column-vector, c' = (c_j') is a given n-component row-vector, A' = (a_ij') is a given q×n matrix and b' = (b_i') is a given q-component column-vector, while {1, ..., q} = Q, {1, ..., n} = N.

However, we wish to consider the problem in a slightly different form, namely with all the constraints being inequalities of the same form ≤, and all the coefficients of the objective function (to be minimized) being nonnegative. Any problem of the type (1'), (2'), (3') can be brought to this form by the following operations:

(a) Replacing all equations by two inequalities.
(b) Multiplying by -1 all inequalities of the form ≥.
(c) Setting

    x_j = x_j'        for c_j' > 0 when minimizing, or c_j' < 0 when maximizing;
    x_j = 1 - x_j'    for c_j' < 0 when minimizing, or c_j' > 0 when maximizing.

Following this and the introduction of an m-component nonnegative slack vector y, the problem may be restated thus: Find x such that

    z = cx = min,    (1)

subject to

    Ax + y = b,    (2)
    x_j = 0 or 1,    (j ∈ N)    (3)
    y ≥ 0,    (4)

where c ≥ 0 and where x, c, A, and b are to be obtained from x', c', A', and b' through the above-described transformation. The dimension of x and c remains n. Let b be m-dimensional, with {1, ..., m} = M ⊇ Q. The problem (1), (2), (3), and (4) will be labeled P.

Let a_j stand for the jth column of A.

An (n+m)-dimensional vector u = (x, y) will be called a solution if it satisfies (2) and (3); a feasible solution if it satisfies (2), (3), and (4); and an optimal (feasible) solution if it satisfies (1), (2), (3), and (4).
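Operations (a), (b), and (c) are mechanical, so the reduction to the form P is easy to automate. The Python sketch below is ours, not part of the paper; the function name and data layout are assumptions, and only the minimizing case of rule (c) is handled.

```python
import numpy as np

def to_standard_form(c, A, b, senses):
    """Reduce  min c'x',  A'x' {<=, >=, =} b',  x'_j in {0, 1}  to problem P:
    min c.x, A.x + y = b, y >= 0, x_j in {0, 1}, with every cost nonnegative.
    Returns (c, A, b, complemented, offset); the original objective value equals
    c.x + offset, and x_j = 1 - x'_j exactly for the indices in `complemented`."""
    rows, rhs = [], []
    for a_i, b_i, s in zip(A, b, senses):
        a_i = np.asarray(a_i, dtype=float)
        if s == '<=':
            rows.append(a_i); rhs.append(float(b_i))
        elif s == '>=':                     # (b) multiply >= inequalities by -1
            rows.append(-a_i); rhs.append(-float(b_i))
        elif s == '=':                      # (a) replace an equation by two inequalities
            rows.append(a_i); rhs.append(float(b_i))
            rows.append(-a_i); rhs.append(-float(b_i))
    A_new, b_new, c_new = np.array(rows), np.array(rhs), np.asarray(c, dtype=float)
    complemented, offset = [], 0.0
    for j, c_j in enumerate(c_new):
        if c_j < 0:                         # (c) substitute x_j = 1 - x'_j when c'_j < 0
            complemented.append(j)
            offset += c_j                   # constant term produced by the substitution
            b_new -= A_new[:, j]            # a_ij * (1 - x_j): a_ij moves to the right-hand side
            A_new[:, j] = -A_new[:, j]
            c_new[j] = -c_j
    return c_new, A_new, b_new, complemented, offset
```

Applied to the data of Example 1 in the final section, this reproduces the restated problem given there, with c = (5, 7, 10, 3, 1) and b = (-2, 0, -1).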

Let us denote by P_s the linear program defined by (1), (2), and (4), and the constraints

    x_j ≥ 0,    (j ∈ N)    (3a)
    x_j = 1,    (j ∈ J_s)    (3b_s)

where J_s is a subset of N. Let J_0 = ∅, and thus P_0 be the problem defined by (1), (2), (3a), and (4).

The fundamental idea underlying our algorithm runs on the following lines. We start with the ordinary linear program P_0 with u^0 = (x^0, y^0) = (0, b), which is obviously a dual-feasible solution to P_0 (because c ≥ 0). The corresponding basis consists of the unit matrix I(m) = (e_i) (i = 1, ..., m), e_i being the ith unit vector. For some i such that y_i^0 < 0 we choose, accord-


ing to a certain rule, a vector a_{j_1} such that a_{i,j_1} [...]


If a nonnegative vector u^s = (x^s, y^s) is obtained, it is an optimal (feasible) solution to P_s. Such a solution may or may not be optimal for P, but it is always a feasible solution to P. The procedure is then started again from a solution u^p (p [...]


[Figure 1: the solution tree for the first numerical example of the final section; only the branches drawn in thick lines are followed by the algorithm.]

through a set of rules determining at each iteration (a) a subset of variables that are candidates for being assigned the value 1; (b) the variable to be chosen among the candidates. At certain stages of the procedure it becomes clear that either an optimal solution has been obtained, or there is no optimal solution with value 1 for all the variables that had been assigned this value. The procedure is then stopped and started again from a previous stage. In other words, the rules of the algorithm identify those branches of the solution tree which may be abandoned because they cannot lead to a feasible solution better than the one already obtained. Thus, in Fig. 1, which illustrates our first numerical example presented in the final section, only the thick lines are to be followed and the corresponding solutions to be tested. This means that, instead of all 2^5 = 32 solutions, only the following 3 had to be tried:

    u^0 with x_j^0 = 0,    (j = 1, ..., 5)
    u^1 with x_j^1 = 1 (j = 3),    x_j^1 = 0 (j = 1, 2, 4, 5)
    u^2 with x_j^2 = 1 (j = 2, 3),    x_j^2 = 0 (j = 1, 4, 5)

and this latter solution has been found to be optimal. The 'stop signals'


of the algorithm (represented by the circles at the end of the thick lines) make sure that no feasible solution 'better' than the one obtained exists beyond them.

Of course, under such circumstances, the efficiency of the algorithm depends largely on the efficiency of these 'stop signals', i.e., on the number of branches that need not be followed. As will be shown later in greater detail, in most cases the algorithm succeeds in reducing the subset of solutions to be tested to a relatively small fraction of the complete set.
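To make the tree search concrete, here is a compact illustrative sketch (ours, not the paper's tableau-based procedure) of implicit enumeration on problem P. It fixes one variable at a time to the value 1, keeps the best objective value found so far as a ceiling, and abandons a branch when no sufficiently cheap variable can repair a violated constraint, or when even setting every remaining candidate to 1 cannot restore feasibility. Its cancelling tests are deliberately simpler, and weaker, than the steps defined later in the paper, but like the additive algorithm it needs only additions, subtractions, and comparisons.

```python
import numpy as np

def implicit_enumeration(c, A, b):
    """Minimize c.x subject to A.x <= b, x_j in {0, 1}, assuming c >= 0 (problem P).
    Simplified Balas-style implicit enumeration: returns (best value, indices set to 1)."""
    c, A, b = np.asarray(c, float), np.asarray(A, float), np.asarray(b, float)
    best = {'z': np.inf, 'x': None}

    def search(ones, free, y, z):
        if np.all(y >= 0):                      # feasible: record it; since c >= 0, no
            if z < best['z']:                   # extension of this branch can do better
                best['z'], best['x'] = z, set(ones)
            return
        # a variable whose cost alone reaches the ceiling can never enter a better solution
        free = [j for j in free if z + c[j] < best['z']]
        # even setting every remaining free variable to 1 cannot repair some row: abandon
        gain = np.minimum(A[:, free], 0.0).sum(axis=1)
        if np.any(y - gain < 0):
            return
        # candidates: free variables that reduce the infeasibility of some violated row
        cand = [j for j in free if np.any((y < 0) & (A[:, j] < 0))]
        if not cand:
            return                              # a 'stop signal': the branch is abandoned
        # branch on the candidate leaving the least total infeasibility (largest v_j)
        v = [np.minimum(y - A[:, j], 0.0).sum() for j in cand]
        j = cand[int(np.argmax(v))]
        rest = [k for k in free if k != j]
        search(ones | {j}, rest, y - A[:, j], z + c[j])     # assign the value 1 to x_j
        search(ones, rest, y, z)                            # keep x_j = 0 on this branch

    search(set(), list(range(len(c))), b.copy(), 0.0)
    return best['z'], best['x']
```

On the transformed data of Example 1 below, this visits only a few of the 2^5 = 32 assignments and returns z = 17 with x_2 = x_3 = 1 (indices 1 and 2 in the 0-based code), the solution shown in Fig. 1.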

SOME DEFINITIONS AND NOTATIONS

LET US consider problem P.

As each constraint of the set (2) contains exactly one component of y, a solution u^p = (x^p, y^p) is uniquely determined by the set J_p = {j | j ∈ N, x_j^p = 1}. For, if

    x_j^p = 1  (j ∈ J_p),    x_j^p = 0  (j ∈ N - J_p),    (5)

then

    y_i^p = b_i - Σ_{j ∈ J_p} a_ij.    (i ∈ M)    (6)

As already shown, the additive algorithm generates a sequence of solutions. We shall denote the sth term of this sequence by

    u^s = u(j_1, ..., j_r) = (x^s, y^s),    (7)

where

    {j_1, ..., j_r} = J_s = {j | j ∈ N, x_j^s = 1},    (8)

while z_s will represent the value of the form (1) for u^s. The sequence starts with u^0, for which J_0 = ∅, i.e., x^0 = 0, y^0 = b, and z_0 = 0. Of course, J_0 ⊆ J_p for any p ≠ 0.

The set of values taken by the objective function for the feasible solutions obtained until iteration s will be denoted

    Z_s = {z_p | y^p ≥ 0, 0 ≤ p ≤ s}.    (9)

If this set is not void, its smallest element will be called the ceiling for u^s. If it is void, the role of the ceiling will be performed by ∞. Thus, we shall denote the ceiling for u^s

    z*(s) = ∞ if Z_s = ∅;    z*(s) = min Z_s if Z_s ≠ ∅.    (10)

At each iteration s+1, the new vector to be introduced into the basis will be chosen from a subset of {a_j | j ∈ N}, called the set of improving vectors for the solution u^s. We shall denote by N_s the corresponding set of indices j (of course, N_s ⊆ N), and we shall define it more precisely below.†

† Throughout this paper the symbol ⊆ will be used for inclusion, while ⊂ will stand for strict inclusion.


We now define certain values which will serve as a criterion for the choice of the vector to be introduced into the basis. Thus, for each solution u^s and for each j ∈ N_s, we define the values

    v_j^s = Σ_{i ∈ M_j^s} (y_i^s - a_ij)    (j ∈ N_s; M_j^s ≠ ∅),
    v_j^s = 0    (j ∈ N_s; M_j^s = ∅),

where

    M_j^s = {i | y_i^s - a_ij < 0}.    (15)

We are now in a position to give a proper definition to the set of improving vectors for a solution u^s. It is the set of those a_j for which j belongs to

    N_s = N - (C_s ∪ D_s ∪ E_s).    (16)

Obviously, N_s = ∅ for any feasible solution u^s.
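These quantities translate directly into code. In the sketch below (ours, for illustration), the test used for D_s follows the description given in the accompanying note by Glover and Zionts, namely that z_s + c_j would reach or exceed the ceiling z*(s); the test used for E_s, that a_j cannot reduce the infeasibility of any violated constraint, is our reading of a definition that is not fully legible in this transcript; C_s is simply supplied as the set of already cancelled indices.

```python
import numpy as np

def improving_vectors(A, c, y, z, ceiling, C_s):
    """Return the set N_s of indices of improving vectors for the current solution u^s
    (slacks y = y^s, value z = z_s, ceiling = z*(s)) and the values v_j^s defined above."""
    A, c, y = np.asarray(A, float), np.asarray(c, float), np.asarray(y, float)
    violated = y < 0
    N_s, v = [], {}
    for j in range(A.shape[1]):
        if j in C_s:                                  # j in C_s: already cancelled
            continue
        if z + c[j] >= ceiling:                       # j in D_s: cannot beat the ceiling
            continue
        if not np.any(violated & (A[:, j] < 0)):      # j in E_s (assumed): helps no violated row
            continue
        N_s.append(j)
        deficit = y - A[:, j]                         # the slacks y^s - a_j if x_j were set to 1
        v[j] = float(deficit[deficit < 0].sum())      # v_j^s: sum over M_j^s, 0 if M_j^s is empty
    return N_s, v
```

At step 3b of the algorithm the vector actually introduced is one attaining the maximum of v_j^s over N_s.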


Similarly to D_s, we define for the pair of solutions u^k and u^s (k < s) the set D_k^s of those j for which c_j ≥ z*(s) - z_k. (17)

Finally, given a pair of solutions u^s and u^k, such that u^s = u(j_1, ..., j_r) and u^k = u(j_1, ..., j_{r-h}) [...]


Step 2. Identify the improving vectors for the solution u^s by forming the set N_s as defined by (16).
2a. If N_s = ∅, i.e., there are no improving vectors for u^s, pass to step 5.
2b. If N_s ≠ ∅, pass to
Step 3. Check the relations

    Σ_{j ∈ N_s} a_ij^- ≤ y_i^s.    (i | y_i^s < 0)    (22)

[...]
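Relation (22) is only partly legible here, but its role follows from steps 3a and 6a: writing a_ij^- for min(a_ij, 0), it asks whether, even with every j in N_s set to 1, each negative slack could still be raised to zero. A small sketch of that reading (ours; the function name is an assumption):

```python
import numpy as np

def relation_22_holds(A, y, N_s):
    """Check sum over j in N_s of min(a_ij, 0) <= y_i^s for every row i with y_i^s < 0.
    If the check fails for some i, no completion of the current partial solution that
    uses only variables from N_s can be feasible, and the branch is abandoned (step 3a)."""
    A, y = np.asarray(A, float), np.asarray(y, float)
    gain = np.minimum(A[:, list(N_s)], 0.0).sum(axis=1)   # the most each slack can be raised
    return bool(np.all((y >= 0) | (gain <= y)))
```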


[Flow chart of the additive algorithm (steps 1 through 8); not legible in this transcript.]

[...] repeat step 5 for k = [...]; i.e., pass to
Step 6. Check the relations

    Σ_{j ∈ N_k^s} a_ij^- ≤ y_i^k.    (i | y_i^k < 0)    (27)

[...]


[...] 5 and 6. Whenever step 5 is repeated for k [...]


under the algorithm may be further reduced at the cost of a relatively small additional computational effort at each iteration, if we change steps 3b, 3c and 6b, 6c as follows:

3b. If all relations (22) hold, check the relations (22a) [not legible in this transcript], where j_{s+1} is defined by

    v_{j_{s+1}}^s = max_{j ∈ N_s} v_j^s;    (23)

cancel v_{j_{s+1}}^s and pass to step 8.
3c. If all relations (22) hold, and there exists a subset M' of M such that relations (22a) do not hold for i ∈ M', pass to step 4.

Completely analogous changes are to be made in steps 6b, 6c.

The application of the algorithm is facilitated through the use of a tableau of the type presented with the numerical examples of the final section.

FINITENESS PROOFS

THE ADDITIVE algorithm yields a sequence of solutions u^0, u^1, .... We shall say that a solution u^k is abandoned if we are instructed by the algorithm either to check N_p^s (0 ≤ p < k ≤ s), or to stop (termination).

A central feature of our algorithm, which is at the same time instrumental in proving its convergence, may be expressed by Theorem 1.

THEOREM 1. If a solution u^k is abandoned under the additive algorithm, then no feasible solution u^t exists such that J_k ⊆ J_t and z_t [...]


[...] have been cancelled until iteration s for reasons a, b, and c, respectively, by C_p^s(a), C_p^s(b), and C_p^s(c), so that

    C_p^s = C_p^s(a) ∪ C_p^s(b) ∪ C_p^s(c),

and let us denote by C^s(a), C^s(b), and C^s(c), respectively, the corresponding subsets of C^s. We have

LEMMA 1. Given a solution u^s, if there exists a feasible solution u^t such that J_s ⊆ J_t and z_t [...]


Now, there are two possibilities: either J_{n+1} = J_k ∪ {j_{n+1}} or J_{n+1} = J_k ∪ F_k^n. Let us suppose that the first of these two situations holds, and thus C_k^{n+1}(b) = C_k^n(b) ∪ {j_{n+1}}. (A perfectly analogous reasoning is valid for the second situation.) We have

    (J_t - J_{n+1}) ∩ C_k^{n+1}(b) = (J_t - J_k) ∩ (J_t - {j_{n+1}}) ∩ [C_k^n(b) ∪ {j_{n+1}}],    (41)

or, as

    (J_t - {j_{n+1}}) ∩ {j_{n+1}} = ∅,    (42)

    (J_t - J_{n+1}) ∩ C_k^{n+1}(b) = (J_t - J_k) ∩ (J_t - {j_{n+1}}) ∩ C_k^n(b).    (43)

We shall now show that

    (J_t - J_k) ∩ C_k^n(b) = ∅.    (44)

Let us suppose the contrary, i.e., that there exists j_{f_1} such that

    j_{f_1} ∈ (J_t - J_k) ∩ C_k^n(b),    (45)

and let us denote J_{q_1} = J_k ∪ {j_{f_1}}. Obviously, J_t is not identical with J_{q_1}, as from j_{f_1} ∈ C_k^n(b) it follows that z_{q_1} ≥ z*(n), while z*(t) [...]


[...] must also exist (i = 2, 3, ...), J_{q_{i-1}} and J_{q_i} being defined analogously to J_{q_1}. As the number of sets C_t(b) ≠ ∅ is finite, this sequence of implications obviously ends in a contradiction that proves the validity of (44). Relation (34) is thus shown to hold also in case (2), and this completes the proof of Lemma 1.

LEMMA 2. Given two solutions u^s = u(j_1, ..., j_r) and u^k = u(j_1, ..., j_{r-h}) (1 ≤ h ≤ r; J_k ⊂ J_s), such that N_p ⊆ C_p^{p+1} (k ≤ p ≤ s), if there exists a feasible solution u^t such that J_k ⊆ J_t and z_t [...]


6a. If (27) does not hold for a certain k, then a solution u^t such as required in the theorem could only exist if

    (J_t - J_k) ∩ [N - (N_k^s ∪ E_k^s)] ≠ ∅.    (60)

But this cannot hold, as was shown at the preceding point. Theorem 1 is thus proved. Now we can formulate the following

CONVERGENCE THEOREM 2. In a finite number of iterations, the additive algorithm yields either an optimal feasible solution, or the conclusion that the problem has no feasible solution at all.

Proof. From Theorem 1 it follows that the termination of the algorithm (which means that all solutions tested by the algorithm have been abandoned) yields either an optimal feasible solution, or the conclusion that no feasible solution to the problem exists. We shall now show that the additive algorithm terminates in a finite number of steps.

(a) In a finite number of steps, each iteration yields a new solution or else the algorithm terminates. Repetition of certain steps during one and the same iteration may arise in one of the situations 6a or 7b. In both situations step 5, which requires the checking of N_k^s (k | J_k ⊂ J_s), is to be repeated for k [...]

[...] We cannot have j_i = k_i (i = 1, ..., r), because in that case j_r ∈ C_q^{t-1} (where q is defined either by J_t = J_q ∪ {j_r} or by J_t = J_q ∪ F_q^{t-1}), and this excludes j_r ∈ J_t. Let (j_a, k_a) be the first pair of indices (j_i, k_i) such that j_i ≠ k_i, and let us denote u^w = u(j_1, ..., j_{a-1}). Then j_a ∈ C_w^{t-1} ⊆ C^{t-1}, and thus j_a ∉ J_t.

This completes the proof of the Convergence Theorem.

SOME REMARKS ON THE EFFICIENCY OF THE ALGORITHM

UNLIKE MOST of the known algorithms for solving linear programs with integer variables, the additive algorithm attacks directly the linear program with zero-one variables, and does not require the solution of the corresponding ordinary linear program (without the zero-one constraints).

As has been shown, the only operations required under the algorithm described above are additions and subtractions. Thus any possibility of round-off errors is excluded.

The additive algorithm does not impose a heavy burden on the storage system of a computer. Most of the partial results may be dropped shortly after they have been obtained.

At this moment, experience with the additive algorithm is at its very


beginning. So far it consists only of solving by hand about a dozen problems with up to 15 variables, and of partially solving, also by hand, one single large problem. The results are very encouraging, but of course this experience is insufficient for a firm judgment on the efficiency of the algorithm, especially for larger problems. We shall summarize the above experience and comment on it, but all our conjectures in this section should be regarded as tentative, in view of their scanty experimental basis.

The data of 11 problems solved by hand with the additive algorithm are given in Tableau I.

TABLEAU I

    Problem no.    Number of zero-one variables    Number of constraints (inequalities)    Number of iterations
    1                      5                                  2                                     6
    2 (a)                  5                                  3                                     3
    3                      7                                  5                                    11
    4 (a)(b)               9                                  4                                    31
    5                     10                                  4                                    12
    6 (a)                 10                                  7                                     5
    7                     11                                  6                                    21
    8                     11                                  7                                     8
    9 (a)(c)              12                                  6                                    39
    10                    14                                  9                                    23
    11                    15                                 12                                    22

    (a) Problems given as numerical examples in the final section.
    (b) A problem with no feasible solution.
    (c) A problem with only one feasible solution.

The time needed for solving these problems† was on the average 7 minutes for an iteration, i.e., 1-3 hours for each of the problems 3, 4, 5, 7, 8, 10, and 11. Problems 1, 2, and 6 were solved in less than an hour, while somewhat more than 4 hours were used to solve problem 9.

Anyone who has tried to solve by hand a problem of the type and size discussed here with the cutting-plane technique for integer programming problems (which in this case has to be combined with bounded-variables procedures) knows that it takes several times that amount of time, not to speak about difficulties generated by round-off. Moreover, the solution of the ordinary linear programs (without the zero-one constraints) corresponding to the above problems would also require an amount of computations considerably larger than that needed for solving the zero-one problems by the additive algorithm.

† By hand calculations by a person having no special experience with the additive algorithm.


The only large problem on which the additive algorithm has so far been tried was a problem in forest management,[10] with a linear objective function in zero-one variables subject to a set of linear constraints, and to another set of conditional ('logical') constraints.† The latter set has been replaced by an equivalent set of linear constraints including additional zero-one variables, so that finally a zero-one linear programming problem emerged with 40 variables and 22 constraints. The number of nonzero elements in the final coefficient matrix was 140, of which 115 were 1 or -1. This problem was specially chosen for studying the applicability of the algorithm to the given type of forest-management problems, and it was so structured that the optimal solution could be known beforehand. After 35 iterations were made by hand, the average time of an iteration being about 20 minutes, the optimal feasible solution was approximated within 1.4 per cent. We note, however, that this approximation cannot be regarded as a sign that termination of our algorithm was also near, and we further note that our coefficient matrix was relatively sparse, with many 0 elements.

† Expressing relations of the type ∨, ⇒, and ∧.

This experiment showed, among other things, the great advantages of the additive character of the algorithm. For instance, an error discovered at the 19th iteration could be traced back to the 3rd iteration and corrected throughout the subsequent solutions in about 2 hours' time.

The following considerations concerning the dependence of the amount of computations on the size and other characteristics of the problem are based partly on common-sense examination of the mechanism of the algorithm, partly on the experience summarized above.

Let n be the number of variables and m the number of constraints.

(a) Amount of computations needed for one iteration. The number of operations needed for checking relations (22) and (27) (steps 3 and 6 respectively), and for computing the values v_j^s (step 3b), depends linearly on m×n. The number of operations needed to form or check the sets N_s and N_k^s (steps 2, 5) depends linearly on n, while the number of operations needed to compute and check a new solution (steps 8, 4a, 7a, and 1a) depends linearly on m. Thus the total amount of computations needed for one iteration depends linearly on a quantity μ situated somewhere between min[m, n] and m×n.

(b) Number of iterations needed to solve the problem. This obviously depends first of all on n. As to m, its increase enhances the efficiency of some of the stop signals (steps 3a and 6a) and thus tends to reduce the number of iterations. (This is an important advantage of the algorithm.)

The crucial thing about the efficiency of the algorithm is to know how the number of iterations depends on n. The combinatorial nature of the algorithm does not necessarily imply an exponential relation for, while the


set of all solutions from which an optimal feasible one has to be selected grows exponentially with n, the efficiency of some of the stop signals may perhaps grow even faster with n. So far the experience summarized above does not seem to indicate an exponential relation. While in the three smallest problems of Tableau I (1, 2, and 3) the number of iterations was 0.6-1.6 times n, in the two largest problems (10 and 11) it was 1.5-1.6 times n. In the largest problem so far solved (11), 22 solutions out of a total of 2^15 = 32,768 had to be tested. But of course this experience is insufficient, and computer experience with a considerable number of larger problems will be needed to elucidate this question in the absence of an analytic proof.

The number of iterations also depends on other characteristics of the problem. In cases where an optimal solution exists in which only a few variables take the value 1, the relatively large size of the sets D_s makes the stop signals especially efficient and thus assures a rapid convergence of the algorithm (see, for instance, problem 6 of Tableau I, which is example 2 in the final section, where only 5 out of 2^10 = 1,024 solutions had to be tested). On the other hand, if the problem has no feasible solution, all sets D_s are void and the efficiency of the stop signals is reduced. The same holds to a lesser extent for problems with very few feasible solutions. But it should be noted that even in the case of such 'ill-behaved' problems the number of iterations does not become unreasonably large. Thus, problem 4 of Tableau I (example 3 in the final section), with 9 variables and 4 constraints, having no feasible solution, was 'solved' (i.e., the absence of a feasible solution was established) in 31 iterations, while in the worst of cases met until now, problem 9 of Tableau I (example 4 in the final section), with 12 variables and 6 constraints, having only one feasible solution, was solved by testing 39 out of 2^12 = 4,096 solutions.

NUMERICAL EXAMPLES

Example 1. (Problem 2 of Tableau I.) Let us consider the following problem:

    -5x_1' + 7x_2' + 10x_3' - 3x_4' + x_5' = min,

    -x_1' - 3x_2' + 5x_3' - x_4' - 4x_5' ≥ 0,
    -2x_1' - 6x_2' + 3x_3' - 2x_4' - 2x_5' ≤ -4,
    -x_2' + 2x_3' + x_4' - x_5' ≥ 2,

    x_j' = 0 or 1.    (j = 1, ..., 5)

Multiplying by -1 the two inequalities of the form ≥ and setting

    x_j = x_j',        (j = 2, 3, 5)
    x_j = 1 - x_j',    (j = 1, 4)


allows us to restate our problem in the following form, corresponding to P in the second section:

    5x_1 + 7x_2 + 10x_3 + 3x_4 + x_5 = min,

    -x_1 + 3x_2 - 5x_3 - x_4 + 4x_5 + y_1 = -2,
    2x_1 - 6x_2 + 3x_3 + 2x_4 - 2x_5 + y_2 = 0,
    x_2 - 2x_3 + x_4 + x_5 + y_3 = -1,

    x_j = 0 or 1.    (j = 1, ..., 5)
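Because the restated problem has only 2^5 = 32 candidate assignments, the optimum can be confirmed by brute force. The check below is ours, for illustration only; it enumerates all assignments of the transformed problem and recovers the solution reported in Fig. 1 and Tableau II, x_2 = x_3 = 1 with z = 17, which corresponds to x_1' = x_2' = x_3' = x_4' = 1, x_5' = 0 in the original variables.

```python
from itertools import product

# Data of the restated problem (corresponding to P) from the text above.
c = [5, 7, 10, 3, 1]
A = [[-1,  3, -5, -1,  4],
     [ 2, -6,  3,  2, -2],
     [ 0,  1, -2,  1,  1]]
b = [-2, 0, -1]

best_z, best_x = None, None
for x in product((0, 1), repeat=5):                      # all 2^5 = 32 assignments
    if all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b_i
           for row, b_i in zip(A, b)):                   # Ax <= b, i.e. slack y >= 0
        z = sum(c_j * x_j for c_j, x_j in zip(c, x))
        if best_z is None or z < best_z:
            best_z, best_x = z, x

print(best_z, best_x)    # prints: 17 (0, 1, 1, 0, 0), i.e. x_2 = x_3 = 1
```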

TABLEAU II

[The working tableau for Example 1 is not legible in this transcript. For each iteration s = 0, 1, 2 it records the set J_s, the value z_s, the set N_s, the quantities y_i^s - a_ij and the values v_j^s, and the sets C_s, D_s, E_s, and F_s; the starred entry z_2 = 17 marks the optimal feasible solution J_2 = {3, 2} with y^2 = (0, 3, 0).]

In view of the form of Tableau II, the computations are easier to follow if we work with A^T instead of A (T being the symbol of transposition):

          -1   2   0
           3  -6   1
    A^T = -5   3  -2
          -1   2   1
           4  -2   1

We start with the initial solution u^0 = (0, b). The data of this solution, J_0 = ∅, z_0 = 0, and y_1^0 = b_1 = -2, y_2^0 = b_2 = 0, y_3^0 = b_3 = -1, are shown on the left-hand side of row 1 of Tableau II.

To make this illustration easier to follow on Tableau II, cancellation of the values v_j^s is not marked through crossing out, but through numbering


in the order of cancellation. (Of course, this is not necessary in current work with the algorithm.)

The solution of this problem is also illustrated by Fig. 1.

Iteration 1

Step 1. y^0 [...]


Example 2. (Problem 6 of Tableau I.) We shall consider the following problem:

    10x_1' - 7x_2' + x_3' - 12x_4' + 2x_5' + 8x_6' - 3x_7' - x_8' + 5x_9' + 3x_10' = max,

    3x_1' + 12x_2' - 8x_3' - x_4' - 7x_5' - 2x_10' ≥ -8,
    x_2' + 10x_3' + 5x_5' - x_6' + 7x_7' + x_8' [...]


TABLEAU III

[The working tableau for Example 2 is not legible in this transcript. It has the same layout as Tableau II and records the successive iterations of the algorithm on this problem (problem 6 of Tableau I, solved in 5 iterations).]

Example 3. (Problem 4 of Tableau I.) The following is an ill-behaved problem (with no feasible solution):

    4x_1 + 2x_2 + x_3 + 5x_4 + 3x_5 + 6x_6 + x_7 + 2x_8 + 3x_9 = min,

    3x_1 + 5x_2 - 2x_3 - x_4 - x_6 - 4x_8 + 2x_9 ≤ -1,
    6x_1 - 2x_2 - 2x_4 + 2x_5 - 4x_6 + 3x_7 [...]


We do not reproduce in detail the computations, but the application of the algorithm may be followed in Tableau IV, which shows the sequence of solutions and of the steps used at each iteration.

TABLEAU IV

    s    J_s                        Sequence of steps
    0    ∅                          A
    1    4                          A
    2    4, 9                       A
    3    4, 9, 8                    A
    4    4, 9, 8, 7                 A
    5    4, 9, 8, 7, 2              B
    6    4, 9, 8, 7, 2, 3, 5, 6     C(1)
    7    4, 9, 8, 2                 C(1)
    8    4, 9, 3                    A
    9    4, 9, 3, 7                 A
    10   4, 9, 3, 7, 6              C(1)
    11   4, 9, 3, 2                 C(2)
    12   4, 7                       A
    13   4, 7, 6                    C(1)
    14   4, 1                       C(1)
    15   3                          A
    16   3, 9                       A
    17   3, 9, 6                    A
    18   3, 9, 6, 7                 B
    19   3, 9, 6, 7, 2              C(0)
    20   3, 9, 6, ...               C(2)
    21   3, 7                       B
    22   3, 7, 2, 6                 C(0)
    23   3, 1, 2                    C(0)
    24   7                          B
    25   7, 2, 6                    C(0)
    26   8                          A
    27   8, 6                       A
    28   8, 6, 9                    A
    29   8, 6, 9, ...               C(3)
    30   6                          A
    31   6, 9                       D(2)    Stop

The symbols in the Tableau indicate the following sequences of steps:

    A    = 1b, 2b, 3b, 8,
    B    = 1b, 2b, 3c, 4a,
    C(k) = 1b, 2b, 3a, 5b, 6a, ..., 5b, 6a, 5b, 6b, 8,
    D(k) = 1b, 2b, 3a, 5b, 6a, ..., 5b, 6a, 5a,

where in C(k) and D(k) the pair 5b, 6a is repeated k times (2k steps).

Example 4. (Problem 9 of Tableau I.) Another ill-behaved problem (with only one feasible solution) is the following:

    5x_1 + x_2 + 3x_3 + 2x_4 + 6x_5 + 4x_6 + 7x_7 + 2x_8 + 4x_9 + x_10 + x_11 + 5x_12 = min,

    -x_1 + 3x_2 - 12x_3 - x_5 + 7x_6 - x_7 + 3x_10 - 5x_11 - x_12 ≤ -6,
    3x_1 - 7x_2 + x_4 + 6x_5 [...]


[...] 5x_2 + 6x_3 - 12x_5 + 7x_6 + 3x_8 + x_9 - 8x_10 + 5x_12 [...]


The optimal (in this case the only feasible) solution is the starred one, i.e.,

    x_j = 1,    (j = 3, 4, 8, 10, 12)
    x_j = 0.    (j = 1, 2, 5, 6, 7, 9, 11)

ACKNOWLEDGMENTS

I AM indebted to PROF. WILLIAM W. COOPER, as well as to FRED GLOVER and to STANLEY ZIONTS, for comments and suggestions which helped to improve this article. I also wish to acknowledge the help of ELENA MARINESCU, who carried out the computations for the forest-management problem discussed in the sixth section.

REFERENCES

1. G. B. DANTZIG, "Discrete Variable Extremum Problems," Opns. Res. 5, 266-277 (1957).
2. H. M. MARKOWITZ AND A. S. MANNE, "On the Solution of Discrete Programming Problems," Econometrica 25, 84-110 (1957).
3. K. EISEMANN, "The Trim Problem," Management Sci. 3, 279-284 (1957).
4. G. B. DANTZIG, "On the Significance of Solving Linear Programming Problems with Some Integer Variables," Econometrica 28, 30-44 (1960).
5. A. CHARNES AND W. W. COOPER, Management Models and Industrial Applications of Linear Programming, Wiley, New York, 1961.
6. A. BEN-ISRAEL AND A. CHARNES, "On Some Problems of Diophantine Programming," Cahiers du Centre d'Études de Recherche Opérationnelle (Bruxelles) 4, 215-280 (1962).
7. M. SIMONNARD, Programmation linéaire, Dunod, Paris, 1962.
8. G. B. DANTZIG, Linear Programming and Extensions, Princeton University Press, 1963.
9. E. BALAS, "Linear Programming with Zero-One Variables" (in Rumanian), Proceedings of the Third Scientific Session on Statistics, Bucharest, December 5-7, 1963.
10. ----, "Mathematical Programming in Forest Management" (in Rumanian), Proceedings of the Third Scientific Session on Statistics, Bucharest, December 5-7, 1963.
11. GH. MIHOC AND E. BALAS, "The Problem of Optimal Timetables," Revue de Mathématiques Pures et Appliquées 10 (1965).
12. R. E. GOMORY, "Outline of an Algorithm for Integer Solutions to Linear Programs," Bull. Am. Math. Soc. 64, 275-278 (1958).
13. ----, "An All-Integer Programming Algorithm," in J. F. MUTH AND G. L. THOMPSON (eds.), Industrial Scheduling, Chap. 13, Prentice-Hall, 1963.
14. ----, "An Algorithm for Integer Solutions to Linear Programs," in R. L. GRAVES AND PH. WOLFE (eds.), Recent Advances in Mathematical Programming, pp. 269-302, McGraw-Hill, New York, 1963.


15. G. B. DANTZIG, "Note on Solving Linear Programs in Integers," Naval Res. Log. Quart. 6, 75-76 (1959).
16. R. E. GOMORY AND A. J. HOFFMAN, "On the Convergence of an Integer-Programming Process," Naval Res. Log. Quart. 10, 121-124 (1963).
17. E. M. L. BEALE, "A Method of Solving Linear Programming Problems When Some but Not All of the Variables Must Take Integral Values," Statistical Techniques Research Group, Technical Report No. 19, Princeton University, 1958.
18. A. H. LAND AND A. G. DOIG, "An Automatic Method of Solving Discrete Programming Problems," Econometrica 28, 497-520 (1960).
19. J. F. BENDERS, A. R. CATCHPOLE, AND L. C. KUIKEN, "Discrete-Variable Optimization Problems," paper presented to the Rand Symposium on Mathematical Programming, Santa Monica, 1959.
20. P. M. J. HARRIS, "The Solution of Mixed Integer Linear Programs," Opnl. Res. Quart. 15, 117-133 (1964).
21. G. L. THOMPSON, "The Stopped Simplex Method, Part I," Revue Française de Recherche Opérationnelle 8, 159-182 (1964).
22. ----, "The Stopped Simplex Method, Part II," Revue Française de Recherche Opérationnelle 9 (1965).
23. P. L. IVANESCU, "Programmation polynomiale en nombres entiers," Comptes Rendus de l'Académie des Sciences (Paris) 257, 424-427 (1963).
24. F. GLOVER, "A Bound Escalation Method for the Solution of Integer Programs," Cahiers du Centre d'Études de Recherche Opérationnelle (Bruxelles) 6 (1964).
25. S. E. ELMAGHRABY, "An Algorithm for the Solution of the 'Zero-One' Problem of Integer Linear Programming," Department of Industrial Administration, Yale University, May 1963.
26. W. SZWARC, "The Mixed Integer Linear Programming Problem When the Variables Are Zero or One," Carnegie Institute of Technology, Graduate School of Industrial Administration, May 1963.
27. F. LAMBERT, "Programmes linéaires mixtes," Cahiers du Centre d'Études de Recherche Opérationnelle (Bruxelles) 2, 47-126 (1960).
28. S. VAJDA, Mathematical Programming, Addison-Wesley, 1961.
29. R. FORTET, "Applications de l'algèbre de Boole en recherche opérationnelle," Revue Française de Recherche Opérationnelle 4, 17-25 (1960).
30. P. CAMION, "Une méthode de résolution par l'algèbre de Boole des problèmes combinatoires où interviennent des entiers," Cahiers du Centre d'Études de Recherche Opérationnelle (Bruxelles) 2, 234-289 (1960).
31. E. BALAS, "Un algorithme additif pour la résolution des programmes linéaires en variables bivalentes," Comptes Rendus de l'Académie des Sciences (Paris) 258, 3817-3820 (1964).
32. ----, "Extension de l'algorithme additif à la programmation en nombres entiers et à la programmation nonlinéaire," Comptes Rendus de l'Académie des Sciences (Paris) 258, 5136-5139 (1964).


33. F. RADÓ, "Linear Programming with Logical Conditions" (in Rumanian), Comunicările Academiei RPR 13, 1039-1041 (1963).
34. J. D. C. LITTLE, K. G. MURTY, D. W. SWEENEY, AND C. KAREL, "An Algorithm for the Traveling Salesman Problem," Opns. Res. 11, 972-989 (1963).
35. P. BERTHIER AND PH. T. NGHIEM, "Résolution de problèmes en variables bivalentes (Algorithme de Balas et procédure S.E.P.)," Note de travail no. 33, Société d'Économie et de Mathématique Appliquées, Paris, 1965.

A NOTE ON THE ADDITIVE ALGORITHM OF BALAS†

Fred Glover

Carnegie Institute of Technology, Pittsburgh, Pa.

and

Stanley Zionts

Carnegie Institute of Technology and U. S. Steel Corp. Applied Research Laboratory, Monroeville, Pa.

    (Received December 28, 1964)

IN THE preceding paper EGON BALAS presents an interesting combinatorial approach to solving linear programs with zero-one variables. The method is essentially a tree-search algorithm that uses information generated in the search to exclude portions of the tree from consideration. The purpose of this note is: (1) to propose additional tests‡ meant to increase the power of Balas' algorithm by reducing the number of possible solutions examined in the course of computation; and (2) to propose an application for which the algorithm appears particularly well suited. An acquaintance with Balas' paper is assumed in the discussion that follows.

† The authors originally refereed Mr. Balas' paper for Operations Research. With their referee's report they submitted this manuscript extending some of his results.
‡ The additional tests proposed in this note reduce the number of solutions to be examined under the additive algorithm, at the expense of an increase in the amount of computation at each iteration. While we conjecture that on the balance it is worthwhile introducing these tests, this is a matter to be decided on the basis of computational experience.

First consider ways of reducing the number of solutions examined by Balas' method. In one approach to this objective, Balas defines the set D_s that (using his notation and equation numbers) consists of those j ∈ (N - C_s) such that, if a_j were introduced into the basis, the value of the objective function would equal or exceed the best value (z*(s)) already obtained. It then follows immediately that the variables associated with elements of D_s may be ignored in seeking an improving solution along the current branch of the solution tree. However, D_s can be fruitfully enlarged by using a slightly less immediate criterion for inclusion, specified as follows. Each j in N_s is first examined, in any desired sequence, to see if


J_s ∪ {j} gives a feasible solution. (Such an examination will sometimes have to be made for a number of these j in any case.) If feasibility is established for any J_s ∪ {j}, a new improving solution is obtained that may be handled in the usual fashion by the algorithm. But in the more probable event that J_s ∪ {j} fails to satisfy one of the constraints, say constraint i (a_ij > y_i^s), consider the objective function coefficient h = min[c_p | p ∈ (N_s - {j}) and a [...]
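The criterion is cut off at this point in the transcript, but its thrust is clear from the surrounding text: if J_s ∪ {j} violates constraint i, any feasible solution containing j must also raise to 1 some further p with a_ip < 0, so j can be cancelled whenever even the cheapest such p already carries the objective to or past the ceiling. A sketch of the test under that reading (ours; the function name and the use of the first violated row are assumptions):

```python
import numpy as np

def enlarged_D(A, c, y, z, ceiling, N_s):
    """Return the indices j in N_s that the enlarged criterion allows us to cancel
    (add to D_s), given the current slacks y = y^s, value z = z_s, and ceiling z*(s)."""
    A, c, y = np.asarray(A, float), np.asarray(c, float), np.asarray(y, float)
    cancel = []
    for j in N_s:
        slack = y - A[:, j]                       # slacks of J_s + {j}
        bad = np.where(slack < 0)[0]
        if bad.size == 0:
            continue                              # J_s + {j} is itself a new feasible solution
        i = bad[0]                                # examine one violated constraint, say the first
        helpers = [c[p] for p in N_s if p != j and A[i, p] < 0]
        h = min(helpers) if helpers else np.inf   # cheapest way of starting to repair row i
        if z + c[j] + h >= ceiling:
            cancel.append(j)
    return cancel
```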


We turn now to the issue of generating and searching the solution tree. First we note that for many problems a feasible solution is known in advance. This solution may be used to establish a starting value for z*(s) other than infinity, thereby expediting convergence to the optimum. But such a solution may also be used in another way. If it is suspected that an appreciable fraction of the variables for the feasible solution coincide in value with those for some optimal solution, it may be useful to employ a two-stage algorithm that treats the ('primary') variables set at unity in the feasible solution differently from the ('secondary') variables set equal to zero. In particular, it would seem desirable for such an algorithm to be designed to dispose rapidly of various 0-1 assignments to the primary variables in the first stage and then to apply the additive algorithm to the secondary variables in the second stage. Though it is beyond the scope of this note to describe such a procedure in detail, we remark that it is possible to use a simplified bookkeeping scheme for the first stage (consisting of a single vector each of whose components assumes only three values) so that with slight modifications the tests described above may still be applied to restrict the range of 0-1 assignments necessary for consideration (see reference 5).

An application of Balas' algorithm of interest lies in its potential integration with the GILMORE-GOMORY method for solving the cutting-stock problem.[3,4] While this problem may be given an integer programming formulation, the number of variables, even for a moderate-sized problem, is so large that, in a practical sense, the usual integer programming approach is to no avail. Even the ordinary linear programming methods are not practical. Gilmore and Gomory's method overcomes this difficulty by restricting attention to a very small subset of the variables in order to obtain a starting solution via linear programming. Once a solution is available, a solution to the knapsack problem is used to generate improving variables for the problem.† The solution process then alternates between generating new variables and solving the linear programming problem to see which of the newly generated variables should be included in the solution. When no improving variables can be found the algorithm comes to a halt.

Three features of Balas' method should prove useful in this application. First, Gilmore and Gomory's method does not provide integer solutions; Balas' algorithm appears very reasonable in this context. Second, except for the first solution of the integer-program subproblem, a finite starting z*(s) (obtained from the previous subproblem) is available. Such solutions can be exploited as suggested above. Third, at any stage of the Gilmore-Gomory method it is necessary to consider only those solutions involving at least one of the newly generated variables, a situation with which Balas' method dovetails rather well.

A significant portion of the cutting-stock problems known to us can be represented using zero-one variables, higher-valued integers of course being represented by sums of zero-one integers. Balas' extension of the additive algorithm to the general integer linear programming problem[2] proposes an improvement in efficiency of such a representation that makes the approach even more attractive.

† In the first version of their cutting-stock method, Gilmore and Gomory solve the knapsack problem by dynamic programming; in the second, they propose a special algorithm for the knapsack problem which proves to be substantially superior to dynamic programming.
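One standard way of carrying out the zero-one representation mentioned here is the binary expansion of a bounded integer variable; the sketch below (ours, not taken from the references) writes 0 ≤ x ≤ U as a sum of weighted zero-one variables and expands the corresponding constraint column accordingly.

```python
def binary_expansion(upper_bound):
    """Weights w with x = sum_k w[k] * t_k, t_k in {0, 1}, covering 0 <= x <= upper_bound.
    For example, binary_expansion(13) returns [1, 2, 4, 6]: 1 + 2 + 4 + 6 = 13."""
    weights, remaining, w = [], upper_bound, 1
    while remaining > 0:
        w = min(w, remaining)        # last weight trimmed so the sum never exceeds the bound
        weights.append(w)
        remaining -= w
        w *= 2
    return weights

def expand_column(column, upper_bound):
    """Replace one integer column a_j (0 <= x_j <= upper_bound) by zero-one columns w_k * a_j."""
    return [[w * a_ij for a_ij in column] for w in binary_expansion(upper_bound)]
```

A variable bounded by U thus needs only about log2(U) zero-one variables, which is what makes such a representation practical.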


REFERENCES

1. EGON BALAS, "An Additive Algorithm for Solving Linear Programs with Zero-One Variables," Opns. Res. 13, 517-546 (1965).
2. ----, "Extension de l'algorithme additif à la programmation en nombres entiers et à la programmation nonlinéaire," Comptes Rendus de l'Académie des Sciences (Paris) 258, 5136-5139 (1964).
3. P. C. GILMORE AND R. E. GOMORY, "A Linear Programming Approach to the Cutting-Stock Problem," Opns. Res. 9, 849-859 (1961).
4. ---- AND ----, "A Linear Programming Approach to the Cutting-Stock Problem, Part II," Opns. Res. 11, 863-888 (1963).
5. FRED GLOVER, "A Multiphase-Dual Algorithm for the Zero-One Integer Programming Problem," (forthcoming).
6. S. ZIONTS, G. L. THOMPSON, AND F. M. TONGE, "Techniques for Removing Nonbinding Constraints and Extraneous Variables from Linear Programming Problems," Carnegie Institute of Technology, Graduate School of Industrial Administration, Pittsburgh, Pennsylvania, November 1964.