Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 986317, 6 pages
http://dx.doi.org/10.1155/2013/986317
Research Article
A Conjugate Gradient Type Method for the Nonnegative Constraints Optimization Problems
Can Li
College of Mathematics, Honghe University, Mengzi 661199, China

Correspondence should be addressed to Can Li; canlymathe@163.com

Received 16 December 2012; Accepted 20 March 2013

Academic Editor: Theodore E. Simos

Copyright © 2013 Can Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We are concerned with the nonnegative constraints optimization problems. It is well known that the conjugate gradient methods are efficient methods for solving large-scale unconstrained optimization problems due to their simplicity and low storage. Combining the modified Polak-Ribiere-Polyak method proposed by Zhang, Zhou, and Li with the Zoutendijk feasible direction method, we propose a conjugate gradient type method for solving the nonnegative constraints optimization problems. If the current iterate is a feasible point, the direction generated by the proposed method is always a feasible descent direction at the current iterate. Under appropriate conditions, we show that the proposed method is globally convergent. We also present some numerical results to show the efficiency of the proposed method.
1. Introduction
Due to their simplicity and their low memory requirement, the conjugate gradient methods play a very important role in solving unconstrained optimization problems, especially large-scale optimization problems. Over the years, many variants of the conjugate gradient method have been proposed, and some are widely used in practice. The key features of the conjugate gradient methods are that they require no matrix storage and are faster than the steepest descent method.
The linear conjugate gradient method was proposed by Hestenes and Stiefel [1] in the 1950s as an iterative method for solving linear systems

\[ Ax = b, \quad x \in \mathbb{R}^n, \tag{1} \]
where \(A\) is an \(n \times n\) symmetric positive definite matrix. Problem (1) can be stated equivalently as the following minimization problem:

\[ \min \ \tfrac{1}{2} x^T A x - b^T x, \quad x \in \mathbb{R}^n. \tag{2} \]
This equivalence allows us to interpret the linear conjugate gradient method either as an algorithm for solving linear systems or as a technique for minimizing convex quadratic functions. For any \(x \in \mathbb{R}^n\), the sequence \(\{x_k\}\) generated by the linear conjugate gradient method converges to the solution \(x^*\) of the linear system (1) in at most \(n\) steps.

The first nonlinear conjugate gradient method was introduced by Fletcher and Reeves [2] in the 1960s. It is one of the earliest known techniques for solving large-scale nonlinear optimization problems
\[ \min f(x), \quad x \in \mathbb{R}^n, \tag{3} \]

where \(f : \mathbb{R}^n \to \mathbb{R}\) is continuously differentiable. The nonlinear conjugate gradient methods for solving (3) have the following form:

\[ x_{k+1} = x_k + \alpha_k d_k, \qquad d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \beta_k d_{k-1}, & k > 0, \end{cases} \]

where \(g_k = \nabla f(x_k)\), \(\alpha_k\) is a steplength obtained by a line search, and \(\beta_k\) is a scalar which determines the different conjugate gradient methods. If we choose \(f\) to be a strongly convex quadratic and \(\alpha_k\) to be the exact minimizer, the nonlinear conjugate gradient method reduces to the linear conjugate gradient method.
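For concreteness, the linear method described above can be sketched as follows (a minimal illustration, not code from the paper):

```python
import numpy as np

def linear_cg(A, b, x0=None, tol=1e-10):
    """Hestenes-Stiefel linear conjugate gradient for Ax = b with A SPD."""
    n = b.size
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x          # residual; minus the gradient of (1/2)x^T A x - b^T x
    d = r.copy()           # first direction: steepest descent
    for _ in range(n):     # in exact arithmetic, at most n steps are needed
        rr = r @ r
        if np.sqrt(rr) < tol:
            break
        Ad = A @ d
        alpha = rr / (d @ Ad)      # exact minimizer along d
        x += alpha * d
        r -= alpha * Ad
        beta = (r @ r) / rr        # FR and HS coincide in the linear case
        d = r + beta * d
    return x
```

The \(n\)-step termination property is exactly the convergence statement made above for problem (1).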
Several famous formulae for \(\beta_k\) are the Hestenes-Stiefel (HS) [1], Fletcher-Reeves (FR) [2], Polak-Ribiere-Polyak (PRP) [3, 4], Conjugate-Descent (CD) [5], Liu-Storey (LS) [6], and Dai-Yuan (DY) [7] formulae, which are given by

\[ \beta_k^{HS} = \frac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}}, \quad \beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \quad \beta_k^{PRP} = \frac{g_k^T y_{k-1}}{\|g_{k-1}\|^2}, \]
\[ \beta_k^{CD} = -\frac{\|g_k\|^2}{d_{k-1}^T g_{k-1}}, \quad \beta_k^{LS} = -\frac{g_k^T y_{k-1}}{d_{k-1}^T g_{k-1}}, \quad \beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^T y_{k-1}}, \]

where \(y_{k-1} = g_k - g_{k-1}\) and \(\|\cdot\|\) denotes the Euclidean norm of vectors. In this paper, we focus our attention on the Polak-Ribiere-Polyak (PRP) method. The study of the PRP method has received much attention and has made good progress. The global convergence of the PRP method with exact line search has been proved in [3] under a strong convexity assumption on \(f\). However, for general nonlinear functions, an example given by Powell [8] shows that the PRP method may fail to be globally convergent even if the exact line search is used. Inspired by Powell's work, Gilbert and Nocedal [9] conducted an elegant analysis and showed that the PRP method is globally convergent if \(\beta_k^{PRP}\) is restricted to be nonnegative and \(\alpha_k\) is determined by a line search satisfying the sufficient descent condition \(g_k^T d_k \le -c \|g_k\|^2\), in addition to the standard Wolfe conditions. Other conjugate gradient methods and their global convergence can be found in [10–15] and so forth.
Recently, Li and Wang [16] extended the modified Fletcher-Reeves (MFR) method proposed by Zhang et al. [17] for solving unconstrained optimization to the nonlinear equations

\[ F(x) = 0, \tag{8} \]

where \(F : \mathbb{R}^n \to \mathbb{R}^n\) is continuously differentiable, and proposed a descent derivative-free method for solving symmetric nonlinear equations. The direction generated by the method is a descent direction for the residual function. Under appropriate conditions, the method is globally convergent with the use of some backtracking line search technique.
In this paper, we further study the conjugate gradient method. We focus our attention on the modified Polak-Ribiere-Polyak (MPRP) method proposed by Zhang et al. [18]. The direction generated by the MPRP method is given by

\[ d_k = \begin{cases} -g(x_k), & k = 0, \\ -g(x_k) + \beta_k^{PRP} d_{k-1} - \theta_k y_{k-1}, & k > 0, \end{cases} \tag{9} \]

where \(g(x_k) = \nabla f(x_k)\), \(\beta_k^{PRP} = g(x_k)^T y_{k-1} / \|g(x_{k-1})\|^2\), \(\theta_k = g(x_k)^T d_{k-1} / \|g(x_{k-1})\|^2\), and \(y_{k-1} = g(x_k) - g(x_{k-1})\). The MPRP method not only preserves the good properties of the PRP method but also possesses another nice property: it always generates descent directions for the objective function. This property is independent of the line search used. Under suitable conditions, the MPRP method with an Armijo-type line search is also globally convergent. The purpose of this paper is to develop an MPRP type method for the nonnegative constraints optimization problems. Combining the Zoutendijk feasible direction method with the MPRP method, we propose a conjugate gradient type method for solving the nonnegative constraints optimization problems. If the initial point is feasible, the method generates a feasible point sequence. We also do numerical experiments to test the proposed method and compare its performance with the Zoutendijk feasible direction method. The numerical results show that the method we propose outperforms the Zoutendijk feasible direction method.
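The MPRP direction (9) is straightforward to compute; the sketch below (an illustration, not the paper's code) also checks the sufficient descent property \(g(x_k)^T d_k = -\|g(x_k)\|^2\), which holds by construction for any line search:

```python
import numpy as np

def mprp_direction(g_k, g_prev, d_prev):
    """Direction (9) of the Zhang-Zhou-Li MPRP method.

    Satisfies g_k^T d_k = -||g_k||^2 regardless of the line search:
    the theta_k * y_{k-1} term cancels the contribution of beta_k * d_{k-1}.
    """
    if d_prev is None:                 # k = 0: steepest descent step
        return -g_k
    y = g_k - g_prev                   # y_{k-1} = g(x_k) - g(x_{k-1})
    denom = g_prev @ g_prev            # ||g(x_{k-1})||^2
    beta = (g_k @ y) / denom           # beta_k^PRP
    theta = (g_k @ d_prev) / denom     # theta_k
    return -g_k + beta * d_prev - theta * y
```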
2. Algorithm

Consider the following nonnegative constraints optimization problem:

\[ \min \ f(x) \quad \text{s.t.} \ x \ge 0, \tag{10} \]

where \(f : \mathbb{R}^n \to \mathbb{R}\) is continuously differentiable. Let \(x_k \ge 0\) be the current iterate. Define the index sets

\[ I_k = I(x_k) = \{ i \mid x_k(i) = 0 \}, \qquad J_k = \{1, 2, \ldots, n\} \setminus I_k, \tag{11} \]

where \(x_k(i)\) is the \(i\)th component of \(x_k\). In fact, the index set \(I_k\) is the active set of problem (10) at \(x_k\).
The purpose of this paper is to develop a conjugate gradient type method for problem (10). Since the iterative sequence is a feasible point sequence, the search directions should be feasible descent directions. Let \(x_k \ge 0\) be the current iterate. By the definition of feasible direction [19], \(d \in \mathbb{R}^n\) is a feasible direction of (10) at \(x_k\) if and only if \(d_{I_k} \ge 0\). Similar to the Zoutendijk feasible direction method, we consider the following problem:

\[ \min \ \nabla f(x_k)^T d \quad \text{s.t.} \ d_{I_k} \ge 0, \ \|d\| \le 1. \tag{12} \]

Next we show that if \(x_k\) is not a KKT point of (10), then the solution of problem (12) is a feasible descent direction of \(f\) at \(x_k\).
Lemma 1. Let \(x_k \ge 0\) and let \(d\) be a solution of problem (12). Then \(\nabla f(x_k)^T d \le 0\). Moreover, \(\nabla f(x_k)^T d = 0\) if and only if \(x_k\) is a KKT point of problem (10).

Proof. Since \(d = 0\) is a feasible point of problem (12), there must be \(\nabla f(x_k)^T d \le 0\). Consequently, if \(\nabla f(x_k)^T d \ne 0\), there must be \(\nabla f(x_k)^T d < 0\). This implies that the direction \(d\) is a feasible descent direction of \(f\) at \(x_k\).
We suppose that \(\nabla f(x_k)^T d = 0\). Problem (12) is equivalent to the following problem:

\[ \min \ \nabla f(x_k)^T d \quad \text{s.t.} \ d_{I_k} \ge 0, \ \|d\|^2 \le 1. \tag{13} \]
Then there exist \(\lambda_{I_k}\) and \(\mu\) such that the following KKT condition holds:

\[ \nabla f(x_k) - \begin{pmatrix} \lambda_{I_k} \\ 0 \end{pmatrix} + 2\mu d = 0, \]
\[ \lambda_{I_k} \ge 0, \quad d_{I_k} \ge 0, \quad \lambda_{I_k}^T d_{I_k} = 0, \]
\[ \mu \ge 0, \quad \|d\| \le 1, \quad \mu \left( \|d\|^2 - 1 \right) = 0. \]
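These KKT conditions suggest that subproblem (12) can be solved in closed form: free components follow the negative gradient, active components whose gradient entries are nonnegative are set to zero, and the result is normalized to unit length. The sketch below is derived from the KKT conditions and is not the paper's own code (the function name and interface are illustrative):

```python
import numpy as np

def feasible_descent_direction(grad, x, tol=1e-12):
    """Solve (12): min grad^T d  s.t.  d_i >= 0 for i in I_k, ||d|| <= 1.

    I_k = {i : x_i = 0} is the active set. From the KKT conditions:
    for i in J_k take d_i ∝ -grad_i; for i in I_k, complementarity forces
    d_i = 0 whenever grad_i >= 0; finally scale to unit norm.
    """
    v = -grad.astype(float)
    active = (x == 0)
    v[active] = np.maximum(v[active], 0.0)   # enforce d_{I_k} >= 0
    norm = np.linalg.norm(v)
    if norm < tol:                           # optimal value 0: x is a KKT point
        return np.zeros_like(v)
    return v / norm
```

When the returned direction is nonzero, \(\nabla f(x_k)^T d < 0\), matching Lemma 1.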
Taking the limit on both sides of the equation, then by combining \(\|d_k\| \le M\) and recalling \(\lim_{k \in K, k \to \infty} \alpha_k = 0\), we obtain that \(\lim_{k \in K, k \to \infty} |g(x_k)^T d_k| = 0\). This also yields a contradiction.
3. Numerical Experiments

In this section, we report some numerical experiments. We test the performance of Algorithm 4 and compare it with the Zoutendijk method.

The code was written in Matlab, and the program was run on a PC with a 2.20 GHz CPU and 1.00 GB memory. The parameters in the method are specified as follows: we set \(\rho = 1/2\) and \(\delta = 1/10\). We stop the iteration if \(|\nabla f(x_k)^T d_k| \le 10^{-4}\) or the iteration number exceeds 10000.

We first test Algorithm 4 on small and medium size problems and compare it with the Zoutendijk method in the total number of iterations and the CPU time used. The test problems are from the CUTE library [20]. The numerical results of Algorithm 4 and the Zoutendijk method are listed in Table 1. The columns have the following meanings: \(P(i)\) is the number of the test problem, Dim is the dimension of the test problem, Iter is the number of iterations, and Time is CPU time in seconds.
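The exact line-search rule of Algorithm 4 falls in a part of the paper not reproduced in this excerpt; the parameters \(\rho\) and \(\delta\) are consistent with an Armijo-type backtracking rule along a feasible direction, which might look like the following hypothetical sketch:

```python
import numpy as np

def armijo_step(f, g, x, d, rho=0.5, delta=0.1, max_back=50):
    """Hypothetical Armijo-type backtracking (not the paper's exact rule).

    Find alpha = rho^m with f(x + alpha*d) <= f(x) + delta*alpha*g^T d,
    while also keeping the trial point feasible (x + alpha*d >= 0).
    """
    gd = g @ d              # directional derivative; negative for descent d
    fx = f(x)
    alpha = 1.0
    for _ in range(max_back):
        x_new = x + alpha * d
        if np.all(x_new >= 0) and f(x_new) <= fx + delta * alpha * gd:
            return alpha
        alpha *= rho        # backtrack by the factor rho = 1/2
    return alpha
```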
We can see from Table 1 that Algorithm 4 has successfully solved 12 test problems, while the Zoutendijk method has successfully solved 8 test problems. In the number of iterations, Algorithm 4 has 12 test results better than the Zoutendijk method. In computation time, Algorithm 4 performs much better than the Zoutendijk method.

Table 1: The numerical results. Columns: \(P(i)\), Dim; Iter and Time for Algorithm 4 and for the Zoutendijk method.

We then test Algorithm 4 and the Zoutendijk method on two problems with a larger dimension. The problem VARDIM comes from [20], and the following problem comes from [16]. The results are listed in Tables 2 and 3.
Problem 1. The nonnegative constraints optimization problem

\[ \min \ f(x) \quad \text{s.t.} \ x \ge 0, \tag{46} \]

with the Engval function \(f : \mathbb{R}^n \to \mathbb{R}\) defined by

\[ f(x) = \sum_{i=2}^{n} \left[ \left( x_{i-1}^2 + x_i^2 \right)^2 - 4 x_{i-1} + 3 \right]. \tag{47} \]
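The Engval function (47) can be implemented directly; a small sketch follows, with its gradient derived here by differentiating each summand (the gradient formula is not quoted from the paper):

```python
import numpy as np

def engval(x):
    """Engval function (47): sum_{i=2}^n [(x_{i-1}^2 + x_i^2)^2 - 4 x_{i-1} + 3]."""
    a, b = x[:-1], x[1:]                 # pairs (x_{i-1}, x_i), i = 2..n
    return float(np.sum((a ** 2 + b ** 2) ** 2 - 4.0 * a + 3.0))

def engval_grad(x):
    """Analytic gradient of (47), derived term by term."""
    g = np.zeros_like(x, dtype=float)
    s = x[:-1] ** 2 + x[1:] ** 2         # s_i = x_{i-1}^2 + x_i^2
    g[:-1] += 4.0 * x[:-1] * s - 4.0     # d/dx_{i-1} of each summand
    g[1:] += 4.0 * x[1:] * s             # d/dx_i of each summand
    return g
```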
We can see from Table 2 that Algorithm 4 has successfully solved the problem VARDIM, whose scale varies from 1000 dimensions to 5000 dimensions. However, the Zoutendijk method fails to solve the problem VARDIM with larger dimension. From Table 3, although the number of iterations of Algorithm 4 is greater than that of the Zoutendijk method, the computation time of Algorithm 4 is less, and this advantage becomes more evident as the dimension of the test problem increases.

In summary, the results in Tables 1–3 show that Algorithm 4 is more efficient than the Zoutendijk method and provides an efficient method for solving nonnegative constraints optimization problems.
Acknowledgment
This research is supported by the NSF (11161020) of China
References

[1] M. R. Hestenes and E. Stiefel, "Methods of conjugate gradients for solving linear systems," Journal of Research of the National Bureau of Standards, vol. 49, pp. 409–436, 1952.
[2] R. Fletcher and C. M. Reeves, "Function minimization by conjugate gradients," The Computer Journal, vol. 7, pp. 149–154, 1964.
[3] E. Polak and G. Ribière, "Note sur la convergence de directions conjuguées," Revue Française d'Informatique et de Recherche Opérationnelle, vol. 16, pp. 35–43, 1969.
[4] B. T. Polyak, "The conjugate gradient method in extremal problems," USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94–112, 1969.
[5] R. Fletcher, Practical Methods of Optimization, John Wiley & Sons, Chichester, UK, 2nd edition, 1987.
[6] Y. Liu and C. Storey, "Efficient generalized conjugate gradient algorithms. I. Theory," Journal of Optimization Theory and Applications, vol. 69, no. 1, pp. 129–137, 1991.
[7] Y. H. Dai and Y. Yuan, "A nonlinear conjugate gradient method with a strong global convergence property," SIAM Journal on Optimization, vol. 10, no. 1, pp. 177–182, 1999.
[8] M. J. D. Powell, "Convergence properties of algorithms for nonlinear optimization," SIAM Review, vol. 28, no. 4, pp. 487–500, 1986.
[9] J. C. Gilbert and J. Nocedal, "Global convergence properties of conjugate gradient methods for optimization," SIAM Journal on Optimization, vol. 2, no. 1, pp. 21–42, 1992.
[10] R. Pytlak, "On the convergence of conjugate gradient algorithms," IMA Journal of Numerical Analysis, vol. 14, no. 3, pp. 443–460, 1994.
[11] G. Li, C. Tang, and Z. Wei, "New conjugacy condition and related new conjugate gradient methods for unconstrained optimization," Journal of Computational and Applied Mathematics, vol. 202, no. 2, pp. 523–539, 2007.
[12] X. Li and X. Zhao, "A hybrid conjugate gradient method for optimization problems," Natural Science, vol. 3, no. 1, pp. 85–90, 2011.
[13] Y. H. Dai and Y. Yuan, "An efficient hybrid conjugate gradient method for unconstrained optimization," Annals of Operations Research, vol. 103, pp. 33–47, 2001.
[14] W. W. Hager and H. Zhang, "A new conjugate gradient method with guaranteed descent and an efficient line search," SIAM Journal on Optimization, vol. 16, no. 1, pp. 170–192, 2005.
[15] D.-H. Li, Y.-Y. Nie, J.-P. Zeng, and Q.-N. Li, "Conjugate gradient method for the linear complementarity problem with S-matrix," Mathematical and Computer Modelling, vol. 48, no. 5-6, pp. 918–928, 2008.
[16] D.-H. Li and X.-L. Wang, "A modified Fletcher-Reeves-type derivative-free method for symmetric nonlinear equations," Numerical Algebra, Control and Optimization, vol. 1, no. 1, pp. 71–82, 2011.
[17] L. Zhang, W. Zhou, and D. Li, "Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search," Numerische Mathematik, vol. 104, no. 4, pp. 561–572, 2006.
[18] L. Zhang, W. Zhou, and D.-H. Li, "A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence," IMA Journal of Numerical Analysis, vol. 26, no. 4, pp. 629–640, 2006.
[19] D. H. Li and X. J. Tong, Numerical Optimization, Science Press, Beijing, China, 2005.
[20] J. J. Moré, B. S. Garbow, and K. E. Hillstrom, "Testing unconstrained optimization software," ACM Transactions on Mathematical Software, vol. 7, no. 1, pp. 17–41, 1981.
ean norm of vectors In this paper we focus our attentionon the Polak-Ribiere-Polyak (PRP) method The study ofthe PRP method has received much attention and has madegood progress The global convergence of the PRP methodwith exact line search has been proved in [3] under strongconvexity assumption on 119891 However for general nonlinearfunction an example given by Powell [8] shows that the PRPmethod may fail to be globally convergent even if the exactline search is used Inspired by Powellrsquos work Gilbert andNocedal [9] conducted an elegant analysis and showed thatthe PRPmethod is globally convergent if 120573PRP
119896is restricted to
be nonnegative and 120572119896is determined by a line search satisfy-
ing the sufficient descent condition 119892⊤
119896119889119896
le minus119888 1198921198962 in
addition to the standard Wolfe conditions Other conjugategradient methods and their global convergence can be foundin [10ndash15] and so forth
Recently Li andWang [16] extended themodified Fletch-er-Reeves (MFR) method proposed by Zhang et al [17] forsolving unconstrained optimization to the nonlinear equa-tions
119865 (119909) = 0 (8)
where 119865 119877119899
rarr 119877119899 is continuously differentiable and pro-
posed a descent derivative-freemethod for solving symmetricnonlinear equations The direction generated by the methodis descent for the residual function Under appropriateconditions the method is globally convergent by the use ofsome backtracking line search technique
In this paper we further study the conjugate gradientmethod We focus our attention on the modified Polak-Ribiere-Polyak (MPRP) method proposed by Zhang et al[18] The direction generated by MPRP method is given by
119889119896=
minus119892 (119909119896) 119896 = 0
minus119892 (119909119896) + 120573
PRP119896
119889119896minus1
minus 120579119896119910119896minus1
119896 gt 0
(9)
where 119892(119909119896) = nabla119891(119909
119896) 120573PRP119896
= 119892(119909119896)119879
119910119896minus1
119892(119909119896minus1
)2 120579119896=
119892(119909119896)119879
119889119896minus1
119892(119909119896minus1
)2 and 119910
119896minus1= 119892(119909
119896) minus 119892(119909
119896minus1) The
MPRP method not only reserves good properties of thePRP method but also possesses another nice property thatit is always generates descent directions for the objective
function This property is independent of the line searchused Under suitable conditions the MPRP method withthe Armoji-type line search is also globally convergent Thepurpose of this paper is to develop an MPRP type methodfor the nonnegative constraints optimization problems Com-bining the Zoutendijk feasible direction method with MPRPmethod we propose a conjugate gradient type method forsolving the nonnegative constraints optimization problemsIf the initial point is feasible the method generates a feasiblepoint sequence We also do numerical experiments to testthe proposed method and compare the performance of themethod with the Zoutendijk feasible direction method Thenumerical results show that the method that we proposeoutperforms the Zoutendijk feasible direction method
2 Algorithm
Consider the following nonnegative constraints optimizationproblems
min 119891 (119909)
st 119909 ge 0
(10)
where 119891 119877119899
rarr 119877 is continuously differentiable Let 119909119896ge 0
be the current iteration Define the index set119868119896= 119868 (119909
119896) = 119894 | 119909
119896(119894) = 0 119869
119896= 1 2 119899 119868
119896
(11)
where 119909119896(119894) is the 119894th component of 119909
119896 In fact the index set
119868119896is the active set of problem (10) at 119909
119896
The purpose of this paper is to develop a conjugategradient type method for problem (10) Since the iterativesequence is a feasible point sequence the search directionsshould be feasible descent directions Let 119909
119896ge 0 be the
current iteration By the definition of feasible direction wehave that [19] 119889 isin 119877
119899 is a feasible direction of (10) at 119909119896if and
only if 119889119868119896
ge 0 Similar to the Zoutendijk feasible directionmethod we consider the following problem
min nabla119891(119909119896)119879
119889
st 119889119868119896ge 0 119889 le 1
(12)
Next we show that if 119909119896is not a KKT point of (10) the solu-
tion of problem (12) is a feasible descent direction of 119891 at 119909119896
Lemma 1 Let 119909119896ge 0 and let 119889 be a solution of problem (12)
then nabla119891(119909119896)119879
119889 le 0 Moreover nabla119891(119909119896)119879
119889 = 0 if and only if 119909119896
is a KKT point of problem (10)
Proof Since 119889 = 0 is a feasible point of problem (12) theremust be nabla119891(119909
119896)119879
119889 le 0 Consequently if nabla119891(119909119896)119879
119889 = 0 theremust be nabla119891(119909
119896)119879
119889 lt 0 This implies that the direction 119889 is afeasible descent direction of 119891 at 119909
119896
We suppose thatnabla119891(119909119896)119879
119889 = 0 Problem (12) is equivalentto the following problem
min nabla119891(119909119896)119879
119889
st 119889119868119896ge 0 119889
2
le 1
(13)
Journal of Applied Mathematics 3
Then there exist 120582119868119896and 120583 such that the following KKT con-
dition holds
nabla119891 (119909119896) minus (
120582119868119896
0) + 2120583119889 = 0
120582119868119896ge 0 119889
119868119896ge 0 120582
119879
119868119896
119889119868119896= 0
120583 ge 01003817100381710038171003817100381711988910038171003817100381710038171003817le 1 120583 (
Taking the limit on both sides of the equation then bycombining 119889
119896le 119872 and recalling lim
119896isin119870 119896rarrinfin120572119896= 0 we
obtain that lim119896isin119870 119896rarrinfin
|119892(119909119896)119879
119889119896| = 0 This also yields a
contradiction
3 Numerical Experiments
In this section we report some numerical experiments Wetest the performance of Algorithm 4 and compare it with theZoutendijk method
The code was written in Matlab and the program wasrun on a PC with 220GHz CPU and 100GB memory Theparameters in the method are specified as follows We set 120588 =
12 120575 = 110 We stop the iteration if |nabla119891(119909119896)119879
119889119896| le 00001
or the iteration number exceeds 10000We first test Algorithm 4 on small and medium size
problems and compared them with the Zoutendijk methodin the total number of iterations and the CPU time used Thetest problems are from the CUTE library [20]The numericalresults of Algorithm 4 and the Zoutendijk method are listedin Table 1 The columns have the following meanings
119875(119894) is the number of the test problemDim is thedimension of the test problem Iter is the number of iterationsandTime is CPU time in seconds
We can see fromTable 1 that Algorithm 4 has successfullysolved 12 test problems and the Zoutendijk method hassuccessfully solved 8 test problems From the number of iter-ations Algorithm 4 has 12 test results better than Zoutendijkmethod From the computation time Algorithm 4 performs
Table 1 The numerical results
119875(119894) Dim Algorithm 4 Zoutendijk methodIter Time Iter Time
much better than the Zoutendijk method did We then testAlgorithm 4 and the Zoutendijk method on two problemswith a larger dimension The problem of VARDIM comesfrom [20] and the following problem comes from [16] Theresults are listed in Tables 2 and 3
Problem 1 The nonnegative constraints optimization prob-lem
min 119891 (119909)
st 119909 ge 0
(46)
6 Journal of Applied Mathematics
with Engval function 119891 119877119899
rarr 119877 is defined by
119891 (119909) =
119899
sum
119894=2
(1199092
119894minus1+ 1199092
119894)2
minus 4119909119894minus1
+ 3 (47)
We can see fromTable 2 that Algorithm 4 has successfullysolved the problem of VARDIMwhose scale varies from 1000dimensions to 5000 dimensions However the Zoutendijkmethod fails to solve the problem of VARDIM with largerdimension From Table 3 although the number of iterationsof Algorithm 4 is more than the Zoutendijk method thecomputation time of Algorithm 4 is less than the Zoutendijkmethod and this feature becomesmore evident as increase ofthe dimension of the test problem
In summary the results from Tables 1ndash3 show thatAlgorithm 4 is more efficient than the Zoutendijk methodand provides an efficient method for solving nonnegativeconstraints optimization problems
Acknowledgment
This research is supported by the NSF (11161020) of China
References
[1] M R Hestenes and E Stiefel ldquoMethods of conjugate gradientsfor solving linear systemsrdquo Journal of Research of the NationalBureau of Standards vol 49 pp 409ndash436 1952
[2] R Fletcher and C M Reeves ldquoFunction minimization by con-jugate gradientsrdquo The Computer Journal vol 7 pp 149ndash1541964
[3] B Polak and G Ribire ldquoNote sur la convergence de directionsconjugeesrdquo Revue Francaise drsquoInformatique et de RechercheOperationnelle vol 16 pp 35ndash43 1969
[4] B T Polyak ldquoThe conjugate gradient method in extremal prob-lemsrdquo USSR Computational Mathematics and MathematicalPhysics vol 9 no 4 pp 94ndash112 1969
[5] R Fletcher Practical Methods of Optimization John Wiley ampSons Ltd Chichester UK 2nd edition 1987
[6] Y Liu and C Storey ldquoEfficient generalized conjugate gradientalgorithms I Theoryrdquo Journal of Optimization Theory and Ap-plications vol 69 no 1 pp 129ndash137 1991
[7] Y H Dai and Y Yuan ldquoA nonlinear conjugate gradient methodwith a strong global convergence propertyrdquo SIAM Journal onOptimization vol 10 no 1 pp 177ndash182 1999
[8] M J D Powell ldquoConvergence properties of algorithms for non-linear optimizationrdquo SIAM Review vol 28 no 4 pp 487ndash5001986
[9] J C Gilbert and J Nocedal ldquoGlobal convergence properties ofconjugate gradient methods for optimizationrdquo SIAM Journal onOptimization vol 2 no 1 pp 21ndash42 1992
[10] R Pytlak ldquoOn the convergence of conjugate gradient algo-rithmsrdquo IMA Journal of Numerical Analysis vol 14 no 3 pp443ndash460 1994
[11] G Li C Tang and ZWei ldquoNew conjugacy condition and relat-ed new conjugate gradient methods for unconstrained opti-mizationrdquo Journal of Computational and Applied Mathematicsvol 202 no 2 pp 523ndash539 2007
[12] X Li and X Zhao ldquoA hybrid conjugate gradient method foroptimization problemsrdquoNatural Science vol 3 no 1 pp 85ndash902011
[13] Y H Dai and Y Yuan ldquoAn efficient hybrid conjugate gradientmethod for unconstrained optimizationrdquo Annals of OperationsResearch vol 103 pp 33ndash47 2001
[14] W W Hager and H Zhang ldquoA new conjugate gradient methodwith guaranteed descent and an efficient line searchrdquo SIAMJournal on Optimization vol 16 no 1 pp 170ndash192 2005
[15] D-H Li Y-Y Nie J-P Zeng andQ-N Li ldquoConjugate gradientmethod for the linear complementarity problemwith 119878-matrixrdquoMathematical and ComputerModelling vol 48 no 5-6 pp 918ndash928 2008
[16] D-H Li and X-L Wang ldquoA modified Fletcher-Reeves-typederivative-free method for symmetric nonlinear equationsrdquoNumerical Algebra Control and Optimization vol 1 no 1 pp71ndash82 2011
[17] L Zhang W Zhou and D Li ldquoGlobal convergence of a mod-ified Fletcher-Reeves conjugate gradient method with Armijo-type line searchrdquo Numerische Mathematik vol 104 no 4 pp561ndash572 2006
[18] L Zhang W Zhou and D-H Li ldquoA descent modified Polak-Ribiere-Polyak conjugate gradient method and its global con-vergencerdquo IMA Journal of Numerical Analysis vol 26 no 4 pp629ndash640 2006
[19] D H Li and X J Tong Numerical Optimization Science PressBeijing China 2005
[20] J J More B S Garbow and K E Hillstrom ldquoTesting uncon-strained optimization softwarerdquo ACM Transactions on Mathe-matical Software vol 7 no 1 pp 17ndash41 1981
Taking the limit on both sides of the equation then bycombining 119889
119896le 119872 and recalling lim
119896isin119870 119896rarrinfin120572119896= 0 we
obtain that lim119896isin119870 119896rarrinfin
|119892(119909119896)119879
119889119896| = 0 This also yields a
contradiction
3 Numerical Experiments
In this section we report some numerical experiments Wetest the performance of Algorithm 4 and compare it with theZoutendijk method
The code was written in Matlab and the program wasrun on a PC with 220GHz CPU and 100GB memory Theparameters in the method are specified as follows We set 120588 =
12 120575 = 110 We stop the iteration if |nabla119891(119909119896)119879
119889119896| le 00001
or the iteration number exceeds 10000We first test Algorithm 4 on small and medium size
problems and compared them with the Zoutendijk methodin the total number of iterations and the CPU time used Thetest problems are from the CUTE library [20]The numericalresults of Algorithm 4 and the Zoutendijk method are listedin Table 1 The columns have the following meanings
119875(119894) is the number of the test problemDim is thedimension of the test problem Iter is the number of iterationsandTime is CPU time in seconds
We can see fromTable 1 that Algorithm 4 has successfullysolved 12 test problems and the Zoutendijk method hassuccessfully solved 8 test problems From the number of iter-ations Algorithm 4 has 12 test results better than Zoutendijkmethod From the computation time Algorithm 4 performs
Table 1 The numerical results
119875(119894) Dim Algorithm 4 Zoutendijk methodIter Time Iter Time
much better than the Zoutendijk method did We then testAlgorithm 4 and the Zoutendijk method on two problemswith a larger dimension The problem of VARDIM comesfrom [20] and the following problem comes from [16] Theresults are listed in Tables 2 and 3
Problem 1 The nonnegative constraints optimization prob-lem
min 119891 (119909)
st 119909 ge 0
(46)
6 Journal of Applied Mathematics
with Engval function 119891 119877119899
rarr 119877 is defined by
119891 (119909) =
119899
sum
119894=2
(1199092
119894minus1+ 1199092
119894)2
minus 4119909119894minus1
+ 3 (47)
Taking the limit on both sides of the equation, combining $\|d_k\| \le M$, and recalling $\lim_{k \in K,\, k \to \infty} \alpha_k = 0$, we obtain that $\lim_{k \in K,\, k \to \infty} |g(x_k)^T d_k| = 0$. This also yields a contradiction.
3. Numerical Experiments

In this section we report some numerical experiments. We test the performance of Algorithm 4 and compare it with the Zoutendijk method.
The code was written in Matlab, and the program was run on a PC with a 2.20 GHz CPU and 1.00 GB of memory. The parameters in the method are specified as follows: we set $\rho = 1/2$ and $\delta = 1/10$. We stop the iteration if $|\nabla f(x_k)^T d_k| \le 10^{-4}$ or the iteration number exceeds 10000.

We first test Algorithm 4 on small and medium size problems and compare it with the Zoutendijk method in the total number of iterations and the CPU time used. The test problems are from the CUTE library [20]. The numerical results of Algorithm 4 and the Zoutendijk method are listed in Table 1. The columns have the following meanings: $P(i)$ is the number of the test problem, Dim is the dimension of the test problem, Iter is the number of iterations, and Time is the CPU time in seconds.
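To make the role of these parameters concrete, the following is a minimal Python sketch of a generic Armijo-type backtracking line search with $\rho = 1/2$ and $\delta = 1/10$. It assumes a descent direction $d$ at the current point is already available; it is an illustrative sketch, not a reproduction of Algorithm 4, whose feasible-direction construction is given earlier in the paper.

```python
def armijo_step(f, x, g, d, rho=0.5, delta=0.1, alpha_min=1e-12):
    """Backtracking (Armijo-type) line search.

    Returns the largest alpha in {1, rho, rho^2, ...} satisfying
        f(x + alpha*d) <= f(x) + delta * alpha * g^T d,
    where g is the gradient at x and d is a descent direction (g^T d < 0).
    """
    fx = f(x)
    gtd = sum(gi * di for gi, di in zip(g, d))  # g^T d, negative for descent
    alpha = 1.0
    while f([xi + alpha * di for xi, di in zip(x, d)]) > fx + delta * alpha * gtd:
        alpha *= rho
        if alpha < alpha_min:  # safeguard against an endless reduction loop
            break
    return alpha
```

For example, for $f(x) = x^2$ at $x = 1$ with the steepest-descent direction $d = -2$, the full step $\alpha = 1$ is rejected and the search accepts $\alpha = 1/2$.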
We can see from Table 1 that Algorithm 4 has successfully solved 12 test problems, while the Zoutendijk method has successfully solved 8 test problems. In terms of the number of iterations, Algorithm 4 has 12 test results better than the Zoutendijk method. In terms of computation time, Algorithm 4 performs much better than the Zoutendijk method. We then test Algorithm 4 and the Zoutendijk method on two problems with a larger dimension. The problem VARDIM comes from [20], and the following problem comes from [16]. The results are listed in Tables 2 and 3.

Table 1: The numerical results (columns: $P(i)$, Dim, then Iter and Time for Algorithm 4 and for the Zoutendijk method).
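For reference, the VARDIM function from the collection in [20] is commonly stated as $f(x) = \sum_{i=1}^{n}(x_i - 1)^2 + t^2 + t^4$ with $t = \sum_{i=1}^{n} i\,x_i - n(n+1)/2$, minimized at $x = (1, \dots, 1)$. A short Python sketch of this standard formulation (not tied to the paper's experimental code) is:

```python
def vardim(x):
    """VARDIM test function in its commonly stated form:
    f(x) = sum_i (x_i - 1)^2 + t^2 + t^4,
    where t = sum_i i*x_i - n(n+1)/2 (1-based indices).
    The minimizer is x = (1, ..., 1) with f = 0.
    """
    n = len(x)
    t = sum((i + 1) * xi for i, xi in enumerate(x)) - n * (n + 1) / 2.0
    return sum((xi - 1.0) ** 2 for xi in x) + t ** 2 + t ** 4
```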
Problem 1. The nonnegative constraints optimization problem

$$\min f(x) \quad \text{s.t.} \quad x \ge 0, \tag{46}$$

where the Engval function $f : R^n \to R$ is defined by

$$f(x) = \sum_{i=2}^{n} \left[ (x_{i-1}^2 + x_i^2)^2 - 4x_{i-1} + 3 \right]. \tag{47}$$
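As a sanity check on (47), a direct Python implementation of the Engval objective (illustrative only; the function name and 0-based indexing are ours) is:

```python
def engval(x):
    """Engval function from (47):
    f(x) = sum_{i=2}^{n} [(x_{i-1}^2 + x_i^2)^2 - 4*x_{i-1} + 3],
    rewritten here with 0-based indexing over a Python list x.
    """
    return sum((x[i - 1] ** 2 + x[i] ** 2) ** 2 - 4.0 * x[i - 1] + 3.0
               for i in range(1, len(x)))
```

For example, $x = (1, 1)$ gives the single term $(1 + 1)^2 - 4 + 3 = 3$.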
We can see from Table 2 that Algorithm 4 has successfully solved the problem VARDIM, whose scale varies from 1000 dimensions to 5000 dimensions; the Zoutendijk method, however, fails to solve VARDIM at the larger dimensions. From Table 3, although Algorithm 4 takes more iterations than the Zoutendijk method, its computation time is less, and this advantage becomes more evident as the dimension of the test problem increases.

In summary, the results in Tables 1–3 show that Algorithm 4 is more efficient than the Zoutendijk method and provides an efficient method for solving nonnegative constraints optimization problems.
Acknowledgment
This research was supported by the NSF of China (11161020).
References
[1] M. R. Hestenes and E. Stiefel, "Methods of conjugate gradients for solving linear systems," Journal of Research of the National Bureau of Standards, vol. 49, pp. 409–436, 1952.
[2] R. Fletcher and C. M. Reeves, "Function minimization by conjugate gradients," The Computer Journal, vol. 7, pp. 149–154, 1964.
[3] E. Polak and G. Ribière, "Note sur la convergence de méthodes de directions conjuguées," Revue Française d'Informatique et de Recherche Opérationnelle, vol. 16, pp. 35–43, 1969.
[4] B. T. Polyak, "The conjugate gradient method in extremal problems," USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94–112, 1969.
[5] R. Fletcher, Practical Methods of Optimization, John Wiley & Sons Ltd., Chichester, UK, 2nd edition, 1987.
[6] Y. Liu and C. Storey, "Efficient generalized conjugate gradient algorithms, part 1: Theory," Journal of Optimization Theory and Applications, vol. 69, no. 1, pp. 129–137, 1991.
[7] Y. H. Dai and Y. Yuan, "A nonlinear conjugate gradient method with a strong global convergence property," SIAM Journal on Optimization, vol. 10, no. 1, pp. 177–182, 1999.
[8] M. J. D. Powell, "Convergence properties of algorithms for nonlinear optimization," SIAM Review, vol. 28, no. 4, pp. 487–500, 1986.
[9] J. C. Gilbert and J. Nocedal, "Global convergence properties of conjugate gradient methods for optimization," SIAM Journal on Optimization, vol. 2, no. 1, pp. 21–42, 1992.
[10] R. Pytlak, "On the convergence of conjugate gradient algorithms," IMA Journal of Numerical Analysis, vol. 14, no. 3, pp. 443–460, 1994.
[11] G. Li, C. Tang, and Z. Wei, "New conjugacy condition and related new conjugate gradient methods for unconstrained optimization," Journal of Computational and Applied Mathematics, vol. 202, no. 2, pp. 523–539, 2007.
[12] X. Li and X. Zhao, "A hybrid conjugate gradient method for optimization problems," Natural Science, vol. 3, no. 1, pp. 85–90, 2011.
[13] Y. H. Dai and Y. Yuan, "An efficient hybrid conjugate gradient method for unconstrained optimization," Annals of Operations Research, vol. 103, pp. 33–47, 2001.
[14] W. W. Hager and H. Zhang, "A new conjugate gradient method with guaranteed descent and an efficient line search," SIAM Journal on Optimization, vol. 16, no. 1, pp. 170–192, 2005.
[15] D.-H. Li, Y.-Y. Nie, J.-P. Zeng, and Q.-N. Li, "Conjugate gradient method for the linear complementarity problem with S-matrix," Mathematical and Computer Modelling, vol. 48, no. 5-6, pp. 918–928, 2008.
[16] D.-H. Li and X.-L. Wang, "A modified Fletcher-Reeves-type derivative-free method for symmetric nonlinear equations," Numerical Algebra, Control and Optimization, vol. 1, no. 1, pp. 71–82, 2011.
[17] L. Zhang, W. Zhou, and D. Li, "Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search," Numerische Mathematik, vol. 104, no. 4, pp. 561–572, 2006.
[18] L. Zhang, W. Zhou, and D.-H. Li, "A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence," IMA Journal of Numerical Analysis, vol. 26, no. 4, pp. 629–640, 2006.
[19] D. H. Li and X. J. Tong, Numerical Optimization, Science Press, Beijing, China, 2005.
[20] J. J. Moré, B. S. Garbow, and K. E. Hillstrom, "Testing unconstrained optimization software," ACM Transactions on Mathematical Software, vol. 7, no. 1, pp. 17–41, 1981.