Globally convergent modified Perry’s conjugate gradient method

Ioannis E. Livieris, Panagiotis Pintelas
Department of Mathematics, University of Patras, GR 265-00, Greece
Educational Software Development Laboratory, Department of Mathematics, University of Patras, GR 265-00, Greece


Keywords: Unconstrained optimization; Conjugate gradient method; Sufficient descent property; Line search; Global convergence


Abstract

Conjugate gradient methods are probably the most famous iterative methods for solving large scale optimization problems in scientific and engineering computation, characterized by the simplicity of their iteration and their low memory requirements. In this paper, we propose a new conjugate gradient method which is based on the MBFGS secant condition by modifying Perry's method. Our proposed method ensures sufficient descent independent of the accuracy of the line search and it is globally convergent under some assumptions. Numerical experiments are also presented.


1. Introduction

Let us consider the unconstrained optimization problem

$$ \min f(x), \quad x \in \mathbb{R}^n, \tag{1.1} $$

where $f:\mathbb{R}^n \to \mathbb{R}$ is a smooth nonlinear function whose gradient is denoted by $g(x) = \nabla f(x)$. Iterative methods are usually applied to this problem; they generate a sequence of points $\{x_k\}$, starting from an initial point $x_0 \in \mathbb{R}^n$, using the recurrence

$$ x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, \ldots, \tag{1.2} $$

where $\alpha_k > 0$ is the stepsize obtained by some line search and $d_k$ is the search direction.

Conjugate gradient methods are probably the most famous iterative methods for solving the optimization problem (1.1), especially when the dimension is large, due to the simplicity of their iteration and their low memory requirements. These methods define the search direction by

$$ d_k = \begin{cases} -g_0, & \text{if } k = 0,\\ -g_k + \beta_k d_{k-1}, & \text{otherwise}, \end{cases} \tag{1.3} $$

where $g_k = g(x_k)$. Conjugate gradient methods differ in their way of defining the scalar parameter $\beta_k$; several choices for $\beta_k$ have been proposed in the literature, giving rise to distinct conjugate gradient methods. The most well known conjugate gradient methods are the Hestenes–Stiefel (HS) method [19], the Fletcher–Reeves (FR) method [11], the Polak–Ribière (PR) method [27] and Perry's (P) method [26]. The update parameters of these methods are respectively specified as follows:

$$ \beta_k^{HS} = \frac{g_k^T y_{k-1}}{y_{k-1}^T d_{k-1}}, \qquad \beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \qquad \beta_k^{PR} = \frac{g_k^T y_{k-1}}{\|g_{k-1}\|^2}, \qquad \beta_k^{P} = \frac{g_k^T (y_{k-1} - s_{k-1})}{y_{k-1}^T d_{k-1}}, $$


where $s_{k-1} = x_k - x_{k-1}$, $y_{k-1} = g_k - g_{k-1}$ and $\|\cdot\|$ denotes the Euclidean norm.
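As an illustration of these formulas, the following small Python helper (the function name and the NumPy-array calling convention are our own choices, not part of the paper) evaluates the four classical update parameters at the current iterate:

```python
import numpy as np

def classical_betas(g, g_prev, d_prev, x, x_prev):
    """Classical CG update parameters at the current iterate.

    All arguments are 1-D NumPy arrays; the denominators are assumed nonzero.
    """
    s = x - x_prev                        # s_{k-1} = x_k - x_{k-1}
    y = g - g_prev                        # y_{k-1} = g_k - g_{k-1}
    return {
        "HS": (g @ y) / (y @ d_prev),
        "FR": (g @ g) / (g_prev @ g_prev),
        "PR": (g @ y) / (g_prev @ g_prev),
        "P":  (g @ (y - s)) / (y @ d_prev),   # Perry's formula
    }
```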

In the literature, much effort has been devoted to the global convergence analysis of conjugate gradient methods. This analysis is usually based on mild conditions, namely Lipschitz continuity and boundedness assumptions, and is closely connected with the sufficient descent property


$$ g_k^T d_k \le -c\|g_k\|^2, \tag{1.4} $$

where $c > 0$ is a positive constant. The global convergence of the FR method was established on general functions using both exact [41] and inexact [1] line searches. The PR and HS methods may get trapped and cycle infinitely without approaching a solution, which implies that neither method is globally convergent for general functions in certain circumstances (see Powell's counterexample [29]). Nevertheless, both methods are preferred to the FR method for their numerical performance, since they have the remarkable property of performing a restart after encountering a bad direction. Motivated by Powell's work [30], Gilbert and Nocedal [15] conducted an elegant analysis and established that the PR method is globally convergent if $\beta_k^{PR}$ is restricted to be nonnegative under the sufficient descent condition (1.4). This theoretical result is very interesting and the globalization technique has been extended to other conjugate gradient methods, see for instance [6,18]. Perry's method is based on a quasi-Newton philosophy, since it satisfies the secant equation, and has been considered one of the most efficient conjugate gradient methods in the context of unconstrained minimization [2,35]. However, a global convergence result for general functions has not been established yet. We refer to the books [6,25], the survey paper [18] and the references therein for the numerical performance and the convergence properties of conjugate gradient methods. During the last decade, much effort has been devoted to developing new conjugate gradient methods which are not only globally convergent for general functions but also computationally superior to classical methods; these fall into two classes.

The first class utilizes second order information to accelerate conjugate gradient methods based on modified secant equations (see [12,13,20,21,34]). Dai and Liao [5] proposed a conjugate gradient method by exploiting a new conjugacy condition based on the standard secant equation. Motivated by their work, Zhou and Zhang [40] proposed a modification of the Dai–Liao method which is based on the MBFGS condition [20,21]. Li et al. [22] proposed some conjugate gradient methods which are based on a modified secant equation [34]. In more recent work, Ford et al. [14] proposed a multi-step conjugate gradient method based on the multi-step quasi-Newton methods proposed in [12,13]. Under proper conditions, these methods are globally convergent and their numerical performance is sometimes superior to that of classical conjugate gradient methods. However, these methods do not guarantee descent directions; therefore restarts are employed in their analysis and implementation in order to guarantee convergence.

The second class focuses on developing conjugate gradient methods which ensure the sufficient descent property (1.4). Independently, Dai and Yuan [7] and Hager and Zhang [17] proposed conjugate gradient methods, obtained by modifying the update parameter $\beta_k$, which generate descent directions under the Wolfe line search conditions. Moreover, an important feature of their work is that they established the global convergence of their methods for general functions.

Quite recently, similarly to the spectral gradient method [2], Zhang et al. [39] considered a different approach: modify the search direction so that it satisfies the sufficient descent condition $g_k^T d_k = -\|g_k\|^2$. More specifically, they proposed the following modification of the FR method:
$$ d_k = -\left(1 + \beta_k^{FR}\, \frac{g_k^T d_{k-1}}{\|g_k\|^2}\right) g_k + \beta_k^{FR} d_{k-1}. \tag{1.5} $$
Their method reduces to the classical FR method in case the line search is exact. An attractive property of their method is that $g_k^T d_k = -\|g_k\|^2$ holds independently of the performed line search and of the choice of $\beta_k$. Moreover, if $\beta_k$ in Eq. (1.5) is specified by another existing conjugate gradient formula, the corresponding modified conjugate gradient method is obtained. Along this line, many related conjugate gradient methods have been studied extensively [4,8,10,23,37,38]; they possess global convergence for general functions and are also computationally competitive with classical methods.
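The sufficient descent identity for (1.5) can be verified directly: premultiplying (1.5) by $g_k^T$ makes the two terms involving $\beta_k^{FR}$ cancel,
$$ g_k^T d_k = -\left(1 + \beta_k^{FR}\frac{g_k^T d_{k-1}}{\|g_k\|^2}\right)\|g_k\|^2 + \beta_k^{FR}\, g_k^T d_{k-1} = -\|g_k\|^2, $$
regardless of the stepsize and of the value of $\beta_k^{FR}$. The same cancellation underlies the method proposed in Section 2.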

In this work, we propose a conjugate gradient method which can be regarded as a modified version of Perry's method [26]. Our proposed method ensures sufficient descent for any line search and is based on the MBFGS secant equation [20,21]. Under suitable conditions, we establish the global convergence of our proposed method provided that the line search satisfies the Wolfe conditions. Our numerical results demonstrate that the proposed method is promising.

The remainder of this paper is organized as follows. In Section 2, we present our proposed conjugate gradient method and in Section 3 we present its global convergence analysis. The numerical experiments are reported in Section 4 using the performance profiles of Dolan and Moré [9]. Finally, Section 5 presents our concluding remarks and our proposals for future research.

2. Modified Perry’s conjugate gradient method

Li and Fukushima [20,21] made a modification of the standard BFGS method and developed a modified BFGS (MBFGS) method which is globally convergent without a convexity assumption on the objective function $f$. Their method satisfies the following secant condition

$$ B_k s_{k-1} = z_{k-1}, \tag{2.1} $$


where $B_k$ is the Hessian approximation and


$$ z_{k-1} = y_{k-1} + h_k \|g_{k-1}\|^{r} s_{k-1}, \tag{2.2} $$

where $r > 0$ and $h_k > 0$ ($h_k$ is slightly different from that of [20], where $r = 1$) is defined by

$$ h_k = t + \max\left\{ -\frac{s_{k-1}^T y_{k-1}}{\|s_{k-1}\|^2},\; 0 \right\} \|g_{k-1}\|^{-r}, \tag{2.3} $$

where $t$ is a positive constant. Taking these into consideration, we propose the following modification of Perry's formula $\beta_k^{P}$:

$$ \beta_k^{MP} = \frac{g_k^T (z_{k-1} - s_{k-1})}{z_{k-1}^T d_{k-1}}, \tag{2.4} $$

where $z_{k-1}$ is defined by (2.2) and (2.3). Notice that, for a general function, the MBFGS secant equations (2.1)–(2.3) ensure that the denominator $z_{k-1}^T d_{k-1}$ in (2.4) is always positive, independently of the performed line search; thus formula (2.4) is well defined. In order to guarantee that our proposed method generates descent directions, we exploit the idea of the modified FR method [39]. More specifically, let the search direction be defined by

$$ d_k = -\left(1 + \beta_k^{MP}\, \frac{g_k^T d_{k-1}}{\|g_k\|^2}\right) g_k + \beta_k^{MP} d_{k-1}. \tag{2.5} $$

It is easy to see that, by the same cancellation as for (1.5), the condition
$$ g_k^T d_k = -\|g_k\|^2 \tag{2.6} $$
holds for any line search. At this point, we present our proposed modified Perry's conjugate gradient algorithm (MP-CG).

Algorithm 2.1 (MP-CG).

Step 1: Initiate $x_0 \in \mathbb{R}^n$ and $0 < \sigma_1 < \sigma_2 < 1$; set $k = 0$.
Step 2: If $\|g_k\| = 0$, then terminate; otherwise go to the next step.
Step 3: Compute the descent direction $d_k$ by Eq. (2.5).
Step 4: Determine a stepsize $\alpha_k$ using the Wolfe line search:

$$ f(x_k + \alpha_k d_k) - f(x_k) \le \sigma_1 \alpha_k g_k^T d_k, \tag{2.7} $$
$$ g(x_k + \alpha_k d_k)^T d_k \ge \sigma_2 g_k^T d_k. \tag{2.8} $$

Step 5: Let $x_{k+1} = x_k + \alpha_k d_k$.
Step 6: Set $k = k + 1$ and go to Step 2.
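To make the iteration concrete, the following Python sketch assembles Eqs. (2.2)–(2.5), together with the nonnegative restriction (3.10) introduced in Section 3, into a single loop. It is only a minimal illustration under our own naming (the function name mp_cg, the fallback stepsize and the default parameter values are ours), not the authors' Fortran implementation; SciPy's line_search, which enforces the strong Wolfe conditions and hence (2.7)–(2.8), stands in for Step 4.

```python
import numpy as np
from scipy.optimize import line_search   # strong Wolfe line search

def mp_cg(f, grad, x0, t=1e-4, r=1.0, tol=1e-6, max_iter=5000):
    """Minimal sketch of Algorithm MP+-CG (Eqs. (2.2)-(2.5) and (3.10))."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                          # d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g, np.inf) <= tol:        # stopping rule of Section 4
            break
        # Step 4: Wolfe line search; SciPy may return None on failure
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:
            alpha = 1e-8                            # crude safeguard (ours)
        x_new = x + alpha * d                       # Step 5
        g_new = grad(x_new)
        s = x_new - x                               # s_k
        y = g_new - g                               # y_k
        # MBFGS vector (2.2)-(2.3): z = y + h * ||g||^r * s
        gnorm_r = np.linalg.norm(g) ** r
        h = t + max(-(s @ y) / (s @ s), 0.0) / gnorm_r
        z = y + h * gnorm_r * s
        # Modified Perry parameter (2.4), restricted to be nonnegative (3.10)
        beta = max((g_new @ (z - s)) / (z @ d), 0.0)
        # Search direction (2.5): satisfies g^T d = -||g||^2 by construction
        d = -(1.0 + beta * (g_new @ d) / (g_new @ g_new)) * g_new + beta * d
        x, g = x_new, g_new
    return x
```

For example, calling mp_cg(rosen, rosen_der, np.zeros(10)) with SciPy's rosen and rosen_der test functions runs the sketch on a ten-dimensional Rosenbrock problem.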

3. Global convergence analysis

In order to present the global convergence analysis, we make the following assumptions on the objective function $f$, which have often been used in the literature [18,6,25] to establish the global convergence of conjugate gradient methods.

Assumption 1. The level set $\mathcal{L} = \{x \in \mathbb{R}^n \mid f(x) \le f(x_0)\}$ is bounded, namely, there exists a positive constant $B > 0$ such that

$$ \|x\| \le B, \quad \forall x \in \mathcal{L}. \tag{3.1} $$

Assumption 2. In some neighborhood $\mathcal{N}$ of $\mathcal{L}$, $f$ is differentiable and its gradient $g$ is Lipschitz continuous, i.e., there exists a positive constant $L > 0$ such that

$$ \|g(x) - g(y)\| \le L\|x - y\|, \quad \forall x, y \in \mathcal{N}. \tag{3.2} $$

Since $\{f_k\}$ is a decreasing sequence, it is clear that the sequence $\{x_k\}$ generated by Algorithm MP-CG is contained in $\mathcal{L}$. In addition, it follows directly from Assumptions 1 and 2 that there exists a positive constant $\gamma > 0$ such that
$$ \|g(x)\| \le \gamma, \quad \forall x \in \mathcal{L}. \tag{3.3} $$

In Algorithm 2.1, since the line search satisfies the Wolfe conditions (2.7) and (2.8), it immediately follows that $y_{k-1}^T s_{k-1} > 0$ for all $k > 0$; thus $z_{k-1}$ reduces to
$$ z_{k-1} = y_{k-1} + t\|g_{k-1}\|^{r} s_{k-1}. \tag{3.4} $$

Utilizing this with Assumption 2 and relation (3.3), we can easily obtain the following lemma whose proof is omitted.


Lemma 3.1. Suppose that Assumptions 1 and 2 hold. Let $\{x_k\}$ be generated by Algorithm MP-CG; then we have


$$ \|z_{k-1}\| \le (L + t\gamma^{r})\|s_{k-1}\|. \tag{3.5} $$
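The estimate can be checked in one line from (3.2), (3.3) and (3.4):
$$ \|z_{k-1}\| \le \|y_{k-1}\| + t\|g_{k-1}\|^{r}\|s_{k-1}\| \le L\|s_{k-1}\| + t\gamma^{r}\|s_{k-1}\| = (L + t\gamma^{r})\|s_{k-1}\|. $$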

Any conjugate gradient method implemented with a line search that satisfies the Wolfe conditions (2.7) and (2.8) possesses the following property, called the Zoutendijk condition [41], which is often used to prove global convergence of conjugate gradient methods.

Lemma 3.2. Suppose that Assumptions 1 and 2 hold. Consider any conjugate gradient method of the form (1.2), where $d_k$ satisfies $g_k^T d_k < 0$ and $\alpha_k$ satisfies the Wolfe line search conditions (2.7) and (2.8); then

$$ \sum_{k \ge 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty. \tag{3.6} $$

Clearly, by substituting (2.6) into Zoutendijk's condition (3.6), we obtain the following inequality

$$ \sum_{k \ge 0} \frac{\|g_k\|^4}{\|d_k\|^2} < +\infty, \tag{3.7} $$

which is useful in showing the global convergence of our proposed method. In the following, we establish the global convergence of Algorithm MP-CG for uniformly convex functions.

Theorem 3.1. Suppose that Assumptions 1 and 2 hold and $f$ is uniformly convex, namely, there exists a positive constant $c > 0$ such that
$$ (\nabla f(x) - \nabla f(y))^T (x - y) \ge c\|x - y\|^2, \quad \forall x, y \in \mathcal{L}. \tag{3.8} $$

If $\{x_k\}$ is obtained by Algorithm MP-CG, then we have either $g_k = 0$ for some $k$ or
$$ \liminf_{k \to \infty} \|g_k\| = 0. $$

Proof. By the convexity assumption (3.8) and Eq. (3.4), we have
$$ z_{k-1}^T d_{k-1} = y_{k-1}^T d_{k-1} + t\|g_{k-1}\|^{r}\, s_{k-1}^T d_{k-1} \ge c\,\alpha_{k-1}\|d_{k-1}\|^2. \tag{3.9} $$

Combining the previous inequality with relations (3.1), (3.2) and (3.5), we obtain

$$ |\beta_k^{MP}| = \left| \frac{g_k^T (z_{k-1} - s_{k-1})}{z_{k-1}^T d_{k-1}} \right| \le \frac{\|g_k\| \left( \|z_{k-1}\| + \|s_{k-1}\| \right)}{\left| z_{k-1}^T d_{k-1} \right|} \le \frac{L + t\gamma^{r} + 1}{c} \cdot \frac{\|g_k\|}{\|d_{k-1}\|}. $$

Therefore, by the definition of the search direction in Eq. (2.5), we have

$$ \|d_k\| \le \|g_k\| + |\beta_k^{MP}|\, \frac{\|d_{k-1}\|\,\|g_k\|}{\|g_k\|^2}\, \|g_k\| + |\beta_k^{MP}|\, \|d_{k-1}\| = \|g_k\| + 2|\beta_k^{MP}|\, \|d_{k-1}\| \le \left(1 + 2\,\frac{L + t\gamma^{r} + 1}{c}\right) \|g_k\|. $$

Inserting this upper bound for $\|d_k\|$ into Eq. (3.7) yields $\sum_{k \ge 0} \|g_k\|^2 < \infty$, which completes the proof. $\Box$

Subsequently, in order to ensure global convergence for general functions, similarly to Gilbert and Nocedal [15], we restrict the update parameter to be nonnegative, namely

$$ \beta_k^{MP+} = \max\left\{ \frac{g_k^T (z_{k-1} - s_{k-1})}{z_{k-1}^T d_{k-1}},\; 0 \right\}. \tag{3.10} $$

Moreover, for simplicity, when the update parameter $\beta_k$ in Algorithm 2.1 is computed by Eq. (3.10), we refer to the resulting method as Algorithm MP+-CG.

Next, we establish the global convergence of Algorithm MP+-CG for general nonlinear functions. The following lemma shows that $\beta_k^{MP}$ is small when the step $s_{k-1}$ is small, which implies that Algorithm MP-CG prevents the inefficient jamming behavior [28] exhibited by the FR method from occurring. This property is similar to, but slightly different from, Property (*), which was derived by Gilbert and Nocedal [15].

Lemma 3.3. Suppose that Assumptions 1 and 2 hold. Let $\{x_k\}$ and $\{d_k\}$ be generated by Algorithm MP-CG. If there exists a positive constant $\mu > 0$ such that
$$ \|g_k\| \ge \mu \quad \text{for all } k \ge 0, \tag{3.11} $$

then there exist constants $b > 1$ and $\lambda > 0$ such that for all $k \ge 1$


$$ |\beta_k^{MP}| \le b \tag{3.12} $$

and

$$ \|s_{k-1}\| \le \lambda \;\Rightarrow\; |\beta_k^{MP}| \le \frac{1}{b}. \tag{3.13} $$

Proof. From equations (2.6), (2.8) and (3.4), we have

$$ z_{k-1}^T d_{k-1} = y_{k-1}^T d_{k-1} + t\|g_{k-1}\|^{r}\, s_{k-1}^T d_{k-1} \ge (\sigma_2 - 1)\, g_{k-1}^T d_{k-1} \ge (1 - \sigma_2)\|g_{k-1}\|^2. $$

Using this with (3.3), (3.5) and (3.11), we obtain

$$ |\beta_k^{MP}| = \left| \frac{g_k^T (z_{k-1} - s_{k-1})}{z_{k-1}^T d_{k-1}} \right| \le \frac{\|g_k\| (\|z_{k-1}\| + \|s_{k-1}\|)}{|z_{k-1}^T d_{k-1}|} \le \frac{\gamma (L + t\gamma^{r} + 1)}{(1 - \sigma_2)\mu^2}\, \|s_{k-1}\| =: C\|s_{k-1}\|. \tag{3.14} $$

Therefore, by setting $b := \max\{2, 2CB\}$ and $\lambda := 1/(Cb)$, relations (3.12) and (3.13) hold, which completes the proof. $\Box$

From the definition of $\beta_k^{MP+}$ in Eq. (3.10), it immediately follows that $|\beta_k^{MP+}| \le |\beta_k^{MP}|$ for all $k > 0$. Therefore, the same result as in Lemma 3.3 holds for Algorithm MP+-CG.

Subsequently, we present a lemma for the search direction which shows that, asymptotically, the search directions change slowly.

change slowly.

Lemma 3.4. Suppose that Assumptions 1 and 2 hold. Let $\{x_k\}$ and $\{d_k\}$ be generated by Algorithm MP+-CG. If there exists a positive constant $\mu > 0$ such that Eq. (3.11) holds, then $d_k \ne 0$ and
$$ \sum_{k \ge 1} \|w_k - w_{k-1}\|^2 < \infty, $$
where $w_k = d_k/\|d_k\|$.

Proof. Firstly, note that $d_k \ne 0$, for otherwise (2.6) would imply $g_k = 0$. Therefore, $w_k$ is well defined. Now, let us define

$$ r_k := \frac{t_k}{\|d_k\|} \quad \text{and} \quad \delta_k := \beta_k^{MP+}\, \frac{\|d_{k-1}\|}{\|d_k\|}, \tag{3.15} $$

where

$$ t_k = -\left(1 + \beta_k^{MP+}\, \frac{g_k^T d_{k-1}}{\|g_k\|^2}\right) g_k. $$

Then, by Eq. (2.5), we have
$$ w_k = r_k + \delta_k w_{k-1}. \tag{3.16} $$

Using this relation with the identity $\|w_k\| = \|w_{k-1}\| = 1$, we obtain
$$ \|r_k\| = \|w_k - \delta_k w_{k-1}\| = \|w_{k-1} - \delta_k w_k\|. $$

Moreover, using this with the condition $\delta_k \ge 0$ and the triangle inequality, we get
$$ \|w_k - w_{k-1}\| \le \|w_k - \delta_k w_{k-1}\| + \|w_{k-1} - \delta_k w_k\| = 2\|r_k\|. \tag{3.17} $$

Next, we estimate an upper bound for $\|t_k\|$. By the Wolfe condition (2.8), we have

$$ g_k^T d_{k-1} \ge \sigma_2\, g_{k-1}^T d_{k-1} \ge -\sigma_2\, z_{k-1}^T d_{k-1} + \sigma_2\, g_k^T d_{k-1}. \tag{3.18} $$

Also observe that

$$ g_k^T d_{k-1} = y_{k-1}^T d_{k-1} + g_{k-1}^T d_{k-1} \le y_{k-1}^T d_{k-1} \le z_{k-1}^T d_{k-1}. \tag{3.19} $$

By rearranging inequality (3.18), we obtain $g_k^T d_{k-1} \ge -\big(\sigma_2/(1 - \sigma_2)\big)\, z_{k-1}^T d_{k-1}$, which together with (3.19) yields

$$ \left| \frac{g_k^T d_{k-1}}{z_{k-1}^T d_{k-1}} \right| \le \max\left\{ \frac{\sigma_2}{1 - \sigma_2},\; 1 \right\}. \tag{3.20} $$

It follows from the definition of $t_k$ and relations (3.1), (3.3), (3.5) and (3.20) that there exists a positive constant $D > 0$ such that


$$ \|t_k\| = \left\| \left(1 + \beta_k^{MP+}\, \frac{g_k^T d_{k-1}}{\|g_k\|^2}\right) g_k \right\| \le \left( 1 + \left| \frac{g_k^T (z_{k-1} - s_{k-1})}{z_{k-1}^T d_{k-1}} \right| \frac{|g_k^T d_{k-1}|}{\|g_k\|^2} \right) \|g_k\| \le \|g_k\| + \left( \|z_{k-1}\| + \|s_{k-1}\| \right) \left| \frac{g_k^T d_{k-1}}{z_{k-1}^T d_{k-1}} \right| \le \gamma + 2B\,(L + t\gamma^{r} + 1) \max\left\{ \frac{\sigma_2}{1 - \sigma_2},\; 1 \right\} =: D. $$

Therefore, using this relation with (3.7), we obtain

$$ \sum_{k \ge 1} \|r_k\|^2 \le \sum_{k \ge 1} \frac{\|t_k\|^2}{\|d_k\|^2} \le \sum_{k \ge 1} \frac{\|t_k\|^2}{\|g_k\|^4} \cdot \frac{\|g_k\|^4}{\|d_k\|^2} \le \frac{D^2}{\mu^4} \sum_{k \ge 1} \frac{\|g_k\|^4}{\|d_k\|^2} < +\infty, $$

which together with (3.17) completes the proof. $\Box$

Next, by making use of Lemmas 3.3 and 3.4, we establish the global convergence theorem for Algorithm MP+-CG. The proof of the following theorem is similar to that of Theorem 3.2 in [17].

Theorem 3.2. Suppose that Assumptions 1 and 2 hold. If $\{x_k\}$ is obtained by Algorithm MP+-CG, then we have
$$ \liminf_{k \to \infty} \|g_k\| = 0. $$

Proof. We proceed by contradiction. Suppose that there exists a positive constant $\mu > 0$ such that for all $k \ge 0$
$$ \|g_k\| \ge \mu. $$

The proof is divided into the following two steps.

Step I. A bound on the steps $s_k$. Let $\Delta$ be a positive integer, chosen large enough that
$$ \Delta \ge 4BC, $$
where $B$ and $C$ are defined in (3.1) and (3.14), respectively. For any $l > k \ge k_0$ with $l - k \le \Delta$, following the same proof as in case II of Theorem 3.2 in [17], we get
$$ \sum_{j=k}^{l-1} \|s_j\| < 2B. $$

Step II. A bound on the directions $d_l$ determined by Eq. (2.5). It follows from (2.5) that
$$ d_l = -g_l + \beta_l^{MP+} \left( I - \frac{g_l g_l^T}{\|g_l\|^2} \right) d_{l-1}. \tag{3.21} $$

Since $g_l$ is orthogonal to $\left( I - g_l g_l^T/\|g_l\|^2 \right) d_{l-1}$ and $I - g_l g_l^T/\|g_l\|^2$ is a projection matrix, we have from (3.3), (3.14) and (3.21) that
$$ \|d_l\|^2 \le \|g_l\|^2 + |\beta_l^{MP+}|^2\, \|d_{l-1}\|^2 \le \gamma^2 + C^2 \|s_{l-1}\|^2 \|d_{l-1}\|^2. $$

Now, the remaining argument is standard and proceeds in the same way as case III of Theorem 3.2 in [17], thus we omit it. This completes the proof. $\Box$

4. Numerical experiments

In this section, we report numerical results that compare the performance of our proposed conjugate gradient method MP+-CG with that of the CG-DESCENT method [17] and the PR+ method [15].

The implementation code was written in Fortran and compiled with ifort on a PC (2.66 GHz Quad-Core processor, 4 Gbyte RAM) running the Linux operating system. The CG-DESCENT code is coauthored by Hager and Zhang and was obtained from Hager's web page (http://www.math.ufl.edu/~hager/papers/CG), and the PR+ code is coauthored by Liu, Nocedal and Waltz and was obtained from Nocedal's web page (http://www.ece.northwestern.edu/~nocedal/software.html). In our experiments, we use the condition $\|g_k\|_\infty \le 10^{-6}$ as stopping criterion. For our proposed method, we set the parameter $t = 10^{-4}$, and $r = 1$ if $\|g_k\| \ge 1$, otherwise $r = 3$, as in [40]. We selected 111 problems from the CUTEr [3] library that have also been tested by Hager and Zhang [17]. The problem ncb20 was excluded from our experimental analysis because it gives the "insufficient space" error when evaluated by any tested algorithm. Table 1 reports the numerical results: it gives the problem names and their dimensions, the total number of iterations (Iter), the total number of function evaluations (FcEv), the total number of gradient evaluations (GradEv) and the CPU time (Time) in seconds.


Table 1. The numerical results of the methods.

No.  Problem    Dimension  CG-DESCENT (Iter/FcEv/GradEv/Time)  PR+ (Iter/FcEv/GradEv/Time)  MP+ (Iter/FcEv/GradEv/Time)

1    ARGLINA    500    1/3/2/0.00    1/5/5/0.01    1/3/2/0.00
2    ARGLINB    500    Failed    Failed    Failed
3    ARGLINC    500    Failed    Failed    Failed
4    ARWHEAD    1000    12/31/22/0.00    Failed    8/20/15/0.00
5    ARWHEAD    10000    10/22/15/0.02    Failed    8/17/11/0.02
7    BDQRTIC    1000    738/1453/1281/0.20    Failed    133/295/257/0.03
6    BDQRTIC    5000    5018/7656/10649/7.95    Failed    208/542/488/0.40
8    BROWNAL    400    6/15/11/0.02    5/39/39/0.04    7/17/12/0.02
9    BROYDN7D    2000    628/1247/643/0.45    Failed    602/1192/616/0.46

10    BRYBND    1000    51/103/52/0.00    30/73/73/0.02    33/72/39/0.01
11    BRYBND    5000    36/74/39/0.05    26/66/66/0.06    37/77/40/0.05
12    CHAINWOO    1000    435/837/495/0.10    Failed    280/553/393/0.07
13    CHNROSNB    50    272/545/273/0.00    243/500/500/0.00    232/466/234/0.00
14    COSINE    500    12/29/23/0.00    9/26/26/0.00    11/30/23/0.00
15    COSINE    1000    12/28/24/0.00    9/29/29/0.00    11/31/25/0.00
16    CRAGGLVY    2000    109/193/138/0.07    Failed    105/191/128/0.07
17    CRAGGLVY    5000    112/201/146/0.23    Failed    215/467/354/0.50
18    CURLY10    200    2164/3713/3194/0.07    Failed    2120/3666/3135/0.07
19    CURLY10    1000    9574/14559/14741/1.53    Failed    9734/14701/15247/1.57
20    CURLY20    200    1894/3376/2728/0.08    Failed    1895/3357/2869/0.09
21    CURLY20    1000    9630/15189/14805/2.33    Failed    10528/16269/16841/2.63
22    CURLY30    200    2058/3688/3042/0.13    Failed    1848/3358/2732/0.11
23    CURLY30    1000    9871/15751/15461/3.36    Failed    10785/16961/17466/3.68
24    DECONVU    61    385/772/389/0.00    322/666/666/0.00    255/512/260/0.00
25    DIXMAANA    1500    9/19/10/0.00    7/22/22/0.00    7/15/8/0.00
26    DIXMAANA    3000    9/19/10/0.01    7/20/20/0.01    7/15/8/0.01
27    DIXMAANB    1500    9/19/10/0.00    7/24/24/0.00    8/17/9/0.00
28    DIXMAANB    3000    9/19/10/0.00    6/23/23/0.01    8/17/9/0.00
29    DIXMAANC    6000    10/21/11/0.01    7/23/23/0.02    9/19/10/0.01
30    DIXMAAND    6000    12/25/13/0.00    8/25/25/0.01    11/23/12/0.01
31    DIXMAANE    6000    302/605/303/0.23    307/620/620/0.40    311/623/312/0.26
32    DIXMAANF    6000    229/459/230/0.18    215/437/437/0.27    219/439/220/0.18
33    DIXMAANG    6000    226/453/227/0.18    206/420/420/0.26    218/437/219/0.18
34    DIXMAANH    6000    223/447/224/0.20    408/825/825/0.53    217/435/218/0.19
35    DIXMAANI    6000    2734/5469/2735/2.30    2357/4720/4720/3.03    3816/7633/3817/3.17
36    DIXMAANJ    6000    295/591/296/0.26    275/557/557/0.35    312/625/313/0.26
37    DIXMAANK    6000    263/527/264/0.24    289/587/587/0.37    276/553/277/0.23
38    DIXMAANL    6000    244/489/245/0.20    346/702/702/0.45    243/487/244/0.20
39    DIXON3DQ    500    499/999/500/0.01    499/1003/1003/0.02    999/1999/1001/0.03
40    DIXON3DQ    1000    1000/2001/1002/0.05    1000/2005/2005/0.08    1017/2035/1020/0.06
41    DQDRTIC    1000    7/15/8/0.00    5/15/15/0.01    12/25/14/0.00
42    DQDRTIC    10000    7/15/8/0.01    5/15/15/0.02    12/25/14/0.02
43    EDENSCH    5000    34/61/43/0.04    Failed    33/63/45/0.04
44    EG2    1000    4/9/6/0.00    Failed    4/9/6/0.00
45    EIGENALS    110    377/759/398/0.02    419/856/856/0.04    499/1003/517/0.03
46    EIGENALS    420    1315/2637/1333/0.63    1539/3112/3112/1.22    1341/2688/1357/0.64
47    EIGENBLS    420    5198/10421/5225/2.49    4672/9369/9369/3.68    3874/7761/3888/1.81
48    EIGENCLS    90    356/714/358/0.01    366/743/743/0.02    306/614/308/0.00
49    EIGENCLS    462    1705/3431/1728/0.97    1609/3230/3230/1.46    1752/3514/1762/0.94
50    ENGVAL1    10000    24/43/32/0.05    Failed    20/36/27/0.04
51    ERRINROS    50    1004/1994/1339/0.00    Failed    1180/2358/1612/0.00
52    EXTROSNB    50    4980/10535/5732/0.10    4038/8777/8777/0.12    4123/8755/4788/0.08
53    FLETCBV2    500    480/961/482/0.05    480/962/962/0.07    480/961/482/0.04
54    FLETCBV2    1000    1049/2099/1052/0.22    942/1886/1886/0.29    939/1879/942/0.20
55    FLETCBV3    500    20241/46468/28592/6.09    Failed    19617/39840/24894/5.53
56    FLETCHCR    1000    6577/13523/6990/0.96    4484/9012/9012/0.97    4677/9371/4700/0.66
57    FLETCHCR    5000    32150/66781/34912/24.81    20051/40134/40134/21.0    20046/40106/20065/14.44
58    FMINSRF2    121    113/228/115/0.00    123/250/250/0.00    116/236/120/0.00
59    FMINSRF2    1024    276/558/282/0.04    257/517/517/0.06    252/508/256/0.03
60    FMINSURF    1024    236/474/238/0.04    226/455/455/0.06    248/502/254/0.05
61    FMINSURF    5625    491/983/492/0.60    471/949/949/0.82    459/930/472/0.48
62    FREUROTH    1000    2164/3713/3194/0.07    Failed    75/152/99/0.01
63    FREUROTH    5000    107/172/177/0.14    Failed    61/125/86/0.08
64    GENHUMPS    500    1806/3672/1891/0.32    1880/4026/4026/0.52    1917/3923/2035/0.34
65    GENHUMPS    1000    3021/6140/3142/1.07    2491/5205/5205/1.36    2494/5087/2618/0.89
66    GENROSE    100    299/626/338/0.00    298/626/626/0.00    298/630/345/0.00


67    GENROSE    500    1199/2439/1249/0.08    1150/2329/2329/0.12    1112/2271/1174/0.08
68    HILBERTA    200    25/51/31/0.06    13/38/38/0.05    44/89/53/0.08
69    HILBERTB    200    5/11/6/0.01    5/13/13/0.02    6/13/7/0.01
70    HYDC20LS    99    Failed    Failed    Failed
71    LIARWHD    5000    21/48/32/0.02    16/46/46/0.03    23/49/32/0.03
72    LIARWHD    10000    26/62/43/0.06    15/46/46/0.06    18/42/29/0.04
73    MANCINO    100    12/25/13/0.15    11/27/27/0.24    10/21/11/0.13
74    MOREBV    1000    425/851/426/0.04    425/851/851/0.08    425/851/426/0.05
75    MOREBV    10000    95/191/97/0.11    100/201/201/0.21    106/213/108/0.17
76    NONCVXU2    100    343/639/396/0.00    477/974/974/0.00    490/932/548/0.00
77    NONCVXU2    1000    1969/3817/2092/0.43    Failed    1950/3792/2060/0.41
78    NONCVXUN    100    170/331/183/0.00    193/397/397/0.00    163/323/168/0.00
79    NONCVXUN    1000    Failed    Failed    Failed
80    NONDIA    5000    9/29/22/0.05    5/26/26/0.06    8/18/12/0.02
81    NONDIA    10000    9/27/22/0.12    6/26/26/0.11    7/16/11/0.05
82    NONDQUAR    1000    3038/6084/3206/0.22    2895/6073/6073/0.32    2981/5970/3111/0.22
83    NONDQUAR    5000    5013/10053/5122/1.80    10007/20056/20056/5.39    2185/4393/2584/0.87
84    PENALTY1    5000    51/103/52/0.01    38/152/152/0.04    44/95/54/0.02
85    PENALTY1    10000    53/126/77/0.06    14/81/81/0.05    48/115/71/0.05
86    PENALTY2    200    199/234/365/0.02    Failed    205/240/386/0.01
87    POWELLSG    5000    766/1538/873/0.25    148/346/346/0.08    92/204/122/0.02
88    POWELLSG    10000    59/120/70/0.05    177/394/394/0.19    84/182/116/0.06
89    POWER    1000    116/233/117/0.00    114/236/236/0.00    133/267/136/0.00
90    POWER    5000    258/517/259/0.05    252/514/514/0.07    260/521/261/0.06
91    QUARTC    5000    33/67/34/0.01    17/66/66/0.02    45/118/78/0.01
92    QUARTC    10000    35/71/36/0.02    16/69/69/0.02    47/116/75/0.04
93    SCHMVETT    10000    46/73/70/0.24    45/105/105/0.32    45/73/68/0.22
94    SINQUAD    500    120/256/219/0.02    Failed    27/77/67/0.01
95    SPARSINE    200    450/901/452/0.01    455/913/913/0.02    482/966/485/0.02
96    SPARSINE    1000    4642/9285/4643/1.11    4465/8935/8935/1.72    5439/10879/5440/1.28
97    SPARSQUR    10000    22/45/23/0.05    41/131/131/0.26    35/75/46/0.11
98    SPMSRTLS    1000    142/291/151/0.03    138/281/281/0.05    134/275/143/0.03
99    SPMSRTLS    4999    218/443/227/0.26    212/430/430/0.39    201/409/210/0.25

100    SROSENBR    10000    12/26/17/0.01    8/26/26/0.01    10/22/14/0.01
101    TESTQUAD    100    288/577/311/0.00    452/972/972/0.00    380/761/391/0.00
102    TOINTGOR    50    122/222/152/0.00    Failed    122/221/153/0.00
103    TOINTGSS    10000    4/9/5/0.01    4/20/20/0.03    4/9/5/0.01
104    TQUARTIC    5000    28/75/58/0.03    9/32/32/0.01    15/91/82/0.04
105    TQUARTIC    10000    25/59/42/0.03    13/38/38/0.03    21/57/43/0.04
106    TRIDIA    5000    782/1565/783/0.24    781/1565/1565/0.36    938/1877/939/0.28
107    TRIDIA    10000    1116/2233/1117/0.69    1114/2231/2231/1.02    1333/2667/1334/0.85
108    VARDIM    5000    50/110/62/0.01    Failed    47/102/56/0.02
109    VARDIM    10000    52/116/65/0.05    Failed    52/115/64/0.05
110    VAREIGVL    5000    119/303/184/0.31    30/80/80/0.10    79/220/145/0.22
111    WOODS    10000    253/556/319/0.38    232/487/487/0.40    149/342/205/0.24


Moreover, "Failed" means that the method failed to converge with the prescribed accuracy, i.e. $\|g_k\|_\infty \le 10^{-6}$. For convenience, the methods reported in Table 1 are specified as follows.

- "CG-DESCENT" stands for the CG-DESCENT method [17], implemented with the Wolfe line search conditions (2.7) and (2.8) with $\sigma_1 = 0.1$ and $\sigma_2 = 0.9$; the other parameters are set to their defaults.
- "PR+" stands for the PR+ method of Gilbert and Nocedal [15], implemented with the line search proposed in [24].
- "MP+" stands for Algorithm MP+-CG, implemented with the same line search as CG-DESCENT.

All algorithms were evaluated using the performance profiles proposed by Dolan and Moré [9], which provide a wealth of information such as solver efficiency, robustness and probability of success in compact form. The use of performance profiles eliminates the influence of a small number of problems on the benchmarking process and the sensitivity of results associated with the ranking of solvers [9]. The performance profile plots the fraction P of problems for which any given method is within a factor τ of the best solver. The left side of each plot gives the percentage of the test problems for which a method is the fastest (efficiency), while the right side gives the percentage of the test problems that were successfully solved by each method (robustness). Figs. 1–4 show the performance profiles relative to function evaluations, gradient evaluations, number of iterations and CPU time, respectively.
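For readers who wish to reproduce this kind of comparison, the following short Python sketch (the function name and the input conventions are ours, not part of the paper) computes Dolan–Moré profile curves from a matrix of solver costs:

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profiles.

    T[i, s] : cost (e.g. CPU time) of solver s on problem i; np.inf marks a failure.
    taus    : increasing factors tau at which the profile is evaluated.
    Returns P with P[j, s] = fraction of problems solved by s within taus[j] of the best.
    """
    best = T.min(axis=1, keepdims=True)      # best cost per problem
    ratios = T / best                        # performance ratios r_{i,s}
    # problems where every solver fails give nan ratios and count as unsolved
    return np.vstack([(ratios <= tau).mean(axis=0) for tau in taus])
```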


Fig. 1. Log10 scaled performance profiles of conjugate gradient methods CG-DESCENT, PR+ and MP+ based on function evaluations.

Fig. 2. Log10 scaled performance profiles of conjugate gradient methods CG-DESCENT, PR+ and MP+ based on gradient evaluations.


Clearly, Figs. 1–4 show that our proposed method MP+ exhibits the best overall performance, since it has the highest probability of being the optimal solver, followed by CG-DESCENT, with respect to all performance metrics. PR+ exhibits the worst performance, solving only 73.8% of the test problems successfully, while MP+ and CG-DESCENT solve 96.3% of the test problems successfully. Additionally, it is worth noticing that MP+ solves about 51.4% and 55.9% of the test problems with the least number of function evaluations and gradient evaluations, respectively, while CG-DESCENT solves about 31.5% and 44.1% of the test problems in the same situation. Moreover, Figs. 3 and 4 show that MP+ has the best performance with respect to the number of iterations and CPU time, since it corresponds to the top curves.


Fig. 3. Log10 scaled performance profiles of conjugate gradient methods CG-DESCENT, PR+ and MP+ based on number of iterations.

Fig. 4. Log10 scaled performance profiles of conjugate gradient methods CG-DESCENT, PR+ and MP+ based on CPU time.


5. Conclusions & future research

In this paper, we proposed a conjugate gradient method which is based on the MBFGS secant condition and is obtained by modifying Perry's method. An important property of our proposed method is that it ensures sufficient descent for any line search. Under proper conditions, we established that our proposed method is globally convergent for general functions under the Wolfe line search. The presented numerical results illustrate the efficiency and robustness of our proposed method.

Our future work will concentrate on studying the convergence properties and numerical performance of our proposed method under different inexact line searches [16,31–33,36].


References

[1] M. Al-Baali, Descent property and global convergence of the Fletcher–Reeves method with inexact line search, IMA Journal of Numerical Analysis 5 (1985) 121–124.
[2] E.G. Birgin, J.M. Martínez, A spectral conjugate gradient method for unconstrained optimization, Applied Mathematics and Optimization 43 (1999) 117–128.
[3] I. Bongartz, A. Conn, N. Gould, P. Toint, CUTE: constrained and unconstrained testing environments, ACM Transactions on Mathematical Software 21 (1995) 123–160.
[4] W. Chen, Q. Liu, Sufficient descent nonlinear conjugate gradient methods with conjugacy condition, Numerical Algorithms 53 (2010) 113–131.
[5] Y.H. Dai, L.Z. Liao, New conjugacy conditions and related nonlinear conjugate gradient methods, Applied Mathematics and Optimization 43 (2001) 87–101.
[6] Y.H. Dai, Y.X. Yuan, Nonlinear Conjugate Gradient Methods, Shanghai Scientific and Technical Publishers, Shanghai, 2000.
[7] Y.H. Dai, Y.X. Yuan, A nonlinear conjugate gradient method with a strong global convergence property, SIAM Journal on Optimization 10 (2000) 177–182.
[8] Z. Dai, B.S. Tian, Global convergence of some modified PRP nonlinear conjugate gradient methods, Optimization Letters (2010) 1–16.
[9] E. Dolan, J.J. Moré, Benchmarking optimization software with performance profiles, Mathematical Programming 91 (2002) 201–213.
[10] S.Q. Du, Y.Y. Chen, Global convergence of a modified spectral FR conjugate gradient method, Applied Mathematics and Computation 202 (2) (2008) 766–770.
[11] R. Fletcher, C.M. Reeves, Function minimization by conjugate gradients, Computer Journal 7 (1964) 149–154.
[12] J.A. Ford, I.A. Moghrabi, Multi-step quasi-Newton methods for optimization, Journal of Computational and Applied Mathematics 50 (1994) 305–323.
[13] J.A. Ford, I.A. Moghrabi, Using function-values in multi-step quasi-Newton methods, Journal of Computational and Applied Mathematics 66 (1996) 201–211.
[14] J.A. Ford, Y. Narushima, H. Yabe, Multi-step nonlinear conjugate gradient methods for unconstrained minimization, Computational Optimization and Applications 40 (2008) 191–216.
[15] J.C. Gilbert, J. Nocedal, Global convergence properties of conjugate gradient methods for optimization, SIAM Journal on Optimization 2 (1) (1992) 21–42.
[16] L. Grippo, S. Lucidi, A globally convergent version of the Polak–Ribière conjugate gradient method, Mathematical Programming 78 (3) (1997) 375–391.
[17] W.W. Hager, H. Zhang, A new conjugate gradient method with guaranteed descent and an efficient line search, SIAM Journal on Optimization 16 (2005) 170–192.
[18] W.W. Hager, H. Zhang, A survey of nonlinear conjugate gradient methods, Pacific Journal of Optimization 2 (2006) 35–58.
[19] M.R. Hestenes, E. Stiefel, Methods of conjugate gradients for solving linear systems, Journal of Research of the National Bureau of Standards 49 (1952) 409–436.
[20] D.H. Li, M. Fukushima, A modified BFGS method and its global convergence in nonconvex minimization, Journal of Computational and Applied Mathematics 129 (2001) 15–35.
[21] D.H. Li, M. Fukushima, On the global convergence of the BFGS method for nonconvex unconstrained optimization problems, SIAM Journal on Optimization 11 (2001) 1054–1064.
[22] G. Li, C. Tang, Z. Wei, New conjugacy condition and related new conjugate gradient methods for unconstrained optimization, Journal of Computational and Applied Mathematics 202 (2007) 523–539.
[23] A. Lu, H. Liu, X. Zheng, W. Cong, A variant spectral-type FR conjugate gradient method and its global convergence, Applied Mathematics and Computation 217 (12) (2011) 5547–5552.
[24] J.J. Moré, D. Thuente, Line search algorithms with guaranteed sufficient decrease, ACM Transactions on Mathematical Software 20 (1994) 286–307.
[25] J. Nocedal, S.J. Wright, Numerical Optimization, Springer-Verlag, New York, 1999.
[26] A. Perry, A modified conjugate gradient algorithm, Operations Research 26 (1978) 1073–1078.
[27] E. Polak, G. Ribière, Note sur la convergence de méthodes de directions conjuguées, Revue Française d'Informatique et de Recherche Opérationnelle 16 (1969) 35–43.
[28] M.J.D. Powell, Restart procedures for the conjugate gradient method, Mathematical Programming 12 (1977) 241–254.
[29] M.J.D. Powell, Nonconvex minimization calculations and the conjugate gradient method, Lecture Notes in Mathematics, vol. 1066, Springer-Verlag, Berlin, 1984, pp. 122–141.
[30] M.J.D. Powell, Convergence properties of algorithms for nonlinear optimization, SIAM Review 28 (1986) 487–500.
[31] Z.J. Shi, J. Shen, Convergence of PRP method with new nonmonotone line search, Applied Mathematics and Computation 181 (1) (2006) 423–431.
[32] Z.J. Shi, S. Wang, Z. Xu, The convergence of conjugate gradient method with nonmonotone line search, Applied Mathematics and Computation 217 (5) (2010) 1921–1932.
[33] C.Y. Wang, Y.Y. Chen, S.Q. Du, Further insight into the Shamanskii modification of Newton method, Applied Mathematics and Computation 180 (2006) 46–52.
[34] Z. Wei, G. Yu, G. Yuan, Z. Lian, The superlinear convergence of a modified BFGS-type method for unconstrained optimization, Computational Optimization and Applications 29 (2004) 315–332.
[35] G. Yu, L. Guan, W. Chen, Spectral conjugate gradient methods with sufficient descent property for large-scale unconstrained optimization, Optimization Methods and Software 23 (2) (2008) 275–293.
[36] G. Yu, L. Guan, Z. Wei, Globally convergent Polak–Ribière–Polyak conjugate gradient methods under a modified Wolfe line search, Applied Mathematics and Computation 215 (8) (2009) 3082–3090.
[37] L. Zhang, Two modified Dai–Yuan nonlinear conjugate gradient methods, Numerical Algorithms 50 (2009) 1–16.
[38] L. Zhang, W. Zhou, Two descent hybrid conjugate gradient methods for optimization, Journal of Computational and Applied Mathematics 216 (2008) 164–251.
[39] L. Zhang, W. Zhou, D. Li, Global convergence of a modified Fletcher–Reeves conjugate gradient method with Armijo-type line search, Numerische Mathematik 104 (2006) 561–572.
[40] W. Zhou, L. Zhang, A nonlinear conjugate gradient method based on the MBFGS secant condition, Optimization Methods and Software 21 (5) (2006) 707–714.
[41] G. Zoutendijk, Nonlinear programming, computational methods, in: J. Abadie (Ed.), Integer and Nonlinear Programming, North-Holland, Amsterdam, 1970, pp. 37–86.
