MULTIPARAMETER ITERATIVE SCHEMES FOR THE SOLUTION OF SYSTEMS OF LINEAR AND NONLINEAR EQUATIONS∗

C. BREZINSKI† AND J.-P. CHEHAB‡

SIAM J. SCI. COMPUT., Vol. 20, No. 6, pp. 2140–2159. © 1999 Society for Industrial and Applied Mathematics.

Abstract. In this paper, we introduce multiparameter generalizations of the linear and nonlinear iterative Richardson methods for solving systems of linear and nonlinear equations. The new algorithms are based on using an (optimal) matricial relaxation instead of the (optimal) scalar relaxation of the steepest descent method. The optimal matrix, which is defined at each iteration by minimizing the current residual, is computed as the least squares solution of an associated problem whose dimension is generally much lower than that of the original problem. In particular, thanks to this approach, we construct multiparameter versions of the $\Delta^k$ method introduced for solving nonlinear fixed point problems. Various numerical results illustrate the implementation of the new schemes. They concern the solution of a linear problem and of a nonlinear one which comes from a reaction-diffusion problem exhibiting bifurcations. In both cases, the (optimal) multiparameter relaxation improves the convergence as compared to the (optimal) scalar one.

Key words. nonlinear systems, fixed point methods, convergence acceleration, hybrid procedure

AMS subject classifications. 65H10, 65B99

PII. S106482759631370X

Let us consider the system of p linear equations in p unknowns, $Ax = b$. For finding x, an iterative method producing the sequence $(x_n)$ is used. For accelerating its convergence, we can transform it into a new sequence $(y_n)$ given by

$$y_n = x_n - \lambda_n z_n, \qquad (1)$$

where $z_n$ is an (almost) arbitrary vector and $\lambda_n$ a parameter. We set $\rho_n = b - Ay_n$ and $r_n = b - Ax_n$, and we have $\rho_n = r_n + \lambda_n Az_n$. We choose for $\lambda_n$ the value which minimizes $\|\rho_n\| = (\rho_n, \rho_n)^{1/2}$. Such a $\lambda_n$ is given by

$$\lambda_n = -\frac{(r_n, Az_n)}{(Az_n, Az_n)}.$$

This acceleration procedure was discussed at length in [3]. It was extended to the nonlinear case in [5].

The same idea can also be used for building a new iterative method by cycling, that is, by considering the iterations $x_{n+1} = x_n - \lambda_n z_n$, where $\lambda_n$, still given by the formula above, minimizes $\|r_{n+1}\|$ [20]. When $z_n = r_n$, the Richardson acceleration procedure and the Richardson iterative method are, respectively, recovered.
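As an illustration, here is a minimal NumPy sketch of this cycling iteration with the Richardson choice $z_n = r_n$ (the function and parameter names are ours, not from [3] or [20]):

```python
import numpy as np

def richardson_minres(A, b, x0, tol=1e-9, maxit=500):
    """Cycling iteration x_{n+1} = x_n - lambda_n z_n with the Richardson
    choice z_n = r_n; lambda_n = -(r_n, Az_n)/(Az_n, Az_n) minimizes ||r_{n+1}||."""
    x = x0.copy()
    for _ in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        Az = A @ r                     # z_n = r_n
        lam = -(r @ Az) / (Az @ Az)
        x = x - lam * r                # x_{n+1} = x_n - lambda_n z_n
    return x
```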

In some applications, the various components (or blocks of components) of the vectors $x_n$ can behave quite differently. Such a situation can arise, for example, in decomposition methods, multigrid methods, wavelets, multiresolution methods, inertial manifolds, incremental unknowns (IUs), and the nonlinear Galerkin method. In these methods, the unknowns are split into various subsets, each of them being treated in a different way. See [29] for a review of multiresolution methods for partial differential equations.

∗Received by the editors December 16, 1996; accepted for publication (in revised form) December 10, 1997; published electronically July 9, 1999.
http://www.siam.org/journals/sisc/20-6/31370.html
†Laboratoire d'Analyse Numérique et d'Optimisation, UFR IEEA–M3, Université des Sciences et Technologies de Lille, 59655 Villeneuve d'Ascq cedex, France ([email protected]).
‡Laboratoire d'Analyse Numérique et d'Optimisation, UFR de Mathématiques Pures et Appliquées–M2, Université des Sciences et Technologies de Lille, 59655 Villeneuve d'Ascq cedex, France ([email protected]).


In this paper, we will use a different $\lambda_n$, if not for each component of $z_n$ then for blocks of components. This idea will lead to multiparameter generalizations of the procedures described in [3] and [5].

We will also propose a multiparameter generalization of the hybrid procedure introduced in [6] for linear systems and extended in [5] to the nonlinear case.

1. A multiparameter vector sequence transformation. We will transform the sequence $(x_n)$ into the new sequence $(y_n)$ given by

$$y_n = x_n - Z_n \lambda_n,$$

where $Z_n$ is a $p \times m$ matrix and $\lambda_n \in \mathbb{R}^m$. We set $\rho_n = b - Ay_n$ and $r_n = b - Ax_n$, and we have

$$\rho_n = r_n + AZ_n \lambda_n.$$

The value of $\lambda_n$ minimizing $\|\rho_n\|^2 = (\rho_n, \rho_n)$ is given by the least squares solution of the system $r_n + AZ_n \lambda_n = 0$, that is,

$$\lambda_n = -\left[(AZ_n)^T AZ_n\right]^{-1} (AZ_n)^T r_n. \qquad (2)$$

The computation of $\lambda_n$ requires the solution of an $m \times m$ system of linear equations. However, m is usually quite small. Moreover, the partitioning technique introduced below shows that, in practice, the construction of the matrix of this system is cheap.
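In floating point arithmetic it is often preferable not to form $(AZ_n)^T AZ_n$ explicitly; a small sketch of (2) via a least squares solver (the helper name is ours):

```python
import numpy as np

def optimal_lambda(A, Zn, rn):
    """Formula (2): least squares solution of r_n + A Z_n lambda_n = 0.
    lstsq avoids forming (A Z_n)^T A Z_n explicitly."""
    lam, *_ = np.linalg.lstsq(A @ Zn, -rn, rcond=None)
    return lam                         # lambda_n in R^m
```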

We have

$$\rho_n = (I - P_n) r_n$$

and

$$(\rho_n, \rho_n) = (r_n, (I - P_n) r_n)$$

with

$$P_n = AZ_n \left[(AZ_n)^T AZ_n\right]^{-1} (AZ_n)^T.$$

Obviously $P_n^2 = P_n$ and $P_n^T = P_n$, which shows that $P_n$ represents an orthogonal projection, and so does $I - P_n$. $I - P_n$ is the projection on $E_n^\perp$, where $E_n$ is the subspace spanned by the columns of the matrix $AZ_n$. We also have $(\rho_n, P_n r_n) = 0$. For more details on projections, see [4]. Obviously, by construction, we have

$$\|\rho_n\| \le \|r_n\|.$$

Let us now describe the partitioning strategy used for constructing the matrix $Z_n$. Let $z_n \in \mathbb{R}^p$ be an arbitrary vector. We will partition it into m blocks as $z_n = (z_n^1, \ldots, z_n^m)^T$, and we will take

$$Z_n = \begin{pmatrix} z_n^1 & & 0 \\ & \ddots & \\ 0 & & z_n^m \end{pmatrix}.$$
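A sketch of this construction (our own helper, assuming the blocks are stored contiguously in $z_n$):

```python
import numpy as np

def block_matrix(z, sizes):
    """Build the p x m matrix Z_n: column i carries the i-th block of z
    (of length sizes[i]) and zeros elsewhere."""
    Z = np.zeros((len(z), len(sizes)))
    start = 0
    for i, pi in enumerate(sizes):
        Z[start:start + pi, i] = z[start:start + pi]
        start += pi
    return Z
```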


Thus, setting $\lambda_n = (\lambda_n^1, \ldots, \lambda_n^m)^T$, we have $Z_n \lambda_n = (\lambda_n^1 z_n^1, \ldots, \lambda_n^m z_n^m)^T$. Such a choice corresponds to taking different values of $\lambda_n$, namely $\lambda_n^1, \ldots, \lambda_n^m$, for each block of components of $z_n$. Note that the value of the integer m can depend on n.

Instead of the procedure described above, the most natural extension of (1) would have been to set $y_n = x_n - \Lambda_n z_n$, where $\Lambda_n$ is a matrix partitioned as $z_n$ and having the form

$$\Lambda_n = \begin{pmatrix} \lambda_n^1 I & & 0 \\ & \ddots & \\ 0 & & \lambda_n^m I \end{pmatrix},$$

where each identity matrix I has the same dimension as the corresponding $z_n^i$, and to choose the scalars $\lambda_n^i$ minimizing $\|\rho_n\| = \|r_n + A\Lambda_n z_n\|$. Such a problem seems difficult to solve. However, we have $\Lambda_n z_n = Z_n \lambda_n$, which shows that both formulations are equivalent. The minimization problem of the formulation we chose is much simpler. Moreover, our approach is more general.

Let us now discuss an optimal choice (in some sense) of the matrix $Z_n$. Let us choose $Z_n$ such that there exists $\alpha_n \in \mathbb{R}^m$ such that $r_n = AZ_n \alpha_n$. Then, $\rho_n = AZ_n(\alpha_n + \lambda_n)$. But

$$\lambda_n = -\left[(AZ_n)^T AZ_n\right]^{-1} (AZ_n)^T AZ_n \alpha_n = -\alpha_n$$

and, thus, $\rho_n = 0$. Thus, the best choice for $Z_n$ is to take a matrix such that there exists $\alpha_n \in \mathbb{R}^m$ satisfying $Z_n \alpha_n = A^{-1} r_n$. Since this choice is impractical, we will take for $Z_n$ a matrix such that there exists $\alpha_n \in \mathbb{R}^m$ satisfying

$$Z_n \alpha_n = C_n r_n, \qquad (3)$$

where $C_n$ is an approximation of $A^{-1}$.

We now have two important problems: the choice of m and that of $C_n$. In fact, these two choices are related. As explained in [3], if m = 1, $\rho_n = 0$ if and only if the vectors $Az_n = AC_n r_n$ and $r_n$ are collinear, that is, if and only if $C_n = A^{-1}$. However, if m = p, the choice $Z_n = C_n$ will satisfy (3) with $\alpha_n = r_n$. Thus, taking for $C_n$ any nonsingular matrix, we will have in this case

$$\rho_n = r_n - (AC_n)(AC_n)^{-1}\left[(AC_n)^T\right]^{-1}(AC_n)^T r_n = 0.$$

We now show how to construct the matrix $Z_n$. As before, we partition the vector $C_n r_n$ into m blocks, $C_n r_n = ((C_n r_n)^1, \ldots, (C_n r_n)^m)^T$, and we take

$$Z_n = \begin{pmatrix} (C_n r_n)^1 & & 0 \\ & \ddots & \\ 0 & & (C_n r_n)^m \end{pmatrix}$$

and $\alpha_n = e = (1, \ldots, 1)^T$. Obviously $Z_n \alpha_n = C_n r_n$. This choice corresponds to the choice $z_n = C_n r_n$ followed by the partitioning strategy described above. This procedure appears as a multiparameter extension of the variant of Richardson's acceleration (called the PR2 acceleration) described in [3].
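Putting the pieces together, a sketch of the resulting multiparameter (PR2-type) iteration, reusing the hypothetical block_matrix helper above; the argument C is an assumption standing for any fixed approximation $C_n$ of $A^{-1}$:

```python
import numpy as np

def mp_richardson(A, b, x0, C, sizes, tol=1e-9, maxit=200):
    """Multiparameter (PR2-type) iteration: z_n = C r_n is partitioned into
    blocks (matrix Z_n), and lambda_n minimizes ||r_n + A Z_n lambda_n||."""
    x = x0.copy()
    for _ in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        Z = block_matrix(C @ r, sizes)           # Z_n alpha_n = C r_n, alpha_n = e
        lam, *_ = np.linalg.lstsq(A @ Z, -r, rcond=None)
        x = x - Z @ lam                          # x_{n+1} = x_n - Z_n lambda_n
    return x
```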

Instead of using the previous ideas for transforming a sequence $(x_n)$ coming from an arbitrary iterative method for solving $Ax = b$ into the new sequence $(y_n)$, it is also possible to use this procedure as an iterative method itself, that is, to consider the iterations

$$x_{n+1} = x_n - Z_n \lambda_n.$$

For the optimal choice of $Z_n$ discussed above, we obtain a multiparameter generalization of the variant of Richardson's method (called the PR2 iterative method) studied in [3].

Since the choice (2) for $\lambda_n$ minimizes $\|\rho_n\|$ or $\|r_{n+1}\|$, we have, for all $\alpha_n$,

$$\|\rho_n\| \text{ or } \|r_{n+1}\| \le \|r_n - AZ_n \alpha_n\|.$$

In particular, this is true if $\alpha_n$ satisfies (3). In that case, we have

$$\|\rho_n\| \text{ or } \|r_{n+1}\| \le \|r_n - AC_n r_n\| \le \|I - AC_n\| \cdot \|r_n\|.$$

If the sequence $(C_n)$ is constructed as described in [3], the results given there are valid.

2. Multiparameter hybrid procedures. Let us now assume that, for solving the system $Ax = b$, two iterative methods are used, and let $(x'_n)$ and $(x''_n)$ be the corresponding sequences of iterates. We will construct a new sequence $(x_n)$ by

$$x_n = \alpha_n x'_n + (1 - \alpha_n) x''_n = x''_n + \alpha_n (x'_n - x''_n),$$

where $\alpha_n$ is chosen to minimize the norm of the residual $r_n = b - Ax_n$, that is,

$$\alpha_n = -\frac{(r'_n - r''_n, r''_n)}{(r'_n - r''_n, r'_n - r''_n)},$$

where $r'_n = b - Ax'_n$ and $r''_n = b - Ax''_n$. We have

$$r_n = \alpha_n r'_n + (1 - \alpha_n) r''_n = r''_n + \alpha_n (r'_n - r''_n)$$

and, by construction, it holds that

$$\|r_n\| \le \min(\|r'_n\|, \|r''_n\|).$$

This procedure was introduced in [6] and was called the hybrid procedure. If we take $x''_n = x_{n-1}$, the preceding iterate of the hybrid procedure itself, then we have

$$\|r_n\| \le \min(\|r'_n\|, \|r_{n-1}\|),$$

which shows that the sequence $(\|r_n\|)$ is monotone. Such a procedure was introduced in [26] (see also [25]) and was called the minimal residual smoothing (MRS).
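One step of this scalar hybrid procedure, as a sketch (the names are ours):

```python
import numpy as np

def hybrid_step(xp, xpp, rp, rpp):
    """x_n = x''_n + alpha_n (x'_n - x''_n) with alpha_n minimizing ||r_n||,
    where r_n = r''_n + alpha_n (r'_n - r''_n); returns (x_n, r_n)."""
    d = rp - rpp                       # r'_n - r''_n
    alpha = -(d @ rpp) / (d @ d)
    return xpp + alpha * (xp - xpp), rpp + alpha * d
```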

As mentioned, if some blocks of components of the vectors $x'_n$ and $x''_n$ behave quite differently, it will be better to take a different value of the parameter $\alpha_n$ for each block. This is the reason why, following the ideas of the preceding section, we will now present a multiparameter extension of the hybrid procedure.

Let us partition the vectors $x'_n$ and $x''_n$ into m blocks denoted, respectively, by $(x'_n)^1, \ldots, (x'_n)^m$ and $(x''_n)^1, \ldots, (x''_n)^m$. We set

$$X'_n = \begin{pmatrix} (x'_n)^1 & & 0 \\ & \ddots & \\ 0 & & (x'_n)^m \end{pmatrix} \quad\text{and}\quad X''_n = \begin{pmatrix} (x''_n)^1 & & 0 \\ & \ddots & \\ 0 & & (x''_n)^m \end{pmatrix}.$$


The multiparameter hybrid procedure (MLHP) consists of constructing the sequence $(x_n)$ by

$$x_n = x''_n + (X'_n - X''_n)\alpha_n,$$

where $\alpha_n \in \mathbb{R}^m$. We have

$$r_n = r''_n - A(X'_n - X''_n)\alpha_n.$$

The vector $\alpha_n$ is chosen to minimize $\|r_n\|$. It is given by the least squares solution of $r''_n - A(X'_n - X''_n)\alpha_n = 0$, that is,

$$\alpha_n = \left[(X'_n - X''_n)^T A^T A (X'_n - X''_n)\right]^{-1} (X'_n - X''_n)^T A^T r''_n.$$

Thus the computation of $\alpha_n$ requires the solution of an $m \times m$ system of linear equations. However, due to the sparsity of $X'_n$ and $X''_n$, the construction of the matrix of this system is quite cheap. Indeed, let $p_1, \ldots, p_m$ be the dimensions of the blocks in the partition of the vectors $x'_n$ and $x''_n$. Obviously, $p_1 + \cdots + p_m = p$, the dimension of the system. We will partition the matrix A into m blocks of columns, of respective dimensions $p \times p_i$ for $i = 1, \ldots, m$. Let us denote them by $A_1, \ldots, A_m$. It is easy to see that the matrix $A(X'_n - X''_n)$ is the $p \times m$ matrix whose columns are $A_1((x'_n)^1 - (x''_n)^1), \ldots, A_m((x'_n)^m - (x''_n)^m)$.
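A sketch of one MLHP step under the same conventions; it reuses the hypothetical block_matrix helper of section 1, since $X'_n - X''_n$ is exactly the block matrix built from $x'_n - x''_n$:

```python
import numpy as np

def mlhp_step(A, b, xp, xpp, sizes):
    """One MLHP step: alpha_n is the least squares solution of
    r''_n - A(X'_n - X''_n) alpha_n = 0."""
    D = block_matrix(xp - xpp, sizes)  # X'_n - X''_n
    rpp = b - A @ xpp                  # r''_n
    alpha, *_ = np.linalg.lstsq(A @ D, rpp, rcond=None)
    return xpp + D @ alpha             # x_n = x''_n + (X'_n - X''_n) alpha_n
```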

3. The nonlinear case. We will now study the extension to the nonlinear case of the block- (or multiparameter-) minimizing residual procedure described in section 1. For that purpose, we consider nonlinear methods of Richardson type; the general form of these schemes makes them suitable for a multiparameter generalization, as appears clearly in the linear case. A family of such schemes is the $\Delta^k$ method, introduced in [5]. This algorithm is aimed at computing fixed points of a given nonlinear mapping $F: \mathbb{R}^p \to \mathbb{R}^p$ and is defined as follows:

Let $x_0$ be given. For $n = 0, 1, \ldots$,

$$x_{n+1} = x_n - (-1)^k \lambda_n \Delta^k x_n,$$

where

$$\Delta^k x_n = \sum_{i=0}^{k} (-1)^{i-k} \binom{k}{i} F^i(x_n),$$

$F^j$ denoting F iterated j times. The parameter $\lambda_n$ is chosen in order to minimize the norm of an approximation of the residual at the nth step of the $\Delta^k$ scheme.
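A sketch of the operator $\Delta^k$ as defined above (F is any fixed point mapping; the helper name is ours):

```python
import numpy as np
from math import comb

def delta_k(F, x, k):
    """Delta^k x = sum_{i=0}^k (-1)^{i-k} binom(k,i) F^i(x), with F^0(x) = x."""
    Fi = np.asarray(x, dtype=float)
    s = np.zeros_like(Fi)
    for i in range(k + 1):
        s += (-1.0) ** (k - i) * comb(k, i) * Fi
        if i < k:
            Fi = F(Fi)                 # advance to F^{i+1}(x)
    return s
```

For k = 1 this reduces to $\Delta x_n = F(x_n) - x_n$, the residual of the fixed point iteration.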

3.1. Definition of the multiparameter scheme. We will now extend this minimization procedure by replacing the real parameter $\lambda_n$ with a diagonal matrix $\Gamma_n$.

As in section 1, we will partition the vector x into m blocks,

$$x = (x^1, \ldots, x^m)^T, \quad x^i \in \mathbb{R}^{p_i}, \quad \text{with } \sum_{i=1}^{m} p_i = p,$$


and we look for a diagonal matrix $\Gamma_n$ of the form

$$\Gamma_n = \begin{pmatrix} \lambda_n^1 I_{p_1} & 0 & \cdots & 0 \\ 0 & \lambda_n^2 I_{p_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n^m I_{p_m} \end{pmatrix}.$$

The $\Delta^k$ method is extended as follows:

Let $x_0$ be given. For $n = 0, 1, \ldots$,

$$x_{n+1} = x_n - (-1)^k \Gamma_n \Delta^k x_n, \qquad (4)$$

or, equivalently,

Let $x_0$ be given. For $n = 0, 1, \ldots$,

$$x_{n+1} = x_n - (-1)^k Z_n \Lambda_n, \qquad (5)$$

where $Z_n$ is a $p \times m$ matrix and $\Lambda_n$ a vector of $\mathbb{R}^m$ which satisfies the following consistency condition (the consistency being considered with respect to (4)):

$$Z_n \Lambda_n = \Gamma_n \Delta^k x_n.$$

We find that we must have

(i) $\Lambda_n = (\lambda_n^1, \lambda_n^2, \ldots, \lambda_n^m)^T$,

(ii) $Z_n = \begin{pmatrix} (\Delta^k x_n)^1 & & 0 \\ & \ddots & \\ 0 & & (\Delta^k x_n)^m \end{pmatrix}$,

where $(\Delta^k x_n)^i$ denotes the ith block of the vector $\Delta^k x_n$ according to the block decomposition described above.

This procedure corresponds to the procedure described in section 1 with the choice $z_n = (-1)^k \Delta^k x_n$.

3.2. Computation of the minimizing residual relaxation matrix. We start from the equation for the propagation of the error. Let x be a fixed point of F in the neighborhood of which we choose $x_0$. We set $e_n = x_n - x$. We have

$$e_{n+1} = e_n - (-1)^k Z_n \Lambda_n.$$

Denoting by $\Psi$ the Jacobian matrix of F at x, we have the relation

$$\Delta^k x_n = (\Psi - I)^k e_n + o(e_n).$$


Therefore

$$e_{n+1} = e_n - (-1)^k Z'_n \Lambda_n + o(e_n),$$

where

$$Z'_n = \begin{pmatrix} ((\Psi - I)^k e_n)^1 & & 0 \\ & \ddots & \\ 0 & & ((\Psi - I)^k e_n)^m \end{pmatrix}.$$

We now introduce the residual $r_n = F(x_n) - x_n$, which is related to $e_n$ by

$$r_n = \Psi e_n - e_n + o(e_n) = (\Psi - I) e_n + o(e_n).$$

The equation for the propagation of the error becomes, after multiplying each term on the left by $\Psi - I$,

$$r_{n+1} = r_n - (-1)^k (\Psi - I) Z'_n \Lambda_n + o(r_n). \qquad (6)$$

At this point, we remark that

$$(\Psi - I) Z'_n \Lambda_n = \widetilde{T}_n \Lambda_n, \qquad (7)$$

where

$$\widetilde{T}_n = \begin{pmatrix} ((\Psi - I)^{k+1} e_n)^1 & & 0 \\ & \ddots & \\ 0 & & ((\Psi - I)^{k+1} e_n)^m \end{pmatrix} = \begin{pmatrix} ((\Psi - I)^k r_n)^1 & & 0 \\ & \ddots & \\ 0 & & ((\Psi - I)^k r_n)^m \end{pmatrix} + o(r_n)$$

$$= \begin{pmatrix} (\Delta^{k+1} x_n)^1 & & 0 \\ & \ddots & \\ 0 & & (\Delta^{k+1} x_n)^m \end{pmatrix} + o(r_n) = T_n + o(r_n).$$

We have that

$$r_{n+1} = r_n - (-1)^k T_n \Lambda_n + o(r_n). \qquad (8)$$

Let $\rho_{n+1}$ be the approximation of $r_{n+1}$ defined by

$$\rho_{n+1} = r_n - (-1)^k T_n \Lambda_n. \qquad (9)$$

$\rho_{n+1}$ is an approximation of $r_{n+1}$ because $\rho_{n+1} = r_{n+1} + o(r_n)$ ($x_n$ is supposed to be close enough to x).

As in the linear case, we can now compute the vector $\Lambda_n$ minimizing $\|\rho_{n+1}\|^2$. Such a $\Lambda_n$ is the least squares solution of the normal equations

$$\left[T_n^T T_n\right] \Lambda_n = (-1)^k T_n^T r_n. \qquad (10)$$
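Combining (5), (ii), and (10), one iteration of MP$\Delta^k$ can be sketched as follows (reusing the hypothetical delta_k and block_matrix helpers above; no attempt is made to share the F-evaluations common to $\Delta^k x_n$ and $\Delta^{k+1} x_n$):

```python
import numpy as np

def mp_delta_k_step(F, x, k, sizes):
    """One MP Delta^k iteration: solve the normal equations (10) for Lambda_n,
    then apply the update (5)."""
    dk = delta_k(F, x, k)                              # Delta^k x_n
    T = block_matrix(delta_k(F, x, k + 1), sizes)      # T_n, blocks of Delta^{k+1} x_n
    r = F(x) - x                                       # residual r_n = F(x_n) - x_n
    lam = np.linalg.solve(T.T @ T, (-1.0) ** k * (T.T @ r))   # (10)
    Z = block_matrix(dk, sizes)                        # Z_n, blocks of Delta^k x_n
    return x - (-1.0) ** k * (Z @ lam)                 # (5)
```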


4. Generation of different scales. The multiparameter schemes presented in the previous section were introduced for generalizing Richardson's method when the solution vector can be decomposed into several blocks of unknowns which have different orders of magnitude. This gives the opportunity of applying one of the main ideas of the nonlinear Galerkin method to nonevolutive problems; here, through the multirelaxation process, the various subsets of components are treated in a different way. It is natural to expect that the presence of different levels of structures can increase the speed of convergence of such an iterative process. The problem of the generation of the different scales for an arbitrary vector, or sequence of vectors, is a difficult one; particular properties of this sequence of vectors, or of this vector, must be exploited.

These reasons motivate the use of IUs together with a finite difference discretization when the problem to be solved comes from, e.g., an elliptic PDE. The IU method is a procedure used to generate several structures on distinct points of a grid and to have available efficient preconditioners for the discretization matrix of elliptic operators. Let us recall some generalities about IUs that justify our purpose.

4.1. IU method. The original motivation for the introduction of the IU method was the approximation of inertial manifolds [30] when finite differences are used [28]. This new approach gives a link between hierarchical methods and nonlinear Galerkin methods (see [23] and [24], for example). From a technical point of view, the IUs can be defined when multilevel discretizations are used: If two levels of discretization are considered, the IUs consist of the usual nodal values at the coarse grid points and an increment to the values at suitable neighboring points of the fine grid which do not belong to the coarse grid. Of course, such a procedure can be repeated recursively, leading to the use of several levels of IUs. Since the IUs do not have the same order as the coarse grid components, they can be treated numerically in a different way, leading to generalizations of iterative processes that are not at all obvious when all the unknowns have the same order, namely, that of the physical solution.

The IUs also provide very good preconditioners for the matrices associated with self-adjoint elliptic operators: In [17], it was shown that the condition number of these matrices is considerably reduced and, hence, that algorithms like the conjugate gradient become very efficient; see [16] for the case of a uniform mesh. Similar results are given in [14] for a nonuniform mesh. In [12], the extension of the IU method to a shifted mesh of MAC type gave an efficient hierarchical preconditioner for the Uzawa operator associated with a generalized Stokes problem.

The IU method was also applied to the solution of nonlinear eigenvalue problems, giving efficient multiresolution generalizations of the Marder–Weitzner (MW) scheme [22], where the IUs were treated differently according to the associated grid level [10, 15, 11]. In these works, both the preconditioning properties of the IUs and the presence of several structures were exploited.

Finally, in [27], the wavelet-like IUs, introduced in [18], were used, implementing the nonlinear Galerkin method, for solving the driven cavity problem.

At this point, we recall the construction of the (second order) IUs that we will use for the implementations of the multiparameter scheme presented above, which is, in this context, a multilevel scheme.

The construction of the IUs can be accomplished in two steps.

i) Hierarchization. Let u be a regular function defined on an open set $\Omega = (0,1)^n$, n = 1, 2. We denote by $U_i$, $i = 1, \ldots, 2N-1$ (respectively, $U_{i,j}$, $i, j = 1, \ldots, 2N-1$) the approximation of u at the grid points ($U_i \simeq u(ih)$ in dimension 1, $U_{i,j} \simeq u(ih, jh)$ in dimension 2).


.. o × o × o × o × o ..

Fig. 1. Space dimension 1; $\Omega = ]0, 1[$; × = points in $G_H$; o = points in $G_h \setminus G_H$.

[Figure omitted: alternating grid of o and × points.] Fig. 2. Space dimension 2; $\Omega = (]0, 1[)^2$; × = points in $G_H$; o = points in $G_h \setminus G_H$.

First, we separate the nodal unknowns according to the grid to which they belong: We consider first the unknowns of $G_H$, denoted by Y, and then those of $G_h \setminus G_H$, denoted by $U_f$. Each family is ordered in the standard way.

In dimension 1, the unknowns of the complementary grid $G_h \setminus G_H$ all have the same geometric characteristics (see Fig. 1). In dimension 2, we distinguish three kinds of points in $G_h \setminus G_H$: points of type $f_1$, $f_2$, and $f_3$ (see Figs. 2 and 3).

ii) Change of variable. Now, let us introduce a change of variable operating only in $G_h \setminus G_H$ and leaving the unknowns Y unchanged. We can express it in the form

$$Z_f = U_f - RY,$$

where $R: G_H \to G_h \setminus G_H$ is a second order interpolation operator. We define R as follows.

a) One-dimensional case. Let $U_j$, $j = 0, \ldots, 2N$, be the nodal values on $G_h$; we set

$$Z_{2j+1} = U_{2j+1} - \tfrac{1}{2}(U_{2j} + U_{2j+2}) \quad\text{for } j = 0, \ldots, N-1. \qquad (11)$$

b) Two-dimensional case. For the points $f_1$,

$$Z_{2i,2j+1} = U_{2i,2j+1} - \tfrac{1}{2}(U_{2i,2j} + U_{2i,2j+2}). \qquad (12)$$

For the points $f_2$,

$$Z_{2i+1,2j} = U_{2i+1,2j} - \tfrac{1}{2}(U_{2i,2j} + U_{2i+2,2j}). \qquad (13)$$

For the points $f_3$,

$$Z_{2i+1,2j+1} = U_{2i+1,2j+1} - \tfrac{1}{4}(U_{2i,2j} + U_{2i,2j+2} + U_{2i+2,2j} + U_{2i+2,2j+2}) \qquad (14)$$

for $i, j = 0, \ldots, N-1$.
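A sketch of one level of the change of variable (11) in dimension 1 (we assume U stores the nodal values $U_0, \ldots, U_{2N}$, boundary values included; the helper name is ours):

```python
import numpy as np

def iu_1d(U):
    """One level of (11) in 1D: U holds the nodal values U_0, ..., U_{2N}
    (boundary included); returns the coarse values Y and the IUs Z."""
    U = np.asarray(U, dtype=float)
    Y = U[0::2].copy()                       # coarse-grid values U_{2j}
    Z = U[1::2] - 0.5 * (U[:-1:2] + U[2::2]) # Z_{2j+1}, j = 0, ..., N-1
    return Y, Z
```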


[Figure omitted: local stencils of the three types of points.] Fig. 3. The various types of points of $G_h \setminus G_H$ (points of type $f_1$, $f_2$, and $f_3$).

The numbers Z are the IUs. According to Taylor's formula, their magnitude is $O(h^2)$. However, a priori estimates of energy type show that the IUs are indeed small (see [16] for a uniform mesh and [14] for a nonuniform one). Of course, the process described above can be recursively repeated using $d \ge 1$ levels of discretization. Denoting by $Z_i$ the successive Z-levels, we define the transfer matrix S by

$$\begin{pmatrix} Y \\ U_{f_1} \\ U_{f_2} \\ \vdots \\ U_{f_d} \end{pmatrix} = S \begin{pmatrix} Y \\ Z_1 \\ Z_2 \\ \vdots \\ Z_d \end{pmatrix}.$$

Finally, let us recall that the IUs can have different orders of magnitude, i.e., $O(h^p)$, $p = 1, 2, \ldots$ [13, 18, 27], and that their definition can be easily extended to higher dimensions using appropriate interpolators (see, e.g., [13, 19, 21] for the three-dimensional case).

4.2. Implementation of the schemes using IUs. When the discrete problem to be solved comes from the discretization of a PDE, it can be expressed in the nodal basis as

$$AU = F(U) \in \mathbb{R}^p, \qquad (15)$$

where A is a finite difference discretization matrix of a self-adjoint elliptic operator on a regular grid of mesh size $h = 1/(p+1)$. F(U) can be a nonlinear mapping as well as a constant vector; U is a vector containing the approximation of the solution of the continuous problem at the grid points.

Applying recursively the hierarchization process on l grids, the previous system can be rewritten as

$$\bar{A}\bar{U} = \bar{F}(\bar{U}) \text{ in } \mathbb{R}^p.$$

The components of $\bar{U}$ are those of U but rearranged in the hierarchical order.

At this point, we introduce the IUs via the transfer matrix S; the system to be solved can be expressed as

$$\bar{A}S\tilde{U} = \bar{F}(S\tilde{U}) \text{ in } \mathbb{R}^p$$

and, finally, multiplying by $S^T$ on the left, we obtain

$$S^T \bar{A} S \tilde{U} = S^T \bar{F}(S\tilde{U}) \text{ in } \mathbb{R}^p. \qquad (16)$$
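To make the change of basis concrete, here is a sketch, under our own naming and for a single IU level in dimension 1 with homogeneous Dirichlet conditions, of the transfer matrix S and of the matrix $S^T \bar{A} S$ of (16):

```python
import numpy as np

def two_level_S(N):
    """Transfer matrix S for one IU level in 1D (homogeneous Dirichlet):
    fine-grid nodes 1,...,2N-1; coarse values Y at even nodes (N-1 of them),
    IUs Z at odd nodes (N of them); hierarchical ordering (Y, Z)."""
    p, nY = 2 * N - 1, N - 1
    S = np.eye(p)                      # Y -> Y, and the Z part of U_f
    for j in range(N):                 # fine node 2j+1 is row nY + j
        if j >= 1:
            S[nY + j, j - 1] = 0.5     # left coarse neighbor U_{2j} = Y_{j-1}
        if j <= N - 2:
            S[nY + j, j] = 0.5         # right coarse neighbor U_{2j+2} = Y_j
    return S

# usage sketch: 1D Laplacian on p = 2N-1 interior points
N = 16
p = 2 * N - 1
A = 2 * np.eye(p) - np.eye(p, k=1) - np.eye(p, k=-1)
coarse = list(range(1, p, 2))          # array indices of nodes 2, 4, ..., 2N-2
fine = list(range(0, p, 2))            # array indices of nodes 1, 3, ..., 2N-1
perm = coarse + fine
Abar = A[np.ix_(perm, perm)]           # hierarchically reordered matrix
S = two_level_S(N)
Ahat = S.T @ Abar @ S                  # symmetric, better conditioned (cf. [16, 17])
print(np.linalg.cond(A), np.linalg.cond(Ahat))
```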


Solving (16) rather than (15) offers many advantages. First, under this (equivalent) form, the presence of several scales makes the discrete problem suitable for applying a multiparameter scheme. Second, the matrix $\hat{A} = S^T \bar{A} S$, which is symmetric, is much better conditioned than A, and this is of course an important point when considering an iterative method whose speed of convergence is, as often, related to the condition number of the matrix.

Finally, we introduce the following notation. We will say that a grid has a $C_{k,l}$ configuration (or is of $C_{k,l}$ type) if it is obtained with l dyadic refinements of a grid composed of k points in each direction of the domain. The fine grid is thus composed of $2^l(k+1) - 1$ points in each direction.

5. Numerical results. Following are some numerical results showing the efficiency of the iterative multiparameter minimizing residual preconditioner. They concern both a linear problem and a nonlinear one.

5.1. A linear problem. Let us begin with numerical results concerned with the solution of a linear problem by using the multiparameter scheme described in section 1. We write this problem as follows:

Find $x \in \mathbb{R}^p$ such that $Ax = b$,

where A is a $p \times p$ nonsingular matrix.

We have chosen, as test problems, a family of finite-dimensional linear systems which do not come from a PDE but whose solution has block components of different orders of magnitude. Hence, we first give an illustration of the implementation of a block (or multiparameter) convergence acceleration method. More generally, the introduction of these multiparameter schemes deals with a new aspect of the acceleration of convergence, when there are different scales.

For our applications, we consider the following family of matrices:

$$A = \begin{pmatrix} 1 & 1 & \cdots & 1 & 1 \\ a_1 & 1 & \cdots & 1 & 1 \\ a_1 & a_2 & \cdots & 1 & 1 \\ \vdots & \vdots & & \vdots & \vdots \\ a_1 & a_2 & \cdots & a_{p-1} & 1 \end{pmatrix},$$

where $a_i$, $i = 1, \ldots, p-1$, are given real numbers. Notice that this kind of matrix was considered in the study of techniques for avoiding breakdown when Bi-CGSTAB [31] is used (see [7, 8], for example).
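A sketch of this family of test matrices (the helper name is ours):

```python
import numpy as np

def test_matrix(a):
    """Build the p x p matrix above from a = (a_1, ..., a_{p-1}):
    row i carries a_1, ..., a_i below the diagonal, ones elsewhere."""
    a = np.asarray(a, dtype=float)
    p = len(a) + 1
    A = np.ones((p, p))
    for i in range(1, p):
        A[i, :i] = a[:i]
    return A
```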

We consider the partition of x into m blocks, $x = (x^1, x^2, \ldots, x^m)^T$, the ith block being of dimension $m_i$ with $\sum_{i=1}^{m} m_i = p$, and we choose the right-hand side b such that the numbers $x_j^i = c_j$ have different orders of magnitude for $i = 1, \ldots, m$ and $j = 1, \ldots, m_i$.

We will compare the results obtained when m = 1 and when m > 1; m = 1 corresponds to the steepest descent method and m > 1 to the multiparameter Richardson scheme described in section 1.

Finally, we set $m_1 = N$ and $m_i = 2m_{i-1}$, $i = 2, \ldots, m$.

Remark 1. Notice that, for this type of choice of x, if we fix N and let m vary, then the linear system is bordered (see [9, p. 30]).

Since the methods that we implement here are iterative, we must use a stopping test. A natural one stops the iterations when the norm of the current residual is smaller than a given value $\varepsilon$.


Fig. 4. (a) Example 1: $a_k = 0.1$ $\forall k = 1, \ldots, p-1$; (b) Example 2: $a_k = 0.9$ $\forall k = 1, \ldots, p-1$.

We take $\varepsilon = 10^{-9}$. The multiparameter scheme contains inner iterations, and the minimization matrix is computed in the least squares sense by using a conjugate gradient method. We fixed the accuracy of the least squares step to $10^{-17}$.

The use of an iterative method, such as the conjugate gradient instead of a direct method, for solving the residual system is justified by the fact that the matrix of the system to be solved is obtained by multiplying together four matrices, two of which (A and $A^T$) could be large. Therefore, the determination of all the coefficients can be expensive when p, the size of the original system, is large, even if the number m of blocks is very small (formula (2)). In addition, the matrix of the residual system, $(AZ_n)^T AZ_n$, changes at each iteration n. Hence, it is enough to be able to compute matrix-vector products with the matrix $(AZ_n)^T AZ_n$ for solving the residual system.

Table 1. Comparison of the steepest descent and multiparameter Richardson methods.

m / Scheme        Richardson   Multiparameter Richardson
m = 3 (p = 7)         48              15
m = 4 (p = 15)        50              14
m = 5 (p = 31)        49              13
m = 6 (p = 63)        48              13
m = 7 (p = 127)       46              12
m = 8 (p = 255)       44              11
m = 9 (p = 511)       41              10
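A sketch of the corresponding matrix-free product (our own helper); it can be wrapped, e.g., in a scipy.sparse.linalg.LinearOperator and passed to a conjugate gradient routine:

```python
import numpy as np

def normal_matvec(A, Z):
    """Return v -> (AZ)^T (AZ v) without ever forming (AZ)^T AZ;
    this is all a conjugate gradient solver needs for the residual system (2)."""
    def mv(v):
        w = A @ (Z @ v)                # A Z v; cheap since Z has one block per column
        return Z.T @ (A.T @ w)         # (A Z)^T (A Z v)
    return mv
```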

Example 1. $a_k = 0.1$ $\forall k = 1, \ldots, p-1$. We set $x_j^i = 10^{-i+2}$ for $i = 1, \ldots, m$ and $j = 1, \ldots, m_i$, and we take N = 1. The initial guess is the zero vector.

We have plotted in Fig. 4(a) the evolution of the residual along the iterations. We observe the improvement of the convergence obtained by the multiparameter version of the steepest descent method. In particular, we remark that MP (multiparameter) Richardson converges in approximately half the number of iterations of the steepest descent method. Here m = 7 and thus p = 127.

Example 2. $a_k = 2$ $\forall k = 1, \ldots, p-1$. We set here $x_j^i = 2^{-i+2}$ for $i = 1, \ldots, m$ and $j = 1, \ldots, m_i$, and N = 1. The evolution of the residual along the iterations is represented in Fig. 4(b). The conclusions are the same.

Finally, we took the same solution ($x_j^i = 10^{-i+2}$) and let the number of blocks m vary. Hence, the dimension of the matrix is $p = 2^m - 1$ (we have set $m_1 = 1$). We observe in Table 1, which summarizes some results, that the number of iterations decreases slightly when m increases. In addition, we remark that the ratio of the number of iterations of the (multiparameter) Richardson method to that of the steepest descent method decreases when m increases. Here $a_k = 5$ $\forall k = 1, \ldots, p-1$.

Remark 2. Of course, the solution of the linear problem considered here can be obtained by using other methods such as Bi-CGSTAB; in some cases, Bi-CGSTAB is better, but not always. Our goal here is to illustrate the improvement obtained by the multiparameter version of the Richardson method as compared to the classical (one-parameter) method. This is a problem to be considered before applying our schemes to the solution of nonlinear problems.

5.2. A nonlinear problem. We consider the numerical solution of the nonlinear elliptic problem

$$-\Delta u = \gamma u - \nu |u|^\varepsilon u \text{ in } \Omega = ]0,1[^n, \quad n = 1, 2, \qquad u = 0 \text{ on } \partial\Omega, \qquad (17)$$

where $\gamma \ge \nu$ and $\varepsilon$ are given strictly positive real numbers.

As is well known, this type of problem exhibits successive bifurcations. Moreover, (17) possesses unstable solutions when $\gamma$ is large enough.


Here, we look at the computation of some unstable solutions of (17). This problem was considered, e.g., in [1, 2, 10, 15].

5.2.1. The schemes used. We compute the unstable solutions by using the generalization of the $\Delta^k$ method presented in section 3, namely,

Let $x_0$ be given. For $n = 0, 1, \ldots$,

$$x_{n+1} = x_n - (-1)^k \Gamma_n \Delta^k x_n,$$

where $\Gamma_n$ is a relaxation matrix computed in order to minimize a suitable approximation of the residual.

We will restrict ourselves to the cases k = 1, 2, i.e., to the generalizations of the methods of Lemaréchal and Marder–Weitzner, respectively.

As in the linear case, the distribution of the a priori nonzero coefficients of $\Gamma_n$ characterizes the form of the matrix $Z_n$ which is associated with the minimization process. We consider the two following situations:

• $\Gamma_n = \alpha_n I_p$. The corresponding method is nothing else but the $\Delta^k$ method with a minimizing relaxation parameter.

• $\Gamma_n = \mathrm{diag}(\lambda_n^1 I_1, \lambda_n^2 I_2, \ldots, \lambda_n^m I_m)$. The corresponding method is the multiparameter $\Delta^k$ scheme, denoted by MP$\Delta^k$, as described in section 3.

In what follows, the block decomposition will coincide with the partition of the vector components by using the IUs: If d levels of IUs are considered, then we will take m = d, and to each block of the decomposition of the multirelaxation procedure will correspond one level of IUs.

Finally, the use of the $\Delta^k$ scheme needs the solution of a linear (symmetric) system at each inner iteration. This can be realized by using an LU factorization in the one-dimensional case and by using the conjugate gradient method in the two-dimensional case. When the multiparameter version of the $\Delta^k$ method is implemented, say MP$\Delta^k$, a supplementary linear (symmetric) problem has to be solved: It corresponds to the determination of the minimizing residual relaxation matrix in the least squares sense. We have also used the conjugate gradient method for that purpose.

We present here the numerical results in the two-dimensional case, but similar results are obtained in the one-dimensional case; for example, the multiparameter versions of the $\Delta^k$ method also improve the pointwise one in the one-dimensional case.

5.2.2. The two-dimensional case. We consider the problem

$$-\Delta u = \gamma u - |u|u \text{ in } \Omega = ]0,1[^2, \qquad u = 0 \text{ on } \partial\Omega. \qquad (18)$$

We compute some unstable solutions for a given value of the parameter $\gamma$, starting from different initial vectors $U^0_{i,j} = u_0(ih, jh)$, $i, j = 0, \ldots, p+1$, $h = 1/(p+1)$. We recover some unstable solutions computed by Bolley in [1] (case $\gamma = 80$).

In all the examples, we took $\varepsilon = 5 \times 10^{-9}$ for both the solution of the discrete Dirichlet problems and the computation of the minimizing residual relaxation matrix (least squares step). The iterations are stopped when the norm of the current residual is less than $5 \times 10^{-8}$. Finally, we point out that the bifurcation parameter $\gamma$ was chosen, for each example, close to a bifurcation value, so that the numerical solution of the problem is not easy. All the computations were performed on the Cray YMP of the Université Paris-Sud at Orsay, France.


[Figure omitted: convergence curves for MP$\Delta^1$, MP$\Delta^2$, $\Delta^1$, $\Delta^2$; residual on a log10 scale.] Fig. 5. $\gamma = 80$; $u_0(x, y) = 30\sin(2\pi x)\sin(\pi y)$. (a) Residual versus iterations, (b) residual versus CPU time. The grid is of type $C_{1,7}$ and is composed of 127 × 127 points.

(a) $\gamma = 80$. We have taken a grid of type $C_{1,7}$, with one point on the coarsest grid and seven levels of IUs. The finest grid is then composed of 127 × 127 points. The initial function is $u_0(x, y) = 30\sin(2\pi x)\sin(\pi y)$.

As one can see in Fig. 5, the schemes MP$\Delta^k$, k = 1, 2, and $\Delta^1$ converge to the local unstable solution. $\Delta^2$ generates a stationary sequence after a transient number of iterations; as for the one-dimensional problem, the relaxation parameter becomes very small. We observe that the schemes MP$\Delta^k$, k = 1, 2, are more efficient than $\Delta^1$. This better speed of convergence gives a gain in CPU time (Fig. 5(b)).


[Figure omitted: convergence curves for MP$\Delta^1$, MP$\Delta^2$, $\Delta^1$, $\Delta^2$; residual on a log10 scale.] Fig. 6. $\gamma = 180$; $u_0(x, y) = 200\sin(2\pi x)\sin(\pi y)$. (a) Residual versus iterations, (b) residual versus CPU time. The grid is of type $C_{1,6}$ and is composed of 63 × 63 points.

(b) $\gamma = 180$. Here we consider another value of $\gamma$ (= 180), which is very close to the eigenvalue $18\pi^2$ of the Laplacian.

(b-1) $u_0(x, y) = 30\sin(2\pi x)\sin(\pi y)$. As one can see in Fig. 6, the schemes MP$\Delta^k$, k = 1, 2, and $\Delta^1$ converge to the local unstable solution, while $\Delta^2$ generates a stationary sequence. We observe that the multiparameter schemes are more efficient than the others because they give the solution in fewer iterations (Fig. 6(a)) and less CPU time (Fig. 6(b)). The grid used here is of type $C_{1,7}$.

(b-2) $u_0(x, y) = 100\sin(2\pi x)\sin(2\pi y)$. The multiparameter schemes MP$\Delta^k$,


[Figure omitted: convergence curves for MP$\Delta^1$, MP$\Delta^2$, $\Delta^1$, $\Delta^2$; residual on a log10 scale.] Fig. 7. $\gamma = 180$; $u_0(x, y) = 100\sin(2\pi x)\sin(2\pi y)$. (a) Residual versus iterations, (b) residual versus CPU time. The grid is of type $C_{1,6}$ and is composed of 63 × 63 points.

k = 1, 2, converge to the local unstable solution. However, the sequence generated by $\Delta^1$ is close to that computed by the multiparameter methods but is less accurate, as one can see in Fig. 7(a). In Fig. 7(b), we can remark that MP$\Delta^1$ is about two times less expensive in CPU time than MP$\Delta^2$. This is because, at each step, MP$\Delta^2$ needs one more evaluation of the nonlinear functional at the current iterate. Here the grid is of type $C_{1,6}$ and is composed of 63 × 63 discretization points.

(b-3) $u_0(x, y) = 80\sin(3\pi x)\sin(2\pi y)$. MP$\Delta^1$ converges to the local unstable solution. The other schemes, MP$\Delta^2$ and $\Delta^1$, do not converge although, after a certain


[Figure omitted: convergence curves for MP$\Delta^1$, MP$\Delta^2$, $\Delta^1$; residual on a log10 scale.] Fig. 8. $\gamma = 180$; $u_0(x, y) = 80\sin(3\pi x)\sin(2\pi y)$. (a) Residual versus iterations, (b) residual versus CPU time. The grid is of type $C_{2,7}$ and is composed of 191 × 191 points.

number of iterations, the corresponding iterates are relatively close to the local solution (see Fig. 8). In this case, it seems impossible to obtain an accuracy better than $O(h^2)$. The grid used here is of type $C_{3,6}$ and is composed of 191 × 191 discretization points; the coarsest grid is composed of 1 point in the x-direction and 3 in the y-direction.

(b-4) $u_0(x, y) = 20\sin(4\pi x)\sin(\pi y)$. Finally, for this initial guess, the MP$\Delta^1$ method is the only one which converges to the local unstable solution. The grid used here is of type $C_{1,7}$ and is composed of 127 × 127 discretization points.


In summary, we can say that the multiparameter extension of the $\Delta^k$ schemes improves their convergence. In any case, it seems that the best method is MP$\Delta^1$.

Remark 3. A drawback of our new multiparameter methods is that the number and the sizes of the blocks of components must be fixed at the beginning of the iteration process. These schemes would surely be improved if both the size and the number of the relaxation levels could be adapted along the iterations, following a grid strategy. This question will be addressed in a future work. More generally, we do not have at our disposal a technique for selecting the proper block sizes. Here we have considered a problem coming from the discretization of a (nonlinear) elliptic PDE, and we then used an argument of dyadic refinement for constructing the various blocks.

Remark 4. In this paper, we have used finite differences and restricted ourselves to rectangular and structured meshes. The implementation of our schemes using a hierarchical finite element method [32] with, e.g., triangular meshes and local refinements was not considered. However, since the IUs use nested grids exactly as the hierarchical bases do, the multiparameter schemes can be adapted to a finite element discretization in a natural way.

6. Concluding remarks. The numerical results presented here illustrate that our new multiparameter methods are efficient and robust. In particular, we observe that the convergence of the nonlinear Richardson schemes $\Delta^k$ is improved when an (optimal) multirelaxation is used instead of the usual pointwise relaxation with an optimal parameter. It clearly appears that this improvement is made possible numerically by the presence of several structures (IUs or block components) and by a different and appropriate treatment of each of them. This idea, which comes from the nonlinear Galerkin method, was applied in the present work to a stationary problem. More generally, and as can be seen in our method, the different treatment of the various scales in a given numerical process allows an extension of the notion of stability in directions, which is difficult to obtain with a classical approach, that is, when all the unknowns are treated in the same manner. For example, when the classical Marder–Weitzner method is considered, the classical (sufficient) stability condition depends on the largest eigenvalue of the derivative of the nonlinear functional at a local solution, while a multirelaxation process with larger relaxation parameters can be used without any loss of convergence (see [10, 15]). Here, following the same principle, we have generalized the optimal relaxation parameter case.

The multiparameter methods implemented here are based on an (optimal) relaxation by a diagonal matrix having a particular form. However, more general forms of the relaxation matrix can be used. Indeed, it suffices to fix the distribution of the a priori nonzero coefficients of $\Gamma_n$ to obtain the form of the rectangular matrix $Z_n$ and the dimension of the vector $\Lambda_n$ (which contains the coefficients of $\Gamma_n$).

The implementation of nonlinear hybrid schemes will be addressed in a future work.

REFERENCES

[1] C. Bolley, Solutions numériques de problèmes de bifurcation, RAIRO Modél. Math. Anal. Numér., 14 (1980), pp. 127–147.
[2] C. Bolley, Multiple solutions of a bifurcation problem, in Bifurcation and Nonlinear Eigenvalue Problems, Lecture Notes in Math. 782, C. Bardos, ed., Springer-Verlag, Berlin, 1978, pp. 42–53.
[3] C. Brezinski, Variations on Richardson's method and acceleration, in Numerical Analysis: A Numerical Analysis Conference in Honour of Jean Meinguet, Bull. Belg. Math. Soc. Simon Stevin, 1996, suppl., pp. 33–44.
[4] C. Brezinski, Projection Methods for Systems of Equations, North-Holland, Amsterdam, 1997.
[5] C. Brezinski and J.-P. Chehab, Nonlinear hybrid procedures and fixed point iterations, Numer. Funct. Anal. Optim., 19 (1998), pp. 465–487.
[6] C. Brezinski and M. Redivo Zaglia, Hybrid procedures for solving linear systems, Numer. Math., 67 (1994), pp. 1–19.
[7] C. Brezinski, M. Redivo Zaglia, and H. Sadok, Avoiding breakdown and near-breakdown in Lanczos type algorithms, Numer. Algorithms, 1 (1991), pp. 261–284.
[8] C. Brezinski and M. Redivo Zaglia, Look-ahead in Bi-CGSTAB and other product-type methods for linear systems, BIT, 35 (1995), pp. 169–201.
[9] C. Brezinski and M. Redivo Zaglia, Extrapolation Methods. Theory and Practice, North-Holland, Amsterdam, 1991.
[10] J.-P. Chehab, Méthode des inconnues incrémentales: Applications au calcul des bifurcations, Thèse, Université de Paris XI Orsay, Paris, France, January 1993.
[11] J.-P. Chehab, A nonlinear adaptive multiresolution method in finite differences with incremental unknowns, RAIRO Modél. Math. Anal. Numér., 29 (1995), pp. 451–475.
[12] J.-P. Chehab, Solution of generalized Stokes problems using hierarchical methods and incremental unknowns, Appl. Numer. Math., 21 (1996), pp. 9–42.
[13] J.-P. Chehab, Incremental unknowns method and compact schemes, RAIRO Modél. Math. Anal. Numér., 32 (1998), pp. 51–83.
[14] J.-P. Chehab and A. Miranville, Incremental unknowns on nonuniform meshes, RAIRO Modél. Math. Anal. Numér., 32 (1998), pp. 539–577.
[15] J.-P. Chehab and R. Temam, Incremental unknowns for solving nonlinear eigenvalue problems: New multiresolution methods, Numer. Methods Partial Differential Equations, 11 (1995), pp. 199–228.
[16] M. Chen and R. Temam, Incremental unknowns for solving partial differential equations, Numer. Math., 59 (1991), pp. 255–271.
[17] M. Chen and R. Temam, Incremental unknowns in finite differences: Condition number of the matrix, SIAM J. Matrix Anal. Appl., 14 (1993), pp. 432–455.
[18] M. Chen and R. Temam, Nonlinear Galerkin method in finite difference case and wavelet-like incremental unknowns, Numer. Math., 64 (1993), pp. 271–294.
[19] M. Chen, A. Miranville, and R. Temam, Incremental unknowns in finite differences in three space dimensions, Comput. Appl. Math., 14 (1995), pp. 1–15.
[20] G. H. Golub and G. A. Meurant, Résolution numérique des grands systèmes linéaires, Eyrolles, Paris, 1983.
[21] O. Goyon, Résolution numérique de problèmes stationnaires et évolutifs non linéaires par la méthode des inconnues incrémentales, Thèse, Université de Paris XI Orsay, Paris, France, December 1994.
[22] B. Marder and H. Weitzner, A bifurcation problem in E-layer equilibria, Plasma Physics, 12 (1970), pp. 435–445.
[23] M. Marion and R. Temam, Nonlinear Galerkin methods, SIAM J. Numer. Anal., 26 (1989), pp. 1139–1157.
[24] M. Marion and R. Temam, Nonlinear Galerkin methods: The finite elements case, Numer. Math., 57 (1990), pp. 205–226.
[25] W. Schönauer, Scientific Computing on Vector Computers, North-Holland, Amsterdam, 1987.
[26] W. Schönauer, H. Müller, and E. Schnepf, Numerical tests with biconjugate gradient type methods, Z. Angew. Math. Mech., 65 (1985), pp. T400–T402.
[27] T. Tachim Medjo, Sur les équations de Navier-Stokes en formulation vitesse–vorticité: Implémentation des inconnues incrémentales oscillantes dans les équations de Navier-Stokes, Thèse, Université Paris-Sud, Orsay, France, June 1995.
[28] R. Temam, Inertial manifolds and multigrid methods, SIAM J. Math. Anal., 21 (1990), pp. 154–178.
[29] R. Temam, Multiresolution methods for partial differential equations, in Mathematics of Computation 1943–1993: A Half-Century of Computational Mathematics, W. Gautschi, ed., AMS, Providence, RI, 1994, pp. 225–240.
[30] R. Temam, Infinite Dimensional Dynamical Systems in Mechanics and Physics, Springer-Verlag, Berlin, 1988.
[31] H. A. Van der Vorst, Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 13 (1992), pp. 631–644.
[32] H. Yserentant, On multilevel splitting of finite element spaces, Numer. Math., 49 (1986), pp. 379–412.