Top Banner
26

Differential Stability of Two-Stage Stochastic Programs

May 14, 2023

Download

Documents

Bernard Gallois
Welcome message from author
This document is posted to help you gain knowledge. Please leave a comment to let me know what you think about it! Share it to your friends and learn new things together.
Transcript
Page 1: Differential Stability of Two-Stage Stochastic Programs

DIFFERENTIAL STABILITY OF TWO-STAGE STOCHASTICPROGRAMS �DARINKA DENTCHEVAyAND WERNER R�OMISCHzAbstract. Two-stage stochastic programs with random right-hand side are considered. Optimalvalues and solution sets are regarded as mappings of the expected recourse functions and theirperturbations, respectively. Conditions are identi�ed implying that these mappings are directionallydi�erentiable and semidi�erentiable on appropriate functional spaces. Explicit formulas for thederivatives are derived. Special attention is paid to the role of a Lipschitz condition for solutionsets as well as of a quadratic growth condition of the objective function.Key words. Two-stage stochastic programs, sensitivity analysis, directional derivatives, semi-derivatives, solution sets.AMS subject classi�cation. 90C 15, 90 C311. Introduction. Two-stage stochastic programming is concerned with prob-lems that require a here-and-now decision on the basis of given probabilistic infor-mation on the random data without making further observations. The costs to beminimized consist of the direct costs of the here-and-now (or �rst stage) decision aswell as the costs generated by the need of taking a recourse (or second stage) de-cision in response to the random environment. Recourse costs are often formulatedby means of expected values with respect to the probability distribution of the in-volved random data. In this way, two-stage models and their solutions depend onthe underlying probability distribution. Since this distribution is often incompletelyknown in applied models, or it has to be approximated for computational purposes,the stability behaviour of stochastic programming models when changing the prob-ability measure is important. This problem is studied in a number of papers. Weonly mention here the surveys [13], [40] and the papers [1], [12], [18], [26], [27], [34]and [35]. The paper [1] contains general results on continuity properties of optimalvalues and solutions when perturbing the probability measures with respect to thetopology of weak convergence. Quantitative continuity results of solution sets to two-stage stochastic programs with respect to suitable distances of probability measuresare obtained in [26] and [27]. Asymptotic properties of statistical estimators of valuesand solutions to stochastic programs are derived in [18], [34], [35]. They are based ondirectional di�erentiability properties of the underlying optimization problems withrespect to the parameter that carries the randomness ([18], [35]) or the probabilitymeasure ([34]). These directional di�erentiability results for values ([35]) and solu-tions ([13], [18], [34]) lead to asymptotic results via the so-called delta-method . For adescription of the delta-method we refer to Chapter 6 in [28], [35], to [36] for an up-to-date presentation and to [16] for a set-valued variant. These papers illuminate theimportance of the Hadamard directional di�erentiability (for single-valued functions)and of the semidi�erentiability (for set-valued mappings) in the context of asymptoticstatistics.�This research has been supported by the Deutsche ForschungsgemeinschaftyHumboldt-Universit�at Berlin, Institut f�ur Mathematik, 10099 Berlin, Germany; current address:Lehigh University, Department of Industrial and Manufacturing Systems Engineering, Bethlehem,PA 15018, U.S.A.; [email protected]�at Berlin, Institut f�ur Mathematik, 10099 Berlin, Germany;[email protected] 1

Page 2: Differential Stability of Two-Stage Stochastic Programs

2 D. DENTCHEVA AND W. R�OMISCHThe present paper aims at contributing to this line of di�erential stability stud-ies. The results in [18], [34] apply to fairly general stochastic optimization models,but impose conditions that are rather restrictive in our context. The present paperdeals with special two-stage models and, using structural properties, avoids certainassumptions that complicate or even prevent the applicability of the general resultsto two-stage stochastic programs. Such assumptions are the (local) uniqueness ofsolutions and di�erentiability properties of perturbed problems, which are indispens-able in [18], [34]. Before discussing this in more detail, let us introduce the class oftwo-stage stochastic programs, we want to consider:minfg(x) +Q�(Ax) : x 2 Cg;(1.1)where g : IRm ! IR is a convex function, C � IRm is a nonempty closed convex set,A is a (s;m)-matrix and Q� is the expected recourse function with respect to the(Borel) probability measure � on IRs,Q�(y) = ZIRs ~Q(! � y)�(d!);(1.2) ~Q(t) = inffhq; ui : Wu = t; u � 0g (t 2 IRs):(1.3)Here q 2 IR �m are the recourse costs, W is an (s; �m)-matrix and called the recoursematrix, and ~Q(!�Ax) corresponds to the value of the optimal second stage decisionfor compensating a possible violation of the (random) constraint Ax = !. To havethe problem (1.1) { (1.3) well-de�ned, we assume(A1) posW = fWu : u 2 IR �m+ g = IRs (complete recourse),(A2) MD = ft 2 IRs :W T t � qg 6= ; (dual feasibility),(A3) ZIRs k!k�(d!) <1 (�nite �rst moment).The assumptions (A1) and (A2) imply that ~Q is �nite, convex and polyhedral onIRs. Due to (A3) also Q� is �nite and convex on IRs (cf. [15], [39]). Observe that,in general, an expected recourse function Q� may be nondi�erentiable on a certainunion of hyperplanes in IRs and that, indeed, di�erentiability properties of Q� dependon the degree of smoothness induced by the measure � (cf. [15], [21], [38], [39] andRemark 4.10). Another observation is that the uniqueness of solutions to (1.1) isguaranteed only if the constraint set C picks just one element from the relevant levelset of g(�) +Q�(A �). As the next example shows, this set may be large since Q�(A �)is constant on translates of the null space of the matrix A.Example 1.1. In (1.1) - (1.3) let m = 3, n = 2, g(x) = 14 (x2 � x3), C = [0; 12 ]3 ,A = � 1 0 �11 �1 0 � , q = (1; 1; 1; 1) , W = � 1 0 �1 00 1 0 �1 �, and � be theuniform distribution on the square [� 12 ; 12 ]2 in IR2.Then we have ~Q(t) = jt1j+ jt2j and Q�(y) = y21 + y22 + 12 , for y = (y1; y2) 2 [� 12 ; 12 ]2.The optimization problem (1.1) and its solution set (Q�) take the formmin f 14(x2 � x3) + (x1 � x3)2 + (x1 � x2)2 + 12 : (x1; x2; x3) 2 [0; 12 ]3g ;

Page 3: Differential Stability of Two-Stage Stochastic Programs

DIFFERENTIAL STABILITY OF TWO-STAGE STOCHASTIC PROGRAMS 3 (Q�) = f (18 + u; u; 14 + u) : u 2 [0; 14 ]g = f(18 ; 0; 14) + kerAg \ C ;where kerA = f(u; u; u) : u 2 IRg is the null space of A.Proposition 2.1 below provides some more insight into the structure of the solutionset to (1.1) and elucidates the role of the set-valued mapping �(y) := argminfg(x) :x 2 C;Ax = yg in this respect.Note that assumption (A1) could be relaxed by introducing the set K = fy 2IRs : Q�(y) < +1g. Then (A2) and (A3) imply that K is a closed convex polyhedronand that Q� is convex and continuous on K (cf. [39]). Now (A1) can be replaced bythe condition K � A(C) (relatively complete recourse), and much of the work donein this paper carries over to this more general setting by using spaces of functionsde�ned on K instead of IRs.Let KC denote the set of all convex functions on IRs which forms a convex conein the space C0(IRs) of all continuous functions on IRs. KC will serve as the set ofpossible perturbations of the given expected recourse function Q� 2 KC . We de�ne'(Q) := inffg(x) +Q(Ax) : x 2 Cg; (Q) := argminfg(x) +Q(Ax) : x 2 Cgand regard ' and as mappings from KC into the extended reals and the set of allclosed convex subsets of IRm, respectively.In this paper we develop a sensitivity analysis for the mappings ' and at some givenfunction Q�. The stochastic programming origin of the model (1.1) takes a back seatand our results are stated in terms of general conditions on Q� and its perturbationsQ. We identify conditions such that the value function ' has �rst and second orderdirectional derivatives and the solution-set mapping is directionally di�erentiableat Q� into admissible directions. Here, admissibility means that the direction belongsto the radial tangent cone to KC at Q�, i.e.,T r(KC ;Q�) = f�(Q�Q�) : Q 2 KC ; � > 0g;ensuring that the di�erence quotients are well-de�ned. For v belonging to T r(KC ;Q�)the Gateaux directional derivatives of ' and at Q� and (Q�; �x), �x 2 (Q�), respec-tively, are de�ned as'0(Q�; v) = limt!0+ 1t ('(Q� + tv)� '(Q�));'00(Q�; v) = limt!0+ 1t2 ('(Q� + tv)� '(Q�)� t'0(Q�; v)); 0(Q�; �x; v) = limt!0+ 1t ( (Q� + tv)� �x);if the limits exist. The third limit is understood in the sense of (Painlev�e-Kuratowski)set convergence (e.g. [2]). Recall that the lower and upper set limits of a family (St)t>0of subsets of a metric space (X; d) are de�ned aslim inft!0+ St = fx 2 X : limt!0+ d(x; St) = 0g;lim supt!0+ St = fx 2 X : lim inft!0+ d(x; St) = 0g:Both sets are closed and the lower set limit is contained in the upper limit. If bothlimits coincide, the family (St)t>0 is said to converge and its limit set is denoted

Page 4: Differential Stability of Two-Stage Stochastic Programs

4 D. DENTCHEVA AND W. R�OMISCHby limt!0+St. For sequences of sets (Sn)n2IN the de�nitions of set limits are modi�edcorrespondingly.We also derive conditions implying that the limits de�ning the directional deriva-tives exist uniformly with respect to directions v belonging to compact subsets ofcertain functional spaces. The limits are then called (�rst or second order) Hadamarddirectional derivatives and semiderivatives for set-valued maps, respectively. The cor-responding directional derivatives are de�ned on tangent cones to the cone of convexfunctions in certain functional spaces. For more information on concepts of directionaldi�erentiability and multifunction di�erentiability we refer to [4], [33] and to [2], [3],[23], [25], respectively.Let us �x some notations used throughout the paper. k � k and h�; �i denote thenorm and scalar product, respectively, in some Euclidean space IRn; B(x; r) denotesthe open ball around x 2 IRn with radius r > 0; d(x;D) denotes the distance ofx 2 IRn to the set D � IRn; for a real-valued function f on IRn, rf denotes itsgradient in IRn and the (n; n)-matrix r2f its Hessian; if f is locally Lipschitziannear x 2 IRn, @f(x) denotes the Clarke subdi�erential of f at x; f 0(x; d) denotes thedirectional derivative of f at x in direction d if it exists; for x 2 C, T (C;x) denotes thetangent cone to C at x, i.e., T (C;x) = lim inft!0+ 1t (C �x) = clf�(y�x) : y 2 C; � > 0g,where cl stands for closure; for x 2 C, � 2 T (C;x), T 2(C;x; �) denotes the secondorder tangent set to C at x in direction �, i.e., T 2(C;x; �) = lim inft!0+ 1t2 (C � x � t�)(note that T 2(C;x; �) is closed and convex; see [10], [6] for further properties).In our paper, we use the following linear metric spaces of real-valued functions onIRs: The space C0(IRs) of continuous functions on IRs equipped with the distanced1(f; ~f) = 1Xn=1 2�n kf � ~fk1;n1 + kf � ~fk1;n , wherekfk1;r = maxkyk�r jf(y)j, for f; ~f 2 C0(IRs) and r > 0;the space C0;1(IRs) of locally Lipschitzian functions on IRs with the metricdL(f; ~f) = 1Xn=1 2�n kf � ~fk1;n + kf � ~fkL;n1 + kf � ~fk1;n + kf � ~fkL;n , wherekfkL;r = supn jf(y)� f(~y)jky � ~yk : kyk � r; k~yk � r; y 6= ~yo;= supfkzk : z 2 @f(y); kyk � rg; for f; ~f 2 C0;1(IRs) and r > 0;the space C1(IRs) of continuously di�erentiable functions on IRs with the metricd(f; ~f) = d1(f; ~f)+d1(rf;r ~f), f; ~f 2 C1(IRs), and the space C1;1(IRs) of functionsin C1(IRs) whose gradients are locally Lipschitzian on IRs equipped with the distanced(f; ~f) = d1(f; ~f) + d1(rf;r ~f) + dL(rf;r ~f), f; ~f 2 C1;1(IRs).The sensitivity analysis of the mappings ' and is carried out by exploitingstructural properties of the optimization model (1.1). We obtain novel di�erentiabilityproperties of solution sets and extend our earlier results on directional di�erentiabilityof optimal values in [12] considerably. As one might expect, the basic ingredients ofour analysis are a Lipschitz continuity result for solution sets with respect to thedistance in C0;1(IRs) (Theorem 2.3) and a quadratic growth condition near solutionsets (Theorem 2.7). Both theorems extend earlier results in [27] to more general

Page 5: Differential Stability of Two-Stage Stochastic Programs

DIFFERENTIAL STABILITY OF TWO-STAGE STOCHASTIC PROGRAMS 5situations for the �rst stage costs g and constraint set C. All results in the paperapply to the linear-quadratic case, i.e., to linear or convex quadratic g and polyhedralC. Indeed, all results are formulated as general as possible and most of them areaccompanied by illustrative examples. The second order analysis of ' in Section 3utilizes some ideas from [31] and [32], but its proof is entirely di�erent and its Gateauxdi�erentiability part is valid for nondi�erentiable directions (Theorem 3.4). It isalso elaborated that the Hadamard directional di�erentiability properties require theC0-topology for the �rst order result and the C1-topology for the second order one(Theorem 3.8), while the C1;1-topology is needed for the semidi�erentiability of thesolution-set mapping (Theorem 4.9). All results on di�erentiability properties of in Section 4 are new and do not follow from recent sensitivity results (e.g. [5], [8],[7], [17], [32]; see also the survey [8] for further references and Remark 4.4 for a moredetailed discussion).The results of Sections 3 and 4 have direct implications to asymptotic propertiesof values and solution sets of two-stage stochastic programs when applying (smooth)nonparametric estimation procedures to approximate Q�. For a discussion of some ofthe related aspects we refer to the brief exposition in Remark 4.11. Further applica-tions to asymptotics are beyond the scope of this paper and will be done elsewhere.2. Basic directional properties. The �rst step in our analysis of directionalproperties consists in establishing results on the lower Lipschitz continuity of andon the directional uniform quadratic growth of the objective near its solution set.Both results become important for our method of deriving directional di�erentiabilityproperties for the optimal value function ' and the solution set mapping at somegiven expected recourse function Q�. Their proofs are based on a decomposition ofthe program minfg(x) +Q(Ax) : x 2 Cg;(2.1)with Q belonging to KC , into two auxiliary problems. The �rst one is a convexprogram with decisions taken from A(C) and the second represents a parametricconvex program which does not depend on Q.Proposition 2.1. Let Q 2 KC and (Q) be nonempty. Then we have'(Q) = inff�(y) +Q(y) : y 2 A(C)g = �(Ax) +Q(Ax); for any x 2 (Q); and (Q) = �(Y (Q)); whereY (Q) := argminf�(y) +Q(y) : y 2 A(C)g;�(y) := inffg(x) : x 2 C;Ax = yg; and�(y) := argminfg(x) : x 2 C;Ax = yg (y 2 A(C)):Moreover, � is convex on A(C) and dom � is nonempty.Proof. Let �x 2 (Q). Then we have'(Q) = g(�x) +Q(A�x) � �(A�x) +Q(A�x) � inff�(y) +Q(y) : y 2 A(C)g:For the converse inequality, let " > 0 and �y 2 A(C) be such that�(�y) +Q(�y) � inff�(y) +Q(y) : y 2 A(C)g+ "2 :Then there exists a �x 2 C such that A�x = �y and g(�x) � �(�y) + "2 . Hence,'(Q) � g(�x) +Q(A�x) � �(�y) +Q(�y) + "2� inff�(y) +Q(y) : y 2 A(C)g+ ":

Page 6: Differential Stability of Two-Stage Stochastic Programs

6 D. DENTCHEVA AND W. R�OMISCHSince " > 0 is arbitrary, the �rst statement has been shown. In particular, x 2 �(Ax)and Ax 2 Y (Q) for any x 2 (Q) . Hence, it holds that (Q) � �(Y (Q)). Conversely,let x 2 �(Y (Q)). Then x 2 �(y) for some y 2 Y (Q). Thus Ax = y and g(x) = �(y) =�(Ax) implyingg(x) +Q(Ax) = �(Ax) +Q(Ax) = inff�(y) +Q(y) : y 2 A(C)g= '(Q) and x 2 (Q):Since the convexity of � is immediate, the proof is complete.In the following, it will turn out that Lipschitzian properties of the solution setmapping y 7! �(y) and a quadratic growth property of g near �(y) are essential. Forthe linear-quadratic case we are in a comfortable situation in this respect. Namely,we have the followingProposition 2.2. Let g be linear or convex quadratic, C be convex polyhedraland assume dom� to be nonempty. Then � is a polyhedral multifunction which isHausdor� Lipschitzian on its domain dom � = A(C), i.e., there exists a constantL > 0 such that dH(�(y); �(~y)) � Lky � ~yk; for all y; ~y 2 A(C);where dH denotes the (extended) Hausdor� distance on subsets of IRm.Moreover, for each r > 0 there exists a constant �(r) > 0 such thatg(x) � �(Ax) + �(r)d(x; �(Ax))2 ; for all x 2 C \B(0; r):(Here � and � are de�ned as in Proposition 2.1).Proof. The Lipschitz property of � is shown in [19], Theorem 4.2. To prove thesecond statement, let g be of the form g(x) = hHx; xi+ hc; xi, where H is symmetric,positive semide�nite and c 2 IRm. For each y 2 A(C) we �x some z(y) 2 �(y). Anelementary characterization of solution sets to convex quadratic programs with linearconstraints yields that�(y) = fx 2 C : Ax = y;Hx = Hz(y); hc; xi = hc; z(y)ig:Due to the Lipschitz behaviour of convex polyhedra (cf. [37]), there exists a constantL� > 0 such thatd(x; �(y)) � L�(kHx�Hz(y)k+ jhc; xi � hc; z(y)ij);for all y 2 A(C) and x 2 C with Ax = y. Using the decomposition H = H 12H 12 ,where H 12 denotes the square root of H , and the representation hc; xi � hc; z(y)i =g(x)� �(y)� kH 12 xk2 + kH 12 z(y)k2, one arrives at the estimated(x; �(y)) � L�(kH 12 k(1 + kxk+ kz(y)k)kH 12 (x� z(y))k+ g(x)� �(y))for all y 2 A(C) and x 2 C with Ax = y.Now, let r > 0 and let us �x some element �x 2 C \ B(0; r) and a correspond-ing z(A�x) 2 �(A�x). For each y 2 A(C) we now select z(y) 2 �(y) such thatkz(y) � z(A�x)k = d(z(A�x); �(y)). Since � is Hausdor� Lipschitzian on A(C), thisimplies kz(y)� z(A�x)k � LkA�x� yk for all y 2 A(C). Hence, there exists a constant

Page 7: Differential Stability of Two-Stage Stochastic Programs

DIFFERENTIAL STABILITY OF TWO-STAGE STOCHASTIC PROGRAMS 7K(r) > 0 such that kz(Ax)k � K(r) for all x 2 C\B(0; r). Thus our estimate contin-ues to d(x; �(Ax))2 � L̂(r)(kH 12 (x�z(Ax))k2+(g(x)��(Ax))2) for all x 2 C\B(0; r)and some constant L̂(r) > 0. Furthermore, the equationg�12(x+ z(y))� = 12g(x) + 12g(z(y))� 14kH 12 (x� z(y))k2implies kH 12 (x � z(y))k2 � 2(g(x) � �(y)), for all y 2 A(C), x 2 C with Ax = y.Therefore, we �nally obtaind(x; �(Ax))2 � L̂(r)(2(g(x) � �(Ax)) + (g(x) � �(Ax))2)� L̂(r)maxf2;K(r)g(g(x)� �(Ax))for all x 2 C \ B(0; r), where K(r) := supx2C\B(0;r)(g(x) � �(Ax)).Due to the above proposition, the main results in this section apply to the linear-quadratic case. Although this case represents the main application of our results, theassumptions of the following theorems are formulated in terms of general conditionson the mapping � in order to widen the range of applications. The �rst theoremstates (lower) Lipschitz continuity of at Q� and supplements Theorem 2.4 in [27].Theorem 2.3. Let Q� 2 KC, (Q�) be nonempty, bounded and Q� be stronglyconvex on some open, convex neighbourhood of A (Q�). Let �x 2 (Q�) and assumethat there exist a constant L > 0 and a neighbourhood U of �y with f�yg = A (Q�)such that d(�x; �(y)) � Lk�y � yk; for all y 2 A(C) \ U:Then there exist constants L̂ > 0, � > 0 and r > 0 such thatd(�x; (Q)) � L̂kQ�Q�kL;rwhenever Q 2 KC and kQ�Q�kL;r < �.Proof. We may assume that U is open, convex and that Q� is strongly convexon U . Let V be an open, convex, bounded subset of IRm such that (Q�) � Vand A(V ) � U . It follows from Proposition 2.3 in [27] (where a slightly di�erentterminology is used) that there exists a constant � > 0 such that ; 6= (Q) � Vwhenever Q 2 KC andsupfkzk : z 2 @(Q�Q�)(y); y 2 clA(V )g < �:Let r > 0 be chosen such that cl A(V ) � �B(0; r). Hence, we have ; 6= (Q) � Vwhenever Q 2 KC , kQ � Q�kL;r < �. Then Proposition 2.1 yields the relation (Q) = �(Y (q)), where Y (Q) = argminf�(y) + Q(y) : y 2 A(C)g. Since Q� isstrongly convex on U , there exists a constant � > 0 such that�ky � �yk2 � �(y) +Q�(y)� (�(�y) +Q�(�y)); for all y 2 U:Let Q 2 KC with kQ�Q�kL;r < � and let ~y 2 Y (Q). Since y belongs to A(V ) � U ,we obtain�k~y � �yk2 � �(~y) +Q�(~y)� (�(�y) +Q�(�y)) + �(�y) +Q(�y)� (�(~y) +Q(~y))= (Q�Q�)(�y)� (Q�Q�)(~y)

Page 8: Differential Stability of Two-Stage Stochastic Programs

8 D. DENTCHEVA AND W. R�OMISCHand, hence, k~y � �yk � 1� (Q�Q�)(�y)� (Q�Q�)(~y)k�y � ~yk � 1�kQ�Q�kL;r:The proof can now be completed as follows. LetQ 2 KC be such that kQ�Q�kL;r < �.Then d(�x; (Q)) = d(�x; �(Y (Q))) � supy2Y (Q) d(�x; �(y))� L supy2Y (Q) k�y � yk � L� kQ�Q�kL;r:Remark 2.4. The proof shows that a Lipschitz modulus of can be chosen asthe quotient of a Lipschitz constant to � and a strong convexity constant to Q�.>From the proof it is immediate that replacing the local Lipschitz condition on �by stronger conditions likesupx2�(�y) d(x; �(y)) � Lk�y � yk ordH(�(�y); �(y)) � Lk�y � yk; for all y 2 A(C) \ U;leads to corresponding stronger Lipschitz continuity properties of solution sets. Be-cause of Proposition 2.2, all of this applies to the linear-quadratic case. However, it isworth mentioning that the theorem also applies to more general problems such thatthe corresponding solution sets �(y) enjoy Lipschitzian properties. Conditions ensur-ing Lipschitz behaviour of � can be derived from stability results for the correspondingparametric generalized equation0 2 rL(x; �; y) +NC�IRs(x; �)(2.2)which describes the �rst order necessary optimality condition. Here L(x; �; y) :=g(x) + �T (Ax � y) is the Lagrangian function, rL(x; �; y) = � rg(x) +AT�Ax� y �,where g is assumed to be continuously di�erentiable, and NC�IRs is the normal conemap of convex analysis. Such stability results are presently available for broad classesof parametric generalized equations (e.g. [17], [22], [24]). A typical recent result inthis direction, which applies to our situation for twice continuously di�erentiable g,is Theorem 5.1 in [22]. It says that the solution set mapping of the parametric gener-alized equation (2.2) is pseudo-Lipschitzian around (�x; ��; �y) if the adjoint generalizedequation 0 2 r2L(�x; ��; �y)w� +D�NC�IRs(�x; ��;�rL(�x; ��; �y))(w�)(2.3)has only the trivial solution w� = 0.Here D�NC�IRs(�x; ��;�rL(�x; ��; �y)) is the Mordukhovich coderivative ([22]) of thenormal cone multifunction at the point (�x; ��;�rL(�x; ��; �y)) belonging to the graphof NC�IRs . Translating this into our framework, we obtain that the mapping � ispseudo-Lipschitzian around (�x; �y) if the following two conditions are satis�ed:(a) There exists an element x̂ belonging to the relative interior of C such thatAx̂ = �y (Slater condition);

Page 9: Differential Stability of Two-Stage Stochastic Programs

DIFFERENTIAL STABILITY OF TWO-STAGE STOCHASTIC PROGRAMS 9(b) the equations Aw�1 = 0 and 0 2 r2g(�x)w�1 + ATw�2 +D�NC(�x; ��;�rg(�x) �AT ��)(w�1) have only the trivial solution w�1 = 0, w�2 = 0. (Here (�x; ��) is asolution of (2.2) for y = �y.)The next examples show that the theorem applies to instances of two-stagestochastic programs with nonunique solutions and with nonpolyhedral convex con-straint sets C.Example 2.5. We revisit Example 1.1 and obtain with the notations of Proposition2.1 that A(C) = [� 12 ; 12 ]2, �(y) = 14 (y1�y2), Y (Q�) = argminf 14 (y1�y2)+y21+y22+ 12 :y 2 A(C)g = f(� 18 ; 18 )g and �(y) = f(u; u� y2; u � y1) : u 2 IRg \ C for y 2 A(C).Hence, Y (Q�) is a singleton, but (Q�) = �(Y (Q�)) forms a line segment. Moreover,� is Hausdor� Lipschitzian on A(C) and Theorem 2.3 applies.Example 2.6. In (1.1) { (1.3) let m = 2, s = 1, g(x) � 0, A = (1; 0), q = (1; 1),W = (1;�1), � be the uniform distribution on [� 12 ; 12 ] and C = f(x1; x2) 2 IR2 : x22 �x1g. Then we have ~Q(t) = jtj, Q�(y) = RIR j! � yj�(d!) = � y2 + 14 ; y 2 [� 12 ; 12 ]jyj otherwise , (Q�) = f(0; 0)g and Q� is strongly convex on (� 12 ; 12 ). For y 2 A(C) = IR+ we have�(y) = fx 2 C : Ax = yg = f(y; x2) 2 IR2 : x22 � yg = fyg � [�py;py]and, hence d((0; 0); �(y)) = y for all y 2 IR+. Thus Theorem 2.3 applies for �x = (0; 0).Example 2.9 shows that Theorem 2.3 gets lost if Q� fails to be strongly convexon some neighbourhood of A (Q�). Our next result establishes a su�cient conditionfor the uniform quadratic growth near solution sets.Theorem 2.7. Let Q� 2 KC, (Q�) be nonempty, bounded and Q� be stronglyconvex on some open convex neighbourhood U of A (Q�). Assume that there existsa constant L > 0 such thatdH(�(y); �(~y)) � Lky � ~yk; for all y; ~y 2 A(C);and, for each r > 0 there exists a constant �(r) > 0 such thatg(x) � �(Ax) + �(r)d(x; �(Ax))2 ; for all x 2 C \B(0; r):Then, for some open, bounded neighbourhood V of (Q�) and each v 2 T r(KC ;Q�),there exist constants c > 0 and � > 0 such that the following uniform growth conditionholds: g(x) + (Q� + tv)(Ax) � '(Q� + tv) + cd(x; (Q� + tv))2;for all x 2 C \ V and t 2 [0; �).Proof. Let v 2 T r(KC ; Q�) and V be an open, bounded subset of IRm suchthat (Q�) � V and A(V ) � U . As in Theorem 2.3 we choose � > 0 such that; 6= (Q� + tv) � V and, in addition, that Q� + tv is strongly convex on U for allt 2 [0; �) (with a uniform constant � > 0). For each t 2 [0; �) Proposition 2.1 thenyields that (Q� + tv) = �(yt), where yt is the unique minimizer of the stronglyconvex function � + Q� + tv on A(C) and, moreover, we have �ky � ytk2 � �(y) +(Q� + tv)(y) � '(Q� + tv), for all y 2 A(C) \ U . Now, we choose r > 0 such thatV � B(0; r) and continue for each x 2 C \ V and t 2 [0; �) as follows:d(x; (Q� + tv))2 = d(x; �(yt))2� 2(d(x; �(Ax))2 + dH(�(Ax); �(yt))2)

Page 10: Differential Stability of Two-Stage Stochastic Programs

10 D. DENTCHEVA AND W. R�OMISCH� 2� 1�(r) (g(x)� �(Ax)) + L2kAx� ytk2�� 2� 1�(r) (g(x)��(Ax))+L2� (�(Ax)+(Q�+tv)(Ax)�'(Q�+tv))�� 2maxn 1�(r) ; L2� o(g(x) + (Q� + tv)(Ax) � '(Q� + tv))Putting c�1 = 2maxf 1�(r) ; L2� g completes the proof.The following examples show that the quadratic growth condition gets lost evenfor the original problem, i.e. t = 0, if either the Lipschitz condition for � or the strongconvexity property for Q� are violated.Example 2.8. Consider again the set-up of Example 2.6. Since it holds thatdH(�(y); �(0)) = (y2+y) 12 , for all y 2 IR+ = A(C), � is not Hausdor� Lipschitzian onA(C). Supposed there exists a neighbourhood V of (Q�) = f(0; 0)g and a constant% > 0 such that the growth condition% d(x; (Q�))2 = %kxk2 � Q�(x1)� '(Q�) = x21; for all x 2 C \ V;is satis�ed. Since the sequence (( 1n ; 1pn )) belongs to C\V for su�ciently large n 2 IN ,this would imply %( 1n2 + 1n ) � 1n2 for large n, which is a contradiction.Example 2.9. In (1.1) { (1.3) let m = s = 1, g(x) � 0, A = 1, C = IR, q = (1; 1),W = (1;�1) and � be the probability distribution on IR having the densityf�(z) = � jzj; z 2 [�1; 1]0 otherwiseThen Q�(y) = RIR j! � yj�(d!) = � 13 jyj3 + 23 ; y 2 [�1; 1]jyj otherwise , (Q�) = f0g, and there is no neighbourhood of (Q�) where Q� is strongly convex.It is clear that the quadratic growth condition fails to hold, since the inequality%x2 � Q�(x) � '(Q�) = 13 jxj3 cannot be true for some % > 0 and all x belonging tosome neighbourhood of x = 0.With the linear function v(x) = �x (x 2 IR) we obtain for all t 2 [0; 1] that (Q� +tv) = fptg (cf. Example 3.7). Hence, the lower Lipschitz property of fails to holdas well. Since the strong convexity and later also the strict convexity of the expectedrecourse function Q� (on certain convex subsets of IRs) form essential conditions inmost of our results, we record a theorem (Theorem 2.2 in [30]) that provides a handycriterion to check these properties for problem (1.1) { (1.3).Proposition 2.10. Let V � IRs be open convex and assume (A1), (A3). Con-sider the following conditions:(A2)� intMD = ft 2 IRs : W T t < qg 6= ;;(A4) � is absolutely continuous on IRs;(A4)� � satis�es (A4) and there exist a density f� for � and a constant� > 0 such that f�(z) � � whenever d(z; V ) � �:Then (A2)� and (A4) imply that Q� is strictly convex on V if V is a subset of thesupport of �, and (A2)�, (A4)� imply that Q� is strongly convex on V .

Page 11: Differential Stability of Two-Stage Stochastic Programs

DIFFERENTIAL STABILITY OF TWO-STAGE STOCHASTIC PROGRAMS 11In addition, it is shown in [30] that under (A1) { (A4) the condition (A2)� isalso necessary for the strict convexity of Q�. For extended simple recourse models(i.e. W = (H;�H) with some nonsingular (s; s)-matrix H) (A2)� is equivalent toq+ + q� > 0 (componentwise), where q = (q+; q�) and q+; q� 2 IRs. This may beused to check strict or strong convexity properties in the Examples 2.6 and 2.9.3. Directional derivatives of optimal values. In this section, we study �rstand second order directional di�erentiability properties of the optimal value function' on its domain KC . We begin with the �rst-order analysis and show that ' as amapping from KC to the extended reals is Hadamard directionally di�erentiable atsome given expected recourse function Q� 2 KC . Here KC is regarded as a subset ofC0(IRs). Recall that ' is Hadamard directionally di�erentiable at Q� on KC i� forall sequences (vn) converging to some v in C0(IRs) and all sequences tn ! 0+ suchthat the elements Q� + tnvn belong to KC the limit'0(Q�; v) = limn!1 1tn ('(Q� + tnvn)� '(Q�))exists. Since the condition Q� + tnvn 2 KC means that vn = 1tn (Qn � Q�) forsome Qn 2 KC , the limit v belongs to the tangent cone T (KC ;Q�) to KC at Q� inC0(IRs). In [35], [36] this property is also called Hadamard directional di�erentiabilitytangentially to KC .Proposition 3.1. Let Q� 2 KC and assume that (Q�) is nonempty, bounded.Then ' is Hadamard directionally di�erentiable at Q� on KC and it holds for allv 2 T (KC;Q�), '0(Q�; v) = minfv(Ax) : x 2 (Q�)g:If, in addition, Q� is strictly convex on some open convex neighbourhood of A (Q�),we have '0(Q�; v) = v(�y); where f�yg = A (Q�):Proof. Arguing similarly as in the proof of Propostion 2.1 in [26] there exists aneighbourhood N of Q� in C0(IRs) such that (Q) is nonempty for all Q 2 KC \N .Let (tn) and (vn) be sequences such that tn ! 0+, vn ! v in C0(IRs) and Q� + tnvnbelongs to KC for all n 2 IN . Then Q�+ tnvn 2 KC \N for su�ciently large n 2 IN .Let xn 2 (Q� + tnvn) for those n 2 IN . Since is Berge upper semicontinuous atQ� ([26]), the sequence (xn) has an accumulation point x 2 (Q�) and we obtainlim supn!1 1tn ('(Q� + tnvn)� '(Q�))� lim supn!1 1tn (g(xn) + (Q� + tnvn)(Axn)� g(xn)�Q�(Axn)) � v(Ax);where the last inequality follows from the uniform convergence of (vn) to v on boundedsubsets of IRs. In order to show the reverse inequality for lim inf, let x 2 (Q�). Thenlim infn!1 1tn ('(Q� + tnvn)� '(Q�))� lim infn!1 1tn (g(x) + (Q� + tnvn)(Ax) � g(x)�Q�(Ax)) = v(Ax):

Page 12: Differential Stability of Two-Stage Stochastic Programs

12 D. DENTCHEVA AND W. R�OMISCHThis completes the proof of the �rst part. The second part is an immediate conclusion,since A (Q�) is a singleton whenever Q� is strictly convex on some of its open, convexneighbourhoods.The preceding result can also be proved by using the methodology of Theo-rem 6.4.1 in [28]. There the compactness of the constraint set is assumed and Gateauxdirectional di�erentiability of ' at Q� together with its Lipschitz continuity is shown.Here we prefer a direct two-sided argument, which will also be used in the subsequentsecond order analysis of '. Namely, we will �rst derive an upper bound for the secondorder Hadamard directional derivative of ' at some Q� 2 KC , where KC is equippedwith the C0;1-topology. Secondly, we identify conditions implying that the upperbound coincides with the Gateaux directional derivative of ' at Q� for all directionstaken from T r(KC ;Q�).Lemma 3.2. Let y 2 IRs, Q� 2 KC, tn ! 0+, (Qn) be a sequence in KC suchthat vn := 1tn (Qn � Q�) ! v in C0;1(IRs) and let (�n) be a sequence converging to �in IRs. Then we have lim supn!1 1tn (vn(y + tn�n)� vn(y)) � max�2@v(y)h�; �i.Proof. Each function vn is locally Lipschitzian on IRs and, hence, Lebourg's meanvalue theorem for Clarke's subdi�erential ([9]) implies the existence of elements ~ynbelonging to the segments [y; y + tn�n] such that1tn (vn(y + tn�n)� vn(y)) 2 fh�; �ni : � 2 @vn(~yn)g:The convergence vn ! v in C0;1(IRs) implies thatsupfk�k : � 2 @(vn � v)(y); kyk � rg �!n!1 0holds for any r > 0. This yieldsdH(@vn(~yn); @v(~yn)) � supfk�k : � 2 @(vn � v)(~yn)g �!n!1 0:Here dH denotes the Hausdor� distance and the inequality is a consequence of generalproperties of the subdi�erential (cf. Lemma 2.1 in [27]). Hence, there exist elements~�n belonging to @v(~yn) such that1tn (vn(y + tn�n)� vn(y)) � k�nkdH(@vn(~yn); @v(~yn)) + h~�n; �niand, for some ~� 2 @v(y),lim supn!1 1tn (vn(y + tn�n)� vn(y)) � lim supn!1 h~�n; �ni = h~�; �i � max�2@v(y)h�; �i;where the upper semicontinuity of @v(�) is used. This completes the proof.Proposition 3.3. Let Q� 2 KC and assume that (Q�) is nonempty, bounded.Let g be twice continuously di�erentiable, Q� be strictly convex on some open convexneighbourhood of A (Q�) and twice continuously di�erentiable at �y, where f�yg =A (Q�). Let �x 2 (Q�), tn ! 0+ and (Qn) be a sequence in KC such that vn :=1tn (Qn �Q�)! v in C0;1(IRs). Thenlim supn!1 1t2n ('(Q� + tnvn)� '(Q�)� tn'0(Q�; vn))� inffhrg(�x); zi+ hrQ�(�y); Azi+ 12hr2g(�x); �; �i+12hr2Q�(�y)A�;A�i + max�2@v(�y)h�; A�i : � 2 S(�x); z 2 T 2(C; �x; �)g;

Page 13: Differential Stability of Two-Stage Stochastic Programs

DIFFERENTIAL STABILITY OF TWO-STAGE STOCHASTIC PROGRAMS 13where S(�x) := f� 2 T (C; �x) : hrg(�x); �i+ hrQ�(�y); A�i = 0g, T (C; �x) is the tangentcone to C at �x and T 2(C; �x; �) the second order tangent set to C at �x in direction �.Proof. Let � 2 S(�x) and z 2 T 2(C; �x; �). Then there exists a sequence (zn) suchthat zn ! z and �x+ tn�+ t2nzn 2 C for all n 2 IN . Using Proposition 3.1, this allowsfor the following estimate'(Q� + tnvn)� '(Q�)� tn'0(Q�; vn)� g(�x+ tn� + t2nzn) +Q�(A(�x + tn� + t2nzn)) + tnvn(A(�x+ tn� + t2nzn))� g(�x)�Q�(A�x)� tnvn(A�x)= [g(�x+ tn� + t2nzn)� g(�x)� tnhrg(�x); �i]+[Q�(A(�x + tn� + t2nzn))�Q�(A�x)� tnhrQ�(A�x); A�i]+tn[vn(A(�x + tn� + t2nzn))� vn(A�x)]:After dividing by t2n and using Lemma 3.2 the limes superior as n!1 of the right-hand side can be bounded above byhrg(�x); zi+12 hr2g(�x)�; �i+hrQ�(A�x); Azi+12hr2Q�(A�x)A�;A�i+ max�2@v(A�x)h�; A�i:Taking the in�mum on the right-hand side yields the assertion.We notice that the upper second order Hadamard directional derivativelim supn!1 1t2n ('(Q� + tnvn) � '(Q�) � tn'0(Q�; vn)) is nonpositive, since ' is concaveon KC and, hence, the inequality '(Q� + tnvn) � '(Q�) = '(Qn) � '(Q�) �'0(Q�;Qn � Q�) = tn'0(Q�; vn) is valid. We also note that the upper bound isnonpositive, since (0; 0) belongs to S(�x)�T 2(C; �x; 0) = S(�x)�T (C; �x). Next we con-sider particular perturbations Qn of Q�, namely, Qn := Q� + �tn(Q�Q�) for someQ 2 KC , � > 0 and su�ciently large n 2 IN . Then vn = �(Q�Q�) 2 T r(KC ;Q�).The next result provides conditions implying that the second order (Gateaux) direc-tional derivative exists and coincides with the upper bound of the previous proposition.To state the result we need the notion of second order regularity (cf. [6]). The con-straint set C is called second order regular at �x 2 C if for any direction � 2 T (C; �x) andany sequence xn 2 C of the form xn = �x+ tn� + t2nrn where tn ! 0+ and rn being asequence in IRm satisfying tnrn ! 0 it holds that limn!1 d(rn; T 2(C; �x; �)) = 0. Forexample, C is second order regular at �x 2 C if 0 2 T 2(C; �x; �) for every � 2 T (C; �x)(cf. [6]). In particular, a polyhedral (convex) set C is second order regular at any�x 2 C.Theorem 3.4. Let Q� 2 KC and assume that (Q�) is nonempty, bounded.Let g be twice continuously di�erentiable, Q� be strictly convex on some open convexneighbourhood of A (Q�) and twice continuously di�erentiable at �y, where f�yg =A (Q�). Let �x 2 (Q�), v 2 T r(KC ;Q�) and assume that(i) d(�x; (Q� + tv)) = O(t) for small t > 0, and(ii) C is second order regular at �x.Then the second order Gateaux directional derivative of ' at Q� in direction vexists and it holds that'00(Q�; v) = limt!0+ 1t2 ('(Q� + tnv)� '(Q�)� t'0(Q�; v))= inf n12hr2g(�x)�; �i + 12hr2Q�(�y)A�;A�i + v0(�y;A�) + b(�) : � 2 S(�x)o(3.1)

Page 14: Differential Stability of Two-Stage Stochastic Programs

14 D. DENTCHEVA AND W. R�OMISCHwhere b(�) = inffhrg(�x); zi + hrQ�(�y); Azi : z 2 T 2(C; �x; �)g is nonnegative andconvex on S(�x). Moreover, the in�mum in (3.1) is attained at some �� 2 S(�x) havingthe property that '00(Q�; v) = 12v0(�y;A��) + 12b(��).(Here S(�x) and T 2(C; �x; �) are de�ned as in the previous result, v0(�y; �) is thedirectional derivative of v at �y in direction � and O(t) denotes a real quantity suchthat 1t jO(t)j is bounded as t! 0+.)Proof. (i) implies that there exist constants L > 0, � > 0 and elements x(t) 2 (Q� + tv) such that kx(t) � �xk � Lt for all t 2 (0; �). Now take a sequence (tn)tending to 0+ in such a way thatlim inft!0+ 1t2 ('(Q� + tv)� '(Q�)� t'0(Q�; v))= limn!1 1t2n ('(Q� + tnv)� '(Q�)� tn'0(Q�; v))and that �n := 1tn (x(tn)� �x) �!n!1 ��. The latter is possible since k 1tn (x(tn)� �x)k � Lfor n 2 IN su�ciently large. Then �� 2 T (C; �x) and Proposition 3.1 yieldsv(A�x) = '0(Q�; v) = limn!1 1tn ('(Q� + tnv)� '(Q�))= limn!1 1tn (g(�x+ tn�n) + (Q� + tnv)(A(�x + tn�n))� g(�x)�Q�(A�x))= hrg(�x); ��i+ hrQ�(A�x); A��i+ v(A�x):This implies �� 2 S(�x). We put rn = 1tn (�n � ��) and xn = x(tn) = �x+ tn�� + t2nrn. Byexpanding g and Q� and using Proposition 3.1 we obtain'(Q� + tnv)� '(Q�)� tn'0(Q�; v)= g(xn) +Q�(Axn) + tnv(Axn)� g(�x)�Q�(A�x)� tnv(A�x)= hrg(�x); xn � �xi+ 12hr2g(�x)(xn � �x); xn � �xi+hrQ�(A�x); Axn � �x)i+ 12hr2Q�(A�x)(A(xn � �x)); A(xn � �x)i+tn(v(Axn)� v(A�x)) + o(kxn � �xk2)= t2n(hrg(�x); rni+ 12 hr2g(�x)��; ��i) + t2n(hrQ�(A�x); Arn)i+12hr2Q�(A�x)A��; A��i) + tn(v(Axn)� v(A�x)) + o(t2n):Here we used that o(kxn � �xk2) = o(t2n) where o(tk) denotes a real quantity havingthe property 1tk o(t)! 0 as t! 0+ (k 2 IN).Since C is second order regular at �x, there exists a sequence zn 2 T 2(C; �x; ��) suchthat limn!1 krn � znk = 0 and we get from the previous chain of equalities1t2n ('(Q� + tnv)� '(Q�)� tn'0(Q�; v))= hrg(�x); zni+ hrQ�(�y); Azni+ 12 hr2g(�x)��; ��i+12hr2Q�(�y)A��; A��i+ 1tn (v(�y + tnA�n)� v(�y)) + o(1)� b(��) + 12 hr2g(�x)��; ��i+ 12hr2Q�(�y)A��; A��i+ 1tn (v(�y + tnA�n)� v(�y)) + o(1):

Page 15: Differential Stability of Two-Stage Stochastic Programs

DIFFERENTIAL STABILITY OF TWO-STAGE STOCHASTIC PROGRAMS 15Using the fact that v is Hadamard directionally di�erentiable and Clarke regular ([9]),i.e., v0(�y; �) = max�2@v(�y)h�; �i, we obtainlim inft!0+ 1t2 ('(Q� + tv)� '(Q�)� t'0(Q�; v))� 12hr2g(�x)��; ��i+ 12hr2Q�(�y)A��; A��i+ v0(�y;A��) + b(��)� inf n12 hr2g(�x)�; �i+ 12hr2Q�(�y)A�;A�i + v0(�y;A�) + b(�) : � 2 S(�x)oProposition 3.3 implies that this lower bound for lim inft!0+ is also an upper bound forlim supt!0+ . Hence, the limit limt!0+ 1t2 ('(Q�+ tv)�'(Q�)� t'0(Q�; v)) exists and is equalto the in�mum subject to � 2 S(�x). Moreover, this in�mum is attained at �� 2 S(�x).The nonnegativity of b is due to the fact that the necessary optimality condition for(1.1) at �x yieldshrg(�x); zi+ hrQ�(�y); Azi � 0; for all z 2 T 2(C; �x; �); � 2 S(�x):The convexity of b follows from the property T 2(C; �x; ��+(1��)~�) � �T 2(C; �x; �)+(1� �)T 2(C; �x; ~�), for all �; ~� 2 T (C; �x) and � 2 [0; 1].For the remainder of the proof we put a(�) := v0(�y;A�) andB(�) := 12 hr2g(�x)�; �i+ 12hr2Q�(�y)A�;A�i; for all � 2 IRm:Since S(�x) is a (convex) cone, we have S(�x) = �S(�x) for any � > 0. Moreover, itholds T 2(C; �x; ���) = �T 2(C; �x; ��) and thus b(���) = �b(��) for any � > 0. Hence, weobtain 0 � f(�) := B(���) + a(���) + b(���)�B(��)� a(��)� b(��)= �2B(��) + (�� 1)(a(��) + b(��))�B(��); for all � > 0:In case of B(��) > 0, the quadratic function f vanishes at � = 1 with the propertyf 0(1) = 2B(��) + a(��) + b(��) = 0 and the �nal assertion is shown. If B(��) = 0, thefact that 0 � f(�) = (�� 1)(a(��) + b(��)) holds for any � > 0, implies a(��) + b(��) = 0.Thus '00(Q�; v) = 0 = 12 (a(��) + b(��)) and the proof is complete.The theorem extends our earlier work in [12] where essentially polyhedrality ofC is assumed. Compared to [12] the additional term b(:) enters the formula for'00(Q�; v). The convex function b(:) re ects second order properties of the constraintset C and vanishes if C is polyhedral. Next we state a more handy criterion implyingthat '00(Q�; v) exists for any direction v 2 T r(KC ;Q�).Corollary 3.5. Let Q� 2 KC and assume that (Q�) is nonempty, bounded.Let g be twice continuously di�erentiable, Q� be strongly convex on some open convexneighbourhood of A (Q�) and twice continuously di�erentiable at �y where f�yg =A (Q�). Let �x 2 (Q�) and assume that(i)0 there exist a constant L > 0 and a neighbourhood U of �y such thatd(�x; �(y)) � Lk�y � yk for all y 2 A(C) \ U , where�(y) := argminfg(x) : x 2 C;Ax = yg, y 2 A(C),(ii) C is second order regular at �x.

Page 16: Differential Stability of Two-Stage Stochastic Programs

16 D. DENTCHEVA AND W. R�OMISCHThen the second order Gateaux directional derivative of ' at Q� exists for any direc-tion v 2 T r(KC ;Q�) and the formula for '00(Q�; v) in Theorem 3.4 holds true.Moreover, conditions (i)0 and (ii) are satis�ed for any �x 2 (Q�) if C is polyhedraland g is linear or (convex) quadratic.Proof. Let v 2 T r(KC ;Q�). Theorem 2.3 then says that there exist constantsL̂ > 0, � > 0, r > 0 such thatd(�x; (Q� + tv)) � L̂kvkL;rt whenever kvkL;rt < �:Hence, the strong convexity of Q� and condition (i)0 imply that condition (i) of theprevious theorem is satis�ed and that the �rst part of the assertion is shown. If C ispolyhedral and g is linear or (convex) quadratic, (ii) is satis�ed and Proposition 2.2implies (i)0 to hold for any �x 2 (Q�) = �(�y).Let us consider two illustrative examples to provide some insight into the bene�tand limits of the previous results.Example 3.6. We revisit Example 2.6 and know that the general assumptions ofCorollary 3.5 and condition (i)0 are satis�ed for �x = (0; 0). Furthermore, it holds thatT (C; �x) = IR+ � IR andT 2(C; �x; �) =8<: IR2; �1 > 0fx1 2 IR : x1 � �22g � IR; �1 = 0 ; for any � 2 T (C; �x):Moreover, C is second order regular at �x (as can be seen from Proposition 4.1 in [6])and it holds b(�) = 0 for all � 2 IR2. Hence, Corollary 3.5 implies that '00(Q�; v)exists for any v 2 T r(KC ;Q�) and that '00(Q�; v) = 12v0(0; ��1), where �� = (��1; ��2) 2argminf�21 + v0(0; �1) : (�1; �2) 2 IR+ � IRg.Example 3.7. Here we revisit Example 2.9, and haveQ�(y) = 13 jyj3 + 23 ; for all jyj � 1; and (Q�) = f0g; '(Q�) = 23 :For the function v(x) = �x (x 2 IR) and t 2 [0; 1) we obtain'(Q� + tv) = inffQ�(x) � tx : x 2 IRg = 23(1� t 32 ); (Q� + tv) = argminfQ�(x) � tx : x 2 IRg = fptg:Then '0(Q�; v) = 0 and 1t2 ('(Q� + tv) � '(Q�) � '0(Q�; v)) = � 23 t� 12 . Hence, 'has no second order directional derivative at Q� in direction v. Note that there is noneighbourhood of �x = 0 where Q� is strongly convex.Finally, we aim at showing that ' is even second order Hadamard directionallydi�erentiable at Q� when equipping KC with a suitable topology. To this end weneed a certain counterpart of Lemma 3.2 for the corresponding limes inferior. Sincesuch a bound does not exist nonsmooth functions, it is a natural idea to consider thespace C1(IRs), to restrict ' to the subset KC \ C1 and to equip KC \ C1 with theC1-topology. Then we are able to show that the assumptions of Corollary 3.5 implythe second order Hadamard directional di�erentiability of ' at Q�.Theorem 3.8. Let Q� 2 KC\C1 and assume that (Q�) is nonempty, bounded.Let g be twice continuously di�erentiable, Q� be strongly convex on some open con-vex neighbourhood of A (Q�) and twice continuously di�erentiable at �y where f�yg =

Page 17: Differential Stability of Two-Stage Stochastic Programs

DIFFERENTIAL STABILITY OF TWO-STAGE STOCHASTIC PROGRAMS 17A (Q�). Let �x 2 (Q�) and assume the conditions (i)0 and (ii) of Corollary 3.5 tohold.Then the second order Hadamard directional derivative of ' at Q� exists in any di-rection v belonging to the tangent cone T (KC \C1;Q�) in C1(IRs), i.e., for any suchv, and all sequences tn ! 0+ and (Qn) in KC such that vn := 1tn (Qn � Q�) ! v inC1(IRs) the limit'00(Q�; v) = limn!1 1t2n ('(Q� + tnvn)� '(Q�)� tn'0(Q�; vn))exists, and it holds'00(Q�; v) = inf n12 hr2g(�x)�; �i+12 hr2Q�(�y)A�;A�i+hrv(�y); A�i+b(�) : � 2 S(�x)o:Proof. Let v 2 T (KC\C1;Q�), tn ! 0+ and (Qn) be a sequence in KC such thatvn = 1tn (Qn � Q�) ! v in C1(IRs). Condition (i)0 together with Theorem 2.3 thenimply that there exist constants L > 0, r > 0, n0 2 IN and elements xn 2 (Q�+tnvn)such that kxn � �xk � LtnkvnkL;r; for all n 2 IN; n � n0:Since the sequence (vn) converges in C1(IRs), the norms kvnkL;r are uniformly boundedand we have kxn � �xk = O(tn). As in the proof of Theorem 3.4 we select a sub-sequence of (tn), which is again denoted by (tn), tending to 0+ such that �n :=1tn (xn � �x) �!n!1 �� 2 S(�x). Analogously, we obtain for su�ciently large n:1t2n ('(Q� + tnvn)� '(Q�)� tn'0(Q�; vn))� b(��) + 12hr2g(�x)��; ��i+ 12hr2Q�(�y)A��; A��i+ 1tn (vn(�y + tnA�n)� vn(�y)) + o(1):Using the mean value theorem for vn we may continue with some �yn 2 [�y; �y + tnA�n]as follows: 1t2n ('(Q� + tnvn)� '(Q�)� tn'0(Q�; vn))� 12 hr2g(�x)��; ��i+ 12 hr2Q�(�y)A��; A��i+ hrvn(�yn); A�ni+ b(��) + o(1):Arguing as in the proof of Theorem 3.4 and using vn ! v in C1(IRs) we arrive at theestimate lim infn!1 1t2n ('(Q� + tnvn)� '(Q�)� tn'0(Q�; vn))� 12hr2g(�x)��; ��i+ 12 hr2Q�(�y)A��; A��i+ hrv(�y); A��i+ b(��)and using Proposition 3.3 at the desired result.Let us �nally note that all minimization problems appearing as bounds or formulasfor second order directional derivatives represent convex programs. Those in theresults 3.4, 3.5 and 3.8 have convex cone constraints, which are polyhedral if C ispolyhedral. Moreover, the solution sets of the convex minimization problems in 3.4,3.5 and 3.8 are nonempty. Indeed, we show next that these solution sets representcertain derivatives of the set-valued mapping at the pair (Q�; �x).

Page 18: Differential Stability of Two-Stage Stochastic Programs

18 D. DENTCHEVA AND W. R�OMISCH4. Di�erentiability of solution sets. It is well-known that second order dif-ferentiability properties of optimal values in perturbed optimization are intrinsic forestablishing the di�erentiability of solutions (see e.g. [8]). We also pursue this ap-proach and derive conditions implying directional di�erentiability properties of thesolution set mapping by exploiting the results of the previous section. Our �rst re-sults in this direction concern Gateaux directional di�erentiability, and complementTheorem 3.4 and its corollary.Theorem 4.1. Assume that the general conditions on g, Q� and C of Theo-rem 3.4 are satis�ed. Let �x 2 (Q�), v 2 T r(KC ;Q�) and suppose the conditions (i)and (ii) of Theorem 3.4 to be satis�ed. In addition, assume that(iii) there exist a neighbourhood V of (Q�) and constants c > 0, � > 0 such thatthe uniform growth conditiong(x) + (Q� + tv)(Ax) � '(Q� + tv) + cd(x; (Q� + tv))2;for all x 2 C \ V and t 2 [0; �), is satis�ed.Then the Gateaux directional derivative of at the pair (Q�; �x) into direction v existsand it holds that 0(Q�; �x; v) = limt!0+ 1t ( (Q� + tv)� �x)= argminn12 hr2g(�x)�; �i+ 12hr2Q�(�y)A�;A�i + v0(�y;A�) + b(�) : � 2 S(�x)o:Proof. Let M(�x; v) denote the solution set in the assertion. First we show thatlim supt!0+ 1t ( (Q� + tv)� �x) �M(�x; v).Let � 2 lim supt!0+ 1t ( (Q� + tv) � �x). Then there exists a sequence (tn; �n) convergingto (0+; �) such that �n 2 1tn ( (Q� + tnv)� �x) and, thus, �x+ tn�n 2 (Q� + tnv) forall n 2 IN . Analogously to the proof of Theorem 3.4 we show that � belongs to S(�x)and that '00(Q�; v) = 12 hr2g(�x)�; �i + 12 hr2Q�(�y)A�;A�i + v0(�y;A�) + b(�). Hence� 2M(�x; v).In the second step we demonstrate thatM(�x; v) � lim inft!0+ 1t ( (Q� + tv)� �x)or, equivalently, that it holds for any � 2M(�x; v),limt!0 1t d(�x+ t�; (Q� + tv)) = 0:Let � 2 M(�x; v) and (tn) be a sequence with tn ! 0+. We have to show thatlimn!1 1tn d(�x+ tn�; (Q� + tnv)) = 0.Let " > 0 be given, and let z 2 T 2(C; �x; �) be such that hrg(�x); zi+ hrQ�(�y); Azi �b(�)+". Then there exists a sequence (zn) converging to z with xn = �x+tn�+t2nzn 2 Cfor all n 2 IN . Hence, it su�ces to show thatlimn!1 1tn d(�x + tn� + t2nzn; (Q� + tnv)) = 0:

Page 19: Differential Stability of Two-Stage Stochastic Programs

DIFFERENTIAL STABILITY OF TWO-STAGE STOCHASTIC PROGRAMS 19Condition (iii) implies the following estimate for all su�ciently large n 2 IN :cd(�x+ tn� + t2nzn; (Q� + tnv))2� g(�x+ tn� + t2nzn) + (Q� + tnv)(A(�x + tn� + t2nzn))� '(Q� + tnv):By expanding g and Q� as in the proof of Theorem 3.4 and using the fact that �belongs to S(�x), we may continue= t2nhrg(�x); zni+ 12 t2nhr2g(�x)(� + tnzn); � + tnzni+t2nhrQ�(�y); Azni+ 12t2nhr2Q�(�y)(A(� + tnzn)); A(� + tnzn)i�('(Q� + tnv)� '(Q�)� tn'0(Q�; v))+tn(v(A(�x + tn� + t2nzn))� v(A�x)) + o(t2nk� + tnznk2):After dividing by t2n and taking the lim supn!1 on both sides of the latter inequality, weobtain lim supn!1 ct2n d(�x+ tn� + t2nzn; (Q� + tnv))2� hrg(�x); zi+ hrQ�(�y); Azi+ 12hr2g(�x)�; �i+12hr2Q�(�y)A�;A�i � '00(Q�; v) + v0(�y;A�) � ";where we made use of the choice of z, � 2M(�x; v) and Theorem 3.4. This completesthe proof.Complementing Corollary 3.5 we provide a result on the directional di�erentia-bility of at Q� into any direction v 2 T r(KC ;Q�).Theorem 4.2. Assume that the general conditions on g, Q� and C of Corol-lary 3.5 are satis�ed. Let �x 2 (Q�) and assume that(i)00 there exists a constant L > 0 such thatdH(�(y); �(~y)) � Lky � ~yk; for all y; ~y 2 A(C);and, for each r > 0, there exists a constant �(r) > 0 such thatg(x) � �(Ax) + �(r)d(x; �(Ax))2 ; for all x 2 C \ B(0; r);where �(y) = inffg(x) : x 2 C;Ax = yg and�(y) = argminfg(x) : x 2 C;Ax = yg; y 2 A(C),(ii) C is second order regular at �x.Then the Gateaux directional derivative 0(Q�; �x; v) of at the pair (Q�; �x) exists forany direction v 2 T r(KC ;Q�) and satis�es the formula in Theorem 4.1.Moreover, conditions (i)00 and (ii) are satis�ed if C is polyhedral and g is linear or(convex) quadratic.Proof. Let v 2 T r(KC ;Q�). Since Q� is strongly convex on some open con-vex neighbourhood of A (Q�), we infer from condition (i)00 and Theorem 2.7 thatcondition (iii) of Theorem 4.1 is satis�ed. Moreover, condition (i)00 implies (i)0 and,thus, Corollary 3.5 says that the second order directional derivative '00(Q�; v) exists.Hence, the �rst part of the assertion follows from the proof of the previous theorem.

Page 20: Differential Stability of Two-Stage Stochastic Programs

20 D. DENTCHEVA AND W. R�OMISCHCondition (ii) is satis�ed if C is polyhedral, and if, in addition, g is convex quadratic,Proposition 2.2 implies condition (i)00 to hold.We note that Example 3.7 shows that, in general, the directional di�erentiabilityproperty of gets lost at pairs (Q�; �x), �x 2 (Q�), whereQ� is not strongly convex onsome neighbourhood of A (Q�). Our next example demonstrates that Theorem 4.2applies to situations where the solution set and its Gateaux directional derivatives arenot singletons.Example 4.3. We revisit the Examples 1.1 and 2.5 and observe that the assump-tions of Theorem 4.2 are satis�ed for any �x 2 (Q�). Hence, the Gateaux direc-tional derivative 0(Q�; �x; v) exists at any pair (Q�; �x), �x 2 (Q�), and any directionv 2 T r(KC ;Q�). Since it holds that r2g(�x) = 0, hrg(�x); �i + hrQ�(A�x); A�i = 0,for all � 2 IR3, and r2Q�(A�x) = 2� 1 00 1 �, it takes the form 0(Q�; �x; v) =argminfkA�k2+v0(A�x;A�) : � 2 T (C; �x)g. Since the function y 7! kyk2+v0(A�x; y) isstrongly convex on A(T (C; �x)), it has a unique minimizer �y(v) 2 A(T (C; �x)). Hence,there exists an element ��(v) 2 T (C; �x) such that A��(v) = �y(v) and 0(Q�; �x; v) =(��(v) + kerA)\ T (C; �x). In particular, the Gateaux directional derivative 0(Q�; �x; :)is a set-valued mapping of the direction.Remark 4.4. The approach we followed for deriving Gateaux directional di�er-entiability of solution sets to (1.1) into directions v 2 T r(KC ;Q�) is based on lowerand upper estimates for the optimal value function. Compared to the work in [5], [8],and [32], where this approach is developed and reviewed, we do neither assume thatthe data of the perturbed problems minfg(x) +Q(Ax) : x 2 Cg is di�erentiable northat solutions to (1.1) are unique. The (set-valued) Gateaux directional derivatives 0(Q�; �x; v) in the previous results are valid for the case v = Q �Q� with a generalQ 2 KC . Hence, the results complement earlier work on contaminated distributions(e.g. [13], [14]). They apply to situations where Q is an expected recourse functionwith respect to a Dirac measure with unit mass placed at !�, i.e., Q(y) = ~Q(!� � y),and, hence, are relevant to study the in uence of a speci�c scenario on changes ofsolution sets.Another prominent approach to sensitivity analysis of optimization problems is basedon the perturbation analysis of �rst order necessary optimality conditions written asgeneralized equations (e.g. [17], [22], [24]). Applying this technique to study sensi-tivity of (1.1) requires C1-properties of perturbed expected recourse functions Q. Incase of (1.1) and Q 2 C1 the parametric generalized equation reads0 2 rg(x) +ATrQ(Ax) +NC(x)where NC(x) denotes the normal cone to C at x and Q plays the role of a pa-rameter. Relevant conditions in this context implying Lipschitz and di�erentiabil-ity properties of solutions at some (Q�; �x) are the strong regularity of the gener-alized equation at parameter Q� ([24]), and the subinvertibility of the set-valuedmapping F (x) = rg(x) + ATrQ�(Ax) + NC(x) ([17]) together with the single-valuedness of the inverse of the contingent derivative of F at (�x; 0) (cf. [2]), re-spectively. To see that both conditions are violated in general, we consider the linearcase (i.e. g is linear and C is polyhedral). Then both conditions are equivalent ifQ� 2 C2 (Theorem 6.1 in [17]). The contingent derivative of F at (�x; 0) has the formDF (�x; 0)(u) = ATr2Q�(A�x)Au+DNC(�x;�rg(�x)�ATrQ�(A�x))(u) (cf. Sect. 
5.1in [2]), where the contingent derivative DNC is again a polyhedral multifunction.Since the �rst summand remains constant on translates of the null space of the ma-

Page 21: Differential Stability of Two-Stage Stochastic Programs

DIFFERENTIAL STABILITY OF TWO-STAGE STOCHASTIC PROGRAMS 21trix A, single-valuedness of the inverse of DF (�x; 0)(u) fails to hold in general. This isessentially due to the same structural property, which leads to multiple solutions inExample 1.1 and to set-valued Gateaux directional derivatives in Example 4.3.Finally, we turn to directional di�erentiability properties of where the deriva-tives exist uniformly with respect to directions taken from compact sets of certainfunctional spaces. For our �rst result we consider the space C1(IRs) and equip theset KC \ C1 with the C1-topology.Proposition 4.5. Let Q� 2 KC \C1 and assume that the general conditions ong, Q� and C in Proposition 3.3 are satis�ed. In addition, we suppose condition (ii)of Theorem 3.4 to be satis�ed. Let �x 2 (Q�), tn ! 0+, and (Qn) be a sequence inKC such that vn := 1tn (Qn �Q�)! v in C1(IRs).Then the upper set limit of the sequence ( 1tn ( (Q�+tnvn)��x) of closed convex subsetsin IRm, i.e., lim supn!1 1tn ( (Q� + tnvn)� �x)), is contained in the closed convex setargminn12hr2g(�x)�; �i+ 12 hr2Q�(�y)A�;A�i+ hrv(�y); A�i+ b(�) : � 2 S(�x)o:Proof. Let Dn := 1tn ( (Q� + tnvn) � �x) for all n 2 IN and let �� belong to theupper set limit lim supn!1 Dn. Then there exist a subsequence (again denoted by (Dn))and elements �n 2 Dn such that �n ! ��. Since �x + tn�n 2 (Q� + tnvn) � C, wehave that �� 2 T (C; �x), and as in the proof of Theorem 3.4 we deduce that �� 2 S(�x).By expanding g and Q� as in the proof of Theorem 3.4 we obtain analogously:'(Q� + tnvn)� '(Q�)� tn'0(Q�; vn)= g(�x+ tn�n) +Q�(A(�x + tn�n)) + tnvn(A(�x + tn�n))� g(�x)�Q�(A�x)� tnvn(A�x)� t2nb(��) + 12 t2nhr2g(�x)��; ��i+ 12 t2nhr2Q�(A�x)A��; A��i+ tn(vn(A(�x + tn�n))� vn(A�x)) + o(t2n):After dividing by t2n and taking the lim supn!1 on both sides of the inequality, we obtainas in the proof of Theorem 3.8lim supn!1 1t2n ('(Q� + tnvn)� '(Q�)� tn'0(Q�; vn))� 12 hr2g(�x)��; ��i+ 12hr2Q�(A�x)A��; A��i+ hrv(A�x); A��i+ b(��):Hence, we may conclude from Proposition 3.3 that �� belongs to the setargminf 12 hr2g(�x)�; �i + 12 hr2Q�(�y)A�;A�i+ hrv(�y); A�i+ b(�) : � 2 S(�x)gand we are done.Remark 4.6. The upper limit of the sequence ( 1tn ( (Q� + tnvn)� �x) in Proposi-tion 4.5 is nonempty if the mapping d(�x; (�)) fromKC into the extended reals has theLipschitzian property of Theorem 2.3 at Q�. Indeed, we may select xn 2 (Q�+tnvn)for large n 2 IN , such that for some constants L̂ > 0 and r > 0, k�x � xnk =d(�x; (Q� + tnvn)) � L̂tnkvnkL;r. Hence, the sequence ( 1tn (xn � �x)) is bounded andhas a convergent subsequence whose limit belongs to lim supn!1 1tn ( (Q� + tnvn)� �x).If the Lipschitz property of d(�x; (�)) is violated, the upper set limit may be empty.


In order to establish the semidifferentiability of $\psi$ at a pair $(Q_\mu,\bar x)$ belonging to the graph of $\psi$, it remains to show, according to Proposition 4.5, that the solution set
\[
\operatorname*{argmin}\Big\{\tfrac12\langle\nabla^2g(\bar x)\eta,\eta\rangle+\tfrac12\langle\nabla^2Q_\mu(\bar y)A\eta,A\eta\rangle+\langle\nabla v(\bar y),A\eta\rangle+b(\eta)\;:\;\eta\in S(\bar x)\Big\}
\]
is contained in the lower set limit $\liminf_{n\to\infty}\frac{1}{t_n}(\psi(Q_\mu+t_nv_n)-\bar x)$, where $v_n:=\frac{1}{t_n}(Q_n-Q_\mu)$, $Q_n\in\mathcal{K}_C$, for all $n\in\mathbb{N}$, and $(v_n)$ converges to $v$. To this end, a uniform quadratic growth condition of the objective functions $g(\cdot)+(Q_\mu+t_nv_n)(A\,\cdot)$, for large $n\in\mathbb{N}$, is significant. In view of Theorem 2.7, the uniform strong convexity of $Q_\mu$ and of its approximations $Q_n$, for large $n\in\mathbb{N}$, is decisive for the growth condition. The next example and the following result show that, in general, the approximations $Q_n$ do not maintain the strong convexity of $Q_\mu$ if the sequence $(Q_n)$ converges to $Q_\mu$ in $C^1(\mathbb{R}^s)$, but that the situation is much more advantageous when considering the $C^{1,1}$-topology.

Example 4.7. Let $Q_\mu(y)=y^2$, for all $y\in\mathbb{R}$, and let $Q_n$ be the differentiable convex function
\[
Q_n(y):=\max\Big\{0,\,-y-\tfrac1n\Big\}^2+\max\Big\{0,\,y-\tfrac1n\Big\}^2,\qquad\text{for all }y\in\mathbb{R},\ n\in\mathbb{N}.
\]
Note that $Q_n(y)=0$ for all $y\in[-\tfrac1n,\tfrac1n]$, so $Q_n$ is not strongly convex for any $n\in\mathbb{N}$, but $(Q_n)$ converges to $Q_\mu$ in $C^1(\mathbb{R})$.

Lemma 4.8. Let $Q_\mu\in\mathcal{K}_C\cap C^{1,1}(\mathbb{R}^s)$ be strongly convex on some bounded convex set $U\subseteq\mathbb{R}^s$ (with some constant $\kappa>0$).
Then there exists a neighbourhood $\mathcal{N}$ of $Q_\mu$ in $C^{1,1}(\mathbb{R}^s)$ such that each function $Q$ belonging to $\mathcal{N}$ is strongly convex on $U$ with constant $\frac{\kappa}{2}$.
Proof. The strong convexity of $Q_\mu$ on $U$ (with constant $\kappa>0$) is equivalent to the condition $\langle\nabla Q_\mu(y)-\nabla Q_\mu(\tilde y),y-\tilde y\rangle\ge\kappa\|y-\tilde y\|^2$, for all $y,\tilde y\in U$. Let $r>0$ be chosen such that $\operatorname{cl}U\subseteq B(0,r)$ and let $\mathcal{N}$ be a neighbourhood of $Q_\mu$ in $C^{1,1}(\mathbb{R}^s)$ having the property $\|\nabla(Q_\mu-Q)\|_{L,r}\le\frac{\kappa}{2}$, for all $Q\in\mathcal{N}$. Let $y,\tilde y\in U$ with $y\ne\tilde y$. Then we obtain for any $Q\in\mathcal{N}$,
\begin{align*}
\kappa\ \le\ \frac{\langle\nabla Q_\mu(y)-\nabla Q_\mu(\tilde y),\,y-\tilde y\rangle}{\|y-\tilde y\|^2}
&= \frac{\langle\nabla Q(y)-\nabla Q(\tilde y),\,y-\tilde y\rangle}{\|y-\tilde y\|^2}+\frac{\langle\nabla(Q_\mu-Q)(y)-\nabla(Q_\mu-Q)(\tilde y),\,y-\tilde y\rangle}{\|y-\tilde y\|^2}\\
&\le \frac{\langle\nabla Q(y)-\nabla Q(\tilde y),\,y-\tilde y\rangle}{\|y-\tilde y\|^2}+\frac{\|\nabla(Q_\mu-Q)(y)-\nabla(Q_\mu-Q)(\tilde y)\|}{\|y-\tilde y\|}\\
&\le \frac{\langle\nabla Q(y)-\nabla Q(\tilde y),\,y-\tilde y\rangle}{\|y-\tilde y\|^2}+\|\nabla(Q_\mu-Q)\|_{L,r}
\end{align*}
and, hence,
\[
\tfrac{\kappa}{2}\|y-\tilde y\|^2\ \le\ \langle\nabla Q(y)-\nabla Q(\tilde y),\,y-\tilde y\rangle.
\]
This means that $Q$ is strongly convex on $U$ with constant $\frac{\kappa}{2}$.
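The following short numerical check of Example 4.7 also indicates why Lemma 4.8 insists on the $C^{1,1}$-topology: on a compact interval the $C^0$- and $C^1$-distances between $Q_n$ and $Q_\mu$ tend to zero, but $\nabla Q_n$ vanishes on $[-\frac1n,\frac1n]$ (so $Q_n$ is not strongly convex), and the Lipschitz seminorm of $\nabla Q_n-\nabla Q_\mu$ stays close to $2$ instead of tending to zero. The interval $[-2,2]$ and the grid are arbitrary choices made for the sketch.

```python
import numpy as np

def Q_star(y):                        # Q_mu(y) = y^2 from Example 4.7
    return y ** 2

def dQ_star(y):
    return 2.0 * y

def Q_n(y, n):                        # the perturbed convex functions
    return (np.maximum(0.0, -y - 1.0 / n) ** 2
            + np.maximum(0.0, y - 1.0 / n) ** 2)

def dQ_n(y, n):
    return (2.0 * np.maximum(0.0, y - 1.0 / n)
            - 2.0 * np.maximum(0.0, -y - 1.0 / n))

y = np.linspace(-2.0, 2.0, 400001)
for n in (10, 100, 1000):
    c0 = np.max(np.abs(Q_n(y, n) - Q_star(y)))        # C^0-distance  -> 0
    c1 = np.max(np.abs(dQ_n(y, n) - dQ_star(y)))      # C^1-distance  -> 0
    diff = dQ_n(y, n) - dQ_star(y)
    lip = np.max(np.abs(np.diff(diff) / np.diff(y)))  # Lipschitz seminorm of the
                                                      # gradient difference: stays ~ 2
    flat = np.max(np.abs(dQ_n(np.linspace(-1.0 / n, 1.0 / n, 101), n)))
    print(n, round(c0, 5), round(c1, 5), round(lip, 3), flat)
    # flat = 0.0: the gradient vanishes near 0, so Q_n is not strongly convex
```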


Now we are able to show that the solution set mapping $\psi$ is semidifferentiable on $\mathcal{K}_C\cap C^{1,1}$ at certain pairs $(Q_\mu,\bar x)$, $\bar x\in\psi(Q_\mu)$, into any direction $v$ from the tangent cone $T(\mathcal{K}_C\cap C^{1,1};Q_\mu)$ to $\mathcal{K}_C\cap C^{1,1}(\mathbb{R}^s)$ at $Q_\mu$ in $C^{1,1}(\mathbb{R}^s)$. The assumptions are essentially the same as in Theorem 4.2.

Theorem 4.9. Let $Q_\mu\in\mathcal{K}_C\cap C^{1,1}$ and assume that $\psi(Q_\mu)$ is nonempty and bounded. Let $g$ be twice continuously differentiable, let $Q_\mu$ be strongly convex on some open convex neighbourhood $U$ of $A\psi(Q_\mu)$ and twice continuously differentiable at $\bar y$, where $\{\bar y\}=A\psi(Q_\mu)$. Assume that condition (i)$''$ of Theorem 4.2 is satisfied.
Then the solution set mapping $\psi$ from $\mathcal{K}_C\cap C^{1,1}$ into $\mathbb{R}^m$ is semidifferentiable at any pair $(Q_\mu,\bar x)$, $\bar x\in\psi(Q_\mu)$, such that $C$ is second order regular at $\bar x$, and into any direction $v\in T(\mathcal{K}_C\cap C^{1,1};Q_\mu)$, i.e., for any such $\bar x$ and $v$, any $t_n\to 0^+$, and any sequence $(Q_n)$ in $\mathcal{K}_C\cap C^{1,1}$ with $v_n=\frac{1}{t_n}(Q_n-Q_\mu)\to v$ in $C^{1,1}(\mathbb{R}^s)$, the set limit
\[
D\psi(Q_\mu,\bar x;v)=\lim_{n\to\infty}\frac{1}{t_n}\big(\psi(Q_\mu+t_nv_n)-\bar x\big)
\]
exists. The semiderivative $D\psi(Q_\mu,\bar x;v)$ is equal to the set
\[
\operatorname*{argmin}\Big\{\tfrac12\langle\nabla^2g(\bar x)\eta,\eta\rangle+\tfrac12\langle\nabla^2Q_\mu(\bar y)A\eta,A\eta\rangle+\langle\nabla v(\bar y),A\eta\rangle+b(\eta)\;:\;\eta\in S(\bar x)\Big\}.
\]
Moreover, $\psi$ is semidifferentiable at any pair $(Q_\mu,\bar x)$, $\bar x\in\psi(Q_\mu)$, into any direction $v\in T(\mathcal{K}_C\cap C^{1,1};Q_\mu)$ if $C$ is polyhedral. Condition (i)$''$ is satisfied if $C$ is polyhedral and $g$ is linear or (convex) quadratic.
Proof. Let $\bar x\in\psi(Q_\mu)$ be such that $C$ is second order regular at $\bar x$, let $v\in T(\mathcal{K}_C\cap C^{1,1};Q_\mu)$, and let $v_n=\frac{1}{t_n}(Q_n-Q_\mu)\to v$ in $C^{1,1}(\mathbb{R}^s)$, where $t_n\to 0^+$ and $(Q_n)$ is a sequence in $\mathcal{K}_C\cap C^{1,1}$. We may assume that the neighbourhood $U$ is bounded. Since $(Q_n)$ converges to $Q_\mu$ in $C^{1,1}(\mathbb{R}^s)$, we obtain from Lemma 4.8 that there exists an $n_0\in\mathbb{N}$ such that $Q_n$ is strongly convex on $U$ for each $n\ge n_0$ with a uniform constant $\kappa>0$. Moreover, we choose $n_0$ sufficiently large such that $\psi(Q_n)$ is nonempty for each $n\ge n_0$. Arguing as in the proof of Theorem 2.7, we obtain a constant $c>0$ and a neighbourhood $V$ of the sets $\psi(Q_n)$ such that the growth condition
\[
g(x)+Q_n(Ax)\ \ge\ \varphi(Q_n)+c\,d(x,\psi(Q_n))^2
\]
holds for all $x\in C\cap V$ and $n\ge n_0$.
Let $\bar\eta\in S(\bar x)$ be a minimizer of the function $\tfrac12\langle\nabla^2g(\bar x)\eta,\eta\rangle+\tfrac12\langle\nabla^2Q_\mu(\bar y)A\eta,A\eta\rangle+\langle\nabla v(\bar y),A\eta\rangle+b(\eta)$ subject to $\eta\in S(\bar x)$. Because of Proposition 4.5 it remains to show that $\bar\eta$ belongs to the lower limit $\liminf_{n\to\infty}\frac{1}{t_n}(\psi(Q_\mu+t_nv_n)-\bar x)=\liminf_{n\to\infty}\frac{1}{t_n}(\psi(Q_n)-\bar x)$. To this end we argue as in the proof of Theorem 4.1. Let $\varepsilon>0$ be given, and let $z\in T^2(C;\bar x;\bar\eta)$ be such that $\langle\nabla g(\bar x),z\rangle+\langle\nabla Q_\mu(\bar y),Az\rangle\le b(\bar\eta)+\varepsilon$. Then there exists a sequence $(z_n)$ converging to $z$ with $x_n=\bar x+t_n\bar\eta+t_n^2z_n\in C$ for all $n\in\mathbb{N}$. Then it suffices to show that
\[
\lim_{n\to\infty}\frac{1}{t_n}\,d\big(\bar x+t_n\bar\eta+t_n^2z_n,\psi(Q_n)\big)=0.
\]
By using the above growth condition and by expanding the functions $g$ and $Q_\mu$, we obtain, similarly to the proof of Theorem 4.1, that
\begin{align*}
\limsup_{n\to\infty}\frac{c}{t_n^2}\,d\big(\bar x+t_n\bar\eta+t_n^2z_n,\psi(Q_n)\big)^2
&\le\ \langle\nabla g(\bar x),z\rangle+\langle\nabla Q_\mu(\bar y),Az\rangle+\tfrac12\langle\nabla^2g(\bar x)\bar\eta,\bar\eta\rangle\\
&\quad+\tfrac12\langle\nabla^2Q_\mu(\bar y)A\bar\eta,A\bar\eta\rangle-\varphi''(Q_\mu;v)+\langle\nabla v(\bar y),A\bar\eta\rangle\ \le\ \varepsilon.
\end{align*}
This implies $\bar\eta\in\liminf_{n\to\infty}\frac{1}{t_n}(\psi(Q_n)-\bar x)$, and the semidifferentiability of $\psi$ at $(Q_\mu,\bar x)$ in direction $v$ is shown. The remaining part of the assertion follows as in the proof of Theorem 4.2.
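To see the semiderivative formula at work, consider a hypothetical one-dimensional instance (it is not taken from the paper): $m=s=1$, $A=1$, $C=[0,\infty)$, $g(x)=-2x$ and $Q_\mu(y)=y^2$, so that $\bar x=\bar y=1$, $S(\bar x)=\mathbb{R}$ and $b\equiv 0$; the direction $v(y)=\sin y$ is treated as an admissible $C^{1,1}$ direction for the purpose of the sketch. The formula of Theorem 4.9 then gives $D\psi(Q_\mu,\bar x;v)=\operatorname{argmin}_{\eta}\{\eta^2+v'(\bar y)\eta\}=\{-\frac12\cos 1\}$, and the sketch compares this prediction with difference quotients obtained by solving the perturbed problems numerically.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical one-dimensional instance (not from the paper): m = s = 1, A = 1,
# C = [0, inf), g(x) = -2x, Q_mu(y) = y^2; then xbar = ybar = 1, S(xbar) = R,
# b = 0.  Direction v(y) = sin(y), assumed admissible; Q_mu + t*v stays convex
# for small t > 0.
g = lambda x: -2.0 * x
Q_mu = lambda y: y ** 2
v = lambda y: np.sin(y)
xbar = 1.0

# Prediction of the semiderivative formula (nabla^2 g = 0, nabla^2 Q_mu = 2):
predicted = -np.cos(xbar) / 2.0

for t in (1e-2, 1e-3, 1e-4):
    obj = lambda x, t=t: g(x) + Q_mu(x) + t * v(x)   # perturbed objective
    xt = minimize_scalar(obj, bounds=(0.0, 10.0), method='bounded',
                         options={'xatol': 1e-12}).x
    print(t, (xt - xbar) / t, predicted)   # quotients approach the prediction
```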


For the linear-quadratic case, the essential assumptions in Theorem 4.9 are the strong convexity of $Q_\mu$ and the smoothness properties of $Q_\mu$ and of its perturbations $Q$, respectively. While criteria for strong convexity were already discussed in Section 2, we now add some comments on $C^{1,1}$- and $C^2$-properties of expected recourse functions. We then close by indicating some consequences of the results of Sections 3 and 4 for asymptotic properties of statistical estimators of optimal values and solution sets.

Remark 4.10. Assume (A1)-(A3) and that $\mu$ has a density with respect to the Lebesgue measure on $\mathbb{R}^s$. Then the function $Q_\mu$ in (1.2) is continuously differentiable on $\mathbb{R}^s$ and its gradient has the form
\[
\nabla Q_\mu(y)=\sum_{i=1}^{\ell}d_i\,\mu\big(y+B_i(\mathbb{R}^s_+)\big),\qquad\text{for all }y\in\mathbb{R}^s,
\]
where $B_i$, $i=1,\ldots,\ell$, are certain basis submatrices of the recourse matrix $W$ such that the simplicial cones $B_i(\mathbb{R}^s_+)$, $i=1,\ldots,\ell$, are linearity regions of $\tilde Q$, and $-d_i$ is the gradient of $\tilde Q$ on $\operatorname{int}B_i(\mathbb{R}^s_+)$, $i=1,\ldots,\ell$ (cf. [15], [39]). Denoting by $F_\mu$ the distribution function of $\mu$ and using the formula
\[
\mu\big(y+B(\mathbb{R}^s_+)\big)=F_{\mu\circ(-B)}\big(-B^{-1}y\big),\qquad\text{for all }y\in\mathbb{R}^s,
\]
valid for any nonsingular $(s,s)$-matrix $B$, $C^{1,1}$- and $C^2$-properties of $Q_\mu$ may thus be formulated in terms of Lipschitz and differentiability properties of the distribution functions $F_{\mu\circ(-B_i)}$ of the linear transforms $\mu\circ(-B_i)$, $i=1,\ldots,\ell$, of the measure $\mu$.
The distribution function $F_\mu$ of a probability measure $\mu$ on $\mathbb{R}^s$ is locally Lipschitzian if all one-dimensional marginal distribution functions of $\mu$ are locally Lipschitzian (cf. [26], [38]). $F_\mu$ is continuously differentiable if $\mu$ has a continuous density function and all one-dimensional marginal distribution functions of $\mu$ are continuously differentiable (cf. [21], [38]). If $\mu$ has a continuous density function, then $\mu\circ B$ has a continuous density for any nonsingular $(s,s)$-matrix $B$, too. Hence, we may conclude, for instance, that $Q_\mu$ belongs to $C^{1,1}(\mathbb{R}^s)$ (and to $C^2(\mathbb{R}^s)$) if $\mu$ has a (continuous) density and the above-mentioned conditions on the one-dimensional marginal distribution functions for $F_{\mu\circ B}$ belonging to $C^{0,1}(\mathbb{R}^s)$ (and to $C^1(\mathbb{R}^s)$, respectively) are satisfied for any nonsingular $(s,s)$-matrix $B$. This criterion is particularly useful for probability distributions $\mu$ with the property that all one-dimensional marginal distributions of $\mu$ and of all linear transforms $\mu\circ B$, for nonsingular matrices $B$, belong to the same class of measures. For instance, the multivariate normal and the logarithmically concave probability measures (e.g. [15]) form classes having this property.
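As a concrete, and again hypothetical, illustration of the gradient formula, take $s=1$ and the simple-recourse cost $\tilde Q(u)=q^+\max(u,0)+q^-\max(-u,0)$ with $q^+,q^->0$, whose linearity regions are the half-lines $B_1(\mathbb{R}_+)=\mathbb{R}_+$ and $B_2(\mathbb{R}_+)=\mathbb{R}_-$ (i.e. $B_1=1$, $B_2=-1$), so that $d_1=-q^+$ and $d_2=q^-$. For a standard normal $\mu$ with distribution function $\Phi$ the formula yields $\nabla Q_\mu(y)=-q^+(1-\Phi(y))+q^-\Phi(y)$; the sketch below compares this expression with a central finite difference of $Q_\mu(y)=\int\tilde Q(\omega-y)\,\mu(d\omega)$ evaluated by numerical integration.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Hypothetical one-dimensional simple-recourse instance (s = 1):
# Qtilde(u) = qp*max(u,0) + qm*max(-u,0); linearity regions R_+ and R_-
# with gradients qp and -qm, hence d1 = -qp and d2 = qm in Remark 4.10.
# The underlying measure mu is the standard normal distribution.
qp, qm = 3.0, 1.5

def Qtilde(u):
    return qp * max(u, 0.0) + qm * max(-u, 0.0)

def Q_mu(y):
    # expected recourse Q_mu(y) = E[Qtilde(omega - y)], omega ~ N(0, 1);
    # the range [-12, 12] makes the truncation error negligible
    val, _ = quad(lambda w: Qtilde(w - y) * norm.pdf(w), -12.0, 12.0,
                  points=[y], limit=200)
    return val

def grad_formula(y):
    # nabla Q_mu(y) = d1 * mu(y + B1*R_+) + d2 * mu(y + B2*R_+)
    return -qp * (1.0 - norm.cdf(y)) + qm * norm.cdf(y)

h = 1e-4
for y in (-1.0, 0.0, 0.7):
    fd = (Q_mu(y + h) - Q_mu(y - h)) / (2.0 * h)   # central finite difference
    print(y, round(fd, 6), round(grad_formula(y), 6))
```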


Remark 4.11. We consider a sequence $(Q_n)$ of nonparametric estimators of $Q_\mu$ and assume that each $Q_n$ is a random variable with values in some linear metric (function) space $Z$ and in $\mathcal{K}_C$. Furthermore, we assume that a central limit result of the form
\[
\sigma_n^{-1}(Q_n-Q_\mu)\ \to_d\ \zeta
\]
is satisfied for some sequence of positive numbers $(\sigma_n)$ decreasing to $0$ and for some random variable $\zeta$ taking values in a separable subset of $Z$. Here, we denote by $\to_d$ the convergence in distribution of $Z$-valued random variables. Then versions of the delta-method (see e.g. [36]) together with the second order Hadamard differentiability of the optimal value $\varphi$ at $Q_\mu$ (Theorem 3.8 and $Z=C^1(\mathbb{R}^s)$) and the semidifferentiability of the solution set mapping $\psi$ at $Q_\mu$ (Theorem 4.9 and $Z=C^{1,1}(\mathbb{R}^s)$) lead to central limit formulas for the sequence $(\varphi(Q_n))$ of real random variables and for the sequence of random sets $(\psi(Q_n))$, respectively. In particular, we obtain from Theorem 3.8 and a second order version of the delta-method that
\[
\sigma_n^{-2}\big(\varphi(Q_n)-\varphi(Q_\mu)-\varphi'(Q_\mu;Q_n-Q_\mu)\big)=\sigma_n^{-2}\big(\varphi(Q_n)-g(\bar x)-Q_n(A\bar x)\big)\ \to_d\ \varphi''(Q_\mu;\zeta),
\]
where $\bar x\in\psi(Q_\mu)$ and $\to_d$ refers to convergence in distribution of real-valued random variables. Theorem 4.9 and a set-valued version of the delta-method ([16], [20]) imply
\[
\sigma_n^{-1}\big(\psi(Q_n)-\bar x\big)\ \to_d\ D\psi(Q_\mu,\bar x;\zeta),
\]
where $\bar x\in\psi(Q_\mu)$ and $\to_d$ refers to convergence in distribution of closed-valued measurable multifunctions in $\mathbb{R}^m$ (cf. [29]). The asymptotic distributions in both central limit results are the probability distributions of the optimal value and of the solution set, respectively, of the random convex program that consists in minimizing the (random) objective $\tfrac12\langle\nabla^2g(\bar x)\eta,\eta\rangle+\tfrac12\langle\nabla^2Q_\mu(\bar y)A\eta,A\eta\rangle+\langle\nabla\zeta(\bar y),A\eta\rangle+b(\eta)$ subject to $\eta$ satisfying the (deterministic) constraints $\eta\in T(C;\bar x)$ and $\langle\nabla g(\bar x),\eta\rangle+\langle\nabla Q_\mu(\bar y),A\eta\rangle=0$. Furthermore, in the linear-quadratic case, the set-valued central limit result may be complemented by limit theorems for selections forming a Castaing representation of $\psi$ (cf. [11]).

Acknowledgements. The authors wish to thank Alexander Shapiro (Georgia Institute of Technology, Atlanta) and René Henrion (WIAS Berlin) for beneficial discussions. Moreover, the comments and suggestions of the Associate Editor and of a referee are gratefully acknowledged.

REFERENCES

[1] Z. Artstein and R. J.-B. Wets, Stability results for stochastic programs and sensors, allowing for discontinuous objective functions, SIAM J. Optimization, 4 (1994), pp. 537–550.
[2] J.-P. Aubin and H. Frankowska, Set-Valued Analysis, Birkhäuser, Boston, 1990.
[3] A. Auslender and R. Cominetti, A comparative study of multifunction differentiability with applications in mathematical programming, Mathematics of Operations Research, 16 (1991), pp. 240–258.
[4] A. Ben-Tal and J. Zowe, Directional derivatives in nonsmooth optimization, J. Optimization Theory and Applications, 47 (1985), pp. 483–490.
[5] J. F. Bonnans and R. Cominetti, Perturbed optimization in Banach spaces I: A general theory based on a weak directional constraint qualification, SIAM J. Control and Optimization, 34 (1996), pp. 1151–1171.
[6] J. F. Bonnans, R. Cominetti, and A. Shapiro, Second order optimality conditions based on parabolic second order tangent sets, SIAM J. Optimization, 9 (1999), pp. 462–492.
[7] J. F. Bonnans and A. D. Ioffe, Quadratic growth and stability in convex programming problems with multiple solutions, J. Convex Analysis, 2 (1995), pp. 41–57.
[8] J. F. Bonnans and A. Shapiro, Optimization problems with perturbations: a guided tour, SIAM Review, 40 (1998), pp. 228–264.
[9] F. H. Clarke, Optimization and Nonsmooth Analysis, Wiley, New York, 1983.
[10] R. Cominetti, Metric regularity, tangent sets and second order optimality conditions, Applied Mathematics and Optimization, 21 (1990), pp. 265–287.
[11] D. Dentcheva, Differentiable selections and Castaing representations of multifunctions, J. Mathematical Analysis and Applications, 223 (1998), pp. 371–396.
[12] D. Dentcheva, W. Römisch, and R. Schultz, Strong convexity and directional derivatives of marginal values in two-stage stochastic programming, in Stochastic Programming, K. Marti and P. Kall, eds., Lecture Notes in Economics and Mathematical Systems Vol. 423, Springer-Verlag, Berlin, 1995, pp. 8–21.
[13] J. Dupačová, Stability and sensitivity analysis for stochastic programming, Annals of Operations Research, 27 (1990), pp. 115–142.
[14] J. Dupačová, Stability in stochastic programming with recourse. Contaminated distributions, Mathematical Programming Study, 27 (1986), pp. 133–144.
[15] P. Kall, Stochastic Linear Programming, Springer-Verlag, Berlin, 1976.
[16] A. J. King, Generalized delta theorems for multivalued mappings and measurable selections, Mathematics of Operations Research, 14 (1989), pp. 720–736.
[17] A. J. King and R. T. Rockafellar, Sensitivity analysis for nonsmooth generalized equations, Mathematical Programming, 55 (1992), pp. 193–212.
[18] A. J. King and R. T. Rockafellar, Asymptotic theory for solutions in statistical estimation and stochastic programming, Mathematics of Operations Research, 18 (1993), pp. 148–162.
[19] D. Klatte and G. Thiere, Error bounds for solutions of linear equations and inequalities, ZOR-Mathematical Methods of Operations Research, 41 (1995), pp. 191–214.
[20] P. Lachout, On multifunction transforms of probability measures, Annals of Operations Research, 56 (1995), pp. 241–249.
[21] K. Marti, Approximationen der Entscheidungsprobleme mit linearer Ergebnisfunktion und positiv homogener, subadditiver Verlustfunktion, Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 31 (1975), pp. 203–233.
[22] B. S. Mordukhovich, Stability theory for parametric generalized equations and variational inequalities via nonsmooth analysis, Transactions of the AMS, 343 (1994), pp. 609–657.
[23] J.-P. Penot, Differentiability of relations and differential stability of perturbed optimization problems, SIAM J. Control and Optimization, 22 (1984), pp. 529–551.
[24] S. M. Robinson, Strongly regular generalized equations, Mathematics of Operations Research, 5 (1980), pp. 43–62.
[25] R. T. Rockafellar, Proto-differentiability of set-valued mappings and its applications in optimization, Annales de l'Institut Henri Poincaré, Analyse Non Linéaire (H. Attouch et al., eds.), Gauthier-Villars, Paris, Supplément au vol. 6 (1989), pp. 449–482.
[26] W. Römisch and R. Schultz, Stability of solutions for stochastic programs with complete recourse, Mathematics of Operations Research, 18 (1993), pp. 590–609.
[27] W. Römisch and R. Schultz, Lipschitz stability for stochastic programs with complete recourse, SIAM J. Optimization, 6 (1996), pp. 531–547.
[28] R. Y. Rubinstein and A. Shapiro, Discrete Event Systems. Sensitivity Analysis and Stochastic Optimization by the Score Function Method, Wiley, Chichester, 1993.
[29] G. Salinetti and R. J.-B. Wets, On the convergence in distribution of measurable multifunctions (random sets), normal integrands, stochastic processes and stochastic infima, Mathematics of Operations Research, 11 (1986), pp. 385–419.
[30] R. Schultz, Strong convexity in stochastic programs with complete recourse, J. Computational and Applied Mathematics, 56 (1994), pp. 3–22.
[31] A. Seeger, Second order directional derivatives in parametric optimization problems, Mathematics of Operations Research, 13 (1988), pp. 628–645.
[32] A. Shapiro, Sensitivity analysis of nonlinear programs and differentiability properties of metric projections, SIAM J. Control and Optimization, 26 (1988), pp. 628–645.
[33] A. Shapiro, On concepts of directional differentiability, J. Optimization Theory and Applications, 66 (1990), pp. 477–487.
[34] A. Shapiro, On differential stability in stochastic programming, Mathematical Programming, 47 (1990), pp. 107–116.
[35] A. Shapiro, Asymptotic analysis of stochastic programs, Annals of Operations Research, 30 (1991), pp. 169–186.
[36] A. W. van der Vaart and J. A. Wellner, Weak Convergence and Empirical Processes, Springer Series in Statistics, Springer-Verlag, New York, 1996.
[37] D. Walkup and R. J.-B. Wets, A Lipschitzian characterization of convex polyhedra, Proceedings of the AMS, 23 (1969), pp. 167–173.
[38] J. Wang, Distribution sensitivity analysis for stochastic programs with complete recourse, Mathematical Programming, 31 (1985), pp. 286–297.
[39] R. J.-B. Wets, Stochastic programs with fixed recourse: the equivalent deterministic program, SIAM Review, 16 (1974), pp. 309–339.
[40] R. J.-B. Wets, Stochastic programming, in Handbooks in Operations Research and Management Science, Vol. 1, Optimization, G. L. Nemhauser, A. H. G. Rinnooy Kan, M. J. Todd, eds., North-Holland, Amsterdam, 1989, pp. 573–629.