RELIABILITY BASED DESIGN OPTIMIZATION: FORMULATIONS AND
METHODOLOGIES
A Dissertation
Submitted to the Graduate School
of the University of Notre Dame
in Partial Fulfillment of the Requirements
for the Degree of
Doctor of Philosophy
by
Harish Agarwal, B.Tech.(Hons), M.S.M.E.
John E. Renaud, Director
Graduate Program in Aerospace and Mechanical Engineering
Notre Dame, Indiana
December 2004
RELIABILITY BASED DESIGN OPTIMIZATION: FORMULATIONS AND
METHODOLOGIES
Abstract
by
Harish Agarwal
Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer product under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment.

Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in the unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty.
The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique used to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.
Similarly, the distribution and other statistical properties of g can be obtained. However, this is not always possible when g is obtained from expensive analysis tools. Approximate techniques can be used for this purpose; they are described in the next chapter.
3.2.2 Dempster-Shafer Theory
In this section, the basic concepts of evidence theory are summarized. The uncertain
measures provided by evidence theory are mentioned along with their advantages
and disadvantages.
Evidence theory uses two measures of uncertainty, belief and plausibility. In comparison, probability theory uses just one measure, the probability of an event. Belief and plausibility measures are determined from the known evidence for a proposition without it being necessary to distribute the evidence over subsets of the proposition. This means that evidence in the form of experimental data or expert opinion can be obtained for a parameter value within an interval. The evidence does not assume a particular value within the interval or the likelihood of any value with regard to any other value in the interval. Since there is uncertainty in the given information, the evidential measure for the occurrence of an event and the evidential measure for its negation do not have to sum to unity, as shown in Figure 3.2 (Bel(A) + Bel(Ā) ≠ 1).

Figure 3.2. Belief (Bel) and Plausibility (Pl) [6]
The basic measure in evidence theory is known as the Basic Probability Assignment (BPA). It is a function m that maps the power set (2^U) of the universal set U (also known as the frame of discernment) to [0, 1]. The power set simply represents all the possible subsets of the universal set U. Let E represent some event which is a subset of the universal set U. Then m(E) refers to the BPA corresponding exactly to the event E, and it expresses the degree of support of the evidential claim that the true alternative (prediction, diagnosis, etc.) is in the set E but not in any special subset of E. Any additional evidence supporting the claim that the true alternative is in a subset of E, say A ⊂ E, must be expressed by another nonzero value m(A).
m(E) must satisfy the following axioms of evidence theory:

(1) m(E) ≥ 0 for any E ∈ 2^U

(2) m(∅) = 0

(3) Σ_{E ∈ 2^U} m(E) = 1

where ∅ denotes the empty set. All the possible events E which are subsets of the universal set U (E ⊆ U) and have m(E) > 0 are known as the focal elements. The pair ⟨F, m⟩, where F denotes the set of all focal elements induced by m, is called a body of evidence.
The measures of uncertainty provided by evidence theory are known as belief (Bel) and plausibility (Pl). Once a body of evidence is given, these measures can be obtained by using the following formulas:

Bel(A) = Σ_{B | B ⊆ A} m(B) (3.8)

Pl(A) = Σ_{B | B ∩ A ≠ ∅} m(B) (3.9)

Observe that the belief of an event, Bel(A), is calculated by summing the BPAs of the propositions that totally agree with the event A, whereas the plausibility of an event is calculated by summing the BPAs of propositions that agree with the event A both totally and partially. In other words, Bel and Pl give the lower and upper bounds of the event, respectively. They are related to each other by the following equation:

Pl(A) + Bel(Ā) = 1 (3.10)

where Ā represents the negation of the event A. The BPA can be obtained from the belief measure with the following inverse relation:

m(A) = Σ_{B | B ⊆ A} (−1)^{|A−B|} Bel(B) (3.11)

where |A − B| is the cardinality of the set difference of the two sets.
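The belief and plausibility measures of Equations (3.8)-(3.9) are straightforward to compute for a finite frame of discernment. The sketch below assumes a small hypothetical body of evidence over U = {a, b, c}; the focal elements and BPA values are illustrative only.

```python
# Hypothetical body of evidence over the frame U = {a, b, c};
# the focal elements and their BPA values are illustrative only.
m = {
    frozenset({"a"}): 0.3,
    frozenset({"a", "b"}): 0.4,
    frozenset({"a", "b", "c"}): 0.3,
}

def bel(A, m):
    """Belief (Eq. 3.8): sum the BPAs of focal elements contained in A."""
    return sum(v for B, v in m.items() if B <= A)

def pl(A, m):
    """Plausibility (Eq. 3.9): sum the BPAs of focal elements meeting A."""
    return sum(v for B, v in m.items() if B & A)

U = frozenset({"a", "b", "c"})
A = frozenset({"a", "b"})
print(bel(A, m))                 # ≈ 0.7 (focal sets {a} and {a, b})
print(pl(A, m))                  # ≈ 1.0 (every focal set intersects A)
print(pl(A, m) + bel(U - A, m))  # ≈ 1.0, consistent with Eq. (3.10)
```

Bel(A) and Pl(A) bracket the unknown probability of A, which is exactly the lower and upper bound interpretation given above.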
Sometimes the available evidence can come from different sources. Such bodies of evidence can be aggregated using existing rules of combination. Commonly used combination rules are listed below [61].
(1) The Dempster rule of combination
(2) Discount+Combine Method
(3) Yager’s modified Dempster’s rule
(4) Inagaki’s unified combination rule
(5) Zhang’s center combination rule
(6) Dubois and Prade’s disjunctive consensus rule
(7) Mixing or Averaging
Dempster-Shafer theory is based on the assumption that these sources are independent. However, information obtained from a variety of sources needs to be properly aggregated, and there is always a debate about which combination rule is most appropriate. Dempster's rule of combination (1) is one of the most popular rules of combination. It is given by the following formula:
m(A) = [ Σ_{B ∩ C = A} m1(B) m2(C) ] / [ 1 − Σ_{B ∩ C = ∅} m1(B) m2(C) ],  A ≠ ∅ (3.12)
Dempster's rule has been subject to some criticism in the sense that it tends to completely ignore the conflicts that exist between the available evidence from different sources. This is because of the normalization factor in the denominator. Thus, it is not suitable for cases where there is a lot of inconsistency in the available evidence. However, it is appropriate to apply when there is some degree of consistency or sufficient agreement among the opinions of the different sources. In the present research, it will be assumed that there is some consistency in the available evidence from different sources. Hence, Dempster's rule of combination will be applied to combine evidence. When there is little or no consistency among the evidence from different sources, it is appropriate to use the mixing or averaging rule [47].
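Dempster's rule in Equation (3.12) can be sketched in a few lines for finite frames. The two bodies of evidence below are hypothetical; the normalization by 1 − K, where K is the conflicting mass, is the feature the criticism above refers to.

```python
# Sketch of Dempster's rule of combination (Eq. 3.12) for two
# independent bodies of evidence over the same frame; the BPAs
# below are illustrative only.

def dempster_combine(m1, m2):
    """Combine two BPAs; raises if the evidence totally conflicts."""
    combined = {}
    conflict = 0.0  # K: total mass assigned to the empty set
    for B, v1 in m1.items():
        for C, v2 in m2.items():
            inter = B & C
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # Normalizing by 1 - K redistributes the conflicting mass,
    # which is the source of the criticism discussed in the text.
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"b"}): 0.5, frozenset({"a", "b"}): 0.5}
m12 = dempster_combine(m1, m2)  # combined BPA; its masses sum to 1
```

Here the conflicting mass is K = 0.6 × 0.5 = 0.3, so the surviving masses are rescaled by 1/0.7 after combination.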
3.2.3 Convex Models of Uncertainty and Interval Analysis
In some cases, uncertain events form patterns that can be modeled using convex models of uncertainty [7]. Examples of convex models include intervals, ellipses, or any convex sets. Convex models require less detailed information to characterize uncertainties than a probabilistic model. In design applications they often require a worst case analysis, which can be formulated as a constrained optimization problem. Depending on the nature of the performance function, local or global optimization techniques will be required. When the convex models are intervals, techniques from interval analysis can be used.
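For the common case where the convex model is an interval box, the worst case analysis mentioned above can be sketched by evaluating the performance function at the vertices of the box. The function g and the intervals are illustrative assumptions; for a non-monotonic g a local or global optimizer would be needed instead, as noted above.

```python
from itertools import product

# Worst-case analysis over interval uncertainty, sketched by brute-force
# vertex enumeration; g and the intervals are illustrative assumptions.
def worst_case(g, intervals):
    """Return the minimum of g over the vertices of the interval box.

    Exact only when g attains its extrema at vertices (e.g. g monotone
    in each variable); otherwise constrained optimization is needed.
    """
    return min(g(*x) for x in product(*intervals))

# Performance function: the worst case is its smallest value on the box.
g = lambda x1, x2: 3.0 * x1 - 2.0 * x2
print(worst_case(g, [(0.9, 1.1), (1.8, 2.2)]))  # ≈ 3*0.9 - 2*2.2 = -1.7
```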
3.2.4 Possibility/Fuzzy Set Theory Based Approaches
Fuzzy set theory [19] can be used to model uncertainties when there is little information or sparse data. Conventional sets have fixed boundaries and are called crisp sets; a crisp set is a special case of a fuzzy set. Let A be a fuzzy set or event (e.g., a quantity is "equal" to 10) in a universe of discourse U (e.g., all possible values of the quantity) and let x ∈ U. Then the degree of membership of x in A is defined using a membership function, also called the characteristic function, µA(x) (e.g., a triangle shaped function with a peak of 1 at x = 10, non-zero only when 9 < x < 11, and 0 elsewhere).
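The triangular membership function from the example above (peak of 1 at x = 10, support 9 < x < 11) can be written directly:

```python
# Triangular membership function from the example in the text:
# peak of 1 at x = 10, non-zero only on (9, 11), zero elsewhere.
def mu_A(x, a=9.0, b=10.0, c=11.0):
    """Triangular membership function with support (a, c) and peak at b."""
    if a < x <= b:
        return (x - a) / (b - a)  # rising edge
    if b < x < c:
        return (c - x) / (c - b)  # falling edge
    return 0.0

print(mu_A(10.0))  # 1.0
print(mu_A(9.5))   # 0.5
print(mu_A(8.0))   # 0.0
```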
Possibility theory can be used when there is insufficient information about random variations. Possibility distributions, analogous to cumulative distribution functions in probability theory, can be assigned to such variations. The basic definitions in possibility theory associated with the possibility of a complement of an event, or of a union or intersection of events, are very different from those used in probability theory. The membership function associated with a fuzzy set can be assumed to be a possibility distribution of that set. The possibility distribution (membership function) of a function of a variable (fuzzy set) with a given possibility distribution (membership function) can be found using Zadeh's extension principle, commonly implemented with the vertex method. The vertex method is based on combinatorial interval analysis; its computational expense increases exponentially with the dimension of the uncertain variables and increases with the nonlinearity of the function. The vertex method can be used to find the induced preferences on performance parameters due to prescribed preferences for design variables, and the possibility distributions of performance parameters due to uncertain variables characterized by possibility distributions.
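The vertex method described above can be sketched for alpha-cut inputs: at each membership level α, the input fuzzy numbers are cut into intervals and the function is evaluated at all 2^n vertices of the resulting box. The triangular fuzzy numbers and the function f below are illustrative assumptions; the method is exact only when the extrema of f occur at vertices.

```python
from itertools import product

# Sketch of the vertex method: propagate alpha-cuts of fuzzy inputs
# through a function by evaluating it at all 2^n interval vertices.
# The triangular fuzzy numbers and f below are illustrative assumptions.
def alpha_cut_triangular(a, b, c, alpha):
    """Alpha-cut [lo, hi] of a triangular fuzzy number (a, b, c)."""
    return (a + alpha * (b - a), c - alpha * (c - b))

def vertex_method(f, cuts):
    """Output interval of f over the vertices of the input alpha-cuts.

    The cost grows as 2^n with the number n of uncertain variables,
    as noted in the text.
    """
    values = [f(*x) for x in product(*cuts)]
    return min(values), max(values)

f = lambda x, y: x * y
cuts = [alpha_cut_triangular(1, 2, 3, 0.5),   # [1.5, 2.5]
        alpha_cut_triangular(4, 5, 6, 0.5)]   # [4.5, 5.5]
print(vertex_method(f, cuts))  # (6.75, 13.75)
```

Repeating this over a range of α levels recovers the membership function of the output, i.e. the induced possibility distribution of the performance parameter.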
Researchers [11, 40, 10] have shown that using possibility theory can yield more
conservative designs as compared to probability theory. This is especially true when
the available information is scarce or when the design criterion is to achieve low
probability or possibility of failure.
3.3 Optimization Under Uncertainty
A deterministic optimization formulation does not account for the uncertainties in the design variables, parameters, and simulation models. Optimized designs based on a deterministic formulation are usually associated with a high probability of failure because of the violation of certain hard constraints, and can be subject to failure in service. This is particularly true if the hard constraints are active at the deterministic optimum. In today's competitive marketplace, it is very important that the resulting designs are optimal and at the same time reliable. Hence, it is extremely important that design optimization accounts for the uncertainties. Some of the existing methodologies that perform optimization accounting for the uncertainties are discussed in the following sections.
3.3.1 Robust Design
In some applications, it is important to ensure that the performance function is insensitive to variations, and a robust design needs to be found. In [52], a formulation for robust design optimization is presented which corresponds to finding designs with minimum variation of certain performance characteristics. A Signal to Noise Ratio is maximized, where noise corresponds to variation type uncertainties and signal corresponds to a performance parameter, PP. In Taguchi techniques, experimental arrays are used to conduct experiments at various levels of the control factors (design variables, d), and for each experimental control setting a Signal to Noise Ratio is calculated. This usually requires another experimental array for different settings of the uncertain variables, x. The Signal to Noise Ratio (S/N) is calculated as follows:
S/N(dj) = −10 log [ (1/m) Σ_{i=1}^{m} (PP(dj, xi) − τ)² ] (3.13)

where τ is the target value of the performance parameter.
When the uncertainties are known probabilistically, a robust design optimization
corresponds to minimizing the variance of the performance function.
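Equation (3.13) can be evaluated directly for a set of noise-condition responses at one control setting. The sketch below assumes a base-10 logarithm, as is conventional for Taguchi S/N ratios, and treats τ as the target value of the performance parameter; the sample responses are illustrative.

```python
import math

# Taguchi-style signal-to-noise ratio of Eq. (3.13), assuming a base-10
# logarithm; the responses and the target tau are illustrative only.
def signal_to_noise(pp_values, tau):
    """S/N = -10 log10( (1/m) * sum_i (PP_i - tau)^2 )."""
    m = len(pp_values)
    msd = sum((pp - tau) ** 2 for pp in pp_values) / m  # mean squared deviation
    return -10.0 * math.log10(msd)

# Responses at one control-factor setting d_j, over noise samples x_i:
pp = [9.8, 10.1, 10.3, 9.9]
print(signal_to_noise(pp, tau=10.0))  # larger S/N = less spread about tau
```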
3.3.2 Reliability based design optimization
The basic idea in reliability based design optimization is to employ numerical optimization algorithms to obtain optimal designs ensuring reliability. When the optimization is performed without accounting for the uncertainties, certain hard constraints that are active at the deterministic solution may lead to system failure. Figure 3.3 illustrates such a case, where the chance that the deterministic solution fails is about 75% due to uncertainties in the design variable settings. The reliable solution is characterized by a slightly higher function value and is located inside the feasible region. In most practical applications, the uncertainties are modeled using probability theory. The probability of failure corresponding to a failure mode can be obtained and
Figure 3.3. Design trade-off in RBDO (deterministic optimum vs. reliable optimum; almost 75% of the designs around the deterministic optimum fail)
can be posed as a constraint in the optimization problem to obtain safer designs.
3.3.3 Fuzzy Optimization
In fuzzy optimization, the uncertainties (fuzzy requirements or preferences) are modeled by using fuzzy sets. If the performance parameters are PPi and the corresponding preferences are µi, the design optimization problem is to maximize all the preferences and hence is a multiobjective problem, usually called fuzzy programming or fuzzy multi-objective optimization [60]. Usually all the preferences cannot be simultaneously maximized, and hence a trade-off between preferences is required. This is achieved through an aggregation operator P(·). The optimization problem therefore consists of maximizing the aggregated preference. Typical examples of P are
are
P(µ1, µ2, …, µk) = min(µ1, µ2, …, µk) (3.14)

P(µ1, µ2, …, µk) = (µ1 µ2 ⋯ µk)^{1/k}. (3.15)
Eqn. (3.14) represents a non-compensating trade-off while Eqn. (3.15) represents
an equally compensating trade-off among the various preferences.
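The two aggregation operators of Equations (3.14)-(3.15) can be written directly; the preference values below are illustrative.

```python
import math

# The two trade-off operators of Eqs. (3.14)-(3.15): non-compensating
# (min) and equally compensating (geometric mean) aggregation of
# preferences; the preference values are illustrative.
def aggregate_min(mus):
    """Eq. (3.14): the worst preference dominates (non-compensating)."""
    return min(mus)

def aggregate_geometric(mus):
    """Eq. (3.15): geometric mean, so high values partially offset low ones."""
    k = len(mus)
    return math.prod(mus) ** (1.0 / k)

mus = [0.9, 0.6, 0.8]
print(aggregate_min(mus))        # 0.6: dominated by the worst preference
print(aggregate_geometric(mus))  # ~0.756: the low value is partially offset
```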
Antonsson and Otto [5] developed a method of imprecision (MoI) in which designer preferences for design variables and performance variables are modeled in terms of membership functions over the entire design variable and performance parameter space. A design trade-off strategy is identified, and the optimal solution is obtained by maximizing the aggregate preference function for the design variables and the induced preference on the performance parameters. The vertex method can be used for computing induced preferences as well as for identifying the design variables corresponding to the maxima of the aggregate preference [37].
Jensen and Sepulveda[29] provided a fuzzy optimization methodology in which
a trust region based sequential approximate optimization framework is used for
minimizing a non-compensating aggregate preference function. The methodology
develops approximations for intermediate variables and employs the vertex method
for computing the preference functions of the approximate intermediate variables in
an inexpensive way.
3.3.4 Reliability Based Design Optimization Using Evidence Theory
Uncertainty in engineering systems can be either aleatory or epistemic. Aleatory uncertainty, also known as stochastic uncertainty, is associated with the inherent random nature of the parameters of the system. It can be described mathematically using probabilistic means. Once a probabilistic description is available for the random parameters, the risk associated with the system responses can be quantified in terms of a probability measure using appropriate methods such as FORM, SORM, HORM (higher-order reliability method), Monte Carlo simulation, etc. However, there exist cases where not all the uncertain parameters in a system can be described probabilistically. In such cases, the usual practice is to assume some distribution for the parameters for which a probabilistic description is not available and perform a probabilistic analysis. The results obtained from such an analysis can be faulty. Epistemic uncertainty, also known as subjective uncertainty, arises due to ignorance, lack of knowledge, or incomplete information. A variety of different theories exist to quantify such uncertainty. In engineering systems, epistemic uncertainty can be either parametric or model-based. Parametric uncertainty is associated with uncertain parameters for which the available information is sparse or inadequate, and which hence cannot be described probabilistically. Model-form uncertainty is associated with improper models of the system due to a lack of knowledge of the physics of the system. Model-form uncertainties also arise when variable fidelity mathematical models are employed for simulation and design.
Fuzzy sets, possibility theory, Dempster-Shafer theory, etc., provide a means for mathematically quantifying epistemic uncertainty. In this dissertation, an attempt is made to quantify uncertainty in multidisciplinary systems analysis subject to epistemic uncertainty associated with the disciplinary design tools and input parameters. Evidence theory is used to quantify uncertainty in terms of the uncertain measures of belief and plausibility.
After the epistemic uncertainty has been quantified mathematically, the designer seeks the optimum design under uncertainty. The measures of uncertainty provided by evidence theory are discontinuous functions. Such non-smooth functions cannot be used in traditional gradient-based optimizers because the sensitivities of the uncertain measures do not exist. In this research, surrogate models are used to represent the uncertain measures as continuous functions. A formal trust region managed sequential approximate optimization approach is used to drive the optimization process. The trust region is managed by a trust region ratio based on the performance of the Lagrangian, which is a penalty function of the objective and the constraints. The methodology is illustrated in application to multidisciplinary problems.
3.4 Summary
A variety of uncertainties exist during simulation based design of an engineering system. These include aleatory uncertainty, epistemic uncertainty, and errors. In general, probability theory is used to model aleatory uncertainty. Other uncertainty theories, such as Dempster-Shafer theory, fuzzy set theory, possibility theory, and convex models of uncertainty, can be used to model epistemic uncertainty. It is extremely important that the uncertainties are taken into account in design optimization. A deterministic design optimization does not account for the uncertainties. A variety of techniques have been developed in the last few decades to address this issue. These techniques include robust design, reliability based design optimization, fuzzy optimization, and so on. This dissertation mainly focuses on reliability based design optimization.
CHAPTER 4
RELIABILITY BASED DESIGN OPTIMIZATION
In this chapter, aleatory uncertainty is considered in design optimization. This is typically referred to as reliability based design optimization. Reliability based design optimization (RBDO) is a methodology for finding optimized designs that are characterized by a low probability of failure. Primarily, reliability based design optimization consists of optimizing a merit function while satisfying reliability constraints. The reliability constraints are constraints on the probability of failure corresponding to each of the failure modes of the system, or a single constraint on the system probability of failure. The probability of failure is usually estimated by performing a reliability analysis. During the last few years, a variety of different formulations have been developed for reliability based design optimization. This chapter presents RBDO formulations and research issues associated with standard methodologies.
4.1 RBDO formulations: efficiency and robustness
There are two important concepts in relation to an RBDO formulation: efficiency and robustness. An efficient formulation is one in which the solution can be obtained faster as compared to other formulations. A real engineering design problem usually consists of a large number of failure modes. Traditional RBDO formulations require the solution of nested optimization problems, which is computationally inefficient. Thus it is important that the formulation that is solved for obtaining reliable designs is computationally efficient.

Robustness, on the other hand, means that the RBDO formulation does not depend on the starting point, etc. It implies that whenever the optimizer is invoked, it will provide a local optimal solution. Some of the existing RBDO formulations are not robust in the sense that there could be designs at which the formulation may not hold. Hence it is also important that the formulation used is robust.
In the last two decades, researchers have proposed a variety of frameworks for
efficiently performing reliability based design optimization. A careful survey of the
literature reveals that the various RBDO methods can be divided into three broad
categories.
4.1.1 Double Loop Methods for RBDO
A deterministic optimization formulation does not account for the uncertainties in
the design variables and parameters. Optimized designs based on a deterministic
formulation are usually associated with a high probability of failure because of the
likely violation of certain hard constraints in service. This is particularly true if
the hard constraints are active at the deterministic optimal solution. To obtain a
reliable optimal solution, a deterministic optimization formulation is replaced with
a reliability based design optimization formulation.
Traditionally, the reliability based optimization problem has been formulated as a
double loop optimization problem. In a typical RBDO formulation, the critical hard
constraints from the deterministic formulation are replaced by reliability constraints,
as in
min f(d, p, y(d, p)) (4.1)

subject to grc(X, η) ≥ 0, (4.2)

gDj(d, p, y(d, p)) ≥ 0, j = 1, …, Nsoft, (4.3)

dl ≤ d ≤ du, (4.4)
where grc are the reliability constraints. They are either constraints on the probabilities of failure corresponding to each hard constraint, or a single constraint on the overall system probability of failure. In this dissertation, only component failure modes are considered. It should be noted that the reliability constraints depend on the random variables X and the limit state parameters η. The distribution parameters of the random variables are obtained from the design variables d and the fixed parameters p (see section 4.1.2 on reliability analysis below). grc can be formulated as
grci = Pallow,i − Pi, i = 1, …, Nhard, (4.5)

where Pi is the failure probability of the hard constraint gRi at a given design, and Pallow,i is the allowable probability of failure for this failure mode. The probability of failure is usually estimated by employing standard reliability techniques. A brief description of standard reliability methods is given in the next section. It has to be noted that the RBDO formulation given above (Equations (4.1)-(4.4)) assumes that the violation of soft constraints due to variational uncertainties is permissible and can be traded off for more reliable designs. For practical problems, design robustness, represented by the merit function and the soft constraints, could be a significant issue, one that would require the solution of a hybrid robustness and reliability based design optimization formulation.
4.1.2 Probabilistic reliability analysis
Reliability analysis is a tool to compute the reliability index or the probability of failure corresponding to a given failure mode or for the entire system [27]. The uncertainties are modeled as continuous random variables, X = (X1, X2, ..., Xn)T, with known (or assumed) joint cumulative distribution function (CDF), FX(x). The design variables, d, consist of either distribution parameters θ of the random variables X, such as means, modes, standard deviations, and coefficients of variation, or deterministic parameters, also called limit state parameters, denoted by η. The design parameters p consist of either the means, the modes, or any first order distribution quantities of certain random variables. Mathematically this can be represented by the statements

[p, d] = [θ, η], (4.6)

p is a subvector of θ. (4.7)
Random variables can be consistently denoted as X(θ), and the ith failure mode can be denoted as gRi(X, η). In the following, x denotes a realization of the random variables X, and the subscript i is dropped without loss of clarity. Letting gR(x, η) ≤ 0 represent the failure domain, and gR(x, η) = 0 be the so-called limit state function, the time-invariant probability of failure for the hard constraint is given by

P(θ, η) = ∫_{gR(x,η) ≤ 0} fX(x) dx, (4.8)
where fX(x) is the joint probability density function (PDF) of X. It is usually impossible to find an analytical expression for the above integral. In standard reliability techniques, a probability distribution transformation T : R^n → R^n is usually employed. An arbitrary n-dimensional random vector X = (X1, X2, ..., Xn)T is mapped into an independent standard normal vector U = (U1, U2, ..., Un)T. This transformation is known as the Rosenblatt transformation [58]. The standard normal random
variables are characterized by a zero mean and unit variance. The limit state function in U-space can be obtained as gR(x, η) = gR(T^{-1}(u), η) = GR(u, η) = 0. The failure domain in U-space is GR(u, η) ≤ 0. Equation (4.8) thus transforms to

P(θ, η) = ∫_{GR(u,η) ≤ 0} φU(u) du, (4.9)
where φU(u) is the standard normal density. If the limit state function in U-space is affine, i.e., if GR(u, η) = α^T u + β, then an exact result for the probability of failure is Pf = Φ(−β/||α||), where Φ(·) is the cumulative Gaussian distribution function. If the limit state function is close to being affine, i.e., if GR(u, η) ≈ α^T u + β with β = −α^T u*, where u* is the solution of the following optimization problem,

min ||u|| (4.10)

subject to GR(u, η) = 0, (4.11)

then the first order estimate of the probability of failure is Pf = Φ(−β/||α||), where α represents a normal to the manifold (4.11) at the solution point. The solution u* of the above optimization problem, the so-called design point, β-point, or most probable point (MPP) of failure, defines the reliability index βp = −α^T u*/||α||. This method of estimating the probability of failure is known as the first order reliability method (FORM) [27].
In the second order reliability method (SORM), the limit state function is approximated as a quadratic surface. A simple closed form solution for the probability computation using a second order approximation was given by Breitung [9] using the theory of asymptotic approximations as

Pf(θ, η) = ∫_{GR(u,η) ≤ 0} φU(u) du ≈ Φ(−βp) Π_{l=1}^{n−1} (1 − βp κl)^{−1/2}, (4.12)
where the κl are related to the principal curvatures of the limit state function at the minimum distance point u*, and βp is the reliability index from FORM. Breitung [9] showed that the second-order probability estimate asymptotically approaches the first order estimate as βp approaches infinity if βp κl remains constant. Tvedt [70] presented a numerical integration scheme to obtain the exact probability of failure for a general quadratic limit state function. Kiureghian et al. [32] presented a method of computing the probability of failure by fitting a parabolic approximation to the limit state surface. The probability of failure can also be computed by using importance sampling techniques [28, 21] that employ sampling around the MPP, thereby requiring fewer samples than a traditional Monte Carlo technique. The concepts of FORM and SORM are illustrated in Figure 4.1 for an example with two random variables, X1 and X2.
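Breitung's correction in Equation (4.12) is a one-line modification of the FORM estimate once βp and the curvatures κl are known. The curvature values below are illustrative, and the sign convention follows the product term in Eq. (4.12).

```python
import math
from statistics import NormalDist

# Breitung's asymptotic SORM correction of Eq. (4.12); the curvature
# sign convention follows the text, and the numbers are illustrative.
def sorm_breitung(beta_p, kappas):
    """Pf ~ Phi(-beta_p) * prod_l (1 - beta_p * kappa_l)^(-1/2)."""
    correction = math.prod((1.0 - beta_p * k) ** -0.5 for k in kappas)
    return NormalDist().cdf(-beta_p) * correction

beta_p = 3.0
print(NormalDist().cdf(-beta_p))             # FORM estimate
print(sorm_breitung(beta_p, [-0.1, -0.05]))  # curvature-corrected estimate
```

With these curvature signs the correction factor is below one, so SORM lowers the FORM estimate; curvatures of the opposite sign would raise it.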
Figure 4.1. Reliability analysis (failed and safe regions in the original space and the standard space)
The first order approximation, Pf ≈ Φ(−βp), is sufficiently accurate for most practical cases. Thus, only first order approximations of the probability of failure are used in practice. Using the FORM estimate, the reliability constraints in Equation (4.5) can be written in terms of reliability indices as

grci = βi − βreqd,i, (4.13)

where βi is the first order reliability index, and βreqd,i = −Φ^{-1}(Pallow,i) is the desired reliability index for the ith hard constraint. When the reliability constraints are formulated as given in Equation (4.13), the approach is referred to as the reliability index approach (RIA).
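The conversion between an allowable failure probability and the required reliability index, βreqd = −Φ^{-1}(Pallow), and the RIA constraint of Equation (4.13) can be sketched as follows; the numerical values are illustrative.

```python
from statistics import NormalDist

# RIA constraint of Eq. (4.13): convert an allowable failure probability
# into a required reliability index via beta_reqd = -Phi^{-1}(P_allow).
# The numerical values are illustrative only.
def ria_constraint(beta_i, p_allow):
    beta_reqd = -NormalDist().inv_cdf(p_allow)
    return beta_i - beta_reqd  # >= 0 means the design is reliable enough

# P_allow = 0.00135 corresponds to a required beta of about 3.
print(ria_constraint(beta_i=3.2, p_allow=0.00135))  # > 0: satisfied
```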
The sensitivities of the probability of failure and the reliability index with respect to the distribution parameters and limit state parameters can also be obtained. The sensitivities in FORM are given as follows:

∂β/∂θ = − [∇u GR(u*, η) / ||∇u GR(u*, η)||] ∂T(x*, θ)/∂θ (4.14)

∂β/∂η = − [1 / ||∇u GR(u*, η)||] ∂GR(u*, η)/∂η (4.15)
A big advantage of reliability analysis is that the influence of the individual uncertainties on the probability of failure can be found from the components of α*. The designer can then recommend that those uncertainties that influence the probability of failure the most be reduced through appropriate quality control measures.
The system probability of failure can be computed if the components constituting its failure are known. The system failure can either be a series event or a parallel event. In a series system, failure of any component (failure mode) corresponds to failure of the system. In a parallel system, the failure modes that constitute system failure have to be defined. In a K-out-of-N system, K component modes have to fail for the system to fail. For complex systems, fault tree diagrams are used to analyze the system failure. The system probability of failure for series or parallel systems can be bounded by using unimodal bounds or relaxed bimodal bounds. The unimodal bounds for series and parallel systems are as follows:
unimodal bounds for series and parallel system are as follows
maxi
P (GRi ≤ 0) ≤ P (∪GR
i ≤ 0) ≤n∑
i=1
P (GRi ≤ 0) (4.16)
0 ≤ P (∩GRi ≤ 0) ≤ min
iP (GR
i ≤ 0) (4.17)
In this dissertation, only series systems are considered. Moreover, the first order approximation to the probability of failure, Pf(η) ≈ Φ(−βp), is reasonably accurate for most practical cases. Thus, only first order approximations of the probability of failure will be employed.
It should be noted that the first order reliability analysis involves a probability
distribution transformation, the search for the MPP, and the evaluation of the cu-
mulative Gaussian distribution function. To solve the FORM problem (Equations
4.10-4.11), various algorithms have been reported in the literature [39]. One of the
approaches is the Hasofer-Lind and Rackwitz-Fiessler (HL-RF) algorithm that is
based on a Newton-Raphson root solving approach. Variants of the HL-RF method
exist that augment the basic scheme with additional line searches. The family of HL-RF
algorithms can exhibit poor convergence for highly nonlinear or badly scaled prob-
lems, since they are based on first order approximations of the hard constraint.
Using a sequential quadratic programming (SQP) algorithm is often a more robust
approach. The solution typically requires many system analysis evaluations. More-
over, there might be cases where the optimizer may fail to provide a solution to
the FORM problem, especially when the limit state surface is far from the origin in
U-space or when the case G^R(u, η) = 0 never occurs at a particular design variable
setting.
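For reference, the basic HL-RF update can be sketched in a few lines. The limit state and gradient below are hypothetical placeholders; a real problem would supply the transformed limit state G^R(u, η).

```python
import numpy as np

def hlrf(G, gradG, u0, tol=1e-8, max_iter=100):
    """Basic HL-RF iteration for the direct FORM problem:
    find the MPP u* minimizing ||u|| subject to G(u) = 0."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        g, dg = G(u), gradG(u)
        # Newton-type update: project onto the linearization of G at u
        u_new = ((dg @ u - g) / (dg @ dg)) * dg
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return u, np.linalg.norm(u)

# Hypothetical linear limit state G(u) = 3 - u1; the MPP is (3, 0), beta = 3
G = lambda u: 3.0 - u[0]
gradG = lambda u: np.array([-1.0, 0.0])
u_star, beta = hlrf(G, gradG, [0.0, 0.0])
```

For a linear limit state the update converges immediately; the convergence difficulties noted above arise when G is highly nonlinear or badly scaled.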
In design automation it cannot be known a priori what design points the upper
level optimizer (minimizing the merit function subject to reliability and determin-
istic constraints) will visit, therefore it is not known if the optimizer for the FORM
problem (evaluation of reliability constraints) will provide a consistent result. This
problem was addressed recently by Padmanabhan et al [48] by using a trust region
algorithm for equality constrained problems. For cases when G^R(u, η) = 0 does not
occur, the algorithm provided the best possible solution to the problem through

    min ‖u‖    (4.18)
    subject to G^R(u, η) = c.    (4.19)
The reliability constraints formulated by the RIA are therefore not robust. RIA
is usually more effective if the probabilistic constraint is violated, but it yields a
singularity if the design has zero failure probability [69]. To overcome this difficulty,
Tu et al [69] provided an improved formulation to solve the RBDO problem. In
this method, known as the performance measure approach (PMA), the reliability
constraints are stated by an inverse formulation as
    g^rc_i = G^R_i(u_i*_{β=ρ}, η),  i = 1, …, N_hard.    (4.20)

u_i*_{β=ρ} is the solution to the inverse reliability analysis (IRA) optimization problem

    min G^R_i(u, η)    (4.21)
    subject to ‖u‖ = ρ = β_reqd,i,    (4.22)

where the optimum solution u_i*_{β=ρ} corresponds to the MPP in the IRA of the ith hard
constraint. Solving RBDO by the PMA formulation is usually more efficient and robust
than the RIA formulation where the reliability is evaluated directly. The efficiency
lies in the fact that the search for the MPP of an inverse reliability problem is
easier than the search for the MPP in an actual reliability analysis
[69]. The RIA and the PMA approaches for RBDO are essentially inverses of one
another and would yield the same solution if the constraints are active at the op-
timum [69]. If the constraint on the reliability index (as in the RIA formulation)
or the constraint on the optimum value of the limit-state function (as in the PMA
formulation) is not active at the solution, the reliable solution obtained from the
two approaches might differ. In general, the RIA formulation yields a conservative
solution. Similar RBDO formulations were independently developed by other re-
searchers [53, 59, 31]. In these RBDO formulations, constraint (4.22) is considered
as an inequality constraint (‖u‖ ≤ β_reqd,i), which is a more robust way of handling
the constraint on the reliability index. The major difference lies in the fact that
in these papers semi-infinite optimization algorithms were employed to solve the
RBDO problem. Semi-infinite optimization algorithms solve the inner optimization
problem approximately. However, the overall RBDO is still a nested double-loop
optimization procedure. As mentioned earlier, such formulations are computation-
ally intensive for problems where the function evaluations are expensive. Moreover,
the formulation becomes impractical as the number of hard constraints increases,
which is often the case in real-life design problems. To alleviate the computational
cost associated with the nested formulation, sequential RBDO methods have been
developed.
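One common way to solve the inverse reliability (PMA) subproblem (4.21)-(4.22) is an advanced mean value (AMV) style iteration, which repeatedly sets u = −ρα with α the normalized gradient of the limit state. The sketch below uses a hypothetical linear limit state and is not the specific algorithm of [69].

```python
import numpy as np

def pma_amv(G, gradG, rho, n, tol=1e-8, max_iter=100):
    """AMV-type search for the inverse-reliability MPP on the sphere ||u|| = rho."""
    u = np.zeros(n)
    for _ in range(max_iter):
        dg = gradG(u)
        u_new = -rho * dg / np.linalg.norm(dg)   # step to -rho * alpha
        if np.linalg.norm(u_new - u) < tol:
            return u_new
        u = u_new
    return u

# Hypothetical limit state G(u) = 1 + u1 + 2*u2 with target index rho = 2
G = lambda u: 1.0 + u[0] + 2.0 * u[1]
gradG = lambda u: np.array([1.0, 2.0])
u_mpp = pma_amv(G, gradG, rho=2.0, n=2)
g_perf = G(u_mpp)       # the percentile performance measure g^rc
```

By construction every iterate lies on the target reliability sphere, which is part of why the inverse search is better behaved than the direct one.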
4.1.3 Sequential Methods for RBDO
The basic concept behind sequential RBDO techniques is to decouple the upper level
optimization from the reliability analysis to avoid a nested optimization problem.
In sequential RBDO methods, the main optimization and the search of the MPPs
of failure (reliability analysis) are performed separately and the procedure is repeated
until desired convergence is achieved. The idea is to find a consistent reliable de-
sign at considerably lower computational cost as compared to the nested approach.
A consistent reliable design is a feasible design that satisfies all the reliability con-
straints and other soft constraints. The reliability analysis is used to check if a given
design meets the desired reliability level. In most sequential techniques of RBDO,
a design obtained by performing a deterministic optimization is updated based on
the information obtained from the reliability analysis or by using some nonlinear
transformations, and the updated design is used as a starting point for the next
cycle.
Chen et al [13] proposed a sequential RBDO methodology for normally dis-
tributed random variables. Wang and Kodiyalam [72] generalized this methodol-
ogy for nonnormal random variables and reported enormous computational savings
when compared to the nested RBDO formulation. The methodology was extended
for multidisciplinary systems in Agarwal et al [2]. Instead of using the reliability
analysis (inverse reliability problem) to obtain the true MPP of failure, u*_{β=ρ}, Wang
and Kodiyalam [72] use the direction cosines of the probabilistic constraint at the
mean values of the random variables in the standard space, α = (∂g^R/∂u)/‖∂g^R/∂u‖,
and the target reliability index, ρ, to make an estimate of the MPP of failure, u*_{β=ρ} = −ρα
(see figure 4.2). It should be noted that the estimated MPPs lie on the target
reliability sphere. During optimization the corresponding MPP in X-space needs
to be calculated to evaluate the probabilistic performance functions. The MPP of
failure in X-space is found by mapping u∗β=ρ to the original space. If the random
variables in X-space are independent and normally distributed, then the MPP in
original space is given by x* = µ_x − u*_{β=ρ} σ_x. If the variables have a nonnormal
distribution, then the equivalent means (µ′_x) and equivalent standard deviations (σ′_x) of
an approximate normal distribution are computed and used in the above expression
to estimate the MPP in X-space [72].
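The estimate of [72] can be sketched directly; the gradient, means, and standard deviations below are hypothetical, independent normal variables are assumed, and the sign convention follows the text.

```python
import numpy as np

def estimate_mpp(grad_g_at_mean, rho, mu, sigma):
    """Approximate MPP: direction cosines at the mean give alpha,
    u* = -rho * alpha lies on the target reliability sphere, and
    x* = mu - u* * sigma maps it back to X-space."""
    alpha = grad_g_at_mean / np.linalg.norm(grad_g_at_mean)
    u_star = -rho * alpha
    x_star = mu - u_star * sigma
    return u_star, x_star

grad = np.array([3.0, 4.0])    # hypothetical dg/du at the mean values
u_s, x_s = estimate_mpp(grad, rho=2.5,
                        mu=np.array([10.0, 5.0]),
                        sigma=np.array([1.0, 0.5]))
```

Note that ‖u*‖ = ρ always holds, so the estimate meets the target index exactly even when its direction is inaccurate.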
The advantage of this methodology is that it completely eliminates the lower
level optimization for evaluating the reliability based constraints. The most probable
point (MPP) of failure for each failure driven constraint is estimated approximately.
If the limit state function is close to linear in the standard space, then the estimate
of the MPP in U-space will be accurate enough and the final solution may be close
to the actual solution. However, if the limit state function in the standard space in
sufficiently nonlinear, which is often the case in most real-life design problems, then
the MPP estimates might be extremely inaccurate, which might result in a design
which is not truly optimal. Such a design is referred to as a spurious optimal design.
Chen and Du [12] also proposed a sequential optimization and reliability assess-
ment methodology (SORA). In SORA, the boundaries of the violated constraints
(with low reliability) are shifted into the feasible direction based on the reliability
information obtained in the previous iteration. Two different formulations were
used for reliability assessment, the probability formulation (RIA) and the percentile
performance formulation (PMA). The percentile formulation was reported to be
computationally less demanding compared to the probability formulation and the
overall cost of RBDO was reported to be significantly less compared to the nested
formulation. It should be noted that in this methodology an exact first order re-
liability analysis is performed to obtain the MPP of failure for each failure driven
constraint, which was not the case in the approximate RBDO methodology of [72].
Therefore, a consistent reliable design is almost guaranteed to be obtained from this
framework. However, a true local optimum cannot be guaranteed. This is because
the MPPs of failure for the hard constraints are obtained at the previous design point.
A shift factor, si, from the mean values of the random variables is calculated and
is used to update the MPP of failure for probabilistic constraint evaluation during
the deterministic optimization phase in the next iteration, as the optimizer varies
the mean values of the random variables. This MPP update might be inaccurate
because, as the optimizer varies the design variables, the MPP of failure (and hence
the shift factor) also changes; this effect is not addressed in SORA. This might lead
to spurious optimal designs.
4.1.4 Unilevel Methods for RBDO
During the last few years, researchers in the area of structural and multidisciplinary
optimization have continuously faced the challenge of developing more efficient tech-
niques to solve the RBDO problem. As outlined before, RBDO is typically a nested
optimization problem, requiring a large number of system analysis evaluations. The
major concern in evaluating reliability constraints is the fact that the reliability
analysis methods are formulated as optimization problems [54]. To overcome this
difficulty, a unilevel formulation has been developed in Kuschel and Rackwitz [36].
In their method, the direct FORM problem (lower level optimization - Eqs. (4.10)-
(4.11) ) is replaced by the corresponding first order Karush-Kuhn-Tucker (KKT)
optimality conditions of the first order reliability problem. As mentioned earlier,
the direct FORM problem can be ill conditioned, and the same may be true for the
unilevel formulation given by Kuschel and Rackwitz [36]. The reason is that
the probabilistic hard constraints might have a zero failure probability at a partic-
ular design setting, and hence the optimizer might not converge due to the hard
constraints (which are posed as equality constraints) not being satisfied. Moreover,
the conditions under which such a replacement is equivalent to the original bi-level
formulation were not detailed in Kuschel and Rackwitz [36]. Therefore, it is not
guaranteed that their unilevel approach is mathematically equivalent to the bilevel approach. In this investigation, a
new unilevel method is being developed which enforces the constraint qualification
of the KKT conditions and avoids the singularities associated with zero probability
of failure.
4.2 Summary
It has been noted that the traditional reliability based optimization problem is a
nested optimization problem. Solving such nested optimization problems for a large
number of failure driven constraints and/or nondeterministic variables is extremely
expensive. Researchers have developed sequential approaches to speed up the opti-
mization process and to obtain a consistent reliability based design. However, these
methodologies are not guaranteed to provide the true optimal solution. A unilevel
formulation has been developed to perform the optimization and reliability eval-
uation in a single optimization. But the existing formulation does not guarantee
mathematical equivalency to the original bi-level problem.
CHAPTER 5
DECOUPLED RBDO METHODOLOGY
In this chapter, a novel decoupled reliability based design optimization (RBDO)
methodology is developed. In the proposed method, the sensitivities of the Most Probable Point
(MPP) of failure with respect to the decision variables are introduced to update the
MPPs during the deterministic optimization phase of the proposed RBDO approach.
For the test problem considered, the method not only finds the optimal solution but
it also locates the exact MPP of failure, which is important to ensure that the target
reliability index is met. The MPP update is based on the first order Taylor series
expansion around the design point from the last cycle. The MPP update is found to
be extremely accurate, especially in the vicinity of the point from the previous
cycle.
5.1 A new sequential RBDO methodology
In this investigation, a framework for efficiently performing RBDO is also developed.
As described earlier, a traditional nested RBDO formulation is extremely impracti-
cal for most real-life design problems of reasonable size (100 variables and 10 fail-
ure modes) and scope (e.g., multidisciplinary systems). Researchers have proposed
sequential RBDO approaches to speed up the optimization process and to
obtain a consistent reliability based design. These approaches are practical and at-
tractive because of the fact that a workable design can be obtained at considerably
lower computational cost. In the following, a new sequential RBDO methodology
is described in which the main optimization and reliability assessment phases are
decoupled. The sensitivities of the MPPs with respect to the design variables are
used to update them during the deterministic optimization phase. This helps provide a
good estimate of the MPP as the design variables are varied by the optimizer during
the deterministic optimization phase.
The flowchart of the proposed RBDO methodology is shown in Figure 5.1.

Figure 5.1. Proposed RBDO Methodology (Deterministic Optimization → Inverse Reliability Assessment → convergence check; if not converged, Calculate Optimal Sensitivities of MPPs and repeat; on convergence, the final design is obtained)

The methodology consists of the following steps.
1. Given an initial design d0, set the iteration counter k = 0.
2. Solve the following deterministic optimization problem, starting from design d^k, to get a new design d^(k+1):

       min_d :  f(d, p)    (5.1)
       sub to : g^R_i(x_i*, η) ≥ 0,  i = 1, …, N_hard    (5.2)
                g^D_j(d, p) ≥ 0,  j = 1, …, N_soft    (5.3)
                d_l ≤ d ≤ d_u    (5.4)

   During the first iteration (k = 0), the MPP of failure for evaluating the probabilistic constraints is set equal to the mean values of the random variables (x_i* = µ_x). It should be noted that the means of the random variables (distribution parameters) are a subset of the design variables and fixed parameters (see equation (4.6)). This corresponds to solving the deterministic optimization problem (equations (2.1)-(2.4)). From the authors' experience, it has been observed that starting from a deterministic solution results in lower computational cost for RBDO.

   In subsequent iterations (k > 0), the MPP of failure for evaluating the probabilistic constraints is obtained from a first order Taylor series expansion about the previous design point:

       u_i* = u_i*,k + (∂u_i*,k/∂d)(d − d^k),  i = 1, …, N_hard.    (5.5)

   Note that ∂u_i*,k/∂d is a matrix whose columns contain the gradients of the MPP with respect to each of the decision variables. For example, the first column of the matrix contains the gradient of the MPP vector, u_i*, with respect to the first design variable d_1. The MPPs in X-space are obtained by using the transformation.

3. At the new design d^(k+1), perform an exact first order inverse reliability analysis (equations (4.21)-(4.22)) for each hard constraint. This gives the MPP of failure of each hard constraint, u_i*,k+1.

4. Check for convergence of the design variables and the MPPs, and that the constraints are satisfied. If converged, stop. Else, go to the next step.

5. Compute the post-optimal sensitivities ∂u_i*,k/∂d for each hard constraint (i.e., how the MPP of failure will change with a change in the design variables; see the section below).

6. Set k = k + 1 and go to step 2.
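The MPP update of equation (5.5) in step 2 amounts to one matrix-vector product per hard constraint; a minimal sketch with hypothetical numbers:

```python
import numpy as np

def update_mpp(u_prev, dmpp_dd, d, d_prev):
    """First order Taylor update: u_i* ~ u_i*,k + (du_i*,k/dd)(d - d^k)."""
    step = np.asarray(d, dtype=float) - np.asarray(d_prev, dtype=float)
    return u_prev + dmpp_dd @ step

u_prev = np.array([1.0, -0.5])          # MPP from the previous cycle
dmpp_dd = np.array([[0.2, 0.0],         # column j: gradient of the MPP w.r.t. d_j
                    [0.0, -0.1]])
u_new = update_mpp(u_prev, dmpp_dd, d=[2.0, 3.0], d_prev=[1.0, 1.0])
```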
5.1.1 Sensitivity of Optimal Solution to Problem Parameters
The proposed RBDO framework requires the sensitivities of the MPPs with re-
spect to the design variables. The post-optimal sensitivities are needed to update
the MPPs based on linearization around the previous design point. The following
techniques could be used to compute the post-optimal sensitivities for the MPPs.
1. The sensitivity of the optimal solution to problem parameters can be computed by differentiating the first order Karush-Kuhn-Tucker (KKT) optimality conditions [55]. The Lagrangian L for the inverse reliability optimization problem (equations (4.21)-(4.22)) is

       L = G^R(u, η) + λ(u^T u − ρ²),    (5.6)

   where λ is the Lagrange multiplier corresponding to the equality constraint (say h^R). The first order optimality conditions for this problem are

       ∂L/∂u = ∂G^R/∂u* + 2λu* = 0.    (5.7)

   Differentiating the first order KKT optimality conditions with respect to a parameter in the vector z = [θ, η]^T, the following linear system of equations is obtained:

       [ ∂²L/∂u²   u ] [ ∂u*/∂z_l ]   [ ∂²L/∂u∂z_l ]
       [ u^T       0 ] [ ∂λ*/∂z_l ] + [ ∂h^R/∂z_l  ]  =  0.    (5.8)

   The above system needs to be solved for each parameter in the optimization problem to obtain the sensitivity of the optimal solution with respect to that parameter, ∂u*/∂z_l. In the proposed sequential RBDO framework, the system of equations needs to be solved for only those parameters that are decision variables in the upper level optimization. It should be noted that the Hessian of the limit state function needs to be computed when using this technique. If the Hessian of the limit state function is not available or is difficult to obtain, other techniques have to be used. In the present implementation, a damped BFGS update is used to obtain the second order information [42]. This method is defined by

       r_k = ψ_k y_k + (1 − ψ_k) H_k s_k,    (5.9)

   where the scalar

       ψ_k = 1                                              if s_k^T y_k ≥ 0.2 s_k^T H_k s_k,
       ψ_k = (0.8 s_k^T H_k s_k)/(s_k^T H_k s_k − s_k^T y_k)   if s_k^T y_k < 0.2 s_k^T H_k s_k,    (5.10)

   and s_k and y_k are the differences in the iterates and in the gradients, respectively, between the current and previous iterations. The Hessian update is

       H_{k+1} = H_k − (H_k s_k s_k^T H_k)/(s_k^T H_k s_k) + (r_k r_k^T)/(s_k^T r_k).    (5.11)
2. The sensitivity of the optimal solution to problem parameters can also be obtained by using finite difference techniques. These techniques can be extremely expensive as the dimension of the decision variables and the number of hard constraints increase. This is because a full optimization is required to compute the sensitivity of the MPP with respect to each decision variable, and this has to be performed for each hard constraint. However, significant computational savings can be achieved if the previous optimum MPP is used as a warm starting point to compute the change in the MPP as the design variables are perturbed.
3. Approximations to the limit state function can also be utilized to compute the sensitivity of the optimal solution to problem parameters. This technique is described below.

   (a) At a given design d^k, perform inverse reliability analysis to obtain the exact MPP, x_i*,k.

   (b) Construct a linear approximation of each hard constraint as follows:

       g̃^R_i = g^R_i(x_i*,k, η^k) + (∂g^R_i/∂x)^T |_(x_i*,k, η^k) (x − x_i*,k) + (∂g^R_i/∂η)^T |_(x_i*,k, η^k) (η − η^k)    (5.12)

   (c) Perform inverse reliability analysis over the linear approximation at perturbed values of the design variables to obtain approximate sensitivities.
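The damped BFGS update of equations (5.9)-(5.11) can be sketched as follows; the step s_k and gradient difference y_k below are hypothetical.

```python
import numpy as np

def damped_bfgs_update(H, s, y):
    """Damped BFGS Hessian update (eqs. 5.9-5.11); the damping factor psi
    keeps the updated matrix positive definite."""
    sHs = s @ H @ s
    sy = s @ y
    psi = 1.0 if sy >= 0.2 * sHs else 0.8 * sHs / (sHs - sy)       # eq. (5.10)
    r = psi * y + (1.0 - psi) * (H @ s)                            # eq. (5.9)
    Hs = H @ s
    return H - np.outer(Hs, Hs) / sHs + np.outer(r, r) / (s @ r)   # eq. (5.11)

H_new = damped_bfgs_update(np.eye(2),
                           np.array([1.0, 0.0]),   # s: difference in iterates
                           np.array([2.0, 0.5]))   # y: difference in gradients
```

By construction the update satisfies the secant-type condition H_{k+1} s_k = r_k, which reduces to the standard BFGS secant condition when ψ_k = 1.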
5.2 Test Problems
The decoupled RBDO methodology developed in this investigation is implemented
for a series of analytical, structural, and multidisciplinary design problems. The
methodology is compared to the nested RBDO approach using the PMA approach
for probabilistic constraint evaluation.
5.2.1 Short Rectangular Column
This problem has been used for testing and comparing RBDO methodologies in
Kuschel and Rackwitz [35]. The design problem is to determine the depth h and
width b of a short column with rectangular cross section with a minimal total mass
bh assuming unit mass per unit area. The uncertain vector, X = (P, M, Y ), the
stochastic parameters, and the correlations of the vector elements are listed in Table
5.1. The limit state function in terms of the random vector, X = (P,M, Y ), and
Table 5.1
STOCHASTIC PARAMETERS IN SHORT COLUMN PROBLEM

Variable              Dist.   Mean/St. dev.   Cor. P   Cor. M   Cor. Y
Axial Force (P)       N       500/100         1        0.5      0
Bending Moment (M)    N       2000/400        0.5      1        0
Yield Stress (Y)      LogN    5/0.5           0        0        1
the limit state parameters, η = (b, h) (which happen to be the same as the design
vector d in this problem), is given by

    g^R(x, η) = 1 − 4M/(b h² Y) − P²/(b h Y)².    (5.13)
The objective function is given by
f(d) = bh. (5.14)
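Transcribed as plain functions, the limit state (5.13) and objective (5.14) read:

```python
def g_short_column(P, M, Y, b, h):
    """Limit state (5.13): g^R = 1 - 4M/(b h^2 Y) - P^2/(b h Y)^2."""
    return 1.0 - 4.0 * M / (b * h**2 * Y) - P**2 / (b * h * Y)**2

def mass(b, h):
    """Objective (5.14): total mass b*h, assuming unit mass per unit area."""
    return b * h
```

At the mean values of the random variables and the optimal design reported below, g_short_column(500, 2000, 5, 8.668, 25.0) is positive, as expected for a design meeting the target reliability index.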
The depth h and the width b of the rectangular column had to satisfy 15 ≤ h ≤ 25
and 5 ≤ b ≤ 15. The allowable failure probability is 0.00621; in other words, the
reliability index for the failure mode must be greater than or equal to 2.5. The optimization
process was started from the point (u0,d0) = ((1, 1,−1), (5, 15)). Both approaches
result in an optimal solution d∗ = (8.668, 25.0). The computational effort for this
problem is compared in Table 5.2. The nested approach requires 77 evaluations
of the limit state function and 85 evaluations of its gradients as compared to 31
evaluations of the limit state function and 31 evaluations of its gradients for the
proposed framework. Therefore, it is noted that the proposed methodology for
RBDO is computationally more efficient than the traditional RBDO approach for
this particular problem. The proposed method took three cycles for convergence,
the design history for which is shown in Figure 5.2. It is observed that after the
Table 5.2
COMPUTATIONAL COMPARISON OF RESULTS (SHORT RECTANGULAR COLUMN)

Unit yield strength (R)            Weibull     50/6 ksi
Fatigue strength coefficient (A)   Lognormal   1.6323 × 10^10/0.4724 ksi
The design variables in the problem are the width (b) and depth (d) of the beam.
The objective is to minimize the weight of the beam (bd) (assuming unit weight per
unit volume) subject to the following hard constraints:

    g^R_1 = 0.3 E b³ d/900 − Q₂ ≥ 0,
    g^R_2 = A(6Q₁L/(bd²)) − N₀ ≥ 0,
    g^R_3 = Δ₀ − 4Q₂L³/(E b d³) ≥ 0,
    g^R_4 = R − 6Q₂L/(bd²) ≥ 0,

where N₀ = 2 × 10⁶, Δ₀ = 0.15″, and L = 30″. A minimum reliability index of 3
is desired for each failure mode. It is clear that the beam design problem exhibits
nonlinear limit state functions (gR1 through gR
4 ), nonnormal random variables and
multicriteria constraints.
The optimization process was started from the point, d0 = (1, 1). Both ap-
proaches result in an optimal solution d∗ = (0.2941, 4.5559). The computational
cost for the two methods is compared in terms of the total number of g-function
evaluations taken by each method. The proposed decoupled RBDO method took 238
g-function evaluations as compared to 523 evaluations by the nested RBDO method.
This does not include derivative calculations as analytical first order derivatives were
used. Therefore, it is noted that the proposed methodology is significantly more ef-
ficient compared to the traditional approach while providing the same solution.
5.2.4 Steel Column
This problem is taken from Kuschel and Rackwitz [35]. The problem is a steel
column with design vector, d = (b, d, h), where
b = mean of flange breadth,
d = mean of flange thickness, and
h = mean of height of steel profile.
The length of the steel column (s) is 7500 mm. The objective is to min-
imize the cost function, f = bd + 5h. The independent random vector, X =
(Fs, P1, P2, P3, B, D,H, F0, E), and its stochastic characteristics are given in Table
5.6.
The limit state function in terms of the random vector, X, the limit state pa-
Table 5.6
STOCHASTIC PARAMETERS IN STEEL COLUMN PROBLEM

Variable             Symbol   Distribution   Mean/Standard deviation   Unit
Yield stress         Fs       Lognormal      400/35                    MPa
Dead weight load     P1       Normal         500000/50000              N
Variable load        P2       Gumbel         600000/90000              N
Variable load        P3       Gumbel         600000/90000              N
Flange breadth       B        Lognormal      b/3                       mm
Flange thickness     D        Lognormal      d/2                       mm
Height of profile    H        Lognormal      h/5                       mm
Initial deflection   F0       Normal         30/10                     mm
Young's modulus      E        Weibull        21000/4200                MPa
rameters, η = d, is given as

    G^R(X, η) = F_s − P (1/A_s + (F_0/M_s) · ε_b/(ε_b − P)),

where

    A_s = 2BD           (area of section),
    M_s = BDH           (modulus of section),
    M_i = (1/2)BDH²     (moment of inertia),
    ε_b = π²EM_i/s²     (Euler buckling load).
The means of the flange breadth b and flange thickness d must be within the intervals
[200, 400] and [10, 30] respectively. The interval [100, 500] defines the admissible
mean height h of the T-shaped steel profile. It is required that the optimal design
satisfies a reliability level of 3.
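The limit state above can be transcribed directly; it is assumed here that P denotes the total load P1 + P2 + P3 (the text writes only P), so the function below is a sketch under that assumption.

```python
import math

def g_steel_column(Fs, P1, P2, P3, B, D, H, F0, E, s=7500.0):
    """Steel column limit state, following the definitions in the text.
    Assumption: P is the total load P1 + P2 + P3."""
    P = P1 + P2 + P3
    As = 2.0 * B * D                    # area of section
    Ms = B * D * H                      # modulus of section
    Mi = 0.5 * B * D * H**2             # moment of inertia
    eb = math.pi**2 * E * Mi / s**2     # Euler buckling load
    return Fs - P * (1.0 / As + (F0 / Ms) * eb / (eb - P))

# Evaluated at the mean values of Table 5.6 and the optimal design
# d = (200, 17.1831, 100) reported below:
g_mean = g_steel_column(400.0, 5e5, 6e5, 6e5,
                        200.0, 17.1831, 100.0, 30.0, 21000.0)
```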
Again, both the methods yield the same optimal solution, d = (200, 17.1831, 100).
The computational cost of the two approaches is compared in terms of the number
of g-function evaluations taken by each method. The proposed decoupled RBDO
methodology took 236 evaluations of the limit state function as compared to 457
evaluations taken by the nested RBDO approach. This does not include derivative
calculations as analytical first order derivatives were used. Again, it is noted that
the proposed methodology is significantly more efficient compared to the traditional
approach while providing the same solution.
5.3 Summary
A new decoupled iterative RBDO methodology is presented. The deterministic
optimization phase is separated from the reliability analysis phase. During the de-
terministic optimization phase the most probable point of failure corresponding to
each failure mode is obtained by using first order Taylor series expansion about
the design point from the previous cycle. The most probable point update during
deterministic optimization requires the sensitivities of the MPPs with respect to
the design vector. This requires the second order derivatives of the failure mode.
In this investigation, a damped BFGS update scheme is employed to compute the
second order derivatives. It is observed that the estimated most probable point
converges to the exact values in a few cycles. This implies that the Hessian up-
date scheme gives an accurate estimate of the second order information of the limit
state function. The framework is tested using a series of structural and multidisci-
plinary design problems. It is found that the proposed methodology provides the
same solution as the traditional nested optimization formulation, and is significantly
more computationally efficient. For the problems considered, the decoupled RBDO
methodology reduces the computational cost by a factor of 2 to 3 as compared to the
traditional approach.
CHAPTER 6
UNILEVEL RBDO METHODOLOGY
In this chapter, a novel unilevel formulation for reliability based design optimiza-
tion is developed. In this formulation the lower level optimization (evaluation of
reliability constraints in the double-loop formulation) is replaced by its correspond-
ing first order Karush-Kuhn-Tucker (KKT) necessary optimality conditions at the
upper level optimization. It is shown that such a replacement is computationally
equivalent to solving the original nested optimization if the lower level optimization
problem is solved by numerically satisfying the KKT conditions (which is typically
the case). Numerical studies show that the proposed formulation is numerically
robust (stable) and computationally efficient compared to the existing approaches
for reliability based design optimization.
6.1 A new unilevel RBDO methodology
The main focus of this research has been to develop a robust and efficient for-
mulation for performing RBDO. As mentioned earlier, the probabilistic constraint
specification using the performance measure approach is robust compared to the
reliability index approach. However, the methodology is still nested and is hence
expensive. In this research, the inverse reliability analysis optimization problem
is replaced by the corresponding first order necessary Karush-Kuhn-Tucker (KKT)
optimality conditions. The KKT conditions for the reliability constraints similar to
PMA (Eqns. (4.21)-(4.22)) are used. The treatment of Eqn. (4.22) is a bit subtle.
No simple modification of Eqn. (4.22) will result in an equality constraint that is
both quasiconvex and quasiconcave, which would be required for the sufficiency of
the KKT conditions. For necessity of the KKT conditions, observe that ‖u‖ − ρ is
the final design. This is expected for a more reliable structure to account for the
variation in the random variables.
The values of the hard constraints at the final design are given in Table 6.5.
It should be noted that the constraints g6, g16, g18, g20, and g22 dictate the system
failure. The reliability constraints corresponding to these constraints are the only
active constraints in the RBDO. The other hard constraints have a value greater
than zero, which means that the reliability index corresponding to those constraints
is much higher than the desired index.
The computational cost of the two methods is compared in Table 6.6. The
proposed method is observed to be twice as fast as the nested approach. Therefore,
the proposed method is not only a robust formulation for RBDO problems, but is
also computationally efficient.
6.3 Summary
A traditional RBDO methodology is very expensive for designing engineering sys-
tems. To address this issue, a new unilevel formulation for RBDO is developed.
Table 6.5
HARD CONSTRAINTS AT THE FINAL DESIGN

g_i    value at optimum
g1     0.2232
g6     4.7 × 10^−8
g14    0.1794
g16    1.1 × 10^−16
g18    1.1 × 10^−16
g20    1.1 × 10^−16
g22    3.3 × 10^−16
g28    0.3749
Table 6.6
COMPARISON OF COMPUTATIONAL COST OF RBDO METHODS
Method         SA Calls
DLM-PMA        493
Unilevel-PMA   261
The first order KKT conditions corresponding to the probabilistic constraint (as
in PMA) are enforced directly at the system level optimizer, thus eliminating the
lower level optimizations used to compute the probabilistic constraints. The pro-
posed formulation is solved in an augmented design space that consists of the original
decision variables, the MPP of failure corresponding to each failure driven mode,
and Lagrange multipliers. It is mathematically equivalent to the original nested op-
timization formulation if the inner optimization problem is solved by satisfying the
KKT conditions (which is effectively what most numerical optimization algorithms
do). Under mild pseudoconvexity assumptions on the hard constraints, the proposed
formulation is mathematically equivalent to the original nested formulation. The
method is tested using a simple analytical problem and a multidisciplinary struc-
tures control problem, and is observed to be numerically robust and computationally
efficient compared to the existing approaches for RBDO.
It is noted that the proposed formulation for RBDO is accompanied by a large
number of equality constraints. Most commercial optimizers exhibit numerical insta-
bility or show poor convergence behavior for problems with large numbers of equality
constraints. In the next chapter, continuation methods have been employed to solve
the unilevel RBDO problem.
CHAPTER 7
CONTINUATION METHODS IN OPTIMIZATION
In the unilevel formulation developed in the last chapter, the KKT conditions of
the inner optimization for each probabilistic constraint evaluation are imposed at
the system level as equality constraints. Most commercial optimizers are usually
numerically unstable when applied to problems accompanied by many equality con-
straints. In this chapter, continuation methods are used for constraint relaxation
and to obtain a simpler problem for which the solution is known. A series of opti-
mization problem are then solved as the relaxed optimization problem approaches
the original problem.
7.1 Proposed Algorithm
Since the problem of interest is accompanied by a large number of equality con-
straints, it is extremely important that the constraint relaxation technique makes it
easy to identify an initial feasible starting point. In this investigation,
continuation methods have been used for this purpose [73]. Homotopy (continuation)
techniques have been shown to be extremely robust in the work of Perez et al. [50].
The constraint relaxation used in this investigation is of the following form:
gr ≡ g + (1− τ)b ≥ 0, (7.1)
hr ≡ h + (1− τ)c = 0, (7.2)
where g and h are generic inequality and equality constraints, and gr and hr are
their relaxed counterparts. The constants b and c are chosen to make the relaxed
constraints feasible at the initial design. For the inequalities, b is based on the value
of g: if g ≥ 0, b is set to zero; if g < 0, b is set to the negative of g. Similarly,
for the equalities, the constant c is evaluated to satisfy the relaxed equality at the
initial design; in the current studies it is set to the negative of the value of h. The
homotopy parameter τ drives the relaxed constraints to the original constraints as
it is increased from 0 to 1.
After each cycle, the homotopy parameter τ is updated; in this investigation, it
is incremented by a constant value. The homotopy parameter thus allows a sequence
of easier problems to be solved, each starting from a known solution. As the
parameter is increased from 0 to 1, the solution of the original problem is recovered.
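The relaxation and update loop above can be sketched in a few lines. The following is a minimal illustration on a two-variable toy problem (not one of the dissertation's test problems); a quadratic-penalty gradient descent stands in for the commercial optimizer, and the penalty weight, step size, and fixed τ increment of 0.25 are illustrative choices.

```python
# Homotopy constraint relaxation (Eqs. 7.1-7.2) on a toy problem:
#   minimize f(x) = x1^2 + x2^2   subject to   h(x) = x1 + x2 - 2 = 0.
# The starting point (0, 0) violates h, so c = -h(x0) makes the relaxed
# constraint h_r = h + (1 - tau)*c hold exactly at tau = 0.

def f(x):
    return x[0]**2 + x[1]**2

def h(x):
    return x[0] + x[1] - 2.0

def solve_relaxed(x, tau, c, mu=100.0, lr=1e-3, iters=20000):
    """Minimize f + mu*h_r^2 by gradient descent from the warm start x
    (a stand-in for the optimizer used in the dissertation)."""
    for _ in range(iters):
        hr = h(x) + (1.0 - tau) * c
        grad = [2.0*x[0] + 2.0*mu*hr,   # h has unit partial derivatives
                2.0*x[1] + 2.0*mu*hr]
        x = [x[0] - lr*grad[0], x[1] - lr*grad[1]]
    return x

x = [0.0, 0.0]
c = -h(x)                  # c = 2: relaxed equality is feasible at the start
for k in range(5):         # tau incremented by a fixed value (0.25)
    tau = k / 4.0
    x = solve_relaxed(x, tau, c)
    print(f"tau={tau:.2f}  x=({x[0]:.3f}, {x[1]:.3f})  f={f(x):.3f}")
# As tau -> 1, the relaxed problem approaches the original one, and the
# iterates approach the true optimum (1, 1) up to the O(1/mu) penalty bias.
```

Each cycle warm starts from the previous solution, which is the practical benefit of the continuation: no cycle has to solve a badly infeasible problem from scratch.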
7.2 Test Problems
The proposed algorithm is implemented for a short rectangular column design prob-
lem and a steel column design problem. Both the test problems are taken from the
literature and have been used to test RBDO methodologies.
7.2.1 Short Rectangular Column
The design problem is to determine the depth h and width b of a short column
of rectangular cross section that minimize the total mass bh, assuming unit mass
per unit area. The uncertain vector, X = (P, M, Y), its stochastic parameters,
and the correlations of the vector elements are listed in Table 7.1. The limit
Table 7.1
STOCHASTIC PARAMETERS IN SHORT COLUMN PROBLEM

Variable            Dist.  Mean/St. dev.  Cor. P  Cor. M  Cor. Y
Axial force (P)     N      500/100        1       0.5     0
Bending moment (M)  N      2000/400       0.5     1       0
Yield stress (Y)    LogN   5/0.5          0       0       1
state function in terms of the random vector, X = (P, M, Y), and the limit state
parameters, η = (b, h) (which happen to be the same as the design vector d in this
problem), is given by

gR(X, η) = 1 − 4M/(bh²Y) − P²/(bhY)². (7.3)
The objective function is given by
f(d) = bh. (7.4)
The depth h and the width b of the rectangular column must satisfy 15 ≤ h ≤ 25
and 5 ≤ b ≤ 15. The allowable failure probability is 0.00621; in other words, the
reliability index for the failure mode must be greater than or equal to 2.5. The
optimization process was started from the point (u0, d0, λ0) = ((1, 1, −1), (5, 15), 0.1).
The optimal solution for this problem is d∗ = (8.668, 25.0).
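The limit state and the objective are simple enough to code directly. The sketch below evaluates them at the mean values of the random variables and at the reported optimum; this is only a deterministic sanity check, not the FORM/PMA reliability analysis the chapter actually performs.

```python
# Limit state (Eq. 7.3) and objective (Eq. 7.4) for the short column.
# Evaluating g_R at the mean values of P, M, Y is only a deterministic
# sanity check; the RBDO run itself enforces beta >= 2.5 through FORM/PMA.

def g_R(P, M, Y, b, h):
    """Limit state of Eq. 7.3; failure corresponds to g_R < 0."""
    return 1.0 - 4.0*M/(b * h**2 * Y) - (P/(b*h*Y))**2

def mass(b, h):
    """Objective of Eq. 7.4: cross-sectional mass b*h."""
    return b * h

# Means from Table 7.1 and the reported optimum d* = (8.668, 25.0)
b_opt, h_opt = 8.668, 25.0
print(mass(b_opt, h_opt))                      # ≈ 216.7
print(g_R(500.0, 2000.0, 5.0, b_opt, h_opt))   # ≈ 0.49 (> 0: safe at the means)
```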
Figure 7.1 shows the history of the objective function. It is noted that as
the value of the homotopy parameter τ increases from 0 to 1, the objective function
gradually approaches the optimal solution.
Figure 7.2 shows the history of the augmented design variables. Observe that
the variables gradually approach the optimal solution of the original problem. The
homotopy parameter τ controls the progress of the optimization process. For highly
nonlinear problems, it might be difficult to locate the solution directly. The use
Figure 7.1. Convergence of objective function.
of the homotopy parameter allows the optimization to start from a known solution
and make gradual progress towards the optimal solution.
7.2.2 Steel Column
The problem is a steel column with design vector, d = (b, d, h), where
b = mean of flange breadth,
d = mean of flange thickness, and
h = mean of height of steel profile.
The length of the steel column (s) is 7500 mm. The objective is to minimize
the cost function, f = bd + 5h. The independent random vector, X =
(Fs, P1, P2, P3, B, D,H, F0, E), and its stochastic characteristics are given in Table
7.2.
The limit state function in terms of the random vector, X, the limit state pa-
Figure 7.2. Convergence of optimization variables.
rameters, η = d, is given as

GR(X, η) = Fs − P (1/As + (F0/Ms) · εb/(εb − P)),

where P = P1 + P2 + P3 denotes the total load, and

As = 2BD (area of section),
Ms = BDH (modulus of section),
Mi = (1/2)BDH² (moment of inertia),
εb = π²EMi/s² (Euler buckling load).
The means of the flange breadth b and flange thickness d must be within the intervals
[200, 400] and [10, 30] respectively. The interval [100, 500] defines the admissible
mean height h of the T-shaped steel profile. It is required that the optimal design
satisfies a reliability level of 3.
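A corresponding sketch of the steel column limit state follows, with the intermediate section properties as comments. The total load is taken as P = P1 + P2 + P3, which is an assumption based on the three load variables of Table 7.2, and the evaluation at the mean values is again only a deterministic sanity check, not the reliability analysis itself.

```python
import math

# Limit state for the steel column, using the section properties
# defined in the text. Assumption: total load P = P1 + P2 + P3.

def G_R(Fs, P1, P2, P3, B, D, H, F0, E, s=7500.0):
    P = P1 + P2 + P3
    As = 2.0 * B * D                     # area of section
    Ms = B * D * H                       # modulus of section
    Mi = 0.5 * B * D * H**2              # moment of inertia
    eb = math.pi**2 * E * Mi / s**2      # Euler buckling load
    return Fs - P * (1.0/As + (F0/Ms) * eb/(eb - P))

def cost(b, d, h):
    """Objective f = b*d + 5*h."""
    return b*d + 5.0*h

# Mean values (Table 7.2) at the reported optimum d* = (200, 17.1831, 100)
b, d, h = 200.0, 17.1831, 100.0
print(cost(b, d, h))                                          # ≈ 3936.6
print(G_R(400.0, 5.0e5, 6.0e5, 6.0e5, b, d, h, 30.0, 21000.0))  # ≈ 158.4
```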
The optimal solution for this problem is d = (200, 17.1831, 100). Similar con-
vergence history was observed for this test problem as well. Figure 7.3 shows the
Table 7.2
STOCHASTIC PARAMETERS IN STEEL COLUMN PROBLEM
Variable            Symbol  Distribution  Mean/Standard deviation  Unit
Yield stress        Fs      Lognormal     400/35                   MPa
Dead weight load    P1      Normal        500000/50000             N
Variable load       P2      Gumbel        600000/90000             N
Variable load       P3      Gumbel        600000/90000             N
Flange breadth      B       Lognormal     b/3                      mm
Flange thickness    D       Lognormal     d/2                      mm
Height of profile   H       Lognormal     h/5                      mm
Initial deflection  F0      Normal        30/10                    mm
Young's modulus     E       Weibull       21000/4200               MPa
convergence of the objective function. Again it is observed that the homotopy
parameter controls the progress of the optimization process.
7.3 Summary
An optimization methodology for reliability based design is presented. From the
author's experience, the unilevel formulation for RBDO, when directly coupled with an
optimizer, may have convergence difficulties if there are many equality constraints
or if the problem is very nonlinear. Since the unilevel formulation is usually ac-
companied by many equality constraints, homotopy techniques are used to relax
the constraints and identify a starting point that is feasible with respect to the
relaxed constraints. In this investigation, the homotopy parameter is incremented
by a fixed value. A series of optimization problems are solved for various values of
the homotopy parameter as the relaxed problem approaches the original problem.
It is realized that it is easier to solve the relaxed problem from a known solution
and make gradual progress towards the solution than solve the problem directly.
Figure 7.3. Convergence of objective function.
The proposed strategy is tested on two design problems. It is observed that the
homotopy parameter controls the progress made in each cycle of the optimization
process. As the homotopy parameter approaches the value of 1, the optimal solution
to the original problem is obtained.
CHAPTER 8
RELIABILITY BASED DESIGN OPTIMIZATION UNDER EPISTEMIC
UNCERTAINTY
Advances in computational performance have led to the development of large-scale
simulation tools for design. Systems generated using such simulation tools can fail in
service if the uncertainty of the simulation tool’s performance predictions is not ac-
counted for. In this chapter, an investigation of how uncertainty can be quantified in
multidisciplinary systems analysis subject to epistemic uncertainty associated with
the disciplinary design tools and input parameters is undertaken. Evidence theory
is used to quantify uncertainty in terms of the uncertain measures of belief and
plausibility. To illustrate the methodology, multidisciplinary analysis problems are
introduced as an extension to the epistemic uncertainty challenge problems identified
by Sandia National Laboratories.
After uncertainty has been characterized mathematically the designer seeks the
optimum design under uncertainty. The measures of uncertainty provided by ev-
idence theory are discontinuous functions. Such non-smooth functions cannot be
used in traditional gradient-based optimizers because the sensitivities of the uncertain
measures do not exist. In this investigation, surrogate models are used to
represent the uncertain measures as continuous functions. A sequential approximate
optimization approach is used to drive the optimization process. The methodology
is illustrated in application to multidisciplinary example problems.
8.1 Epistemic uncertainty quantification
In this section, a brief description of uncertainty quantification in multidisciplinary
system analysis is given. The nature of a multidisciplinary system was explained in
detail in chapter 3 along with the associated uncertainties. A simplified model is
shown again in Figure 8.1.
Figure 8.1. Simplified Multidisciplinary Model
There are two subsystems, CA1 and CA2, interacting with each other. The
subsystem simulation models carry model form uncertainties δ1 and δ2, respectively.
There are also uncertain parameters p and q, which are inputs to CA1 and CA2
respectively. Model form uncertainty and parametric uncertainty are known within
intervals with prescribed BPAs (basic probability assignments), as shown in Figure 8.2
for a variable a.
In general, the intervals for an unknown parameter can be nested or non-nested
and they are usually obtained by experimental means or from expert opinion. Ex-
Figure 8.2. Known BPA structure
pert knowledge is primarily intuitive. In a given situation an expert makes an
intuitive decision based on judgement and experience and the context of the prob-
lem, and the decision has a high probability of being correct. The intervals for
the model form uncertainty (δi) usually represent the difference in the value that a
given mathematical model predicts to that of the actual system. The BPA for it
reflects the degree of belief of an expert on the uncertainty of the values given by
the mathematical model. Given this information, our objective is to determine the
belief and plausibility of y ≥ yreqd. The algorithmic steps are outlined below.
(1) Combine the evidence obtained from different sources for each uncertain
variable and for each model form uncertainty (e.g., p, q, δ1, and δ2). Dempster's rule
of combination is employed in this investigation.
(2) Determine the BPA for all the possible sets of the uncertain variables and model
uncertainties. For example, if p is given by 2 intervals, q is given by 3 intervals, δ1 is
given by 3 intervals, and δ2 is given by 2 intervals, the number of possible combinations
of the intervals is the product 2 × 3 × 3 × 2 = 36. The BPA for each
combination is simply the product of the BPAs of the constituent intervals, assuming
the sources of evidence to be independent.
(3) Propagate each set (C) (e.g., Cijkl = [pi, qj, δ1k, δ2l] where i,j,k,l are the indices
for the intervals of p, q, δ1, δ2 respectively) through the system analysis (Figure 8.1)
and obtain the bounds for the states y for each C. This is performed for the given
design x.
(4) Determine the Belief and Plausibility of y ≥ yreqd using Equations 1 and 2
respectively.
The above steps are now illustrated with an example problem. Researchers at
Sandia National Laboratories have identified a suite of epistemic uncertainty chal-
lenge problems [46], and one of the problems is adopted here for illustration purposes.
The mathematical model is given by the following equation:

y = (a + b)^a (8.1)
where a and b are the input uncertain variables and y is the output response of
the model. The available evidence for a and b is assumed in order to illustrate the
calculation of belief and plausibility. The information from the first expert is given
as intervals with their BPAs.
From expert 1, for variable a
a11=[0.6,1], a12=[1,2], a13=[2,3]
m(a11)=0.3, m(a12)=0.6, m(a13)=0.1
From expert 1, for variable b
b11=[1.2,1.6], b12=[1.6,2], b13=[2,3]
m(b11)=0.3, m(b12)=0.3, m(b13)=0.4
where aij and bij refer to the jth proposition from the ith expert for variables a and b,
respectively.
Similarly, from expert 2, for variable a
a21=[0.6,3], a22=[1,2]
m(a21)=0.6, m(a22)=0.4
From expert 2, for variable b
b21=[1.2,2], b22=[2,3]
m(b21)=0.8, m(b22)=0.2
Step 1 : Since the evidence for a and b comes from two sources, use Dempster’s
rule of combination (Equation 3.12) to combine them. The combined evidence is as
follows.
ac1=[0.6,1], ac2=[1,2], ac3=[2,3]
m(ac1)=0.2143, m(ac2)=0.7143, m(ac3)=0.0714
bc1=[1.2,1.6], bc2=[1.6,2], bc3=[2,3]
m(bc1)=0.4286, m(bc2)=0.4286, m(bc3)=0.1429
where the subscript c refers to the combined evidence.
Step 2 : Using the combined evidence for a and b, obtain all possible sets of the
intervals and their BPAs. This is shown below.
Cij = [aci, bcj]
mc(Cij) = m(aci)m(bcj)
Step 3 : Obtain the lower and upper bounds for the system response y corre-
sponding to each set Cij. Since we have assumed monotonicity of y in Cij, Equa-
tion 8.1 is evaluated only at the vertices of Cij. For example, for C11 = [ac1, bc1] =
[[0.6, 1], [1.2, 1.6]], the function is evaluated at points [0.6, 1.2], [0.6, 1.6], [1, 1.2] and
[1, 1.6] to obtain the bounds for the state y. Note that the state y given by equation
8.1 is indeed monotonic in the intervals chosen for the variables a and b.
Bounds on y for each set and the corresponding BPAs:

Set   mc(Cij)   Lower Bound   Upper Bound
C11 0.0918 1.4229 2.6
C12 0.0918 1.6049 3
C13 0.0306 1.7741 4
C21 0.3061 2.2000 12.96
C22 0.3061 2.6000 16
C23 0.1020 3.0000 25
C31 0.0306 10.2400 97.336
C32 0.0306 12.9600 125
C33 0.0102 16.0000 216
Step 4 : Compute belief and plausibility of y ≥ yreqd. Belief and plausibility
plots are shown in Figure 8.3 as a function of yreqd.
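The four steps, applied to the y = (a + b)^a example, can be reproduced in a short script. One modeling choice is made explicit here: zero-width intersections such as [1, 2] ∩ [2, 3] = {2} are treated as conflict in Dempster's rule, an assumption chosen because it matches the combined BPAs quoted above.

```python
import itertools

def dempster(m1, m2):
    """Combine two BPAs given as {(lo, hi): mass} dicts over intervals.
    Point intersections are counted as conflict (an assumption)."""
    combined, conflict = {}, 0.0
    for (l1, u1), p1 in m1.items():
        for (l2, u2), p2 in m2.items():
            lo, hi = max(l1, l2), min(u1, u2)
            if lo < hi:
                combined[(lo, hi)] = combined.get((lo, hi), 0.0) + p1*p2
            else:
                conflict += p1*p2
    return {iv: p/(1.0 - conflict) for iv, p in combined.items()}

# Step 1: combine the two experts' evidence for a and for b
m_a = dempster({(0.6, 1): 0.3, (1, 2): 0.6, (2, 3): 0.1},
               {(0.6, 3): 0.6, (1, 2): 0.4})
m_b = dempster({(1.2, 1.6): 0.3, (1.6, 2): 0.3, (2, 3): 0.4},
               {(1.2, 2): 0.8, (2, 3): 0.2})

def y(a, b):
    return (a + b)**a

# Steps 2-3: joint BPAs are products; y is monotonic, so the bounds for
# each joint set come from evaluating y at the four vertices of the cell.
cells = []
for (al, au), pa in m_a.items():
    for (bl, bu), pb in m_b.items():
        vals = [y(a, b) for a, b in itertools.product((al, au), (bl, bu))]
        cells.append((pa*pb, min(vals), max(vals)))

# Step 4: belief sums cells entirely inside {y >= y_reqd}; plausibility
# sums cells that merely intersect it.
def belief(y_reqd):
    return sum(p for p, lo, hi in cells if lo >= y_reqd)

def plausibility(y_reqd):
    return sum(p for p, lo, hi in cells if hi >= y_reqd)

print(round(belief(2.0), 4), round(plausibility(2.0), 4))  # → 0.7857 1.0
```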
The propagation of C through the system analysis requires some discussion.
In general, a global optimization problem needs to be solved to determine exact
bounds for the states corresponding to each C. Examples of such techniques are
genetic algorithms and branch and bound algorithms. In our research, we have
assumed that the state information is monotonic in each C. Hence, the system
analysis (considered to be expensive) is evaluated only at the vertices of the set
C. Using this information, the belief and plausibility are easily determined. The
examples used in this investigation are monotonic in the space of uncertain variables.
However, if nothing is known about the behavior of the states with respect to the
uncertain variables, we must use discretization or global optimization techniques to
Figure 8.3. Complementary Cumulative Belief and Plausibility Function
find exact bounds for the state variables corresponding to each C.
This example illustrates how evidence theory can be used to characterize epis-
temic uncertainty. Our goal in this research is to use evidence theory to characterize
uncertainty in multidisciplinary systems and to optimize these systems under uncer-
tainty. In the next section, we briefly discuss the deterministic and non-deterministic
forms of the optimization problem.
8.2 Deterministic Optimization
Deterministic design optimization treats the design variables, parameters, and
responses as quantities that can be defined precisely. A conventional deterministic
optimization problem is of the following form.
minimize : f(x) (8.2)
subject to : g(x) ≥ 0 (8.3)
xmin ≤ x ≤ xmax (8.4)
where f is the objective function to be minimized and is subject to the inequality
constraints g and variable bounds xmin and xmax.
8.3 Optimization under epistemic uncertainty
Designs optimized without considering the uncertainty in the design variables,
parameters, and simulation based design tools are unreliable and can be subject
to failure in service. Therefore, it is imperative that designers consider the
reliability of the resulting designs. Reliability based design optimization (RBDO)
methods employ numerical optimization to obtain optimal designs that are reliable.
In RBDO, the constraints for an optimization problem are formulated in terms of
the probability of failure (or reliability indices) corresponding to the failure modes
(also known as limit state functions)[54, 14, 15, 59, 1] . Such an analysis requires
the exact values of the distribution parameters (means and variances) and the exact
form of the distribution functions of the random variables. RBDO methods make
use of non-deterministic formulations for the constraints and therefore are in the
class of non-deterministic optimization problems.
It is not always possible to have all the information for the uncertain variables.
In such cases, the unknown information for the uncertain variables is normally
assumed. Systems obtained by RBDO under such critical assumptions may
fail in practice. Therefore, new methods must be developed for optimization under
uncertainty when the information available for the uncertain parameters is sparse.
A typical non-deterministic optimization problem is of the following form.
represent constraints on mission requirements, available technologies, and aircraft
class regulations.
The original deterministic design optimization problem has ten design variables
and five parameters. The design of the system is decomposed into three contributing
analyses. This problem has been modified by Tappeta [67] to fit the framework
of multiobjective coupled MDO systems (seven design variables and eight design
parameters). The problem has also been modified by Gu et al [24] to illustrate the
methodology of decision based collaborative optimization. It is further modified in
this research to be suitable for optimization under uncertainty (OUU) using evidence
theory. The modified problem (Tappeta [67]) is referred to as the ACS problem from
here on and is described next. The OUU version of the ACS problem will be given
following this description.
The ACS problem has three disciplines as shown in Figure 8.8. They are
aerodynamics (CA1), weight (CA2), and performance (CA3). The dependency di-
agram indicates that there are two feed-forwards and no feed-backs between the
disciplines. However, CA2 is internally coupled. The design variables and the cor-
Figure 8.8. Aircraft Concept Sizing Problem
responding bounds are listed in Table 8.2. Note that there are five shared design
Table 8.2
DESIGN VARIABLES IN ACS PROBLEM
     Description (Unit)            Lower Bound   Upper Bound
x1   aspect ratio of the wing      5             9
x2   wing area (ft2)               100           300
x3   fuselage length (ft)          20            30
x4   fuselage diameter (ft)        4             5
x5   density of air at cruise      0.0017        0.002378
variables (x1–x4 and x7). Table 8.3 lists the design parameters and their values.
Table 8.4 provides a brief description of the various states and their relations with
each discipline. The state y2 is coupled with respect to CA1 and CA3 and the
state y4 is coupled with respect to CA2 and CA3.
The objective in the ACS problem is to determine the least gross take-off weight
Table 8.3
LIST OF PARAMETERS IN THE ACS PROBLEM
     Name    Description                            Value
p1   Npass   number of passengers                   2
p2   Nen     number of engines                      1
p3   Wen     engine weight (lbs)                    197
p4   Wpay    payload weight (lbs)                   398
p5   Nzult   ultimate load factor                   5.7
p6   Eta     propeller efficiency                   0.85
p7   c       specific fuel consumption (lbs/hr/hp)  0.4495
p8   Clmax   maximum lift coeff. of the wing        1.7
within the bounded design space subject to two performance constraints. The con-
straints are on the range and stall speed of the aircraft. The deterministic optimiza-
tion problem is as follows.
minimize : F = Weight = y4 (8.27)
subject to : g1 = 1 − y6/Vstallreq ≥ 0 (8.28)
g2 = 1 − Rangereq/y5 ≥ 0 (8.29)
Vstallreq = 70 ft/sec (8.30)
Rangereq = 560 miles (8.31)
The problem described above has been modified slightly to be suitable for design
optimization under uncertainty using evidence theory. The parameters p3 and p4
listed in Table 8.3 are considered uncertain. The information on p3 and p4 is assumed
to be obtained through expert opinion. The expert's information is given as intervals
with a corresponding BPA for each interval, as shown in Figure 8.9.
Table 8.4
LIST OF STATES IN THE ACS PROBLEM
     Description (Unit)                 Output From   Input To
y1   total aircraft wetted area (ft2)   CA1
y2   maximum lift to drag ratio         CA1           CA3
y3   empty weight (lbs)                 CA2
y4   gross take-off weight (lbs)        CA2           CA3
y5   aircraft range (miles)             CA3
y6   stall speed (ft/sec)               CA3
For p3, the intervals [180,190], [190,200], [200,210], and [210,220] carry BPAs 0.2,
0.4, 0.3, and 0.1; for p4, the intervals [360,380], [380,400], [400,420], and [420,440]
carry BPAs 0.1, 0.2, 0.5, and 0.2.
Figure 8.9. Expert Opinion for p3 and p4
Since the information on the parameters p3 and p4 is not certain, the determin-
istic optimization problem (Equations 8.27-8.29) is modified as follows.
minimize : F = Weight = y4 (8.32)
subject to : G1 = UM(1 − y6/Vstallreq ≥ 0) − UMreqd ≥ 0 (8.33)
G2 = UM(1 − Rangereq/y5 ≥ 0) − UMreqd ≥ 0 (8.34)
The objective function is calculated assuming the values of p3 and p4 as given
in Table 8.3. The uncertain measure used is belief (Bel). The minimum value of
required belief is taken as 0.98. Figure 8.10 shows the convergence of the objective
function. Observe that after a few iterations the design is close to the optimal
solution; however, many more iterations are required to meet the convergence criteria.
Figure 8.10. Convergence of the Objective Function (ACS Problem)
The deterministic optimum and the optimal solution under uncertainty are com-
pared in Table 8.5. Note that the starting point is infeasible, i.e., y6 has a value
greater than 70, making g1 infeasible. Observe that the objective function
(y4) has increased compared to the deterministic optimum. This is an expected trade-
off for a more reliable design: the value of y5 has increased and the value of y6 has
decreased compared to the deterministic optimum, moving the design into the
feasible region and ensuring the required belief measure.
In this investigation, an approach for performing design optimization under epis-
temic uncertainty is presented. Dempster-Shafer theory (evidence theory) has been
used to model the epistemic uncertainty arising due to incomplete information or
the lack of knowledge. The constraints posed in the design optimization problem
are evaluated using uncertain measures provided by evidence theory. The belief
measure is used in this research to formulate non-deterministic constraints. Since
the belief functions are discontinuous, a formal trust region managed sequential ap-
proximate optimization approach based on the Lagrangian is employed to drive the
design optimization. The trust region is managed by a trust region ratio based on
the performance of the Lagrangian. The Lagrangian is a penalty function of the
objective and the constraints. The framework is illustrated with multidisciplinary
test problems. The strength of the investigation is the use of evidence theory for
optimization under epistemic uncertainty. As a byproduct it also shows that sequen-
tial approximate optimization approaches can be used for handling discontinuous
constraints and obtaining improved designs.
CHAPTER 9
CONCLUSIONS AND FUTURE WORK
This chapter presents an overview and general conclusions related to the work devel-
oped in this dissertation. The general topic of research is developing novel reliability
based design optimization (RBDO) methodologies. In traditional RBDO, the un-
certainties are modelled using probability theory. In chapters 5 and 6, two different
methodologies for performing traditional RBDO were developed. Uncertainties in
the form of aleatory uncertainty were treated in design optimization to obtain op-
timal designs characterized by a low probability of failure. The main objective
was to reduce the computational cost associated with existing nested methodology
for RBDO. Both the methodologies were tested on engineering design problems of
reasonable size and scope. An optimization methodology based on continuation
techniques was developed for solving the unilevel RBDO methodology in chapter 7.
A second focus in this dissertation was to develop a methodology for performing op-
timization under epistemic uncertainty. Epistemic uncertainty, by its very nature, is
difficult to characterize using standard probabilistic means. Dempster-Shafer theory
was used to quantify epistemic uncertainty in chapter 8. A trust region managed
sequential approximate optimization (SAO) framework was proposed to perform
optimization under epistemic uncertainty.
9.1 Summary and conclusions
9.1.1 Decoupled methodology for reliability based design optimization
In chapter 5, a decoupled methodology for probabilistic design optimization is de-
veloped. Traditionally, RBDO formulations involve nested optimization making it
computationally intensive. The basic idea is to separate the main optimization phase
(optimizing an objective subject to constraints on performances) from the reliability
calculations (compute the performance that meets a given reliability requirement).
A methodology based on this paradigm is developed. During the deterministic
optimization phase, information on the most probable point (MPP) of failure corre-
sponding to each failure mode is required to calculate the performance constraints.
The most probable point of failure corresponding to each failure mode is obtained
by using the first order Taylor series expansion about the design point from the
previous cycle. This MPP update strategy during the deterministic optimization
phase requires the sensitivities of the MPP with respect to the design vector. In
practice, this requires the second order derivatives of the failure mode. In the current
implementation, a damped BFGS update scheme is employed to approximate the second
order derivatives. The framework is tested using a series of structural and multidis-
ciplinary design problems taken from the literature. For the problems considered, it
is observed that the estimated most probable point converges to the exact values in
3-4 cycles. It is found that the proposed methodology provides the same solution as
the traditional nested optimization formulation, and is computationally 2-3 times
more efficient.
This methodology has its advantages and disadvantages. The major advantage is
the fact that a workable reliable design can be obtained with significantly less compu-
tational effort. The calculations in the main optimization phase and the reliability
calculation phase can be solved independently, with different optimizers. By using
higher order reliability calculation techniques (SORM, MCS, etc), the methodology
has the potential to give optimal designs with high reliability. The major limitation
of the methodology is that it is not provably convergent. However, the problems
on which the methodology was tested were nonlinear and the MPPs obtained were
exact, thus showing its potential.
9.1.2 Unilevel methodology for reliability based design optimization
In chapter 6, a new unilevel formulation for RBDO is developed. As mentioned
before, traditional RBDO involves nested optimization, making it computationally
intensive. In the proposed unilevel RBDO formulation, the first order KKT con-
ditions corresponding to each probabilistic constraint (as in PMA) are enforced
directly at the system level optimizer, thus eliminating the lower level optimizations
used to compute the probabilistic constraints. The proposed formulation provides
improved robustness and provable convergence as compared to a unilevel variant
given by Kuschel and Rackwitz [36]. The formulation given by Kuschel and Rack-
witz [36] replaces the direct first order reliability method (FORM) problems (lower
level optimization in the reliability index method (RIA)) by their first order nec-
essary KKT optimality conditions. The FORM problem in RIA is numerically ill
conditioned [69]; the same is true for the formulation given by Kuschel and Rack-
witz [36]. It was shown in Tu et al [69] that PMA is numerically robust in terms of
probabilistic constraint evaluation and is therefore used in this investigation. The
proposed formulation is solved in an augmented design space that consists of the
original decision variables, the MPP of failure corresponding to each failure mode,
and the Lagrange multipliers corresponding to each lower level optimization.
It is computationally equivalent to the original nested optimization formulation if
the lower level optimization problem is solved by satisfying the KKT conditions
(which is effectively what most numerical optimization algorithms do). It is proved
that under mild pseudoconvexity assumptions on the hard constraints, the proposed
formulation is mathematically equivalent to the original nested formulation. The
method is tested using a simple analytical problem and a multidisciplinary struc-
tures control problem, and is observed to be numerically robust and computationally
efficient compared to the existing approaches for RBDO.
One of the major advantages of this methodology is the fact that the RBDO prob-
lem can be solved in a single optimization. This helps in reducing the computational
cost of RBDO. For the structures control test problem, the unilevel methodology
was found to be two times as efficient as the nested RBDO methodology. The major
limitation of the formulation is that it is accompanied by a large number of equal-
ity constraints. Sometimes the commercial optimizers exhibit numerical instability
or show poor convergence behavior for problems with large numbers of equality
constraints. Also, the unilevel methodology is applicable only for FORM based
reliability constraints.
9.1.3 Continuation methods for unilevel RBDO
The unilevel formulation for RBDO is usually accompanied by a large number of
equality constraints which often cause numerical instability for many commercial
optimizers. In chapter 7, an optimization methodology employing continuation
methods is developed for reliability based design using the unilevel formulation.
Since the unilevel formulation is usually accompanied by many equality constraints,
homotopy techniques are used to relax the constraints and identify a starting point
that is feasible with respect to the relaxed constraints. In this investigation, the
homotopy parameter is incremented by a fixed value. A series of optimization
problems are solved for various values of the homotopy parameter as the relaxed
problem approaches the original problem. It is realized that it is easier to solve
the relaxed problem from a known solution and make gradual progress towards
the solution than to solve the problem directly. The proposed strategy is tested
on two design problems. It is observed that the homotopy parameter controls the
progress made in each cycle of the optimization process. As the homotopy parameter
approaches the value of 1, the local solution is obtained.
9.1.4 Reliability based design optimization under epistemic uncertainty
In chapter 8, a methodology for performing design optimization under epistemic
uncertainty is developed. Epistemic uncertainty in nondeterministic systems arises
due to ignorance, lack of knowledge or incomplete information. This is also known
as subjective uncertainty. In general, epistemic uncertainty is extremely difficult to
quantify using probabilistic means. Dempster-Shafer theory (evidence theory) has
been used to model the epistemic uncertainty arising due to incomplete information
or the lack of knowledge. The constraints posed in the design optimization problem
are evaluated using uncertain measures provided by evidence theory. The belief
measure is used in this research to formulate non-deterministic constraints. Since
the belief functions are discontinuous, a formal trust region managed sequential
approximate optimization approach based on the Lagrangian is employed to drive
the design optimization. The trust region is managed by a trust region ratio based
on the performance of the Lagrangian. The Lagrangian is a penalty function of the
objective and the constraints. The framework is illustrated with multidisciplinary
test problems. Optimal designs characterized by a low uncertainty of failure can be
obtained in a few cycles.
The main accomplishment of this research is the quantification of epistemic uncertainty in design optimization. As a byproduct, it also shows that sequential approximate optimization approaches can be used to handle discontinuous constraints and to obtain better designs.
9.2 Recommendations for future work
9.2.1 Decoupled RBDO using higher order methods
In the decoupled RBDO methodology developed in chapter 5, first order reliability
techniques were used for reliability analysis. Since the reliability evaluation is separated
from the main optimization, it is possible to perform higher order reliability analysis.
Techniques such as second order reliability methods (SORM), Monte Carlo simulation
(MCS), etc., can be used to obtain higher order estimates of reliability. This
will lead to better designs with more accurate estimates of the probability of failure.
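A crude Monte Carlo estimate of the probability of failure simply counts limit state violations over random draws. The sketch below uses an invented limit state g = R - S with independent normal variables, chosen so that the exact answer is known (R - S ~ N(2, 1), hence P_f = Phi(-2) ≈ 0.0228):

```python
import random

def mcs_pfail(limit_state, sample, n=200_000, seed=0):
    # Crude Monte Carlo estimate: P_f ~= (# samples with g <= 0) / n.
    rng = random.Random(seed)  # seeded for repeatability
    fails = sum(1 for _ in range(n) if limit_state(*sample(rng)) <= 0)
    return fails / n

# Hypothetical limit state g = R - S: failure when the load S exceeds
# the resistance R.  R ~ N(5.0, 0.8), S ~ N(3.0, 0.6), independent.
g = lambda r, s: r - s
draw = lambda rng: (rng.gauss(5.0, 0.8), rng.gauss(3.0, 0.6))

pf = mcs_pfail(g, draw)
print(pf)
```

MCS makes no linearization of the limit state, so it serves as a high order check on the first order estimates, at the cost of many limit state evaluations; this cost is tolerable precisely because the reliability evaluation is decoupled from the optimization loop.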
9.2.2 RBDO for system reliability
In the current work, only series systems were considered. However, there are numerous
systems in which failure is governed by a combination of component failure modes.
Most of the research in reliability based design optimization is limited to
series systems. Therefore, there is a need to develop methodologies for reliability
based design optimization of parallel systems, so that system reliability can
be incorporated in design optimization. The main challenge would be to develop
techniques by which the system reliability can be computed efficiently.
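For illustration, under the (strong) assumption of independent component failure events, the system failure probability differs between the two topologies as follows; the component probabilities below are invented:

```python
def series_pfail(p):
    # A series system fails if ANY component fails: P_f = 1 - prod(1 - p_i).
    surv = 1.0
    for pi in p:
        surv *= (1.0 - pi)
    return 1.0 - surv

def parallel_pfail(p):
    # A parallel system fails only if ALL components fail: P_f = prod(p_i).
    pf = 1.0
    for pi in p:
        pf *= pi
    return pf

p = [0.01, 0.02, 0.05]  # hypothetical independent component failure probabilities
print(round(series_pfail(p), 6), parallel_pfail(p))
```

Real component failure modes are typically correlated, so these simple products no longer apply; bounding techniques or sampling over the joint distribution are then required, which is exactly what makes efficient system reliability computation the main challenge.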
9.2.3 Homotopy curve tracking for solving unilevel RBDO
In this investigation, a continuation technique is employed to manage the relaxed
unilevel reliability based design optimization problem. In the continuation
procedure, the homotopy parameter is incremented by a fixed value, and a series of
optimization problems is solved for the successive values of the homotopy parameter
as the relaxed problem approaches the original problem. It is realized that it is
easier to solve the relaxed problem from a known solution and make gradual progress
towards the solution than to solve the problem directly. In continuation methods,
the homotopy parameter controls the progress made in each cycle of the optimization
process; as the homotopy parameter approaches the value of 1, the optimal solution
is obtained. The heuristic approach of updating the homotopy parameter by a
fixed value has worked for the problems considered as part of testing the algorithm.
However, it has been shown in the literature that this may not always work.
The use of formal homotopy curve tracking techniques for solving the unilevel
reliability based design optimization problem would make it more robust and
computationally efficient.
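A formal curve tracking scheme replaces the blind fixed increment with a predictor step along the tangent of the zero curve H(x, t) = 0, followed by a Newton corrector back onto the curve. The scalar sketch below is an illustrative analogue with an invented test function, not a production probability-one homotopy tracker:

```python
def track(F, dF, x0, dt=0.05):
    # Predictor-corrector tracking of the zero curve H(x, t) = 0, where
    # H(x, t) = (1 - t)*(x - x0) + t*F(x) connects the trivial problem at
    # t = 0 to the original problem F(x) = 0 at t = 1.
    x, t = x0, 0.0
    while t < 1.0:
        Hx = (1 - t) + t * dF(x)        # dH/dx
        Ht = -(x - x0) + F(x)           # dH/dt
        x += dt * (-Ht / Hx)            # Euler predictor along the tangent
        t = min(t + dt, 1.0)
        for _ in range(20):             # Newton corrector back onto the curve
            h = (1 - t) * (x - x0) + t * F(x)
            x -= h / ((1 - t) + t * dF(x))
            if abs(h) < 1e-12:
                break
    return x

F = lambda x: x**3 - 2*x - 5            # invented test problem
dF = lambda x: 3*x**2 - 2
print(round(track(F, dF, x0=1.0), 6))
```

Because the predictor follows the tangent of the solution path, the step size can in principle be adapted to the curvature of the path, which is what gives curve tracking its robustness relative to a fixed increment.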
9.2.4 Considering total uncertainty in design optimization
In this dissertation, aleatory uncertainty and epistemic uncertainty were considered
separately in design optimization. A hybrid RBDO methodology can be developed
that incorporates both uncertainty types. Epistemic uncertainty can be quantified
using Dempster-Shafer theory and aleatory uncertainty using probability theory. A
total reliability analysis will involve full uncertainty quantification. The decoupled
RBDO methodology developed in this dissertation can be modified accordingly to
develop a hybrid framework.
9.2.5 Variable fidelity reliability based design optimization
A considerable amount of computational effort is usually required in reliability based
design optimization. Therefore, in recent years, surrogates of the simulation models
have been widely employed to reduce the cost of optimization. Variable fidelity methods
employ a set of models ranging in fidelity to reduce the cost of design optimization.
The decoupled RBDO methodology and the unilevel RBDO methodology developed
in this research can each be combined with variable fidelity methods to
further reduce the computational cost associated with RBDO.
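One common variable fidelity mechanism is a first-order additive correction that forces the cheap model to match the expensive model in value and slope at the current trust region center. The one-dimensional models and the helper name `corrected_low_fidelity` below are invented for illustration; they are not the correction scheme of any particular chapter:

```python
def corrected_low_fidelity(f_hi, f_lo, d_hi, d_lo, xc):
    # First-order additive correction: align the cheap model with the
    # expensive one in value and slope at the trust-region center xc, so
    # the corrected surrogate f_lo(x) + a + b*(x - xc) can stand in for
    # f_hi in a neighborhood of xc.
    a = f_hi(xc) - f_lo(xc)
    b = d_hi(xc) - d_lo(xc)
    return lambda x: f_lo(x) + a + b * (x - xc)

# Toy models: the low-fidelity function mimics the trend of the expensive one.
f_hi = lambda x: (x - 2.0)**2 + 0.1 * x      # "expensive" model
d_hi = lambda x: 2.0 * (x - 2.0) + 0.1
f_lo = lambda x: 0.8 * (x - 1.8)**2          # "cheap" model, slightly off
d_lo = lambda x: 1.6 * (x - 1.8)

xc = 1.0
f_c = corrected_low_fidelity(f_hi, f_lo, d_hi, d_lo, xc)
print(round(f_c(1.2), 3), round(f_hi(1.2), 3))
```

Because the corrected surrogate is first-order consistent with the high fidelity model at the trust region center, the trust region convergence theory for approximate optimization carries over, which is what allows most evaluations inside the RBDO loop to use the cheap model.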
BIBLIOGRAPHY
[1] H. Agarwal and J. E. Renaud, Reliability based design optimization for multidisciplinary systems using response surfaces. In Proceedings of the 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference and Exhibit, AIAA-2002-1755, Denver, Colorado (April 22-25 2002).
[2] H. Agarwal, J. E. Renaud and J. D. Mack, A decomposition approach for reliability-based multidisciplinary design optimization. In Proceedings of the 44th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference & Exhibit, number AIAA 2003-1778, Norfolk, Virginia (April 7-10 2003).
[3] H. Agarwal, J. E. Renaud, E. L. Preston and D. Padmanabhan, Uncertainty quantification using evidence theory in multidisciplinary design optimization. Reliability Engineering and System Safety (2003), (in press).
[4] N. M. Alexandrov and R. M. Lewis, Algorithmic perspectives on problem formulation in MDO. In Proceedings of the 8th AIAA/NASA/USAF Multidisciplinary Analysis & Optimization Symposium, number AIAA-2000-4719, Long Beach, CA (September 6-8 2000).
[5] E. K. Antonsson and K. N. Otto, Imprecision in engineering design. Journal of Mechanical Design, 117(B): 25–32 (1995).
[6] H.-R. Bae, R. V. Grandhi and R. A. Canfield, Uncertainty quantification of structural response using evidence theory. In Proceedings of the 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, AIAA-2002-1468, Denver, Colorado (April 22-25 2002).
[7] Y. Ben-Haim and I. Elishakoff, Convex Models of Uncertainty in Applied Mechanics. Studies in Applied Mechanics 25, Elsevier (1990).
[8] R. Braun and I. M. Kroo, Development and application of the collaborative optimization architecture in a multidisciplinary design environment. In N. M. Alexandrov and M. Y. Hussaini, editors, Multidisciplinary Design Optimization: State of the Art, SIAM (1997).
[9] K. Breitung, Asymptotic approximations for multinormal integrals. Journal of Engineering Mechanics, 110(3): 357–366 (1984).
[10] S. Chen, E. Nikolaidis, H. H. Cudney, R. Rosca and R. T. Haftka, Comparison of probabilistic and fuzzy set methods for designing under uncertainty. In Proceedings of the 40th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference and Exhibit, pages 2860–2874, AIAA-99-1579 (April 1999).
[11] S. Chen, E. Nikolaidis, H. H. Cudney, R. Rosca and R. T. Haftka, Comparison of probabilistic and fuzzy set methods for designing under uncertainty. In Proceedings of the 40th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference & Exhibit, number AIAA 99-1579, St. Louis (April 12-15 1999).
[12] W. Chen and X. Du, Sequential optimization and reliability assessment method for efficient probabilistic design. In ASME Design Engineering Technical Conferences and Computers and Information in Engineering Conference, number DETC2002/DAC-34127, Montreal, Canada (2002).
[13] X. C. Chen, T. K. Hasselman and D. J. Neill, Reliability based structural design optimization for practical applications. In Proceedings of the 38th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, number AIAA-97-1403, pages 2724–2732 (1997).
[14] K. K. Choi and B. D. Youn, Hybrid analysis method for reliability-based design optimization. In Proceedings of 2001 ASME Design Engineering Technical Conferences: 27th Design Automation Conference, number DETC/DAC-21044, Pittsburgh, PA (September 9-12 2001).
[15] K. K. Choi, B. D. Youn and R. Yang, Moving least square method for reliability-based design optimization. In Proceedings of the Fourth World Congress of Structural and Multidisciplinary Optimization, Dalian, China (June 4-8 2001).
[16] E. J. Cramer, J. E. Dennis, Jr., P. D. Frank, R. M. Lewis and G. R. Shubin, On alternative problem formulations for multidisciplinary design optimization. In Proceedings of the 4th AIAA/NASA/ISSMO Symposium on Multidisciplinary Analysis & Optimization, number AIAA-92-4752, pages 518–530 (1992).
[17] J. E. Dennis, Jr. and R. M. Lewis, A comparison of nonlinear programming approaches to an elliptic inverse problem and a new domain decomposition approach. Technical Report TR94-33, Department of Computational and Applied Mathematics, Rice University, Houston, Texas 77005-1892 (1994).
[18] D. Dubois and H. Prade, Possibility Theory: An Approach to Computerized Processing of Uncertainty. Plenum Press, first edition (1988).
[19] D. Dubois and H. Prade, Possibility Theory: An Approach to Computer Processing of Uncertainty. Plenum Press, NY (1988).
[20] I. Enevoldsen and J. D. Sorensen, Reliability-based optimization in structural engineering. Structural Safety, 15(3): 169–196 (1994).
[21] S. Engelund and R. Rackwitz, A benchmark study on importance sampling techniques in structural reliability. Structural Safety, 12: 255–276 (1993).
[22] M. Fedrizzi, J. Kacprzyk and R. R. Yager, Advances in Dempster-Shafer Theory of Evidence. John Wiley & Sons Inc. (1994).
[23] T. Fetz, M. Oberguggenberger and S. Pittschmann, Applications of possibility and evidence theory in civil engineering. In 1st International Symposium on Imprecise Probabilities and Their Applications (29 June - 2 July 1999).
[24] X. Gu, J. E. Renaud, L. M. Ashe, S. M. Batill, S. M. Budhiraja and A. S. Krajewski, Decision based collaborative optimization. ASME Journal of Mechanical Design, 124(1): 1–13 (2001).
[25] R. T. Haftka, Simultaneous analysis and design. AIAA Journal, 25(1): 139–145 (1985).
[26] R. T. Haftka, Z. Gurdal and M. P. Kamat, Elements of Structural Optimization, volume 1. Kluwer Academic Publishers, second edition (1990).
[27] A. Haldar and S. Mahadevan, Probability, Reliability and Statistical Methods in Engineering Design. John Wiley & Sons (2000).
[28] A. Harbitz, An efficient sampling method for probability of failure calculation. Structural Safety, 3: 109–115 (1986).
[29] H. A. Jensen and A. E. Sepulveda, Use of approximation concepts in fuzzy design problems. Advances in Engineering Software, 31: 263–273 (2000).
[30] N. S. Khot, Optimization of controlled structures. Advances in Design Automation (1994).
[31] C. Kirjner-Neto, E. Polak and A. der Kiureghian, An outer approximations approach to reliability-based optimal design of structures. Journal of Optimization Theory and Applications, 98(1): 1–16 (July 1998).
[32] A. D. Kiureghian, H.-Z. Lin and S.-J. Hwang, Second order reliability approximations. Journal of Engineering Mechanics, 113(8): 1208–1225 (1987).
[33] G. J. Klir and M. J. Wierman, Uncertainty Based Information: Elements of Generalized Information Theory. Physica-Verlag (1998).
[34] I. M. Kroo, Decomposition and collaborative optimization for large-scale aerospace design programs. In N. M. Alexandrov and M. Y. Hussaini, editors, Multidisciplinary Design Optimization: State of the Art, SIAM (1997).
[35] N. Kuschel and R. Rackwitz, Two basic problems in reliability based structural optimization. Mathematical Methods of Operations Research, 46: 309–333 (1997).
[36] N. Kuschel and R. Rackwitz, A new approach for structural optimization of series systems. Applications of Statistics and Probability, 2(8): 987–994 (2000).
[37] S. W. Law and E. K. Antonsson, Implementing the method of imprecision: An engineering design example. In Proceedings of the 3rd IEEE International Conference on Fuzzy Systems, volume 1, pages 358–363 (1994).
[38] R. M. Lewis, Practical aspects of variable reduction formulations and reduced basis algorithms in multidisciplinary design optimization. In N. M. Alexandrov and M. Y. Hussaini, editors, Multidisciplinary Design Optimization: State of the Art, SIAM (1997).
[39] P.-L. Liu and A. D. Kiureghian, Optimization algorithms for structural reliability. Structural Safety, 9(3): 161–177 (1991).
[40] G. Maglaras, E. Nikolaidis, R. T. Haftka and H. H. Cudney, Analytical-experimental comparison of probabilistic and fuzzy set based methods for designing under uncertainty. Structural Optimization, 13: 69–80 (1997).
[41] O. L. Mangasarian, Nonlinear Programming. Classics in Applied Mathematics, SIAM, Philadelphia (1994).
[42] J. Nocedal and S. J. Wright, Numerical Optimization. Springer-Verlag (1999), Springer Series in Operations Research.
[43] W. L. Oberkampf, S. M. DeLand, B. M. Rutherford, K. V. Diegert and K. F. Alvin, A new methodology for the estimation of total uncertainty in computational simulation. In Proceedings of the 40th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference (April 1999).
[44] W. L. Oberkampf, S. M. DeLand, B. M. Rutherford, K. V. Diegert and K. F. Alvin, Estimation of total uncertainty in modeling and simulation. Technical Report SAND2000-0824, Sandia National Laboratories (April 2000).
[45] W. L. Oberkampf, K. V. Diegert, K. F. Alvin and B. M. Rutherford, Variability, uncertainty, and error in computational simulation. In ASME Proceedings of the 7th AIAA/ASME Joint Thermophysics and Heat Transfer Conference, volume 2, pages 259–272 (1998).
[46] W. L. Oberkampf, J. C. Helton, C. A. Joslyn, S. Wojtkiewicz and S. Ferson, Challenge problems: Uncertainty in system response given uncertain parameters. Reliability Engineering and System Safety (2003), (in press).
[47] W. L. Oberkampf, J. C. Helton and K. Sentz, Mathematical representation of uncertainty. In Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference & Exhibit, number AIAA 2001-1645, Seattle, WA (April 16-19 2001).
[48] D. Padmanabhan and S. M. Batill, Reliability based optimization using approximations with applications to multi-disciplinary system design. In Proceedings of the 40th AIAA Aerospace Sciences Meeting & Exhibit, number AIAA-2002-0449, Reno, NV (January 2002).
[49] S. Parsons, Qualitative Methods for Reasoning under Uncertainty. The MIT Press (2001).
[50] V. M. Perez, J. E. Renaud and L. T. Watson, Interior point sequential approximate optimization methodology. In Proceedings of the 10th AIAA/NASA/USAF/ISSMO Symposium on Multidisciplinary Analysis & Optimization, number AIAA-2002-5505, Atlanta, GA (September 4-6 2002).
[51] V. M. Perez, J. E. Renaud and L. T. Watson, An interior point sequential approximate optimization methodology. In Proceedings of the 9th AIAA/NASA/USAF Multidisciplinary Analysis & Optimization Symposium, AIAA-2002-5505, Atlanta, GA (September 4-6 2002).
[52] M. S. Phadke, Quality Engineering Using Robust Design. Prentice Hall, Englewood Cliffs, NJ (1989).
[53] E. Polak, R. J.-B. Wets and A. der Kiureghian, On an approach to optimization problems with a probabilistic cost and/or constraints. Nonlinear Optimization and Related Topics, pages 299–316 (2000).
[54] R. Rackwitz, Reliability analysis - a review and some perspectives. Structural Safety, 23(4): 365–395 (2001).
[55] J. E. Renaud, Sequential approximation in non-hierarchic system decomposition and optimization: a multidisciplinary design tool. Ph.D. thesis, Rensselaer Polytechnic Institute, Department of Mechanical Engineering, Troy, New York (August 1992).
[56] J. F. Rodriguez, J. E. Renaud and L. T. Watson, Convergence of trust region augmented Lagrangian methods using variable fidelity approximation data. Structural Optimization, 15: 141–156 (1998).
[57] J. F. Rodriguez, J. E. Renaud and L. T. Watson, Convergence using variable fidelity approximation data in a trust region managed augmented Lagrangian approximate optimization. AIAA Journal, pages 749–768 (1998).
[58] M. Rosenblatt, Remarks on a multivariate transformation. The Annals of Mathematical Statistics, 23(3): 470–472 (September 1952).
[59] J. O. Royset, A. D. Kiureghian and E. Polak, Reliability based optimal structural design by the decoupling approach. Reliability Engineering and System Safety, 73(3): 213–221 (2001).
[60] M. Sakawa, Fuzzy Sets and Interactive Multiobjective Optimization. Plenum Press (1993).
[61] K. Sentz and S. Ferson, Combination of evidence in Dempster-Shafer theory. Technical Report SAND2002-0835, Sandia National Laboratories (April 2002).
[62] J. Sobieszczanski-Sobieski, J. S. Agte and R. R. Sandusky, Jr., Bi-level integrated system synthesis (BLISS). In Proceedings of the 7th AIAA/NASA/USAF Multidisciplinary Analysis & Optimization Symposium, number AIAA-98-4916, St. Louis, Missouri (September 2-4 2000), Extended paper published as Technical Report NASA/TM-1998-208715.
[63] J. Sobieszczanski-Sobieski, A linear decomposition method for large optimization problems - blueprint for development. Technical Report TM-83248-1982, NASA (1982).
[64] J. Sobieszczanski-Sobieski, Optimization by decomposition: A step from hierarchic to non-hierarchic systems. In 2nd NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, number NASA TM-101494, CP-3031, Part 1, pages 28–30, Hampton, VA (1988).
[65] J. Sobieszczanski-Sobieski, C. L. Bloebaum and P. Hajela, Sensitivity of control-augmented structure obtained by a system decomposition method. AIAA Journal, 29(2): 264–270 (February 1990).
[66] T. R. Sutter, C. J. Camarda, J. L. Walsh and H. M. Adelman, Comparison of several methods for calculating vibration mode shape derivatives. AIAA Journal, 26: 1506–1511 (1988).
[67] R. V. Tappeta, An Investigation of Alternative Problem Formulations for Multidisciplinary Optimization. Master's thesis, University of Notre Dame (December 1996).
[68] P. B. Thanedar and S. Kodiyalam, Structural optimization using probabilistic constraints. Structural Optimization, 4: 236–240 (1992).
[69] J. Tu, K. K. Choi and Y. H. Park, A new study on reliability-based design optimization. Journal of Mechanical Design, 121: 557–564 (December 1999).
[70] L. Tvedt, Distribution of quadratic forms in normal space - application to structural reliability. Journal of Engineering Mechanics, 116(6): 1183–1197 (1990).
[71] P. Walley, Statistical Reasoning with Imprecise Probabilities. Chapman and Hall, London (1991).
[72] L. Wang and S. Kodiyalam, An efficient method for probabilistic and robust design with non-normal distribution. In Proceedings of the 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, number AIAA 2002-1754, Denver, Colorado (April 22-25 2002).
[73] L. T. Watson, Theory of globally convergent probability-one homotopies for nonlinear programming. SIAM Journal on Optimization, 11(3): 761–780 (2001).
[74] B. A. Wujek, J. E. Renaud, S. M. Batill, E. W. Johnson and J. B. Brockman, Design flow management and multidisciplinary design optimization in application to aircraft concept sizing. In 34th Aerospace Sciences Meeting & Exhibit, AIAA (January 1996).
[75] H. J. Zimmermann, Fuzzy Set Theory and its Applications. Kluwer Academic Publishers, second edition (1991).