A Comparison of Information Management using Imprecise Probabilities and Precise Bayesian Updating of Reliability Estimates. Jason Matthew Aughenbaugh, Ph.D., [email protected], Applied Research Laboratories, University of Texas at Austin. Jeffrey W. Herrmann, Ph.D., [email protected], Department of Mechanical Engineering and Institute for Systems Research, University of Maryland. Third International Workshop on Reliable Engineering Computing, NSF Workshop on Imprecise Probability in Engineering Analysis & Design, Savannah, Georgia, February 20-22, 2008.
1
A Comparison of Information Management using Imprecise Probabilities and Precise Bayesian Updating of Reliability Estimates
Department of Mechanical Engineering and Institute for Systems Research, University of Maryland
Third International Workshop on Reliable Engineering Computing, NSF Workshop on Imprecise Probability in Engineering Analysis & Design, Savannah, Georgia, February 20-22, 2008.
2
Motivation
• Need to estimate reliability of system with components of uncertain reliability.
• Which components should we test to reduce uncertainty about system reliability?
[Figure: system block diagram — component A in series with components B and C in parallel]
3
Introduction
• Existing information: Is it relevant? Is it accurate?
• Data and relevant existing information inform a prior characterization.
• A statistical modeling and updating approach combines the prior characterization with new experiments to produce an updated / posterior characterization.

[Figure: flow diagram of the information-management process, illustrated with example probability density plots]
4
Statistical Approaches
• Compare the following approaches:
  – (Precise) Bayesian
  – Robust Bayesian: sensitivity analysis of the prior
  – Imprecise probabilities: the actual "true" probability is imprecise; the imprecise beta model
• Different philosophical motivations, but equivalent mathematics for this problem.
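The (precise) Bayesian approach above can be sketched as a standard conjugate beta-binomial update. A minimal sketch, not from the slides, assuming the (t, s) parameterization used in the later scenario tables, where the prior is Beta(s·t, s·(1 − t)); the test outcome (12 tests, 2 failures) is hypothetical:

```python
def beta_update(t, s, n_tests, n_failures):
    """Conjugate beta-binomial update in (t, s) form:
    prior Beta(s*t, s*(1-t)) plus k failures in n tests gives
    Beta(s*t + k, s*(1-t) + n - k), i.e. s' = s + n, t' = (s*t + k) / (s + n)."""
    s_post = s + n_tests
    t_post = (s * t + n_failures) / s_post
    return t_post, s_post

def beta_variance(t, s):
    """Variance of a Beta(s*t, s*(1-t)) distribution: t(1-t)/(s+1)."""
    return t * (1.0 - t) / (s + 1.0)

# Component A prior from scenario 1 (t = 0.15, s = 10); the data are hypothetical
t1, s1 = beta_update(0.15, 10, n_tests=12, n_failures=2)
print(t1, s1, beta_variance(0.15, 10), beta_variance(t1, s1))
```

Testing shrinks the posterior variance (here from about 0.0116 to about 0.0058), which is the quantity the test-plan comparisons below track.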
5
Is precise probability sufficient?
• Problem: equiprobable assignments. Do we know nothing, or do we know the values are equally likely?
• Why does it matter?
  – Engineer A states that input values 1 and 2 have equal probabilities.
  – Engineer B is designing a component that is very sensitive to this input.
  – Should Engineer B proceed with a costly but versatile design, or study the problem further?
• Case 1: Engineer A had no idea, so stated equal probabilities. Further study = good.
• Case 2: Engineer A performed substantial analysis. Additional study = wasteful.
6
Moving beyond precise probability
• Start with well-established principles and mathematics; conclude that precise probability is insufficient.
• Abandon probability completely?
• Or relax its conditions and extend its applicability?
• Think sensitivity analysis: how much do deviations from a precise prior matter?

Metrics of uncertainty for precise distributions:
• Variance-based sensitivity analysis: SV_i = variance of the conditional expectation / total variance
  – Focuses on the status quo and the next (local) piece of information
  – Testing a component with a large sensitivity index should reduce the variance of the system reliability estimate
• Mean and variance observations
• Posterior variance
11

Metrics of Uncertainty: Imprecise Distributions
• Imprecise variance-based sensitivity analysis (Hall, 2006): does not worry about outcomes; a local metric

$\underline{SV}_i = \min_{p \in F} SV_i(p), \qquad \overline{SV}_i = \max_{p \in F} SV_i(p)$

where F is the set of admissible distributions.
• Mean and variance dispersion
• Imprecision in the mean
• Imprecision in the variance
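The min/max bounds above can be approximated by evaluating SV_i at the corners of the imprecise parameter box. A hedged sketch, not from the slides, for SV_A with the scenario 3 ranges; a corner search only gives an inner approximation of the true bounds, and the closed-form SV_A used here is the sensitivity formula from the appendix slides, assuming Beta(s·t, s·(1 − t)) priors:

```python
from itertools import product

def moments(t, s):
    # Beta prior with mean t and strength s: mean t, variance t(1-t)/(s+1)
    v = t * (1.0 - t) / (s + 1.0)
    return t, v, v + t * t

def sv_A(tA, sA, tB, sB, tC, sC):
    """Closed-form SV_A = V(E[P_sys|P_A]) / V(P_sys)
    for P_sys = P_A + P_B*P_C - P_A*P_B*P_C with independent components."""
    mA, vA, eA2 = moments(tA, sA)
    mB, vB, eB2 = moments(tB, sB)
    mC, vC, eC2 = moments(tC, sC)
    q, q2 = mB * mC, eB2 * eC2            # E[P_B P_C], E[(P_B P_C)^2]
    mean = mA + q - mA * q
    var_sys = eA2 * (1 - 2 * q + q2) + 2 * mA * (q - q2) + q2 - mean ** 2
    return (1 - q) ** 2 * vA / var_sys

# Scenario 3 parameter ranges, ordered (tA, sA, tB, sB, tC, sC)
box = [(0.15, 0.20), (10, 12), (0.15, 0.55), (2, 5), (0.55, 0.60), (10, 12)]
vals = [sv_A(*corner) for corner in product(*box)]
print(min(vals), max(vals))  # inner approximation of [lower SV_A, upper SV_A]
```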
12
Scenarios with Precise Distributions
• Components have beta distributions for the prior distributions of failure probability, with parameters t0 and s0, where t0 is the prior mean.
• Scenario 1 system failure probability: mean = 0.2201, variance = 0.0203
• Scenario 2 system failure probability: mean = 0.1691, variance = 0.0116

Scenario 1 priors:
Component | t0   | s0
A         | 0.15 | 10
B         | 0.15 | 2
C         | 0.55 | 10

Scenario 2 priors:
Component | t0   | s0
A         | 0.15 | 10
B         | 0.15 | 2
C         | 0.15 | 10

[Figure: system block diagram — component A in series with components B and C in parallel]
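The scenario 1 numbers above can be reproduced in closed form. A minimal sketch, not from the slides, assuming the priors are Beta(s0·t0, s0·(1 − t0)) and the component failure probabilities are independent:

```python
# Closed-form mean and variance of the system failure probability
# P_sys = P_A + P_B*P_C - P_A*P_B*P_C (A in series with B, C in parallel).

def beta_moments(t, s):
    """Mean, variance, and second moment of a Beta(s*t, s*(1-t)) prior."""
    mean = t
    var = t * (1.0 - t) / (s + 1.0)
    return mean, var, var + mean * mean

def system_moments(priors):
    """priors: dict mapping 'A', 'B', 'C' to (t0, s0)."""
    mA, vA, eA2 = beta_moments(*priors["A"])
    mB, vB, eB2 = beta_moments(*priors["B"])
    mC, vC, eC2 = beta_moments(*priors["C"])
    q = mB * mC               # E[P_B * P_C]
    q2 = eB2 * eC2            # E[(P_B * P_C)^2], by independence
    mean = mA + q - mA * q
    # P_sys = P_A*(1 - Q) + Q with Q = P_B*P_C independent of P_A
    e2 = eA2 * (1 - 2 * q + q2) + 2 * mA * (q - q2) + q2
    return mean, e2 - mean * mean

scenario1 = {"A": (0.15, 10), "B": (0.15, 2), "C": (0.55, 10)}
mean, var = system_moments(scenario1)
print(round(mean, 4), round(var, 4))  # 0.2201 0.0203
```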
13
Scenario 1 Results
• Variance-based sensitivity analysis: SV_A = 0.4814, SV_B = 0.4583, SV_C = 0.0181
• Posterior variance:
Table 1. Posterior variance for scenario 1 (min and max of the posterior variance across possible test results)

Test plan #: {n_A, n_B, n_C} | Min    | Max
1: {12, 0, 0}                | 0.0110 | 0.0151  <- best worst-case
2: {0, 12, 0}                | 0.0117 | 0.0175
3: {0, 0, 12}                | 0.0131 | 0.0291
4: {4, 4, 4}                 | 0.0071 | 0.0195
5: {6, 6, 0}                 | 0.0059 | 0.0181  <- best best-case
6: {6, 0, 6}                 | 0.0094 | 0.0228
7: {0, 6, 6}                 | 0.0117 | 0.0177
14
Scenario 1 Results
[Figure: posterior variance versus posterior mean of the system failure probability for each test plan, compared with the prior. Legend: Test plan 1: [12, 0, 0]; Test plan 2: [0, 12, 0]; Test plan 3: [0, 0, 12]; Test plan 4: [4, 4, 4]; Test plan 5: [6, 6, 0]; Test plan 6: [6, 0, 6]; Test plan 7: [0, 6, 6]; Prior. Sensitivity indices: SV_A = 0.4814, SV_B = 0.4583, SV_C = 0.0181]
15
Scenario 2 Results
• Variance-based sensitivity analysis: SV_A = 0.8982, SV_B = 0.0560, SV_C = 0.0153
• Posterior variance:
Table 2. Posterior variance for scenario 2 (min and max of the posterior variance across possible test results)

Test plan #: {n_A, n_B, n_C} | Min    | Max
1: {12, 0, 0}                | 0.0042 | 0.0109  <- best best-case and best worst-case
2: {0, 12, 0}                | 0.0115 | 0.0155
3: {0, 0, 12}                | 0.0116 | 0.0218
4: {4, 4, 4}                 | 0.0064 | 0.0158
5: {6, 6, 0}                 | 0.0051 | 0.0145
6: {6, 0, 6}                 | 0.0054 | 0.0160
7: {0, 6, 6}                 | 0.0115 | 0.0145
16
Scenario 2 Results
[Figure: posterior variance versus posterior mean of the system failure probability for each test plan, compared with the prior. Legend: Test plan 1: [12, 0, 0]; Test plan 2: [0, 12, 0]; Test plan 3: [0, 0, 12]; Test plan 4: [4, 4, 4]; Test plan 5: [6, 6, 0]; Test plan 6: [6, 0, 6]; Test plan 7: [0, 6, 6]; Prior. Sensitivity indices: SV_A = 0.8982, SV_B = 0.0560, SV_C = 0.0153]
17
Scenario 3: Imprecise Distributions
• Component failure probabilities are modeled using imprecise beta distributions: each prior parameter is given a range.
• Since the failure probability of B is poorly known, we allow for a wide range.
• Scenario 3 is comparable to precise scenario 1.

Scenario 3 priors:
Component | t0           | s0
A         | 0.15 to 0.20 | 10 to 12
B         | 0.15 to 0.55 | 2 to 5
C         | 0.55 to 0.60 | 10 to 12

• System failure probability is an imprecise distribution:
  Mean: 0.2201 to 0.4640
  Variance: 0.0136 to 0.0332
• Imprecise variance-based sensitivity analysis:
  SV_A: 0.1363 to 0.7204
  SV_B: 0.2406 to 0.6960
  SV_C: 0.0116 to 0.2512
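The bounds on the imprecise system mean can be reproduced by a corner search over the t ranges. A small sketch, not from the slides, assuming each imprecise beta's mean equals its t parameter; E[P_sys] is increasing in each t (all partial derivatives are positive for probabilities in [0, 1]), so the extremes occur at corners of the box:

```python
from itertools import product

def system_mean(tA, tB, tC):
    # E[P_sys] = E[P_A] + E[P_B]E[P_C] - E[P_A]E[P_B]E[P_C], with mean of each beta = its t
    return tA + tB * tC - tA * tB * tC

t_ranges = [(0.15, 0.20), (0.15, 0.55), (0.55, 0.60)]  # components A, B, C
corner_means = [system_mean(*c) for c in product(*t_ranges)]
print(round(min(corner_means), 4), round(max(corner_means), 4))  # 0.2201 0.464
```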
18
Posterior Variance Analysis
• Imprecise sensitivity indices (from scenario 3): SV_A: 0.1363 to 0.7204; SV_B: 0.2406 to 0.6960; SV_C: 0.0116 to 0.2512
• Smallest variances, and smallest imprecision in variances.

Table 3. Posterior variance analysis for scenario 3
• Multiple sources of uncertainty: existing knowledge; results of future tests
• How do we prioritize different aspects? Variance or imprecision reduction? Best-case, worst-case, or average-case results? Incorporate economic/utility metrics?
• Other imprecision/total uncertainty measures? "Breadth" of p-boxes (Ferson and Tucker, 2006); aggregate uncertainty and others (Klir and Smith, 2001)
23
Summary
• Showed how to use different statistical approaches for evaluating experimental test plans
• Used direct uncertainty metrics:
  – Variance-based sensitivity analysis (precise and imprecise)
  – Posterior variance
  – Dispersion of the mean and variance
  – Imprecision in the mean and variance
24
Thank you for your attention.
• Questions? Comments? Discussion?
This work was supported in part by the Applied Research Laboratories at UT-Austin, Internal IR&D grant 07-09.
25
The variance-based sensitivity index is $SV_i = V(E[P_{sys} \mid P_i]) / V(P_{sys})$. For the system of Figure 1, with independent component failure probabilities:

$SV_A = (1 - E[P_B]E[P_C])^2 \, V(P_A) / V(P_{sys})$
$SV_B = (1 - E[P_A])^2 \, E[P_C]^2 \, V(P_B) / V(P_{sys})$
$SV_C = (1 - E[P_A])^2 \, E[P_B]^2 \, V(P_C) / V(P_{sys})$
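Evaluated for the scenario 1 priors, assuming Beta(s·t, s·(1 − t)) priors and independent components, these formulas reproduce the sensitivity indices reported on the scenario 1 slide:

```python
def moments(t, s):
    """Mean, variance, second moment of Beta(s*t, s*(1-t)): mean t, var t(1-t)/(s+1)."""
    v = t * (1.0 - t) / (s + 1.0)
    return t, v, v + t * t

mA, vA, eA2 = moments(0.15, 10)   # component A prior
mB, vB, eB2 = moments(0.15, 2)    # component B prior
mC, vC, eC2 = moments(0.55, 10)   # component C prior

q, q2 = mB * mC, eB2 * eC2        # E[P_B P_C], E[(P_B P_C)^2]
mean = mA + q - mA * q            # E[P_sys]
var = eA2 * (1 - 2 * q + q2) + 2 * mA * (q - q2) + q2 - mean ** 2  # V(P_sys)

SV_A = (1 - mB * mC) ** 2 * vA / var
SV_B = (1 - mA) ** 2 * mC ** 2 * vB / var
SV_C = (1 - mA) ** 2 * mB ** 2 * vC / var
print(SV_A, SV_B, SV_C)
```

These match the slide's 0.4814, 0.4583, and (to within rounding) 0.0181.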
26
Formulae
For a beta-distributed failure probability $P_A \sim \mathrm{Beta}(\alpha_A, \beta_A)$:

$E[P_A] = \alpha_A / (\alpha_A + \beta_A)$;
$V(P_A) = \alpha_A \beta_A / [(\alpha_A + \beta_A)^2 (\alpha_A + \beta_A + 1)]$;
$E[P_A^2] = V(P_A) + (E[P_A])^2$.

The mathematical model for the reliability of the system shown in Figure 1 follows:

$R_{sys} = R_A [1 - (1 - R_B)(1 - R_C)]$
$P_{sys} = P_A + P_B P_C - P_A P_B P_C$
$E[P_{sys}] = E[P_A] + E[P_B] E[P_C] - E[P_A] E[P_B] E[P_C]$
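The expectation identity above can be spot-checked by Monte Carlo. A minimal sketch, not from the slides, using the scenario 1 priors with the assumed parameterization α = s·t, β = s·(1 − t):

```python
import random

random.seed(0)

def sample_p(t, s):
    """Draw one failure probability from a Beta(s*t, s*(1-t)) prior."""
    return random.betavariate(s * t, s * (1.0 - t))

priors = {"A": (0.15, 10), "B": (0.15, 2), "C": (0.55, 10)}
N = 50_000
total = 0.0
for _ in range(N):
    pA, pB, pC = (sample_p(*priors[k]) for k in "ABC")
    total += pA + pB * pC - pA * pB * pC   # P_sys for this draw
mc_mean = total / N

tA, tB, tC = 0.15, 0.15, 0.55
closed_form = tA + tB * tC - tA * tB * tC  # E[P_A] + E[P_B]E[P_C] - E[P_A]E[P_B]E[P_C]
print(mc_mean, closed_form)
```

The two values agree to within Monte Carlo error, consistent with the independence assumption behind the closed-form expectation.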