Research Heaven, West Virginia

FY 2003 Initiative: IV&V of UML
WVU UI: Performance-Based Risk Assessment

Hany Ammar, Katerina Goseva-Popstojanova, V. Cortellessa, Ajith Guedem, Kalaivani Appukutty, Walid AbdelMoez, Ahmad Hassan, and Rania Elnaggar
LANE Department of Computer Science and Electrical Engineering, West Virginia University

Ali Mili
College of Computing Science, New Jersey Institute of Technology

Less risk, sooner
• Estimate performance-based risk at the scenario level
• Identify and rank performance-critical components
• How? Details follow
[3-D bar chart: normalized service times (0 to 100) for components (CICS, CTSRS, OS, SA, INA, APS) across scenarios (SCEN1, SCEN3, SCEN5)]
Why UML?

• Unified Modeling Language
– Rational Software
– The three amigos: Booch, Rumbaugh, Jacobson
• An international standard in system specification
UML & NASA

• Increasing use at NASA
• (Very) informal survey
– Google search: "rational rose nasa"
– 10,000 hits
– 3 definite projects in just the first ten results
• We use a case study based on the UML specs of the Earth Observing System
The Case Study

The methodology is illustrated on the Flight Operations Segment (FOS) of NASA's Earth Observing System (EOS)

• NASA's Earth Observing System (EOS) is the first observing system to offer integrated measurements of the Earth's processes
• The Flight Operations Segment (FOS) of EOS is responsible for the planning, scheduling, commanding, and monitoring of the spacecraft and the instruments on board
• We have evaluated the performance-based risk of the Commanding service
Project Overview

FY01
• Developed an automated simulation environment for UML dynamic specifications; suggested an observer component to detect errors
• Conducted performance and timing analysis of the NASA case study

FY02
• Developed a fault injection methodology
– Defined a fault model for components at the specification level
• Developed a methodology for architectural-level risk analysis
– Determined the critical use case list
– Determined the critical component/connector list

FY03
• Develop a methodology for performance-based / reliability-based risk assessment
• Validate the risk analysis methodology
Performance Based Risk

• Performance is a non-functional software attribute that plays a crucial role in application domains ranging from safety-critical systems to e-commerce web sites
• We introduce the concept of performance-based risk: the risk resulting from software failures originating from behaviors that do not meet performance requirements
• Performance failure is the inability of the system to meet its performance objective(s)
• Performance-based risk is defined as: Probability of performance failure × Severity of the failure
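Written as a formula (the symbols are ours, not from the slides):

```latex
% Performance-based risk of a scenario (notation ours):
%   p_f = probability that a performance objective is violated
%   s   = severity index of the resulting failure
\mathrm{Risk} = p_f \times s
```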
What do we need and what do we get?

• Input
– UML diagrams: use case diagram, sequence diagram, and deployment diagram
– Performance objectives (requirements)
• Output
– Performance-based risk factor of the scenarios modeled as sequence diagrams
– Identification of performance-critical components in the scenario
Performance Based Risk Methodology
For each Use Case
For each scenario:
STEP 1 – Assign a demand vector to each "action" in the sequence diagram; build a software execution model
STEP 2 – Add hardware platform characteristics on the deployment diagram; conduct stand-alone analysis
STEP 3 – Devise the workload parameters; build a system execution model; conduct contention-based analysis and estimate the probability of failure as a violation of a performance objective
STEP 4 – Conduct severity analysis and estimate the severity of performance failure for the scenario
STEP 5 – Estimate the performance risk of the scenario; identify high-risk components
STEP 1 – Assign a Demand Vector to Every “Action” in SD
Build a software execution model from the demand vectors and SD
[Sequence diagram: components A, B, and C with actions A1, A2, B1, C1 and interactions 1:lbl1, 2:lbl2, 3:lbl3]
Component action (example: A1)
A1 = [CPU_instr, DISK_data]
CPU_instr : number of “basic CPU instructions” to be executed to accomplish the action task
DISK_data: 0 – no access to disk; > 0 – size of the data to be accessed
Interaction (example: 1:lbl1)
lbl1 = [MSG]
MSG : size of data exchanged in the interaction
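A minimal sketch of one way to encode STEP 1 in code; the action and interaction names follow the example SD above, but all numeric demands are invented for illustration:

```python
# Hypothetical encoding of STEP 1: each action in the sequence diagram gets
# a demand vector [CPU_instr, DISK_data]; each interaction gets [MSG].
# All numbers are illustrative, not from the case study.

# Demand vectors for component actions (instructions, KB read from disk).
actions = {
    "A1": {"cpu_instr": 50_000, "disk_data": 0},    # pure computation
    "A2": {"cpu_instr": 20_000, "disk_data": 128},  # also reads 128 KB
    "B1": {"cpu_instr": 10_000, "disk_data": 0},
    "C1": {"cpu_instr": 5_000,  "disk_data": 64},
}

# Message sizes for interactions (KB exchanged).
interactions = {"lbl1": {"msg": 4}, "lbl2": {"msg": 16}, "lbl3": {"msg": 2}}

# The software execution model is the ordered list of steps taken from the SD.
software_execution_model = ["A1", "lbl1", "B1", "lbl2", "C1", "lbl3", "A2"]
```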
The Preplanned Emergency scenario
The preplanned emergency scenario comprises two sequence diagrams:
• Preparation of command groups that are to be uplinked (SD1)
• Handling the transmission failure during uplink (SD2)
For the purpose of illustration, we assumed that SD1 is executed once and SD2 (i.e., the retransmission) twice before there is a mission failure; the scenario's completion time then composes as shown below
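A one-line composition, under the slide's once-plus-twice assumption (notation ours):

```latex
% One execution of SD1 followed by two retransmission attempts (SD2):
T_{\mathrm{scenario}} = T_{\mathrm{SD1}} + 2\,T_{\mathrm{SD2}}
```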
STEP 2 – Add Hardware Platform Characteristics on the Deployment Diagram

Deployment nodes and device rates:
• Space Craft
• ECOM <<network (25000 μs/KB)>>
• Communication Subsystem <<CPU (0.02 μs/KB)>>
• Ground N/W <<network (80 μs/KB)>>
• EOC <<CPU (0.0025 μs/KB)>>
• ICC <<CPU (0.0025 μs/KB)>>
• IDB <<database (60 μs/KB)>>
Conduct Stand-alone Analysis (Step 2)

• Stand-alone analysis evaluates the completion time of the whole SD as if it were executed on a dedicated hardware platform with a single-user workload
• The service time consumed by the steps (as shown in the software execution graph) is 9.949 seconds
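A hedged sketch of the stand-alone computation: the device rates are taken from the deployment diagram above, while the per-step data sizes are invented, so the total below will not reproduce the 9.949 s figure:

```python
# Stand-alone analysis (STEP 2 sketch): multiply each step's data size by
# the rate of the device it runs on, then sum. Rates (us/KB) are from the
# deployment diagram; step sizes are illustrative only.

rates_us_per_kb = {
    "ECOM": 25_000,    # spacecraft network
    "COMM_CPU": 0.02,  # communication subsystem CPU
    "GN": 80,          # ground network
    "EOC_CPU": 0.0025,
    "ICC_CPU": 0.0025,
    "IDB": 60,         # instrument database
}

# (device, KB processed) for each processing step of one scenario.
steps = [("EOC_CPU", 500), ("GN", 1_024), ("IDB", 2_048), ("ECOM", 256)]

total_us = sum(rates_us_per_kb[dev] * kb for dev, kb in steps)
print(f"stand-alone completion time: {total_us / 1e6:.3f} s")
```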
Asymptotic bounds and Failure probability estimate (Step 3)
• Failure probability (Z1) = 0
• Failure probability (Z2) = (UpperBound − PerformanceObjective) / (UpperBound − LowerBound) = 0.7958
• Failure probability (Z3) = 1

[Figure: asymptotic lower (LB) and upper (UB) bounds on response time, built from the total demand D and the terms N·D and N·Dmax, versus the performance objective; in zone Z1 the bounds lie below the objective, in zone Z2 they straddle it, and in zone Z3 they lie above it]
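A small sketch of the STEP 3 estimate implied by the figure: when the performance objective falls between the asymptotic bounds, the failure probability is taken as the fraction of the [LowerBound, UpperBound] interval lying above the objective (the bound values below are illustrative, not the case study's):

```python
# Failure probability from asymptotic response-time bounds (STEP 3 sketch).

def failure_probability(lower: float, upper: float, objective: float) -> float:
    if upper <= objective:  # zone Z1: even the worst case meets the objective
        return 0.0
    if lower >= objective:  # zone Z3: even the best case violates it
        return 1.0
    # zone Z2: the objective falls between the bounds
    return (upper - objective) / (upper - lower)

print(failure_probability(lower=10.0, upper=59.0, objective=20.0))  # ~0.796
```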
STEP 4 – Conduct Severity Analysis
[Sequence diagram: Actor1 and Actor2 interacting with System S through events E11s, E12s, Es11, Es12, E21s, Es21, and Es22 across steps S1 to S4]
• For severity analysis we use Functional Failure Analysis (FFA) based on the UML use case and scenario diagrams
• The inputs to FFA are:
– A list of events of a use case (under a specific scenario)
– A list of guide words
• The output is the severity level (catastrophic, critical, marginal, or minor), presented in tabulated FFA form
FFA for the Emergency Scenario in EOS-FOS (Step 4)
Since we are dealing with performance-based risk, we apply only the guideword “LATE”
SEQUENCE DIAGRAM | EVENT | FAILURE | EFFECTS | SEVERITY
Preplanned Emergency | ICC retrieves the instrument status from IDB | IDB takes a longer time to return the status | Emergency cannot be handled at the proper time | Catastrophic
Preplanned Emergency | ICC returns the command groups to EOC | ICC takes longer computation times | Command groups cannot be uplinked in time | Catastrophic
Handle Transmission | SE evaluates the acknowledgment from the spacecraft | SE takes a longer time to evaluate the status | Retransmission delayed | Catastrophic
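To feed the FFA output into STEP 5, the severity level must be turned into a number. Only the value 0.95 for a catastrophic failure appears on these slides; the rest of the mapping below is our assumption for illustration:

```python
# Assumed severity indices per FFA level; only 0.95 (catastrophic) is
# confirmed by the slides, the other three values are placeholders.
severity_index = {
    "catastrophic": 0.95,
    "critical": 0.75,
    "marginal": 0.50,
    "minor": 0.25,
}
```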
STEP 5 – Estimate the Performance Risk
• The performance risk of a scenario is defined as the product of
– the probability that the system will fail to meet the required performance objective for a given workload (e.g., desired response time), estimated in STEP 3, and
– the severity associated with this performance failure of the system in this scenario, estimated in STEP 4
• Performance-based risk = Probability of performance failure × Severity of the failure = 0.7958 × 0.95 = 0.756
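The slide's numbers, reproduced as a two-line computation:

```python
# STEP 5: performance-based risk of the Preplanned Emergency scenario.
p_failure = 0.7958  # probability of performance failure, from STEP 3
severity = 0.95     # catastrophic severity index, from STEP 4
print(f"risk = {p_failure * severity:.3f}")  # risk = 0.756
```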
Identify High-Risk Components (Step 5)
• Estimate the overall residence time of each component in a given sequence diagram
• Sum the time of all processing steps that belong to that component in a given scenario
• Normalize it with the response time of the sequence diagram
• Components that contribute significantly to the scenario's response time are the high-risk components, as sketched below
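A sketch of the ranking computation; the component names are from the case study, but the residence times are invented for illustration:

```python
# Rank components by their normalized residence time within one scenario.
# Residence times (seconds) are illustrative, not measured values.
residence_s = {"EOC": 0.04, "ICC": 0.06, "IDB": 0.25, "GN": 3.10, "ECOM": 6.50}

response_time = sum(residence_s.values())  # scenario response time
normalized = {c: t / response_time for c, t in residence_s.items()}

for comp, share in sorted(normalized.items(), key=lambda kv: -kv[1]):
    print(f"{comp:5s} {share:6.1%}")  # GN and ECOM dominate, as in the chart
```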
Identify High-risk Components (Step 5)
• The ground (GN) and space (ECOM) networks are the most critical components
• The service times of the other components are significantly smaller than the service times of GN and ECOM network components and hence are not visible on the graph
[Bar chart "Identification of High-Risk Components": normalized service times (0 to 0.8) for EOC, ICC, RECV, SE, TXC, TSMT, IDB, GN, and ECOM; the GN and ECOM bars dominate]
Identify High-risk Components (Step 5)
[3-D bar chart: normalized service times (0 to 100) for components (CICS, CTSRS, OS, SA, INA, APS) across scenarios (SCEN1, SCEN3, SCEN5)]
• This 3-D graph shows the components on the x-axis, the scenarios on the y-axis, and the normalized service times on the z-axis
• The graph is based on a different case study and is presented here for illustration
Accomplishments

• Developed analytical techniques and a methodology for reliability-based risk analysis
– A lightweight approach based on static analysis of dynamic specifications was developed and automated
– A tool was presented at the ICSE Tools session
– Applied the methodology and tool to the NASA case study HCS-ISS
• Developed analytical techniques and a methodology for performance-based risk analysis
– Applied the methodology to the NASA-EOS case study
Publications
1. H. H. Ammar, T. Nikzadeh, and J. B. Dugan, "Risk Assessment of Software Systems Specifications," IEEE Transactions on Reliability, to appear September 2001.
2. Hany H. Ammar, Sherif M. Yacoub, and Alaa Ibrahim, "A Fault Model for Fault Injection Analysis of Dynamic UML Specifications," International Symposium on Software Reliability Engineering, IEEE Computer Society, November 2001.
3. Rania M. Elnaggar, Vittorio Cortellessa, and Hany Ammar, "A UML-based Architectural Model for Timing and Performance Analyses of GSM Radio Subsystem," 5th World Multi-Conference on Systemics, Cybernetics and Informatics, July 2001. Received Best Paper Award.
4. Ahmed Hassan, Walid M. Abdelmoez, Rania M. Elnaggar, and Hany H. Ammar, "An Approach to Measure the Quality of Software Designs from UML Specifications," 5th World Multi-Conference on Systemics, Cybernetics and Informatics and 7th International Conference on Information Systems Analysis and Synthesis (ISAS), July 2001.
5. Hany H. Ammar, Vittorio Cortellessa, and Alaa Ibrahim, "Modeling Resources in a UML-based Simulative Environment," ACS/IEEE International Conference on Computer Systems and Applications (AICCSA'2001), Beirut, Lebanon, June 26-29, 2001.
6. Alaa Ibrahim, Sherif M. Yacoub, and Hany H. Ammar, "Architectural-Level Risk Analysis for UML Dynamic Specifications," Proceedings of the 9th International Conference on Software Quality Management (SQM2001), Loughborough University, England, April 18-20, 2001, pp. 179-190.
URL is http://www.csee.wvu.edu/~ammar/papers/2001
7. T. Wang, A. Hassan, A. Guedem, W. Abdelmoez, K. Goseva-Popstojanova, and H. Ammar, "Architectural Level Risk Assessment Tool Based on UML Specifications," 25th International Conference on Software Engineering, Portland, Oregon, May 3-10, 2003.
8. A. Hassan, K. Goseva-Popstojanova, and H. Ammar, "Methodology for Architecture Level Hazard Analysis," ACS/IEEE International Conference on Computer Systems and Applications (AICCSA 03), Tunis, Tunisia, July 14-18, 2003.
9. A. Hassan, W. Abdelmoez, A. Guedem, K. Appukutty, K. Goseva-Popstojanova, and H. Ammar, "Severity Analysis at Architectural Level Based on UML Diagrams," 21st International System Safety Conference, Ottawa, Ontario, Canada, August 4-8, 2003.
10. K. Goseva-Popstojanova, A. Hassan, A. Guedem, W. Abdelmoez, D. Nassar, H. Ammar, and A. Mili, "Architectural-Level Risk Analysis Using UML," IEEE Transactions on Software Engineering, accepted for publication.