In the name of Allah, the Most Gracious, the Most Merciful. Praise be to Allah, and peace and blessings be upon the Messenger of Allah.
Introduction to Risk Management and Software Architecture Risk Assessment
Hany H. Ammar
LANE Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, West Virginia, USA, and Faculty of Computers and Information, Cairo University, Cairo, Egypt
• RISK MANAGEMENT: An organized, systematic decision-making process that efficiently identifies risks, assesses or analyzes risks, and effectively reduces or eliminates risks to achieving program goals.
• RISK: A program "risk" is any circumstance or situation that poses a threat to crew or vehicle safety, program-controlled cost, program-controlled schedule, or major mission objectives, and for which an acceptable resolution is deemed unlikely without a focused management effort.
Identify: Identify that a risk exists and give it a meaningful name.
Analyze: Determine the severity of the risk according to the risk matrix. If the risk is negligible (low to medium severity, low likelihood of occurrence), stop here. However, if the risk could cause damage to the system or the system's users, continue.
Plan: Decide how to combat the risk based on the risk's severity and likelihood of occurrence.
Mitigate: Follow the plan formulated in the previous phase as closely as possible to combat the risk. If this approach does not work, return to the previous phase and make a new plan. If the plan does work, continue analyzing the risk to determine whether it has been reduced to an acceptable severity level.
Track: Once the risk has been mitigated to an acceptable severity level, the risk should be tracked to ensure the continued control of the risk. If at any time the risk seems to resurface, the risk management cycle should begin again, starting with the analysis phase.
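The Analyze and Plan steps above can be sketched as a simple risk-matrix lookup. The 3x3 matrix cells and the action names (accept/mitigate/eliminate) are illustrative assumptions for this sketch, not values from a specific standard:

```python
# Sketch of the Analyze/Plan decision from the risk management cycle above.
# The matrix contents and action categories are assumed for illustration.

LEVELS = ("low", "medium", "high")

# RISK_MATRIX[likelihood][severity] -> planned response
RISK_MATRIX = {
    "low":    {"low": "accept",   "medium": "accept",   "high": "mitigate"},
    "medium": {"low": "accept",   "medium": "mitigate", "high": "mitigate"},
    "high":   {"low": "mitigate", "medium": "mitigate", "high": "eliminate"},
}

def analyze(likelihood: str, severity: str) -> str:
    """Return the planned response for a named risk."""
    if likelihood not in LEVELS or severity not in LEVELS:
        raise ValueError("likelihood and severity must be low/medium/high")
    return RISK_MATRIX[likelihood][severity]

print(analyze("low", "medium"))  # accept: negligible risk, stop here
print(analyze("high", "high"))   # eliminate: plan and mitigate the risk
```

A negligible cell ends the cycle at Analyze; any other cell feeds the Plan, Mitigate, and Track steps.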
• According to NASA Software Safety Technical Standard, risk is defined as: “exposure to the chance of injury or loss. It is a function of the possible frequency of occurrence of the undesired event, of the potential severity of resulting consequences, and of the uncertainties associated with the frequency and severity”.
• For software-intensive systems, a risk is a combination of the likelihood of occurrence of an abnormal event or failure and the potential consequences or severity of that event or failure to a system's operators, users, or environment.
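The definition above combines likelihood and severity into a single quantity, which allows risks to be ranked. A minimal sketch, assuming a four-level severity weighting and invented component names and failure probabilities:

```python
# Hypothetical illustration of risk = likelihood x severity for
# software-intensive systems. The severity weights, component names,
# and probabilities are assumptions made for this example.

SEVERITY_WEIGHT = {"catastrophic": 0.95, "critical": 0.75,
                   "marginal": 0.50, "minor": 0.25}

def risk_factor(likelihood: float, severity: str) -> float:
    """Risk factor = probability of failure x severity weight."""
    return likelihood * SEVERITY_WEIGHT[severity]

# (probability of failure, severity of its consequences)
components = {
    "navigation": (0.20, "catastrophic"),
    "telemetry":  (0.30, "marginal"),
    "logging":    (0.40, "minor"),
}

ranked = sorted(components,
                key=lambda c: risk_factor(*components[c]),
                reverse=True)
print(ranked)  # ['navigation', 'telemetry', 'logging']
```

A rarely failing but catastrophic component can outrank a frequently failing but minor one, which is the point of combining the two dimensions.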
• Software Independent Verification & Validation (IV&V) is a systems engineering process employing rigorous methodologies for evaluating the correctness and quality of the software product throughout the software life cycle.
• Software IV&V is adapted to the characteristics of the project. Different projects require different levels of IV&V.
• Life-cycle IV&V is designed to mesh with the Project schedule and provide timely inputs to mitigate risk
• Dialog between the IV&V Facility and the Project must begin before SRR
• For most Projects, IV&V ends (and the Final Report is delivered) on or about MRR. Some Projects have extended S/W development post-launch or major upgrades/maintenance (e.g. Shuttle, MER)
[Figure: IV&V life-cycle timeline. Milestones: System Requirements Review (SRR), Preliminary Design Review (PDR), Critical Design Review (CDR), System Test, S/W Formal Qualification Test (FQT), Initial IVVP signed, Mission Readiness Review (MRR), Baseline IVVP signed. Phases: 2.0 Concept, 3.0 Requirements, 4.0 Design, 5.0 Implementation, 6.0 Test, 7.0 Operations & Maintenance.]
- IV&V provides support and reports for Project milestones
- Technical Analysis Reports document major phases
- IVVP is updated to match changes in the Project
Software CHAOS
The Standish Group has examined 30,000 software projects in the US since 1994. This "CHAOS" research has revealed a decided improvement in IT project management with the implementation of standards and practices such as IV&V. This improvement correlates with the rise in project success shown below:

Project Resolution History (1994-2000)
Year | Successful | Challenged | Failed
1994 |        16% |        53% |    31%
1996 |        27% |        33% |    40%
1998 |        26% |        46% |    28%
2000 |        28% |        49% |    23%
The Standish Group International, Inc.: Extreme CHAOS (2001) - The 2001 update to the CHAOS report. http://www.standishgroup.com/sample_research/PDFpages/extreme_chaos.pdf
Error Detection/Correction
Early error detection and correction are vital: the cost to correct a software error multiplies as it survives through the software development lifecycle, so finding errors early reduces costs and saves time.
[Figure: bar chart, "Relative Cost to Fix Defects per Phase Found". Axes: phase found (Requirements, Design, Code, Test) grouped by defect type, against relative cost to fix on a 0-50 scale.]
Direct Return on Investment of Software Independent Verification and Validation: Methodology and Initial Case Studies, James B. Dabney and Gary Barber, Assurance Technology Symposium, 5 June 2003.
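The escalation argument can be made concrete with a small calculation. The cost multipliers below are assumed round figures in the spirit of the chart (requirements-phase cost normalized to 1); they are not the exact values from the cited study:

```python
# Illustration of cost escalation across the lifecycle.
# RELATIVE_COST values are assumptions for this sketch, with the
# cost of fixing a defect in the requirements phase normalized to 1.

RELATIVE_COST = {"requirements": 1, "design": 5, "code": 10, "test": 50}

def correction_cost(n_defects: int, phase_found: str,
                    unit_cost: float = 100.0) -> float:
    """Cost of fixing n requirements defects discovered in a given phase."""
    return n_defects * unit_cost * RELATIVE_COST[phase_found]

# Ten requirements defects, found early vs. late:
print(correction_cost(10, "requirements"))  # 1000.0
print(correction_cost(10, "test"))          # 50000.0
```

Under these assumed multipliers, letting the same ten defects slip from requirements review to system test makes them fifty times more expensive to fix.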
This work is funded in part by grants to West Virginia University Research Corp. from the NSF (ITR) Program, and from the NASA Office of Safety and Mission Assurance (OSMA) Software Assurance Research Program (SARP) managed through the NASA Independent Verification and Validation (IV&V) Facility, Fairmont, West Virginia.
• Unisys holds the NASA contract to maintain and support 14 million lines of ground software for the space shuttle
• There were 3,800 requirement changes made to the software after the loss of Challenger. These changes resulted in 900 software releases, of which 30 applied to the mission-control center, with 3 of these being major upgrades.
• Reference:
cp_ij is the probability that a change in C_i due to corrective/perfective maintenance requires a change in C_j while maintaining the overall function of a system S:

cp_ij = P( [C_j] ≠ [C_j'] | [C_i] ≠ [C_i'] ∧ [S] = [S'] )

cp_ij is estimated by the size-of-change matrix SC = [sc_ij], where sc_ij is defined as the ratio between the number of affected methods of the receiving component caused by the changes in the interface elements of the providing component and the total number of methods in the receiving component.
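The size-of-change estimate sc_ij can be sketched directly from its definition. The component model used here (each method of the receiving component mapped to the provider-interface elements it depends on) is an assumption made for this illustration:

```python
# Minimal sketch of the size-of-change ratio sc_ij described above.
# The method-to-interface dependency model is assumed for illustration.

def sc(receiver_methods: dict, changed_interface: set) -> float:
    """sc_ij = affected methods of receiver / total methods of receiver.

    receiver_methods maps each method of the receiving component C_j to
    the interface elements of the providing component C_i it depends on;
    changed_interface is the set of C_i interface elements that changed.
    """
    if not receiver_methods:
        return 0.0
    affected = [m for m, deps in receiver_methods.items()
                if deps & changed_interface]
    return len(affected) / len(receiver_methods)

# C_j has 4 methods; two of them use interface elements of C_i that changed.
methods = {"init": {"open"}, "run": {"read", "write"},
           "stop": {"close"}, "report": set()}
print(sc(methods, {"read", "close"}))  # 0.5
```

The resulting matrix SC = [sc_ij] then serves as the estimate of the change-propagation probabilities cp_ij.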
Papers Published
1. Vittorio Cortellessa, Katerina Goseva-Popstojanova, Kalaivani Appukkutty, Ajith R. Guedem, Ahmed Hassan, Rania Elnaggar, Walid Abdelmoez, Hany H. Ammar, "Model-Based Performance Risk Analysis," IEEE Transactions on Software Engineering, Vol. 31, No. 1, January 2005, pp. 3-20.
2. Katerina Goseva-Popstojanova, Ahmed Hassan, Ajith Guedem, Walid Abdelmoez, Diaa Eldin M. Nassar, Hany Ammar, Ali Mili, "Architectural-Level Risk Analysis Using UML," IEEE Transactions on Software Engineering, Vol. 29, No. 10, October 2003, pp. 946-960.
3. S. Yacoub, H. Ammar, "A Methodology for Architectural-Level Reliability Risk Analysis," IEEE Transactions on Software Engineering, Vol. 28, No. 6, June 2002.
4. W. AbdelMoez, K. Goseva-Popstojanova, H. H. Ammar, "Methodology for Maintainability-Based Risk Assessment," Proc. of the 52nd Annual Reliability & Maintainability Symposium (RAMS 2006), Newport Beach, CA, January 23-26, 2006.
5. Israr P. Shaik, W. Abdelmoez, R. Gunnalan, M. Shereshevsky, A. Zeid, H. H. Ammar, A. Mili, C. Fuhrman, "Change Propagation for Assessing Design Quality of Software Architectures," Proc. of the 5th IEEE/IFIP Working Conference on Software Architecture (WICSA), Pittsburgh, PA, USA, November 6-9, 2005.
6. W. AbdelMoez, I. Shaik, R. Gunnalan, M. Shereshevsky, K. Goseva-Popstojanova, H. H. Ammar, A. Mili, C. Fuhrman, "Architectural-Level Maintainability-Based Risk Assessment," Proc. of poster papers, IEEE International Conference on Software Maintenance (ICSM 2005), Budapest, Hungary, September 25-30, 2005.
7. W. Abdelmoez, D. M. Nassar, M. Shereshevsky, N. Gradetsky, R. Gunnalan, H. H. Ammar, Bo Yu, A. Mili, "Error Propagation in Software Architectures," Proc. of the 10th International Symposium on Software Metrics (METRICS'04), September 11-17, 2004, IEEE Computer Society, pp. 384-393.
8. W. Abdelmoez, M. Shereshevsky, R. Gunnalan, H. H. Ammar, Bo Yu, S. Bogazzi, M. Korkmaz, A. Mili, "Software Architectures Change Propagation Tool (SACPT)," 20th IEEE International Conference on Software Maintenance (ICSM'04), September 11-14, 2004, Chicago, Illinois, IEEE Computer Society, p. 517.
9. A. Hassan, K. Goseva-Popstojanova, H. Ammar, "UML Based Severity Analysis Methodology," Proc. of the 2005 Annual Reliability and Maintainability Symposium (RAMS 2005), Alexandria, VA, January 2005.