Service-based Applications: The Need for Adaptation

Background: S-Cube Service Life-Cycle

[Figure: The S-Cube SBA life-cycle with two interlinked cycles. Design time ("Evolution"): Requirements Engineering, Design, Realization, Deployment & Provisioning. Run-time ("Adaptation", a MAPE loop): Operation & Management (incl. Monitor), Identify Adaptation Need (Analyse), Identify Adaptation Strategy (Plan), Enact Adaptation (Execute).]
A life cycle model is a process model that covers the activities related to the entire life cycle of a service, a service-based application, or a software component or system [S-Cube KM]
Online Failure Prediction through Online Testing: Two S-Cube Approaches

In this learning package we focus on PROSA.

Note: Both approaches support the "Service Integrator" role, who integrates in-house and 3rd-party services to compose an SBA.
Idea of the PROSA approach

Inverse usage-based testing:
– Assume a service has seldom been used in a given time period.
– This implies that not enough monitoring data (i.e., data collected from monitoring its usage) is available.
– If we predict the service's QoS from this sparse monitoring data alone, the prediction accuracy may be poor.
– To improve prediction accuracy, dedicated online tests are performed to collect additional evidence of the service's quality (this evidence is called "test data").

Note: For 3rd-party services, the number of allowable tests can be limited for economic reasons (e.g., pay per service invocation) and technical reasons (testing can impact the availability of a service).
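The scheduling idea above can be sketched as follows. This is a minimal illustration with hypothetical names and thresholds (`min_samples`, `test_budget`); the PROSA papers define the actual usage model and test-scheduling policy.

```python
# Sketch of "inverse usage-based testing": services with few recent
# invocations get dedicated online tests; frequently used services do not.

def schedule_online_tests(monitoring_counts, min_samples=30, test_budget=5):
    """Return the number of extra test invocations per service so that each
    service approaches `min_samples` observations, capped at `test_budget`
    per service (3rd-party tests may cost money or load the service)."""
    plan = {}
    for service, observed in monitoring_counts.items():
        missing = max(0, min_samples - observed)
        if missing > 0:
            plan[service] = min(missing, test_budget)
    return plan

# The rarely used "shipping" service hits the budget cap; "geo" needs only
# two more samples; the busy "payment" service needs no tests at all.
plan = schedule_online_tests({"payment": 120, "shipping": 4, "geo": 28})
```

The budget cap reflects the note above: for 3rd-party services, the number of allowable tests is limited.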
Pros:
– Generally improves the accuracy of failure prediction
– Exploits available monitoring data
– Beneficial in situations where prediction accuracy is critical but the available past monitoring data is insufficient to achieve it
– Can complement approaches that make predictions based on available monitoring data (e.g., approaches based on data mining) and that require a lot of data for accurate prediction
– Can be combined with approaches for preventive adaptation, e.g.:
  - SLA violation prevention with machine learning based on predicted service failures
  - Run-time verification to check whether an "internal" service failure leads to an "external" violation of the SLA
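The distinction between an "internal" service failure and an "external" SLA violation can be illustrated with a toy check. This is an assumed, simplified model (sequential composition, end-to-end response time as the sum of per-service predictions); real run-time verification would use the actual composition structure and SLA terms.

```python
# Sketch: does a predicted per-service slowdown propagate to an end-to-end
# SLA violation? Times are predicted response times in seconds.

def predict_sla_violation(predicted_times, sla_max_response):
    """For a sequential composition, the end-to-end response time is the
    sum of the per-service predictions; the SLA is violated only if that
    sum exceeds the agreed maximum."""
    return sum(predicted_times.values()) > sla_max_response

# A slow service need not break the end-to-end SLA ...
predict_sla_violation({"a": 0.4, "b": 0.9}, sla_max_response=2.0)
# ... but it can, in which case preventive adaptation would be triggered.
predict_sla_violation({"a": 0.4, "b": 1.8}, sla_max_response=2.0)
```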
Cons:
– Assumes that testing a service does not produce side effects
– Can have associated costs due to testing:
  - One can use the usage model to determine the need for testing activities
  - Requires further investigation into cost models that relate the cost of testing vs. the cost of compensating for a wrong adaptation
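One plausible shape for such a cost model is an expected-value trade-off. This is a hypothetical formula for illustration only; as noted above, such cost models still require further investigation.

```python
# Sketch of a testing-cost trade-off: test only if the expected saving from
# fewer wrong adaptation decisions outweighs the cost of the extra tests.

def worth_testing(p_wrong_without_tests, p_wrong_with_tests,
                  cost_per_test, n_tests, cost_of_wrong_adaptation):
    """Compare the expected reduction in compensation cost (probability of a
    wrong adaptation times its cost) against the total cost of testing."""
    saving = (p_wrong_without_tests - p_wrong_with_tests) * cost_of_wrong_adaptation
    return saving > cost_per_test * n_tests

# If 5 tests at cost 1.0 cut the wrong-adaptation probability from 0.3 to
# 0.1 and a wrong adaptation costs 100.0, testing pays off (saving 20 > 5).
worth_testing(0.3, 0.1, 1.0, 5, 100.0)
```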
• [Sammodi et al. 2011] O. Sammodi, A. Metzger, X. Franch, M. Oriol, J. Marco, and K. Pohl. Usage-based online testing for proactive adaptation of service-based applications. In COMPSAC 2011
• [Metzger 2011] A. Metzger. Towards Accurate Failure Prediction for the Proactive Adaptation of Service-oriented Systems (Invited Paper). In ASAS@ESEC 2011
• [Metzger et al. 2010] A. Metzger, O. Sammodi, K. Pohl, and M. Rzepka. Towards pro-active adaptation with confidence: Augmenting service monitoring with online testing. In SEAMS@ICSE 2010
• [Hielscher et al. 2008] J. Hielscher, R. Kazhamiakin, A. Metzger, and M. Pistore. A framework for proactive self-adaptation of service-based applications based on online testing. In ServiceWave 2008
• [Dranidis et al. 2010] D. Dranidis, A. Metzger, and D. Kourtesis. Enabling proactive adaptation through just-in-time testing of conversational services. In ServiceWave 2010
• [Salehie et al. 2009] M. Salehie and L. Tahvildari. Self-adaptive software: Landscape and research challenges. ACM Transactions on Autonomous and Adaptive Systems, 4(2), 14:1–14:42, 2009
• [Di Nitto et al. 2008] E. Di Nitto, C. Ghezzi, A. Metzger, M. Papazoglou, and K. Pohl. A Journey to Highly Dynamic, Self-adaptive Service-based Applications. Automated Software Engineering, 2008
• [PO-JRA-1.3.1] S-Cube deliverable # PO-JRA-1.3.1: Survey of Quality Related Aspects Relevant for Service-based Applications; http://www.s-cube-network.eu/results/deliverables/wp-jra-1.3
• [PO-JRA-1.3.5] S-Cube deliverable # PO-JRA-1.3.5: Integrated principles, techniques and methodologies for specifying end-to-end quality and negotiating SLAs and for assuring end-to-end quality provision and SLA conformance; http://www.s-cube-network.eu/results/deliverables/wp-jra-1.3
• [Trammell 1995] C. Trammell. Quantifying the reliability of software: statistical testing based on a usage model. In ISESS '95, p. 208. IEEE Computer Society, 1995
• [Musa 1993] J. Musa. Operational profiles in software-reliability engineering. IEEE Software, 10(2), 14–32, March 1993
• [Salfner et al. 2010] F. Salfner, M. Lenk, and M. Malek. A survey of online failure prediction methods. ACM Computing Surveys, 42(3), 2010
• [Cavallo et al. 2010] B. Cavallo, M. Di Penta, and G. Canfora. An empirical comparison of methods to support QoS-aware service selection. In PESOS@ICSE 2010