TE Half‐day Tutorial
6/4/2013 8:30 AM

"Design for Testability: A Tutorial for Devs and Testers"

Presented by: Peter Zimmerer, Siemens AG

Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073
888‐268‐8770 ∙ 904‐278‐0524 ∙ [email protected] ∙ www.sqe.com
Design for Testability: A Tutorial for Devs and Testers
Testability is the degree to which a system can be effectively and efficiently tested. This key software attribute indicates whether testing (and subsequent maintenance) will be easy and cheap—or difficult and expensive. In the worst case, a lack of testability means that some components of the system cannot be tested at all. Testability is not free; it must be explicitly designed into the system through adequate design for testability. Peter Zimmerer describes influencing factors (controllability, visibility, operability, stability, simplicity) and constraints (conflicting nonfunctional requirements, legacy code), and shares his experiences implementing and testing highly-testable software. Peter offers practical guidance on the key actions: (1) designing well-defined control and observation points in the architecture, and (2) specifying testability needs for test automation early. He shares creative and innovative approaches to overcome failures caused by deficiencies in testability. Peter presents a new, comprehensive strategy for testability design that can be implemented to gain the benefits in a cost-efficient manner.
Transcript
Peter Zimmerer Siemens AG
Peter Zimmerer is a principal engineer at Siemens AG, Corporate Technology, in Munich, Germany. For more than twenty years Peter has been working in the field of software testing and quality engineering. He performs consulting, coaching, and training on test management and test engineering practices in real-world projects, driving research and innovation in this area. An ISTQB® Certified Tester Full Advanced Level, Peter is a member of the German Testing Board, has authored several journal and conference contributions, and is a frequent speaker at international conferences. Contact Peter at [email protected].
Corporate Technology
Design for Testability: A Tutorial for Devs and Testers
Agile Development Conference & Better Software Conference West, Las Vegas, NV, USA
design for testability over the whole lifecycle of a system
A story on missing testability …
The Therac-25 was a radiation therapy machine produced by Atomic Energy of Canada Limited (AECL) after the Therac-6 and Therac-20 units. It was involved in at least six accidents between 1985 and 1987, in which patients were given massive overdoses of radiation, approximately 100 times the intended dose. These accidents highlighted the dangers of software control of safety-critical systems, and they have become a standard case study in health informatics and software engineering.
A commission concluded that the accidents were primarily attributable to bad software design and development practices, and not explicitly to the several coding errors that were found. In particular, the software was designed so that it was realistically impossible to test it in a clean, automated way.
Testability is the ease of validating that the software meets the requirements. Testability is broken down into: accountability, accessibility, communicativeness, self-descriptiveness, structuredness.
Barry W. Boehm: Software Engineering Economics, Prentice Hall, 1981
Testability is the degree to which, and ease with which, software can be effectively tested.
Donald G. Firesmith: Testing Object-Oriented Software, Proceedings of the Object EXPO Europe, London (UK), July 1993
The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
The degree to which a requirement is stated in terms that permit establishment of test criteria and performance of tests to determine whether those criteria have been met.
IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries. New York, NY, 1990
The testability of a program P is the probability that if P contains faults, P will fail under test.
Dick Hamlet, Jeffrey Voas: Faults on Its Sleeve: Amplifying Software Reliability Testing, in Proceedings of the 1993 International Symposium on Software Testing and Analysis (ISSTA), Cambridge, Massachusetts, USA, June 28–30, 1993
Software testability refers to the ease with which software can be made to demonstrate its faults through (typically execution-based) testing.
Len Bass, Paul Clements, Rick Kazman: Software Architecture in Practice
The degree to which an objective and feasible test can be designed to determine whether a requirement is met. ISO/IEC 12207
The capability of the software product to enable modified software to be validated. ISO/IEC 9126-1
Degree of effectiveness and efficiency with which test criteria can be established for a system, product, or component, and tests can be performed to determine whether those criteria have been met. ISO/IEC 25010, adapted from ISO/IEC/IEEE 24765
Testability – Factors (1)
Control(lability)
 The better we can control the system (in isolation), the more and better testing can be done, automated, and optimized
 Ability to apply and control the inputs to the system under test or place it in specified states (for example, reset to start state, roll back)
 Interaction with the system under test (SUT) through control points
Visibility / observability
 What you see is what can be tested
 Ability to observe the inputs, outputs, states, internals, error conditions, resource utilization, and other side effects of the system under test
 Interaction with the system under test (SUT) through observation points
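The two factors above can be made concrete in code. Below is a minimal, hypothetical sketch (the `Counter` class and its members are invented for illustration): a control point lets a test place the SUT in a specified state, and an observation point exposes internal state and side effects for inspection.

```python
# Hypothetical SUT illustrating control points and observation points.
class Counter:
    def __init__(self):
        self._value = 0
        self._events = []          # recorded side effects

    def increment(self):
        self._value += 1
        self._events.append(("increment", self._value))

    # Control point: place the SUT in a specified state (e.g., reset).
    def reset(self, start=0):
        self._value = start
        self._events.clear()

    # Observation point: expose internal state for the test to inspect.
    @property
    def value(self):
        return self._value

    # Observation point: expose recorded side effects.
    @property
    def events(self):
        return tuple(self._events)
```

A test can now drive the SUT through `reset()` and verify behavior through `value` and `events` without guessing at hidden state.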
Availability, operability
 The better it works, the more efficiently it can be tested
 Bugs add overhead for analysis and reporting to testing
 No bugs block the execution of the tests
Simplicity, consistency, and decomposability
 The less there is to test, the more quickly we can test it
 Standards, (code / design) guidelines, naming conventions
 Layering, modularization, isolation, loose coupling, separation of concerns, SOLID
 Internal software quality (architecture, design, code), technical debt
Stability
 The fewer the changes, the fewer the disruptions to testing
 Changes are infrequent, controlled, and do not invalidate existing tests
 Software recovers well from failures
Understandability, knowledge (of expected results)
 The more (and better) information we have, the smarter we will test
 Design is well understood; good and accurate technical documentation
How we design and build the system affects testability
Testability is the key to cost-effective test automation
 Testability is often a better investment than automation
 Test environment: stubs, mocks, fakes, dummies, spies
 Test oracle: assertions (design by contract)
 Implemented either inside the SUT or inside the test automation
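The test-double vocabulary above (stubs, mocks, spies) can be sketched with Python's standard `unittest.mock` library. The `PaymentService` and `gateway` names are hypothetical; the point is that because the depended-on gateway is passed in, a single `Mock` can act as both stub (canned answer) and spy (records the call):

```python
from unittest.mock import Mock

# Hypothetical service; the payment gateway is a depended-on component.
class PaymentService:
    def __init__(self, gateway):
        self._gateway = gateway    # injected, so a test double can stand in

    def pay(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self._gateway.charge(amount)

# The Mock stubs the answer and spies on the interaction.
gateway = Mock()
gateway.charge.return_value = "ok"
service = PaymentService(gateway)
result = service.pay(42)
gateway.charge.assert_called_once_with(42)   # spy: verify the interaction
```

Without this injection point, testing `pay()` would require a real gateway: testability here is clearly a better investment than any amount of automation on top of an untestable design.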
Challenges for testability
 Size, complexity, structure, variants, 3rd-party components, legacy code
 Regulatory aspects
 You cannot log every interface (huge amount of data)
 Conflicts with other non-functional requirements like encapsulation, performance, or security, and impacts on system behavior
 Non-determinism: concurrency, threading, race conditions, timeouts, message latency, shared and unprotected data
Testability is affected / required by new technologies
Example: Cloud computing
Cloud testability – the negative
 – Information hiding and remoteness
 – Complexity and statefulness
 – Autonomy and adaptiveness
 – Dependability and performance
 – Paradigm infancy (temporary)
Cloud testability – the positive
 + Computational power
 + Storage
 + Virtualization
Testing requires lots of resources, and the cloud is certainly powerful enough to handle it.
Reference: Tariq M. King, Annaji S. Ganti, David Froslie: Towards Improving the Testability of Cloud Application Services, 2012
Design for Testability (DfT) – Why?
Testability ≈ How easy / effective / efficient / expensive is it to test? Can it be tested at all?
Increase depth and quality of tests – reduce cost, effort, and time of tests
 More bugs detected (earlier) and better root cause analysis with less effort
 Support testing of error handling and error conditions, for example to test how a component reacts to corrupted data
 Better and more realistic estimation of testing efforts
 Testing of non-functional requirements, test automation, regression testing
 Cloud testing, testing in production (TiP)
Reduce cost, effort, and time for debugging, diagnosis, maintenance
 Typically underestimated by managers, not easy to measure honestly
Provide building blocks for self-diagnosing / self-correcting software
Possible savings ~10% of total development budget(Stefan Jungmayr, http://www.testbarkeit.de/)
Use layered architectures, reduce the number of dependencies
Use good design principles
 Strong cohesion, loose coupling, separation of concerns
Component orientation and adapters ease integration testing
 Components are the units of testing
 Component interface methods are subject to black-box testing
 [Figure: the SUT embedded in a real or simulated environment]
Avoid / resolve cyclic dependencies between components
 Combine or split components
 Dependency inversion via callback interface (observer-observable design pattern)
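The dependency-inversion bullet above can be sketched as follows. Assume (hypothetically) an `Engine` component that would otherwise reference a concrete `Logger`, creating a cycle; inverting the dependency via a callback (the observer-observable pattern) means `Engine` only knows an abstract listener, and a test can observe it directly:

```python
# Hypothetical sketch: instead of Engine depending on a concrete Logger
# (and Logger on Engine), Engine publishes events to registered callbacks.
class Engine:
    def __init__(self):
        self._listeners = []

    def subscribe(self, listener):
        # listener is any callable taking one event argument
        self._listeners.append(listener)

    def start(self):
        # notify all observers; Engine never names a concrete dependency
        for listener in self._listeners:
            listener("engine started")

# In a test, a plain list serves as the observer:
received = []
engine = Engine()
engine.subscribe(received.append)
engine.start()
```

The cycle is broken, and the same subscription point doubles as an observation point for testing.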
Example: MSDN Views Testability guidance
The purpose is to provide information about how to increase the testability surface of Web applications using the Model-View-Presenter (M-V-P) pattern. http://msdn.microsoft.com/en-us/library/cc304742.aspx
Architectural and design patterns (2)
Provide appropriate test hooks and factor your design in a way that lets test code interrogate and control the running system
Isolate and encapsulate dependencies on the external environment
 Use patterns like dependency injection, interceptors, introspection
 Use configurable factories to retrieve service providers
 Declare and pass along parameters instead of hardwiring references to service providers
 Declare interfaces that can be implemented by test classes
 Declare methods as overridable by test methods
 Avoid references to literal values
 Shorten lengthy methods by making calls to replaceable helper methods
Promote repeatable, reproducible behavior
 Provide utilities to support deterministic behavior (e.g., use seed values)
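The last bullet, deterministic behavior via seed values, can be sketched with Python's standard `random` module. The `shuffle_deck` function is a hypothetical example: the random source is passed in as a parameter rather than taken from the global generator, so a test can supply a seeded instance and get reproducible results:

```python
import random

# Hypothetical sketch: accept the random source as a parameter so that
# tests can inject a seeded generator for repeatable behavior.
def shuffle_deck(rng=None):
    rng = rng or random.Random()   # production default: unseeded
    deck = list(range(10))
    rng.shuffle(deck)
    return deck

# Same seed -> same order: repeatable, reproducible behavior under test.
run1 = shuffle_deck(random.Random(42))
run2 = shuffle_deck(random.Random(42))
```

The production code path is unchanged; only the test chooses to pin the seed.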
Dependency injection
 The client provides the depended-on object to the SUT
Dependency lookup
 The SUT asks another object to return the depended-on object before it uses it
Humble object
 We extract the logic into a separate, easy-to-test component that is decoupled from its environment
Test hook
 We modify the SUT to behave differently during the test
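Two of the four strategies above can be sketched briefly. The `Session` and `Mailer` classes are hypothetical illustrations: the first shows dependency injection (the client supplies the depended-on clock), the second a test hook (a flag that makes the SUT behave differently during the test):

```python
import time

# Dependency injection: the client provides the depended-on object
# (here a clock callable) to the SUT instead of the SUT creating it.
class Session:
    def __init__(self, clock=time.time):
        self._clock = clock
        self._started = self._clock()

    def age(self):
        return self._clock() - self._started

# Test hook: the SUT is modified to behave differently under test.
class Mailer:
    test_mode = False               # hook flag, set only by tests

    def send(self, to, body):
        if Mailer.test_mode:
            return f"DRY-RUN to {to}"   # no real mail during tests
        raise RuntimeError("no SMTP server in this sketch")

# Usage in a test: inject a fake clock and advance it explicitly.
fake_now = [100.0]
session = Session(clock=lambda: fake_now[0])
fake_now[0] = 130.0
Mailer.test_mode = True
```

Note the trade-off: injection keeps the SUT unaware of testing, while a test hook embeds test-specific behavior in the SUT itself and should be used sparingly.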
Reduced amount of data collected from customers: bucketing
Diagnosis and dump utilities (internal states, resource utilization)
Detect, identify, analyze, isolate, diagnose, and reproduce a bug (location, factors, triggers, constraints, root cause) in different contexts as effectively and as efficiently as possible
 Unit / component testing:
  bug in my unit/component, an external unit/component, or the environment?
 Integration testing of several subsystems:
  bug in my subsystem, an external subsystem, or the environment?
 System of systems testing:
  bug in which part of the whole system of systems?
 Product line testing in product line engineering (PLE):
  bug in the domain engineering part or the application engineering part?
 Bug in the system under test (SUT) or in the test system?
  the quality of the test system is important as well ...
 SUT in operation
Implemented dependent on the specific domain, system, and (software) technologies that are available
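The diagnosis and dump utilities mentioned above can be sketched as follows. The `Cache` class and its `dump()` method are hypothetical: the idea is to give every component a machine-readable observation point over its internal state and resource utilization, so a bug can be localized without a debugger attached:

```python
# Hypothetical component with a built-in diagnostic dump utility.
class Cache:
    def __init__(self, capacity):
        self._capacity = capacity
        self._store = {}

    def put(self, key, value):
        if len(self._store) >= self._capacity:
            # evict the oldest entry (dicts preserve insertion order)
            self._store.pop(next(iter(self._store)))
        self._store[key] = value

    def dump(self):
        # Diagnosis observation point: internal state and utilization
        # in a machine-readable form for logging or bug reports.
        return {
            "capacity": self._capacity,
            "size": len(self._store),
            "utilization": len(self._store) / self._capacity,
            "keys": sorted(self._store),
        }
```

In operation, such dumps can be attached to failure reports automatically, turning "bug in which part?" into a question the data can answer.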
Strategy for Design for Testability (DfT) over the lifecycle (2)
Specified and elaborated in a testability guideline for architects, developers, testers (DfT – How?)
Testability view*/profile in the architectural design approach (ADA)
 Description of how to recognize characteristics of the ADA in an artifact, to support an analysis and check of the claimed ADA
 A set of controls (controllables) associated with using the ADA
 A set of observables associated with using the ADA
 A set of testing firewalls within the architecture used for regression testing (firewalls enclose the set of parts that must be retested; they are imaginary boundaries that limit retesting of impacted parts for modified software)
 A set of preventive principles (e.g., memory / time partitioning): specific system behavior and properties guaranteed by design (e.g., scheduling, interrupt handling, concurrency, exception handling, error management)
 A fault model for the ADA: failures that can occur, failures that cannot occur
 Analysis (architecture-based or code-based) corresponding to the fault model to detect ADA-related faults in the system (e.g., model checking)
 Tests made redundant or de-prioritized based on the fault model and analysis

*Additional view to the 4+1 View Model of Software Architecture by Philippe Kruchten
Strategy for Design for Testability (DfT) over the lifecycle (3)
Included as one important criterion in milestones and quality gates, e.g.:
 Check control and observation points in the architecture
 Check testability requirements for sustaining test automation
 Check logging and tracing concept to provide valuable information
 Check testability support in coding guidelines
 Check testability needs (especially visibility) for regression testing
Investigated and explored in static testing:
 architecture interrogation (interviews, interactive workshops), architecture reviews, QAW, ATAM*, code reviews
Investigated and explored in dynamic testing:
 special testability tour in scenario testing (application touring)

Neglecting testability means increasing technical debt … $$ …