Risk Based Testing and Random Testing Dr. Himanshu Hora SRMS College of Engineering & Technology Bareilly (INDIA)
Oct 21, 2014
Risk Based Testing and Random Testing
• Use of Risk Analysis and Metrics for Software Testing
• Focus Testing to Save Time and Money while maintaining quality
• How to develop metrics to manage and organise large test projects
The Challenges
• Time Constraints
• Resource Constraints
• Quality Requirements
• Risk Factors:
– New technology
– Lack of knowledge
– Lack of experience
• Take Control!
Risk Analysis and Testing

[Diagram: the risk process – Risk Identification → Risk Strategy → Risk Assessment → Risk Mitigation (testing, inspection etc.) → Risk Reporting → Risk Prediction – with supporting artifacts: Test Plan, cost/probability matrix, Test Item Tree, Test Metrics.]
Risk Based Testing - Theory
• The Formula
– Re(f) - Risk Exposure of function f
– P(f) - Probability of a fault in function f
– C(f) - Cost related to a fault in function f

Re(f) = P(f) × C(f)
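As a quick sketch (a hypothetical helper, not from the slides), the formula is direct to compute:

```python
def risk_exposure(p_fault, cost):
    """Re(f) = P(f) * C(f): probability of a fault in f times its cost."""
    return p_fault * cost

# Example: a function with a 30% fault probability and a fault cost of 10
print(risk_exposure(0.3, 10))  # → 3.0
```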
Risk Based Testing - Approach
• Plan: Identify Elements to be Tested
– Logical or physical functions, modules etc.
• Identify Risk Indicators
– What is important to predict the probability of faults?
• Identify Cost of Faults
• Identify Critical Elements
– i.e. functions, tasks, activities etc., based on Risk Analysis (Indicators and Cost)
• Execute: Improve the Test Process and Organization: Schedule and Track
Simple Test Metrics
• Test Planning
– Number of test cases per function
– Number of hours testing per function
• Progress Tracking
– Number of tests planned, executed and completed
– Number of faults per function
– Number of hours used for test and fix
– Estimated to Complete
• Probability of Faults - Indicators
– New functionality
– Size
– Complexity
– Quality of previous phases and documents
• Cost of Faults
Risk Based Testing - Metrics
• Identify Areas with “High Risk Exposure”
– Probability and Cost
• All functions/modules should be tested to a “minimum level”
• “Extra Testing” in areas with high risk exposure
• Establish Test Plan and Schedule
– Monitor Quality
• Number of faults per function and time
– Monitor Progress
• Number of hours in test and fix -> ETC
Risk Based Testing - Example
Other Probability Factors might include: Function Points, Frequency of Use etc.
Ranking the functions based on Risk Exposure: Re(f) = P(f) × C(f), where the cost C(f) is the average of the cost to the supplier C(s) and the cost to the customer C(c), and the probability P(f) is a weighted sum of the probability indicators.

Function        | C(s) | C(c) | Avg. C(f) | New Func. | Design Quality | Size | Complexity | Weighted P(f) | Risk Exp.
Weight          |      |      |           | 5         | 5              | 1    | 3          |               |
Interest Calc.  | 3    | 3    | 3         | 2         | 3              | 3    | 3          | 37            | 111
Close Account   | 1    | 3    | 2         | 2         | 2              | 2    | 3          | 31            | 62
Cust. Profitab. | 2    | 1    | 1.5       | 3         | 3              | 2    | 3          | 41            | 61.5
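A hedged sketch of the ranking in the example table, assuming (as the numbers suggest) indicator weights of 5, 5, 1 and 3 and the cost C(f) taken as the average of C(s) and C(c); the function and field names are illustrative:

```python
# Indicator weights as read off the slide: New Func.=5, Design Quality=5,
# Size=1, Complexity=3 (an assumption inferred from the table's numbers).
WEIGHTS = {"new_func": 5, "design_quality": 5, "size": 1, "complexity": 3}

def rank(functions):
    """Rank (name, (C(s), C(c)), indicators) tuples by risk exposure."""
    rows = []
    for name, cost, indicators in functions:
        avg_cost = sum(cost) / len(cost)           # C(f): average of C(s), C(c)
        weighted = sum(WEIGHTS[k] * v for k, v in indicators.items())  # P(f)
        rows.append((name, weighted * avg_cost))   # Re(f) = P(f) * C(f)
    return sorted(rows, key=lambda r: r[1], reverse=True)

funcs = [
    ("Interest Calc.",  (3, 3), {"new_func": 2, "design_quality": 3, "size": 3, "complexity": 3}),
    ("Close Account",   (1, 3), {"new_func": 2, "design_quality": 2, "size": 2, "complexity": 3}),
    ("Cust. Profitab.", (2, 1), {"new_func": 3, "design_quality": 3, "size": 2, "complexity": 3}),
]
for name, re_f in rank(funcs):
    print(name, re_f)  # reproduces the slide's 111, 62 and 61.5
```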
Risk Based Testing - Reporting
[Figure: risk reporting matrix – Technical Interface Risk (probability vs. consequence, Low/Medium/High on each axis) plotted against Business Risk (Low/Medium/High), with the number of functions falling into each risk band.]
Risk Based Testing - Practice
“Top-20”
1. Prior to test execution: identify critical transactions
2. Test execution identifies “bad” transactions
3. Extra testing:
– Additional testing by product specialist
– Automated regression testing
Planning and Progress Tracking
[Chart: on-line test cases completed over time – planned vs. actual numbers of test cases planned, executed, QAed and started.]
Progress Indicators - “To be vs. Actual”
• “To be fixed” vs. “Actually fixed”
• “To be retested” vs. “Actually retested” (and rejected)

[Charts: number of faults over time – “To Be Fixed” vs. “Actually Fixed”; “To Be Retested”, “Actually Retested” and “Rejected”.]
Progress Indicators - Hours Used
• Batch: number of hours for finding one fault and for fixing one
• Online: number of hours for finding one fault and for fixing one

[Charts: hours per fault for test and fix over time, for batch and on-line testing.]
“Estimated to Complete”
• ETC for system test based on:
– Number of hours testing per fault found
– Number of hours fixing per fault
– Number of faults found per function
– Number of fixes being rejected
– Number of remaining tests (functions to be tested)

[Chart: calculated ETC and actual hours over time – “Estimated to Complete at time t” vs. “Actual to Complete at time t”.]
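One plausible way to combine the inputs listed above into an ETC figure — a sketch, not the exact formula used on any project; the parameter names and the rejection adjustment are assumptions:

```python
def etc_hours(hours_test_per_fault, hours_fix_per_fault,
              expected_faults_remaining, reject_rate):
    """Estimated-to-Complete: test + fix hours for the faults still
    expected, inflating fix effort for fixes that get rejected and
    must be redone (geometric series: 1 / (1 - reject_rate))."""
    per_fault = hours_test_per_fault + hours_fix_per_fault / (1 - reject_rate)
    return expected_faults_remaining * per_fault

# Example: 2h of testing and 4h of fixing per fault, 25 faults still
# expected in the remaining functions, 20% of fixes rejected.
print(etc_hours(2.0, 4.0, 25, 0.2))  # → 175.0
```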
Benefits of Risk Based Testing
• Improved Quality?
– All critical functions tested
• Reduced Time and Money in Testing
– Effort not wasted on non-critical or low-risk functions
• Improved customer confidence
– Due to customer involvement and good reporting and progress tracking
Test Process Work Flow

[Diagram: test process work flow – LD/PD data feed test case build (Case Build Procedure, Case Quality Standards) and QC/QA of basic test data and test cases; test execution (Test Exec. Procedure) raises PTDs, handled by the Problem Management Procedure; fixes and CRs follow the Fix and Change Management Procedures; re-test marks results good/bad, followed by regression test before the test is completed.]

The risk process (Risk Identification, Risk Assessment, Risk Mitigation, Risk Reporting, Risk Prediction) drives this work flow.
Summary
• Risk Based Test Approach
– Focused Testing
• Reduced Resources
• Improved Quality
– Metrics are fundamental
• Process and Organization must support the new strategy
– Metrics must support the organization and process
Random testing
– Start off with a practical look, and some useful ideas to get you started on the project: random testing for file systems
– Then take a deeper look at the notion of feedback and why it is useful: a method for testing OO systems from ICSE a couple of years ago
• Then back out to take a look at the general idea of random testing, if time permits
A Little Background
– Generate program inputs at random
– Drawn from some (possibly changing) probability distribution
– “Throw darts at the state space, without drawing a bullseye”
– May generate the same test (or equivalent tests) many times
– Will perform operations no sane human would ever perform
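A minimal sketch of drawing test operations from a weighted distribution, in the spirit of the bullets above; the operation names, weights and path alphabet are invented for illustration:

```python
import random

# Candidate operations and a (tunable, possibly changing) distribution
# biased toward the operations we currently consider interesting.
OPS = ["mkdir", "creat", "unlink", "read", "write"]
WEIGHTS = [3, 3, 1, 2, 2]

def random_op(rng):
    """Draw one (operation, path) pair; duplicates are expected and fine."""
    op = rng.choices(OPS, weights=WEIGHTS, k=1)[0]
    # Short paths over a tiny alphabet make collisions (and thus
    # interesting state interactions) likely.
    path = "/" + "/".join(rng.choice("abc") for _ in range(rng.randint(1, 3)))
    return op, path

rng = random.Random(42)  # fixed seed so any failure can be replayed
print([random_op(rng) for _ in range(3)])
```

Seeding the generator is the key practical point: a crash found at 3 a.m. can be reproduced exactly by replaying the same seed.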
Random Testing
• Millions of operations and scenarios, automatically generated
• Run on fast & inexpensive workstations
• Results checked automatically by a reference oracle
• Hardware simulation for fault injection and reset simulation

[Chart: a day (& night) of testing – operation counts, × 100,000]
The Goals
• Randomize early testing (since it is not possible to be exhaustive)
– We don’t know where the bugs are

[Diagram: nominal scenario tests vs. randomized testing coverage of the state space.]
The Goals
• Make use of desktop hardware for early testing – vs. expensive (sloooow) flight hardware testbeds
– Many faults can be exposed without full bit-level hardware simulation
The Goals
• Automate early testing
– Run tests all the time, in the background, while continuing development efforts
• Automate test evaluation
– Using reference systems for fault detection and diagnosis
– Automated test minimization techniques to speed debugging and increase regression test effectiveness
• Automate fault injection
– Simulate hardware failures in a controlled test environment
Random testing
• Simulated flash hardware layer allows random fault injection
• Most development/early testing can be done on workstations
• Lots of available compute power – can cover many system behaviors
• Will stress software in ways nominal testing will not
Differential Testing
• How can we tell if a test succeeds?
– POSIX standard for file system operations
• IEEE-produced, ANSI/ISO-recognized standard for file systems
• Defines operations and what they should do/return, including nominal and fault behavior
POSIX operation              | Result
mkdir (“/eng”, …)            | SUCCESS
mkdir (“/data”, …)           | SUCCESS
creat (“/data/image01”, …)   | SUCCESS
creat (“/eng/fsw/code”, …)   | ENOENT
mkdir (“/data/telemetry”, …) | SUCCESS
unlink (“/data/image01”)     | SUCCESS

Resulting file system:
/
├── /eng
└── /data
    └── /telemetry
(image01 was created and then unlinked.)
Differential Testing
• How can we tell if a test succeeds?
– The POSIX standard specifies (mostly) what correct behavior is
– We have heavily tested implementations of the POSIX standard in every flavor of UNIX, readily available to us
– We can use UNIX file systems (ext3fs, tmpfs, etc.) as reference systems to verify the correct behavior of flash
– The first differential approach (published) was McKeeman’s testing for compilers
Random Differential Testing
1. Choose (POSIX) operation F
2. Perform F on NVFS
3. Perform F on reference (if applicable)
4. Compare return values
5. Compare error codes
6. Compare file systems
7. Check invariants
8. (Inject a fault?)
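The loop above can be sketched as follows — a toy, with a hypothetical dict-backed file system standing in for both the NVFS under test and the reference; a real setup would run the flash file system against ext3fs or tmpfs and also compare error codes and invariants:

```python
import random

class DictFS:
    """Toy in-memory 'file system' used for both sides of the comparison."""
    def __init__(self):
        self.files = {}
    def creat(self, path):
        if path in self.files:
            return "EEXIST"
        self.files[path] = b""
        return "SUCCESS"
    def unlink(self, path):
        if path not in self.files:
            return "ENOENT"
        del self.files[path]
        return "SUCCESS"

def differential_step(rng, nvfs, ref):
    # 1. choose an operation; 2-3. perform it on both systems
    op = rng.choice(["creat", "unlink"])
    path = "/" + rng.choice("ab")
    r1, r2 = getattr(nvfs, op)(path), getattr(ref, op)(path)
    # 4-6. compare return values and resulting file system state
    assert r1 == r2, f"return values differ for {op} {path}: {r1} vs {r2}"
    assert nvfs.files == ref.files, "file systems diverged"

rng = random.Random(0)
nvfs, ref = DictFS(), DictFS()
for _ in range(1000):
    differential_step(rng, nvfs, ref)
print("no divergence in 1000 operations")
```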
Don’t Use Random Testing for Everything!
• Why not test handing read a null pointer?
– Because (assuming the code is correct) it guarantees some portion of test operations will not induce failure
– But if the code is incorrect, it’s easier and more efficient to write a single test
– The file system state doesn’t have any impact (we hope!) on whether there is a null check for the buffer passed to read
• But we have to remember to actually do these non-random fixed tests, or we may miss critical, easy-to-find bugs!
Principles Used
• Random testing (with feedback)
• Test automation
• Hardware simulation & fault injection
• Use of a well-tested reference implementation as oracle (differential testing)
• Automatic test minimization (delta-debugging)
• Design for testability
– Assertions
– Downward scalability (small model property)
– Preference for predictability
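The test-minimization principle can be illustrated with a small delta-debugging-style reducer — a simplified ddmin sketch, not the project's actual tool; the failure predicate and trace are toy examples:

```python
def minimize(ops, still_fails):
    """Shrink a failing operation sequence: repeatedly drop chunks
    while the failure predicate still holds (ddmin-style)."""
    n = 2  # current number of chunks to try removing
    while len(ops) >= 2:
        chunk = max(1, len(ops) // n)
        for i in range(0, len(ops), chunk):
            candidate = ops[:i] + ops[i + chunk:]
            if candidate and still_fails(candidate):
                ops, n = candidate, max(n - 1, 2)  # keep smaller failing test
                break
        else:
            if n >= len(ops):   # already at single-op granularity: done
                break
            n = min(n * 2, len(ops))  # refine: try smaller chunks
    return ops

# Toy failure: any sequence containing both a create and its unlink.
fails = lambda s: "creat /a" in s and "unlink /a" in s
trace = ["mkdir /d", "creat /a", "write /a", "unlink /a", "read /d"]
print(minimize(trace, fails))  # → ['creat /a', 'unlink /a']
```

Minimized traces make debugging far cheaper and double as compact regression tests.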
Synopsis
• Random testing is sometimes a powerful method and could likely be applied more broadly in other missions
– Already applied to four file system-related development efforts
– Part or all of this approach is applicable to other critical components (esp. with better models to use as references)
Thank You
Dr. Himanshu Hora
SRMS College of Engineering & Technology
Bareilly (INDIA)