Transcript
What’s Agile got to do with Testing? Key Agile Testing Topics
• Unevenness in workload
• No single version of the truth
• Cost of Delay
What is ‘Cost of Delay’?
• The difference in the value of doing something sooner vs. doing it later. E.g.:
– Releasing a product this month vs. next month
– Validating a perceived user need before we build the feature vs. after
– Fixing a bug today vs. in two weeks’ time
– Delivering a feature 6 months after it was specified
COD – Delayed Release Example
Source: David Anderson, LKNA14
Bug Lifecycle
[Flow diagram showing the bug lifecycle across Tester, Developer and Project Manager roles: Find Bug, Recreate Bug, Log Bug Details, Create Bug, Review & Prioritise Bug, Find Source & Fix Bug, Merge & Pass to Test, Create/Recreate Env, Retest. Caption: “Squish it at birth!”]
Test & Debug – Cost of Delay: tasks that take up man-hours

Task | Delayed 3 weeks | Delayed 4 hours
Run the tests that find the bug | 1 | 1
Log the bug – describe/characterise it | 0.25 | 0.05
Log the bug – detail to recreate it | 1 | 0
‘Manage the bug’ – review, assign, prioritise | 0.5 | 0
Developer recreates & locates the bug | 2 | 0.25
Fix the bug | 0.25 | 0.25
Merge the fix to all appropriate branches | 0.5 | 0
Find and fix regression issues | 1 | 0
Retest regression fixes | 0.25 | 0
Merge regression fixes | 0.5 | 0
Recreate env & retest the bug (after delay) | 1 | 0.25
TOTAL MAN-HOURS | 8.25 | 1.8
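The totals in the table can be checked with a short script. This is just the table’s own per-task figures summed up, to make the roughly 4.5× cost of the three-week delay concrete:

```python
# Man-hours per task, taken from the cost-of-delay table above.
# Order: (task, hours when delayed 3 weeks, hours when delayed 4 hours)
TASKS = [
    ("run the tests that find the bug",              1.00, 1.00),
    ("log the bug - describe/characterise it",       0.25, 0.05),
    ("log the bug - detail to recreate it",          1.00, 0.00),
    ("'manage the bug' - review/assign/prioritise",  0.50, 0.00),
    ("developer recreates & locates the bug",        2.00, 0.25),
    ("fix the bug",                                  0.25, 0.25),
    ("merge the fix to all appropriate branches",    0.50, 0.00),
    ("find and fix regression issues",               1.00, 0.00),
    ("retest regression fixes",                      0.25, 0.00),
    ("merge regression fixes",                       0.50, 0.00),
    ("recreate env & retest the bug (after delay)",  1.00, 0.25),
]

total_3_weeks = sum(slow for _, slow, _ in TASKS)
total_4_hours = sum(fast for _, _, fast in TASKS)

print(f"Delayed 3 weeks: {total_3_weeks:.2f} man-hours")
print(f"Delayed 4 hours: {total_4_hours:.2f} man-hours")
print(f"Cost ratio: {total_3_weeks / total_4_hours:.1f}x")
```

Most of the extra cost comes from work that only exists because of the delay: re-documenting the bug, managing it, recreating the environment, and chasing regressions on branches that have moved on.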
• Test First: build your tests before you implement
– Understand the requirement before designing a solution
• Collaboration between testers and ‘customers’
– ‘Executable requirements’ – ideally automated
• Collaboration between testers and developers
• ‘Single Source of Truth’
– Early testing – ‘Build Quality In’
• Prevent defects over finding failures
From: Lisa Crispin, 2011
Test Driven Development

Step | Action | Objective
RED | Write a test reflecting the behaviour required – it fails | Learn – about the problem to be solved, the functionality to be built
GREEN | Write the code to make the test pass – as fast as you can (kludge, copy/paste, spaghetti code, whatever) | Learn – about the solution needed to pass the tests
REFACTOR | Design and re-implement the code in the simplest, most parsimonious and most elegant way you can | Implement – the cleanest, best-designed code you can, based on what you’ve learned about the problem and the required solution
• Never write a single line of code unless you have a failing test
• Eliminate Duplication
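The RED–GREEN–REFACTOR cycle can be sketched in miniature. The `fizzbuzz` example is hypothetical (not from the slides): the test is written first and fails, the quickest passing code is written, and only then is it cleaned up while the test stays green.

```python
# RED: the test comes first. It expresses required behaviour, not
# implementation, and fails until fizzbuzz() exists and is correct.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# GREEN would be whatever passes fastest (even hard-coded cases).
# REFACTOR: the tidied version below, kept honest by the same test.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

test_fizzbuzz()  # passes silently once GREEN is reached
```

The point of the kludge-then-refactor discipline is that the failing test, not the developer’s intent, defines “done” at each step.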
Test Driven Development
• Primarily about Development
– It’s about learning/understanding the requirement
– What’s the simplest thing that would work?
– Allows us to ‘refactor’ safely – reduced cost of change
– Enables ‘emergent design/architecture’
– Encourages callable & testable code
• Secondarily about Testing
– We test early – even before we code
– We test often (if we automate) – fast feedback
– Prevent defects over finding failures
TDD, ATDD, BDD: A Word of Warning

• What ‘UNITS’ are we talking about?
– Unit tests test a ‘unit’ of implementation
• Verifies the code does what the developer intended
– TDD tests a ‘unit’ of functionality/behaviour
• Verifies the code implements the behaviour required
• TDD (Beck) is outside-in: create tests to test requirements, not implementation (black box)
• Developer-level TDD is often seen as ‘unit level’ – what developers do to test implementation (white box)
• ATDD is the new TDD – something testers do to test functionality/behaviour (ATDD = BDD)
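The black-box vs. white-box distinction can be made concrete with a small hypothetical sketch (the `Cart` class is invented for illustration): the first test pins required behaviour and survives refactoring; the second couples to an implementation detail and breaks under a harmless internal change.

```python
class Cart:
    """Hypothetical example class, not from the slides."""

    def __init__(self):
        self._items = []  # internal detail: a list of (name, price) pairs

    def add(self, name: str, price: float) -> None:
        self._items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self._items)

# Behaviour-level test (TDD in Beck's outside-in sense): would still pass
# if _items were refactored into, say, a dict of name -> quantity/price.
def test_total_reflects_added_items():
    cart = Cart()
    cart.add("book", 10.0)
    cart.add("pen", 2.5)
    assert cart.total() == 12.5

# Implementation-level test: couples to the private list, so a
# behaviour-preserving refactor breaks it - the trap the slide warns about.
def test_items_stored_as_list():
    cart = Cart()
    cart.add("book", 10.0)
    assert cart._items == [("book", 10.0)]

test_total_reflects_added_items()
test_items_stored_as_list()
```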
User Story: a brief statement of intent that describes something the system needs to do for the user
• What it’s not:
– A Use Case
– A Requirements Document
– A Feature Specification

Vertical Slices – High Level: stories should represent a vertical slice through the system and should be completed in one iteration – and can therefore be tested in the same iteration
User Story Example – Call Processing

High Roaming Spend Warning (Data)
As a billpay customer with data roaming enabled
I want to be made aware if I’m approaching my threshold
so I can decide whether to incur out-of-plan charges

Acceptance Criteria:
• Initial warning at 80% threshold.
• Follow-up at 95% threshold, with the option to cap at the threshold.
• Warnings issued by SMS and email (where the email address is validated).
• If the 80% threshold is reached in the first 25% of the billing cycle, send a ‘possible fraud alert’ to the customer service queue.
CONVERSATION (PO, Testers, Developers):
• How many warnings?
• How should we issue the warning?
• What text do we use in the warnings?
• Do we force the user to acknowledge the warning?
• Can the user configure warning points?
• If they go above their threshold, do we issue periodic reminders/warnings?
• Edge cases flush out unseen requirements
• Start with ‘How could we verify that this feature is implemented completely and correctly?’ – what are all the examples we need?
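The acceptance criteria above can be turned into ‘executable requirements’. A sketch, assuming a hypothetical `warning_actions` helper (the function name and signature are invented) that maps roaming usage, as a fraction of the threshold, and position in the billing cycle to the actions the system must take:

```python
def warning_actions(usage_fraction: float, cycle_fraction: float) -> set:
    """Hypothetical helper encoding the acceptance criteria.

    usage_fraction: data used as a fraction of the roaming threshold.
    cycle_fraction: elapsed fraction of the billing cycle (0.0 to 1.0).
    """
    actions = set()
    if usage_fraction >= 0.80:
        actions.add("initial_warning")           # by SMS + validated email
        if cycle_fraction <= 0.25:
            actions.add("possible_fraud_alert")  # to customer service queue
    if usage_fraction >= 0.95:
        actions.add("follow_up_warning")         # includes option to cap
    return actions

# Each acceptance criterion becomes a concrete, runnable example:
assert warning_actions(0.79, 0.50) == set()
assert warning_actions(0.80, 0.50) == {"initial_warning"}
assert warning_actions(0.95, 0.50) == {"initial_warning", "follow_up_warning"}
assert warning_actions(0.80, 0.20) == {"initial_warning", "possible_fraud_alert"}
```

Writing the examples first also surfaces the conversation questions above: the 0.79/0.80 boundary, what ‘first 25% of the cycle’ means exactly, and whether warnings repeat, none of which the prose criteria pin down.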
Things to Consider:
• Stable over time / cheap to maintain
• Test first / early
• Cheap to produce (TDD collateral)
• Quick to run / reduced overlap
• Pinpoint defects
EXPLORATORY TESTING
The Spy that Came in from the Cold
Hendrickson: a style of testing in which you explore the software while simultaneously designing and executing tests, using feedback from the last test to inform the next.

Bolton: operating and observing the product with the freedom and mandate to investigate it in an open-ended search for information about the program.

Kaner: simultaneous learning, design and execution, with an emphasis on learning.

“… agile programs are more subject to unintended consequences of choices simply because choices happen so much faster. This is where exploratory testing saves the day. Because the program always runs, it is always ready to be explored.” – Ward Cunningham, on why agile teams should do exploratory testing
Exploratory Testing
• Advantages:
– FAST:
• Minimal preparation (e.g. a charter/objective, timebox)
• Test cases/procedures not recorded, only incidents
– FOCUSED:
• On a particular area, to achieve coverage/confidence
• Risk-based focus – re-evaluated as testing proceeds
– PRODUCTIVE:
• Finds more defects per unit of effort than other methods
• Disadvantages:
– Unrepeatable and not suitable for regression
– Requires a skilled tester to execute, with knowledge of the product/domain
– Need to avoid reverting to ‘ad hoc’ testing
Why do exploratory testing?
• 1–2 hour sessions, each starting with a charter and ending with a report/debrief
• Charter – identifies the goal of the session:
– What to test? How long for?
– Areas to explore – risky areas?
– Bug types to be investigated?
– Extreme values / tricky situations?
• In general, exploratory testing can help you get familiar with the product & target hotspots:
– Try extremes – anything you like, to break the product
– When you find a bug, explore around it
– Look for patterns, clues to other bugs
– Document the test that caused the failure; document other tests that cover ‘interesting’ situations
– Report/log an incident
– Move on to the next interesting area