Transcript
Page 1: Design Scientifically (How to test a user story)

How to test a user story: Design Scientifically. Powered by HBT

T Ashok Founder & CEO STAG Software Private Limited Architect - HBT

in.linkedin.com/in/AshokSTAG Ash_Thiru

Webinar: Apr 2, 2015, 1100-1200 IST

This is the third and final webinar in the tri-webinar series on “How to test a user story”. The focus of this webinar is “How to design using a structured and scientific approach”.

Page 2: Design Scientifically (How to test a user story)

© 2015 STAG Software Private Limited. All rights reserved.

A user story is seen as a modern way of communicating an end user's needs and expectations in a sweet and simple format that can be easily modified. This brevity/simplicity can hide information, leading to understanding in the small and potentially missing the big picture.

The first webinar focused on how to identify these "white spaces" in a user story to uncover potential gaps and understand what it is supposed to accomplish. The second focused on establishing a clear goal of ‘what-to-test-for’ and ‘how-to-test’ to formulate an effective strategy.

This webinar, the third in this series, outlines how to design test cases using a scientific & disciplined approach.

Page 3: Design Scientifically (How to test a user story)

Remember (1/3) : “What-to-test - ENTITIES”


User Activities (Theme?)

User Tasks (Epic?)

User Story

From http://winnipegagilist.blogspot.in/2012/03/how-to-create-user-story-map.html

1 : Test a user story

2 : Test a set of user stories of an epic

3 : Test a set of user stories that a user would perform in sequence (a flow)

4 : Test a set of user stories across releases (sprints) & epics

(4) Testing a set of user stories across sprints (releases) and epics: create a contact, update the contact info, then add address data, update it and then delete it. This represents a typical sequence of real-life usage.

Page 4: Design Scientifically (How to test a user story)

Remember (2/3) : “Test for what - CRITERIA”

     CC1   CC2   CC3   CC4   ...   CCn
E1    ✓     ?
E2    ✓     ✓     ✓
E3    ✓
E4    ✓     ✓     ✓

E1 : A user story
E2 : Set of user stories of an epic
E3 : Set of user stories used in a sequence (flow)
E4 : Set of user stories across releases (sprints) & epics

Cleanliness Criteria: CC1 Functionality, CC2 Capacity/Load, CC3 Performance, CC4 Security, CC5 Usability, ...

Acceptance Criteria ... instantiation of CC ... satisfaction of conditions

Attribute condition(s), e.g. t <= 2s

Behaviour conditions: combinations of conditions

Recap that the baseline is the combination of two parts: 1. What to Test 2. Test for What.

‘Test for What’ refers to the criteria we need to consider to meet the expectations: the acceptance criteria. The entities to test, as we understood, are E1 (an individual user story), E2 (a set of user stories of an epic), E3 (a set of user stories used in a sequence, or flow) and E4 (a set of user stories across releases & epics). An entity under test can be listed against the acceptance criteria so that none of the given acceptance criteria are missed. But we have to make sure that the acceptance criteria are indeed complete and unambiguous.

In the HBT methodology, expectations (of quality) are stated in terms of Cleanliness Criteria, e.g. Functionality, Capacity, Performance, Security, and Usability. Setting up ‘test for what’ is therefore listing these cleanliness criteria and mapping them against the entities. Functionality will be applicable across all entities, while others, like Performance, may be applicable only to some. We have to qualify these criteria, and hence they have to be unambiguous. So, to test each element of ‘what to test’, we consider the list of cleanliness criteria, choose those that are applicable and elaborate them. This approach enables us to discover whether the stated acceptance criteria are indeed complete.
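The entity-vs-criteria mapping above can be sketched as a simple completeness check. This is an illustrative sketch only: the criteria assigned to each entity below are assumptions for the example, not prescribed by HBT.

```python
# Map each entity under test (E1-E4) to its applicable cleanliness
# criteria, then compare against the stated acceptance criteria to
# surface gaps. The per-entity criteria here are invented examples.
applicable = {
    "E1: user story":           {"Functionality"},
    "E2: stories of an epic":   {"Functionality", "Usability"},
    "E3: stories of a flow":    {"Functionality", "Performance"},
    "E4: stories across epics": {"Functionality", "Performance", "Security"},
}

def missing_criteria(entity, stated_acceptance_criteria):
    """Criteria applicable to the entity but absent from the stated
    acceptance criteria, i.e. potential gaps in the story's ACs."""
    return applicable[entity] - set(stated_acceptance_criteria)

gaps = missing_criteria("E3: stories of a flow", ["Functionality"])
```

Here `gaps` would flag Performance as an applicable criterion that the stated acceptance criteria do not cover.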

Page 5: Design Scientifically (How to test a user story)

Remember (3/3) : “Levels & Types - EVAL ORDER”


L9 End user value End user value test

L8 Deployment correctness Installation test Migration test

L7 Attributes correctness LSPS test Reliability test Security test

L6 Environment cleanliness Good citizen test Compatibility test

L5 Flow correctness Flow test

L4 Behaviour correctness Functionality test Access control test

L3 Structural cleanliness Structural test

L2 Interface cleanliness API validation test GUI validation test

L1 Input cleanliness Input validation test

Natural order (Quality growth)

Page 6: Design Scientifically (How to test a user story)

Structured & Scientific approach to design

E1 : User story
E2 : Set of user stories of an epic
E3 : Set of user stories of a flow (within a sprint)
E4 : Set of user stories of a bigger flow (across sprints & epics)

1 What “entities” to design for?

So what is a structured and scientific approach to design? At the outset, know clearly the entities you are going to design for.

In this case, there are FOUR kinds of entities: E1: User story; E2: Set of user stories of an epic; E3: Set of user stories of a flow; E4: Set of user stories of a bigger flow.

A good design is about having stark clarity as to what the entity under test is, and having a clear set of test cases for each one of these.

Note that the test cases get closer to an end-user usage scenario as we move from E1 to E4.

Page 7: Design Scientifically (How to test a user story)

Structured & Scientific approach to design

E1 : User story -> L1-L4
E2 : Set of user stories of an epic -> L4
E3 : Set of user stories of a flow (within a sprint) -> L5-L8
E4 : Set of user stories of a bigger flow (across sprints & epics) -> L5-L8

L9 End user value

L8 Deployment correctness

L7 Attributes correctness

L6 Environment cleanliness

L5 Flow correctness

L4 Behaviour correctness

L3 Structural cleanliness

L2 Interface cleanliness

L1 Input cleanliness

2 What quality levels to design for?

We talked about quality levels and the associated types of tests. Let us see how they connect with the various types of entities. A small entity, individual in nature, at the lowest level, is the user story. That basic user story may be made up of a set of screens, APIs or command lines which accept inputs, and hence Levels 1, 2 and 3, as well as the basic behaviour (L4) expected by the end user, are applicable.

A set of user stories forms an epic, for which behaviour correctness (L4) is indeed applicable. As we move further up, the aggregation changes. A set of user stories that forms a flow (E3) has to be checked for flow correctness and attributes.

The higher-order entities E3 and E4 consist of aggregations of user stories, which can be within a flow or across sprints and epics, making a bigger flow. The focus of testing here shifts, to a large degree, to the types of test cases that fall under levels L5-L8.

Before we design test cases we need to be clear about these: 1. What are the things that we are designing for? 2. What are the various types of test that have to be done?

Page 8: Design Scientifically (How to test a user story)

Structured & Scientific approach to design

3 What is the design approach?

E1 : User story -> L1-L4 : T&P
E2 : Set of user stories of an epic -> L4 : E&E
E3 : Set of user stories of a flow (within a sprint) -> L5-L8 : E&E
E4 : Set of user stories of a bigger flow (across sprints & epics) -> L5-L8 : E&E

T&P : Think & Prove. Static evaluation, as in Frictionless Development Testing.

E&E : Execute & Evaluate. Dynamic evaluation: design test cases & execute them (manual/automated).

Once we understand that there are FOUR types of entities, that certain types of tests are more applicable than others, and that those types of test have been stratified into EIGHT levels L1-L8, we need to figure out the approach to evaluation.

There are two ways to evaluate correctness: T&P (Think & Prove), evaluating statically; and E&E (Execute & Evaluate), evaluating dynamically by designing test cases and executing them (manual/automated). The typical approach to validation has always been well-documented test cases, converted into scripts, which are then executed to evaluate correctness.

The difference with Think & Prove is that we can be much faster, because we do not merely detect an issue but try to prevent it. Since a user story is a smaller element, it is worth evaluating it statically rather than by execution.

As the entities aggregate and become bigger, it can become intellectually difficult to evaluate statically, and hence it becomes necessary for us to execute the test cases.
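The two approaches can be contrasted in a small sketch. This is an invented illustration, not from the webinar: the discount function and its limits are hypothetical, chosen only to show asserts living inside the code (T&P) versus a designed test case executed against it (E&E).

```python
def apply_discount(price: float, percent: float) -> float:
    # T&P (Think & Prove): simple asserts act as tests inside the code,
    # catching bad inputs/outputs without a separate test run.
    assert price >= 0, "price must be non-negative"
    assert 0 <= percent <= 100, "percent must be within 0-100"
    result = price * (1 - percent / 100)
    assert 0 <= result <= price, "discount must not increase the price"
    return result

# E&E (Execute & Evaluate): a designed test case executed from outside.
def test_apply_discount():
    assert apply_discount(200.0, 25.0) == 150.0

test_apply_discount()
```

The asserts prevent a whole class of defects from slipping through silently, while the external test case evaluates the designed behaviour dynamically.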


Page 9: Design Scientifically (How to test a user story)

Structured & Scientific approach to design

4 And the design technique?

E1 : User story -> L1-L4 : T&P
E2 : Set of user stories of an epic -> L4 : E&E
E3 : Set of user stories of a flow (within a sprint) -> L5-L8 : E&E
E4 : Set of user stories of a bigger flow (across sprints & epics) -> L5-L8 : E&E

T&P via: 1. Simple asserts (tests inside code) 2. Standardised PDT list 3. Common test scenarios

E&E via a behaviour-driven approach: “Extract conditions that govern behaviour”

PDT = Potential Defect Type

So when we talk about T&P (Think & Prove), we mean that instead of going through an external stimulus to uncover potential issues in the code, we can change the approach by: putting tests in the code (simple asserts validating the correctness of inputs, the interface and the structure); using a standardised list of defects, the PDT list; and coming up with common scenarios of execution.

We do not always have to come up with new test cases; we can leverage existing ones. We can cut down the design time and then map them to the user story. The challenge here is that there is no report as such, nor an audit trail. The design technique of Think & Prove can use any of the above, and hence can speed up the work: a user story is comparatively small, so we do not want to overdo it, and the time saved can be used to test the larger aggregations.

As we move higher, we need to evaluate behaviour dynamically. As the user stories aggregate, we want to see the behaviour that governs the flow. We are trying to understand the business conditions that govern the behaviour, so that we can meaningfully combine these conditions to form various behavioural scenarios and then come up with test cases that stimulate that behaviour.

The whole idea of design is not just coming up with test cases; it is about going deeper, that is, extracting the conditions that govern the behaviour. Note that conditions are not merely limited to functionality.
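Extracting conditions and combining them can be sketched mechanically. The conditions below (customer type, order total, coupon state) are invented for illustration; the point is that once behaviour is described as conditions, the scenario count becomes unambiguous.

```python
# Sketch of the behaviour-driven technique: describe behaviour as a set
# of governing conditions, then enumerate every combination as a scenario.
from itertools import product

conditions = {
    "customer": ["new", "returning"],                # C1
    "order_total": ["below_limit", "above_limit"],   # C2
    "coupon": ["none", "valid", "expired"],          # C3
}

# Every combination of condition values is one behavioural scenario.
scenarios = [dict(zip(conditions, combo))
             for combo in product(*conditions.values())]

# 2 * 2 * 3 = 12 unambiguous scenarios to design test cases for.
print(len(scenarios))
```

In practice one would then prune combinations that are impossible by business rule, but the starting set is now explicit rather than guessed.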

Page 10: Design Scientifically (How to test a user story)

Behaviour conditions for different levels

Level | Objective | Test type | Behaviour conditions
L1 | Input cleanliness | Input validation test | Data type, syntax, boundaries, value-set
L2 | Input interface cleanliness | API validation test, GUI validation test | Order, dependency, presentation element properties (layout, grouping), interface signature
L3 | Structural integrity | Structural test | Resource use policy, concurrency, error handling, internal linkages
L4 | Behaviour correctness | Functionality test, Access control test | Business logic, data specification, authorisation rules
L5 | Flow correctness | Interaction test | End-to-end flow business logic
L6 | Environment cleanliness | Good citizen test, Compatibility test | Factors that can mess up, or be messed up by, the environment
L7 | Attributes met | LSPS test, Security test | Performance, data volume, load, capacity, endurance, security, resource consumption
L8 | Clean deployment | Installation test, Migration test | Deployment, environment, migration

The various types of test at each level, and the lists of conditions for those behaviours, are listed here.

At L1, the conditions relate to the data (syntax, boundaries), while at L2 they relate to the interface that accepts these inputs. When it comes to the structural aspects, we look at the conditions that could derail the structure and its linkages. At L4/L5, the conditions relate to the business logic.

As we proceed towards L6-L8 we consider the conditions that can affect the environment, performance and so on. Note that conditions are not limited to functional behaviour.
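The L1 conditions (data type, syntax, boundaries, value-set) translate directly into a validator and its boundary-value test cases. The "age" field and its limits below are assumptions chosen only to illustrate the four kinds of condition.

```python
# Sketch of L1 (input cleanliness) conditions for one hypothetical field.
import re

def validate_age(raw: str) -> bool:
    if not isinstance(raw, str):           # data type condition
        return False
    if not re.fullmatch(r"\d{1,3}", raw):  # syntax condition: 1-3 digits
        return False
    age = int(raw)
    return 18 <= age <= 120                # boundary / value-set condition

# Boundary-value test cases derived straight from the conditions:
cases = {"17": False, "18": True, "120": True, "121": False, "x9": False}
```

Each entry in `cases` sits on or just beyond a boundary, which is where input-cleanliness defects cluster.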


Page 11: Design Scientifically (How to test a user story)

Behaviour driven approach

[Diagram] The entity under test exhibits behaviour in response to stimuli. Conditions (C1-C4) govern the behaviour; identifying them yields the test scenarios. Input data (i1-i3) is the stimuli; choosing it yields the test cases, each checked against its expected outputs (1-5).

The behaviour-driven approach forces you to describe the behaviour as a collection of conditions combined in certain ways so that a particular path is taken. Hence there will be an unambiguous number of scenarios that we would like to execute. Given any entity under test from E1-E4, we have to describe the behaviour as a series of conditions and then come up with inputs that check those behaviours by combining the various inputs.

For E1, the PDTs being simpler, it is much faster to do this mentally; hence we said Think & Prove. At the higher levels, as the stories aggregate, it can be more complex, and hence one has to come up with the scenarios and evaluate them by execution.
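The condition-to-test-case chain can be made concrete in a few lines. The login behaviour below is an invented example: two governing conditions, every combination as a scenario, and each scenario bound to stimuli plus an expected outcome to form a test case.

```python
def login_allowed(active: bool, password_ok: bool) -> bool:
    # Behaviour governed by two conditions:
    # C1 (account active) and C2 (password correct).
    return active and password_ok

# Scenarios: every combination of the governing conditions.
# Test cases: each scenario plus its expected outcome.
test_cases = [
    {"active": True,  "password_ok": True,  "expected": True},
    {"active": True,  "password_ok": False, "expected": False},
    {"active": False, "password_ok": True,  "expected": False},
    {"active": False, "password_ok": False, "expected": False},
]

for tc in test_cases:
    assert login_allowed(tc["active"], tc["password_ok"]) == tc["expected"]
```

For an E1-sized entity this table can be verified mentally (Think & Prove); for E3/E4 flows the same structure is executed (Execute & Evaluate).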

Page 12: Design Scientifically (How to test a user story)

So how will this look?

[Diagram] For an entity Ex (anything from E1-E4), the test cases are neatly segregated across test types (TT1, TT2, ...) and levels, and mapped against the acceptance criteria.

E1 : A user story
E2 : Set of user stories of an epic
E3 : Set of user stories used in a sequence (flow)
E4 : Set of user stories across releases (sprints) & epics

Given an entity Ex (anything from E1-E4), we have a set of test cases grouped by the various types of tests (TT1, TT2 and so on), with test cases falling under the different levels L1-L8. Hence the test cases are segregated. Suitable test cases are then converted into scripts. The test cases are matched to the acceptance criteria. A neat compartmentalisation of test cases naturally happens, segregating them into different tests at the various levels.
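The segregation described above amounts to keying each test case by level and type. A minimal sketch, with invented test-case IDs (the level and test-type names follow the earlier slides):

```python
# Group test cases by quality level so coverage per level is visible.
from collections import defaultdict

test_cases = [
    {"id": "TC1", "level": "L1", "type": "Input validation test"},
    {"id": "TC2", "level": "L4", "type": "Functionality test"},
    {"id": "TC3", "level": "L4", "type": "Access control test"},
    {"id": "TC4", "level": "L5", "type": "Flow test"},
]

by_level = defaultdict(list)
for tc in test_cases:
    by_level[tc["level"]].append(tc["id"])
```

A glance at `by_level` now shows which levels have test cases and which are empty, i.e. where the acceptance criteria are not yet exercised.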

Page 13: Design Scientifically (How to test a user story)

Points to note...


Design enables us to dig deeper and come up with questions to understand better. It is not just about coming up with test cases to evaluate the code.

Our objective is not only to ‘stimulate-to-uncover’, but also ‘to-uncover-unknowns’, to prevent not detect.

Note that flows (E3/E4) are what end users see and these are key to successful delivery.

Pay attention to NF (non-functional) attributes right from the start. Don't wait until the end to test them.

Test design has always been thought of as an activity that results in test cases we execute to evaluate the correctness of the system/entity under test. In the agile context you may want to refine that definition.

• The act of design, of looking ahead to find conditions, is a tool that enables us to dig deeper and understand what is going to happen, and hence to come up with more questions to better understand behaviour that is not always written down. The good part is that it forces us to consider different situations. So this is not a process of finding bugs at the end; it is a process of understanding the situation a little more deeply, which can prevent bugs from being injected in the first place.

• The objective of design is not only to uncover defects but to uncover the unknowns: to prevent, not detect.

• Testing an individual user story is far easier, but the higher-order flows, the E3 and E4 types of entity, are what end users see and use, and they are key to successful delivery. We sometimes miss the bigger picture when evaluating, because we have broken the problem down into much smaller, manageable entities/user stories.

• Due to rapid code development, we are most often focused on the functionality delivered by the system under development, and hence pay less attention to the non-functional aspects such as load, performance, security etc.; this can lead to serious issues at a later stage.

Test design is the tool that enables us to dig a little deeper, so the focus is not only on coming up with the stimuli or test cases but also on uncovering the unknown.

Page 14: Design Scientifically (How to test a user story)

Hypothesis Based Testing - HBT

[Diagram] The System Under Test should satisfy the Cleanliness Criteria, which are impeded by Potential Defect Types, which are uncovered via the Test Cases. Requirements traceability answers “what to test”; fault traceability answers “test for what”.

Click here to know more about HBT. http://stagsoftware.com/blog?p=570

HBT, or Hypothesis Based Testing, is a scientific personal test methodology in which we hypothesize the potential defect types that must be uncovered, via the test cases, to meet the expectations (Cleanliness Criteria) of the needs (System Under Test).

Page 15: Design Scientifically (How to test a user story)

www.stagsoftware.com

HBT is the intellectual property of STAG Software Private Limited. STEM™ is a trademark of STAG Software Private Limited.

@stagsoft

blog.stagsoftware.com

Connect with us...

Thank you. Powered by HBT

How to test a user story : Design Scientifically

To design scientifically, the key ideas outlined are:
• Be clear about the entities you are designing for (E1-E4).
• Be clear as to the levels that are applicable for each type of entity.
• Note that the approach to evaluating correctness can be Think & Prove or Execute & Evaluate.
• Apply a behaviour-driven approach; this enables us to dig deeper to uncover the conditions and, in the process, understand better.
