Page 1
MN PM Tutorial
9/30/2013 1:00:00 PM
"Essential Test Management
and Planning"
Presented by:
Rick Craig
Software Quality Engineering
Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ [email protected] ∙ www.sqe.com
Page 2
Rick Craig
Software Quality Engineering
A consultant, lecturer, author, and test manager, Rick Craig has led numerous teams of testers
on both large and small projects. In his twenty-five years of consulting worldwide, Rick has
advised and supported a diverse group of organizations on many testing and test management
issues. From large insurance providers and telecommunications companies to smaller software
services companies, he has mentored senior software managers and helped test teams
improve their effectiveness.
Page 3
© 2013 SQE Training V3.1
Page 4
Page 5
Page 6
Page 7
Page 8
The IEEE has two definitions for “Quality”:
• The degree to which a system, component, or process meets specified
requirements
• The degree to which a system, component, or process meets customer or
user needs or expectations
The ISO (ISO 8402) defines “Quality” as:
• The totality of features and characteristics of a product or service that bears
on its ability to meet stated or implied needs
Philip B. Crosby defines “Quality” as:
• Conformance to requirements. Requirements must be clearly stated.
Measurements determine conformance; nonconformance detected is the
absence of quality.
Page 9
Testing is the process of measuring quality.
Testing is a lifecycle process, not just a phase of the Software Development Life Cycle
(SDLC) that occurs after the completion of coding.
The IEEE has two definitions for “testing”:
• The process of operating a system or component under specified conditions,
observing or recording the results, and making an evaluation of some aspect
of the system or component
• The process of analyzing a software item to detect the difference between
existing and required conditions (i.e., bugs) and to evaluate the features of
the software items
Unfortunately, implied requirements are very easy to get wrong. Often, for political
reasons, requirements rarely have “bugs.” When a requirement is deemed to be
incorrect, an “enhancement request” is typically raised rather than an incident/defect
report. Similarly, in the case of third-party development, the difference between a
defect and an enhancement may be a legal issue.
Page 10
Page 11
Page 12
For testers to be effective, they have to work closely with the developers. Adopting a
“them and us” attitude typically results in the software product being delivered to
testing much later in the lifecycle and/or not meeting basic entrance criteria.
Page 13
“The defect that is prevented doesn’t need repair, examination, or explanation. The
first step is to examine and adopt the attitude of defect prevention. This attitude is
called, symbolically, zero defects.”— Philip Crosby: Quality is Free (1979)
Production bugs cost many times more than bugs discovered earlier in the lifecycle. In
some systems the factor may be 10, while in others it may be 1,000 or more. A
landmark study done by TRW, IBM, and Rockwell showed that a requirements bug
found in production cost on average 100+ times more than one discovered at the
beginning of the lifecycle.
Page 14
Testing (at least in this course!) is not about perfection, only about reasonable risk.
The granularity required is both a business and technical issue.
Page 15
Page 16
A methodology (or method) is a process model composed of tasks, work products,
and roles for consistently and cost-effectively achieving specified objectives.
Methodologies should be considered as dynamic guidelines that help the software
engineers do their jobs. Methodologies should be periodically reviewed and updated
based on the experiences of the development and testing staff. Inflexible
methodologies can lead to a disgruntled staff and complicate buy-in.
Page 17
STEP™ is a testing methodology based on the IEEE guidelines. STEP™ treats testing
as a lifecycle of activities that occurs in parallel with the software development
lifecycle (SDLC).
Most testing is preventive testing and is divided into levels. A level is characterized by
the environment in which the testing occurs. The components of the test environment
include
• Who is doing the testing
• Hardware
• Software
• Data
• Interfaces
• etc.
FYI: The “Acquire” step can mean reusing existing testware or developing new test
cases.
TIP: If your organization does not have a formal methodology in place, choose a pilot
project to develop a Master Test Plan. Use this test plan as the basis of your
methodology and then incrementally build upon the initial outline until you have
developed a comprehensive customized methodology.
Page 18
Page 19
The Master Test Plan (MTP) should outline how many levels are going to be used and
how they are dependent upon each other.
Page 20
A good testing methodology should embrace all of the points listed above.
Page 21
The software lifecycle is a series of imperfect transformations.
Page 22
Page 23
From the FDA’s point of view: This is true if testing is regarded as a separate phase
conducted at the end of a traditional waterfall development cycle.
From SQE’s point of view: This is true when testing is involved throughout the
development lifecycle.
From our point of view: “Basically, no amount of testing at the end of the project will
make bad software good.”
Page 24
Page 25
A level is defined by the collection of hardware, software, documentation, people, and
processes that make up a specific testing effort.
A test manager may be responsible for a single level or potentially all of the levels
specified in the project’s Master Test Plan.
Page 26
Unit, Integration, System, and Acceptance are the names used by the IEEE for the
levels (stages) of test planning. Many other terms also are used to describe these
levels.
NOTE: Some methods, processes, and terminology use the term “stage” instead of
“level”.
Page 27
How many levels is the right number?
• Too many – consumes too many resources and often extends a development
cycle
• Too few – too many defects may slip through
• Wrong ones – consumes resources and allows too many defects to slip
through
Although there is no “golden rule,” most projects use between three and five levels.
Smaller projects may use only one level; large, life-dependent systems may have
many more.
FYI: The IEEE defines four levels of testing:
• Acceptance
• System
• Integration
• Unit
Page 28
Acceptance Testing (the “glue”):
• A set of tests that, when successfully executed, certify a system meets the
user’s expectations
• Based on the requirements specifications (high-level tests)
• Often written by the end user/client (can be a problem in a Web environment
or in shrink-wrapped software that will be used by millions of unknown users)
• Ideally built before a single line of code is developed
• Developed by or approved by the user representative prior to software
development
• Sample test cases serve as models of the requirements
• The acceptance test set serves as a model of the system
• Changes, if necessary, must be negotiated – should use a very formal
configuration management process
• Ideally, should be short in duration compared to other levels of testing
• May require significant resources to find/build realistic test data
Page 29
System testing is typically the most extensive and time-consuming level of testing. It
should be as comprehensive as time and resources allow. Acceptance testing is often
a subset of system testing; the biggest difference is who does the testing.
System testing considerations:
• Corrections to defects found
• New code integration
• Devices and supporting equipment
• Files and data
Large number of test cases:
• Hundreds and even thousands not uncommon
• Starts with functional testing
• Includes test cases intended to create failures
• Includes test cases designed to stress and even break the system
Focus on reliability and operations:
• Will the system support operational use?
• Security, backup, recovery, etc.
A by-product of the systems test should be the regression test set. A key deliverable,
the regression test set is typically a subset of the system’s test cases and should be
saved for testing future modifications.
TIP: Remember that requirements can be wrong.
Page 30
A major project development decision that impacts testing is “who owns the interface.”
In other words, is the module “caller” or “callee” responsible for ensuring the interface
works? If changes to the interface need to be made, who has final say as to what
those changes are and when they are implemented?
Integration testing is difficult to stage manage. Strategies include
• Top levels working down
• Critical software first
• Bottom levels working up
• Functional capabilities
• Build levels
• Prototypes
FYI: Integration testing may be referred to as “string,” “thread,” or “build” testing. It is
often conducted in “stages” by the same or different groups of testers.
Example integration exit criteria:
• Integration test cases are documented in accordance with corporate
standards
• All test cases are run; X% must pass
• No class 1 or 2 defects
• X% statement coverage
• Must pass the “smoke” test
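Exit criteria like these lend themselves to a mechanical check at the end of a test cycle. A minimal Python sketch follows; the thresholds, field names, and sample numbers are all illustrative assumptions, not anything prescribed by the IEEE:

```python
# Illustrative sketch: automated check of example integration exit criteria.
# Thresholds and result fields are assumptions for demonstration only.

def meets_exit_criteria(results, pass_threshold=0.95, coverage_threshold=0.80):
    """Return True if the integration level may be exited."""
    total = results["cases_run"]
    passed = results["cases_passed"]
    pass_rate = passed / total if total else 0.0
    blocking = results["class1_defects"] + results["class2_defects"]
    return (
        results["all_cases_documented"]    # documented per corporate standards
        and pass_rate >= pass_threshold    # X% of executed cases must pass
        and blocking == 0                  # no class 1 or 2 defects open
        and results["statement_coverage"] >= coverage_threshold
        and results["smoke_test_passed"]   # build passes the smoke test
    )

example = {
    "all_cases_documented": True,
    "cases_run": 200, "cases_passed": 194,
    "class1_defects": 0, "class2_defects": 0,
    "statement_coverage": 0.83,
    "smoke_test_passed": True,
}
print(meets_exit_criteria(example))  # → True (194/200 = 97% pass rate)
```

A check like this only works if the X% values were negotiated and written into the plan up front, which is the real point of documenting the criteria.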
Page 31
Unit testing is the validation of a program module independent from any other portion
of the system. The unit test is the initial test of a module. It demonstrates that the
module is both functionally and technically sound and is ready to be used as a building
block for the application. It is often accomplished with the aid of stub and driver
modules which simulate the activity of related modules.
Unit testing is typically a development responsibility, but testing must help. The testing
team can provide help and guidance in any of the following ways:
• Determining the purpose of the testing activity and why it is difficult
• Analyzing programs to identify test cases
• Defining what is good and bad testing
• Explaining how to create test case specifications
• Defining test execution and evaluation procedures
• Itemizing what records and documentation to retain
• Discussing the importance of re-testing and the concept of the test data set
TIP: Although management support is key, inspections, walkthroughs, and code
reviews typically are more beneficial if management is not present during the actual
review.
FYI: Inspections tend to be more formal than walkthroughs and therefore typically
require more training for the participants.
Page 32
Page 33
The easiest way to organize the testing effort and recognize the many planning risks
and their associated contingencies (and thereby reduce the project’s overall risk) is to
use a Master Test Plan (MTP). The test manager should think of the Master Test Plan
as one of his or her major communication channels with all project participants. A
Master Test Plan ties together all the separate levels into a single cohesive effort.
Page 34
The written test plan should be a by-product of the test process.
The IEEE defines a (master) test plan as:
A document describing the scope, approach, resources, and schedule of intended
testing activities. It identifies the test items, the features to be tested, the testing tasks,
who will do each task, and any risks requiring contingency planning.
The Master Test Plan is obviously a document, but more importantly it is a thought
process. It is a way to get involvement (and have buy-in) from all parties on how
testing will occur. If a Master Test Plan is created and no one uses it, did it really help?
The creation of the Master Test Plan should generally start as early as possible,
ideally in the early stages of project development and/or requirements formulation.
Page 35
Obviously, the first question you must ask yourself when creating a test plan is “Who is
my audience?” The audience for a unit test plan is quite different from the audience for
an acceptance test plan or a Master Test Plan—so the wording, use of acronyms,
technical terms, and jargon should be adjusted accordingly.
Keep in mind that various audiences have different tolerances for what they will and
will not read. Executives may not be willing to read an entire master test plan if it is
fifty pages long, so you may have to consider an executive summary. Come to think
of it, you might want to avoid making the plan prohibitively long or no one will read
(or use) it. If your plan is too long, it may be necessary to break it into several plans of
reduced scope (possibly based around subsystems or functionality). Sometimes, the
size of plans can be kept in check by the judicious use of references. But please
proceed carefully—most people don’t really want to gather a stack of documents just
so they can read a single plan.
The audience of a Master Test Plan usually includes developers, testers, users, the
project sponsor, and other stakeholders.
Often, the author of a Master Test Plan will be the manager of the test group (if one
exists), but it also could be the project manager (the MTP should ultimately form part
of the project plan) or the user’s technical representative.
Page 36
This is the outline of the Master Test Plan template as defined in the IEEE 829-2008
standard.
The IEEE templates should be thought of as guidelines only. Feel free to change,
add, or delete sections as you see fit. The template on the next page is the one I usually
use. It combines most of the things found in the template on this page with some of
the sections the IEEE 829-2008 only includes in the level-specific test plan template.
Page 37
What is it?
A document (or series of documents) that is outlined during project planning and is
expanded and reviewed during a project to guide and control all testing efforts within
the project.
Why have it? It is the primary means by which the test manager exerts influence by:
• Raising testing issues
• Defining testing work
• Coordinating the work of others
• Gaining management approval
• Controlling what happens
Note that item 6 “Software Risks” and item 7 “Planning Risks and Contingencies”
appear as a single section in the IEEE template.
TIP: A table of contents (TOC), glossary, and index make good additions to the IEEE
standard test plan. Risks and contingencies are often restricted to just planning risks
and contingencies. Some organizations have a section called “Assumptions.”
Assumptions that do not occur are really planning risks. The IEEE template should be
considered only a guide. Sections should be changed, added, or deleted to meet your
organization’s objectives. In some cases, the plan may only be a checklist or even
verbal.
The above outline is derived from the IEEE 829.
Page 38
1 ― Test Plan Identifier
In order to keep track of the most current version of your test plan, you will want to
assign it an identifying number. If you have a standard documentation control system
in your organization, then assigning numbers is second nature to you.
TIP: When auditing the testing practices of an organization, always check for the test
plan identifier. If there isn’t one, that usually means that the plan was created but
never changed (and quite probably never used). The MTP should itself also be the
subject of configuration management.
2 ― Introduction
The introduction should at least cover:
� A basic description of the project or release including key features, history,
etc. (scope of the project)
� An introduction to the plan that describes the scope of the plan (what levels,
etc.)
Page 39
3 ― Test Items
This section describes programmatically what is to be tested. If this is a master test
plan, this section might talk in very broad terms: “version 2.2 of the accounting
software,” “version 1.2 of the users manual,” or “version 4.5 of the requirements spec.”
If this is an integration or unit test plan, this section might actually list the programs to
be tested, if known. This section should usually be completed in collaboration with the
configuration or library manager.
FYI: Many MTPs refer to a particular internal “build” of an application rather than the
public version number.
Page 40
4 ― Features to be Tested
This is a listing of what will be tested from the user or customer point of view (as
opposed to test items, which are a measure of what to test from the viewpoint of the
developer or library manager). For example, if you were system testing an Automated
Teller Machine (ATM), features to be tested might include:
• Password validation
• Withdraw money
• Deposit checks
• Transfer funds
• Balance inquiries, etc.
NOTE: The features to be tested might be much more detailed for lower levels of test.
5 ― Features Not to Be Tested
This section is used to record any features that will not be tested and why. There are
many reasons that a particular feature might not be tested (e.g., it wasn’t changed, it
is not yet available for use, it has a good track record, etc.). Whatever the reason a
feature is listed in this section, it all boils down to relatively low risk. Even features that
are to be shipped but not yet “turned on” and available for use pose at least a certain
degree of risk, especially if no testing is done on them. This section will certainly raise
a few eyebrows among managers and users (many of whom cannot imagine
consciously deciding not to test a feature), so be careful to document the reason you
decided not to test a particular feature.
Page 41
6 ― Risk Analysis
This section breaks risk analysis into two parts:
• Software or Product Risks
• Project or Planning Risks
Note: The ISTQB uses the words Product and Project Risk rather than the terms
Software and Planning Risks.
Page 42
The purpose of discussing software risk is to determine what the primary focus of
testing should be. Generally speaking, most organizations find that their resources are
inadequate to test everything in a given release. Outlining software risks helps the
testers prioritize what to test and allows them to concentrate on those areas that are
likely to fail or have a large impact on the customer if they do fail. Organizations that
work on safety-critical software usually can use the information from their safety and
hazard analysis here. However, in many other companies, no attempt is made to
verbalize software risks in any fashion. If your company does not currently do any type
of risk analysis, try a brainstorming session among a small group of users, developers,
and testers to identify their concerns.
The outcome of the software risk analysis should directly impact what you test and in
what order you test. Risk analysis is hard, especially the first time you try it, but you
will get better at it—and it’s definitely worth the effort. Often, it’s a lot more important
what you test than how much you test.
Page 43
Step 1 – Make an inventory of the system's features and attributes.
The level of detail of the inventory is based upon the resources available for the risk
assessment and the detail of the test (i.e., system test is more detailed than
acceptance test). All features/attributes do not necessarily have to be at the same
level of detail.
FYI: A feature is a user function; an attribute is a system characteristic.
Page 44
Step 2 – Determine the likelihood of the feature or attribute failing.
Once the inventory has been built, the next step is to assign a “likelihood of something
going wrong” to each of the features and attributes identified in the inventory (this is
often achieved by conducting a “brainstorming” session). While some organizations
like to use percentages, number of days/years between occurrences, or even
probability “half lives,” using a set of simple categories such as the ones listed in
the slide above often provides sufficient accuracy.
If the likelihood of something going wrong is none or zero, then this item may be
removed from the analysis. However, the removal should be documented.
Step 3 – Determine the impact on the business (not just the IT department) if the
feature or attribute were to fail.
If the impact of the feature or attribute failing is trivial (or even beneficial), then this
item may be removed from the analysis. Again, the removal should be documented.
NOTE: While testers, developers, and customer support representatives may have the
best “gut feel” for determining which features or attributes are most likely to fail, it is
often the line of business (LOB) managers who typically have the best handle on how
big a business impact a failure could cause.
Page 45
Step 4 – Determine the “1st cut” testing priority by multiplying the likelihood and
business impact.
Multiplying the likelihood and the impact will determine which items have the highest
risk. This information then can be used to determine which test cases should be given
the highest priority/extensiveness.
From the ISTQB Syllabus:
Risk can be quantified mathematically when the probability of the occurrence of the
risk (P) and the corresponding damage (D) can be quantitatively represented. The risk
is calculated from the formula P*D. In most cases the probability and damage cannot
be quantified, rather only the tendencies are assignable (e.g., high probability, low
probability, higher damage, average damage, etc.)
The risk is defined as a gradation within a number of classes or categories.
If there are no dependable metrics available, then the analysis is based on personal
perceptions, and the results differ, depending on the person making the judgment. For
example, the project manager, developer, tester and users all may have different
perceptions of risk.
The degree of uncertainty in the judgments used to evaluate the risk should be
recognizable from the results of the risk analysis.
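The P*D arithmetic from Steps 2 through 4 can be sketched in a few lines of Python. The 1–3 ordinal scale and the sample ATM-style inventory below are illustrative assumptions, not values from the course material:

```python
# Sketch of Steps 2-4: assign ordinal likelihood and impact scores,
# multiply to get a first-cut priority (P*D), then sort the inventory.
# The 1-3 scale and the sample entries are illustrative assumptions.

LOW, MEDIUM, HIGH = 1, 2, 3

inventory = [
    # (feature/attribute, likelihood of failure, business impact)
    ("Withdraw money",      MEDIUM, HIGH),
    ("Password validation", LOW,    HIGH),
    ("Balance inquiries",   MEDIUM, LOW),
    ("Transfer funds",      HIGH,   HIGH),
]

prioritized = sorted(
    ((name, likelihood * impact) for name, likelihood, impact in inventory),
    key=lambda item: item[1],
    reverse=True,            # highest-risk items first
)
for name, priority in prioritized:
    print(f"{priority}  {name}")
```

Sorting the multiplied scores gives exactly the "sort friendly" view the next page recommends: the items needing the most test attention rise to the top of the list.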
Page 46
Web Site Attribute             Business Impact
-----------------------------  --------------------------------
Spelling mistakes              Low (projects bad image)
Invalid mail-to                Medium (loss of business)
Viruses received via email     Medium (lost time)
Wrong telephone #s             High (loss of business)
Slow performance               High (loss of business)
Poor usability                 Medium (some loss of business)
Ugly site                      Medium (projects bad image)
Does not work with Browser X   High (loss of business)
Hacker spam attack             Medium (server temporarily down)
Site intrusion                 High (unknown)
Page 47
Once the items have been prioritized, they can be sorted. Sorting the list of features
and attributes provides a clear view of which items need the most attention.
TIP: Consider entering the data into a software tool that is “sort friendly” (e.g., use
Excel instead of Word).
Page 48
If time or resources are an issue, then the priority associated with each feature or
attribute can be used to determine which test cases should be created and/or run.
TIP: Used wisely, a prioritized inventory with a “cut off” point can be powerful when
negotiating with senior management.
In addition to using the Risk Analysis to determine Test Case/Run priority, the Risk
Analysis can be used as a starting point for identifying failure points and subsequently
designing test cases to specifically exercise the suspected failure points. This
technique often is used by organizations with extremely low risk tolerances (e.g.,
medical device manufacturers, the military, and space agencies).
Page 49
7 – Planning Risks and Contingencies
Planning risk can be anything that adversely affects the planned testing effort
(schedule, completeness, quality, etc.)
The ISTQB refers to these as project risks.
Page 50
The purpose of identifying planning risks is to allow contingency plans to be developed
ahead of time and to be ready for implementation if the event occurs.
Examples of Planning Risks:
Risk: Project start time is slightly delayed, but the delivery date has
not changed
Contingency: Staff works overtime
Prerequisites: Overtime is approved by senior management, and staff have
stated willingness to work overtime
Risk: Microsoft releases a new version of browser halfway through
testing (and the delivery date has not changed)
Contingency: Don’t run some of the lower priority test cases for the Web site
and re-run the standard smoke test with the new browser
Risk: Entire testing staff wins state lottery
Contingency: Make sure you are in the syndicate
Page 51
There are many contingencies to consider, but in most cases they will all fall into one
of the categories shown above. For example, reducing testing or development time is
the same as reducing quality, while increasing resources could include users,
developers, contractors, or just overtime, etc.
Many organizations have made a big show of announcing their commitment to quality
with quality circles, quality management, total quality management (TQM), etc.
Unfortunately, in the software world many of these same organizations have
demonstrated that their only true commitment is to the schedule.
Many software projects have schedules that are at best ambitious and at worst
impossible. Once an implementation date is set, it is often considered sacred.
Customers may have been promised a product on a certain date; management
credibility is on the line; corporate reputation is at stake; or the competitors may be
breathing down a company’s neck. At the same time, an organization may have
stretched its resources to the limit. It is not the purpose of this course to address the
many reasons why test managers so often find themselves in this unenviable spot but
to discuss what you can do about it.
Page 52
8 ― Approach
Some of these example strategies may not be applicable for every organization or
project.
Since this section is the heart of the test plan, some companies choose to label it
“strategy” rather than “approach.” The approach should contain a description of how
testing will be done (approach) and discuss any issues that have a major impact on
the success of testing and ultimately of the project (strategy). For a master test plan,
the approach to be taken for each level should be discussed including the entrance
and exit criteria from one level to another.
EXAMPLE: System testing will take place in the test labs in our London office. The
testing effort will be under the direction of the London VV&T team, with support from
the development staff and users in our New York office. An extract of production data
from an entire month will be used for the entire testing effort. Test plans, test design
specifications, and test case specifications will be developed using the IEEE/ANSI
guidelines. All tests will be captured using a testing tool for subsequent regression
testing. Tests will be designed and run to test all features listed in section 4 of the
system test plan. Additionally, testing will be done in concert with our Paris office to
test the billing interface. Performance, security, load, reliability, and usability testing will
be included as part of the system test. Performance testing will begin as soon as the
system has achieved stability. All user documentation will be tested in the latter part of
the system test.
Page 53
Many organizations use an “off-the-shelf” methodology; others have either created a
brand new methodology from scratch or have adapted somebody else’s methodology.
In the event that your organization does not have even a rudimentary process,
consider using your next project as a “pilot” project. The decisions, plans, and
documentation generated by this project can be used as a basis for future project
enhancement and improvement.
FYI: A European telecommunications company runs an annual “process sample”
competition. The winning team’s documentation is used as the “sample” appendix in
the company’s process handbook. Along with the prestige that accompanies selection
as this year’s “model,” the team members also receive a cash prize.
Page 54
Perhaps the two most important entrance and exit criteria for a test manager are
• The exit criteria for unit/integration testing (i.e., What should development
have done/completed during its testing phase?)
• The entrance criteria into system testing (i.e., What can the test group
expect?)
Page 55
If you want to create a simple Web site consisting of only one HTML file, you only
need to upload that one file. On a typical Web site involving dozens, hundreds, or
even thousands of files, however, the process of uploading a Web site becomes more
complicated and time consuming, especially when the Web site runs applications that
need to be built themselves.
A common practice at several software companies is the “daily build and smoke test”
process. Every file is compiled, linked, and uploaded to a test Web site every day, and
the Web site is then put through a “smoke test,” a relatively simple check to see
whether the Web site “smokes” when it’s used.
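A daily smoke test of this kind can be sketched as a short list of fast checks that must all pass before deeper testing begins. In this minimal sketch the check names and the stand-in lambdas are placeholders for real page fetches against the freshly uploaded site:

```python
# Minimal sketch of a post-build smoke test: a handful of quick checks
# that must all pass before the build is handed on for deeper testing.
# The check names and stand-in lambdas are illustrative placeholders.

def smoke_test(checks):
    """Run each named check; return (passed, list of failed check names)."""
    failures = [name for name, check in checks if not check()]
    return (not failures, failures)

# In practice these might fetch key pages of the test site and verify an
# HTTP 200 status and some expected content on each.
checks = [
    ("home page renders",      lambda: True),
    ("login form present",     lambda: True),
    ("search returns results", lambda: True),
]

ok, failures = smoke_test(checks)
print("smoke test passed" if ok else f"smoke test FAILED: {failures}")
```

The value of the daily cadence is that a failing check points at roughly one day's worth of changes, which keeps the debugging search space small.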
Page 56
Perhaps the most well-known form of coverage is code coverage. However, there are
other coverage measures:
• Requirements coverage attempts to estimate the percentage of business
requirements that are being tested by the current test set.
• Design coverage attempts to measure how much of the high-level design is
being validated by the current test set.
• Interface coverage attempts to estimate the percentage of module interfaces
that are being exercised by the current test set.
• Code coverage attempts to measure the percentage of program statements,
branches, or paths that are being executed by the current test set. Code
coverage typically requires the assistance of a special tool.
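As a sketch of the first of these measures, requirements coverage can be estimated from a simple traceability mapping between test cases and requirements. The requirement IDs and the mapping below are invented purely for illustration:

```python
# Sketch of a requirements-coverage estimate: the percentage of business
# requirements exercised by at least one test case in the current set.
# Requirement IDs and the traceability mapping are illustrative assumptions.

requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# Which requirements each test case traces back to.
test_traceability = {
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-1", "REQ-2"},
    "TC-03": {"REQ-4"},
}

covered = set().union(*test_traceability.values())
coverage = len(covered & requirements) / len(requirements)
print(f"requirements coverage: {coverage:.0%}")  # → 75% (REQ-3 is untested)
```

Unlike code coverage, this measure needs no special tooling beyond a maintained traceability matrix, which many test management tools already provide.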
Page 57
Another topic that should generally be discussed in the approach is how configuration
management will be handled during test. However, it is possible that this could be
handled in a document of its own in some companies.
Configuration management in this context includes change management as well as
the decision-making process used to prioritize bugs. Change management is
important because it is critical to keep track of the version of the software and related
documents that are being tested. There have been many woeful tales of companies
that have actually shipped the wrong (untested) version of the software.
Equally important is the process for reviewing, prioritizing, fixing, and re-testing bugs.
The test environment in some companies is controlled by the developers, which can
be very problematic for test groups. As a rule, programmers want to fix every bug
immediately. It’s as though the programmers feel that if they can fix the bug quickly
enough, it didn’t happen! Testers, on the other hand, are famous for saying that
“testing a spec is like walking on water; it helps if it’s frozen.” Obviously, both extremes are counterproductive. If every bug fix were moved into the test environment immediately, the testers would never do anything but regression testing. Conversely, if the code is frozen prematurely, the tests will eventually become unrealistic. The key is to agree on a process for reviewing and fixing bugs and for promoting the fixes back into the test environment.
This process may be very informal during unit and integration test but will probably
need to be much more rigid at higher levels of test.
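The agreed process can be thought of as a small state machine for each defect. A minimal sketch, where the particular states and transitions are one plausible workflow rather than a standard:

```python
# Sketch of a defect workflow: each state lists the states it may move to.
# These states and transitions are illustrative, not prescriptive.

WORKFLOW = {
    "reported": {"triaged"},
    "triaged":  {"fixing", "deferred"},
    "fixing":   {"retest"},
    "retest":   {"closed", "fixing"},  # a failed retest reopens the fix
    "deferred": {"triaged"},
    "closed":   set(),
}

def transition(state, new_state):
    """Move a defect to new_state, or raise if the workflow forbids it."""
    if new_state not in WORKFLOW.get(state, set()):
        raise ValueError(f"cannot move from {state!r} to {new_state!r}")
    return new_state

state = "reported"
for step in ("triaged", "fixing", "retest", "closed"):
    state = transition(state, step)
print(state)  # closed
```

Making the allowed transitions explicit is one way to prevent the “fix it before anyone notices” shortcut: a fix cannot reach the test environment without passing through triage and retest.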
Page 61
Another strategy issue that should probably be addressed in the test plan is the use of
tools and automation. Testing tools can be a benefit to the development and testing
staff, but they also can spell disaster if their use is not planned. Using some types of
tools can actually require more time to develop, implement, and run a test set the first
time than if the tests were run manually. Using tools, however, may save time during
regression testing, and other types of tools can pay time dividends from the very
beginning.
Rules of thumb for deciding which test cases to automate:
• Repetitive tasks (e.g., regression testing)
• Longer procedures
• Tedious tasks (e.g., code coverage/complexity measurement)
• Performance testing
• Automate if the test will be run more than x times (e.g., 3, 4, 5, or ?)
Automation issues:
• Plan for how the tools will support the methodology
• Train staff in the mechanics of each tool
• Ensure a stable application
• Configure the test environment
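One way to apply these rules of thumb consistently is a simple scoring function. A sketch, where the threshold (x = 3 runs) and the weights are invented purely to illustrate the idea:

```python
# Sketch: score a test case for automation candidacy.
# The run threshold (x = 3) and the weights are invented for illustration.

def automation_score(expected_runs, is_tedious, is_performance_test,
                     run_threshold=3):
    """Higher score = better automation candidate."""
    score = 0
    if expected_runs > run_threshold:   # repetitive, e.g. regression tests
        score += 2
    if is_tedious:                      # e.g. code coverage measurement
        score += 1
    if is_performance_test:             # usually impractical to run manually
        score += 3
    return score

# A regression test expected to run 10 times scores higher than a
# one-off exploratory check.
print(automation_score(10, is_tedious=False, is_performance_test=False))  # 2
print(automation_score(1,  is_tedious=False, is_performance_test=False))  # 0
```

The scores are only a ranking aid; the point is to make the selection criteria explicit rather than automating whatever happens to be easiest.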
Page 62
Test tool realities:
• Many testers are highly interested in tools but either do not have the time or do not want to apply the effort to use them correctly.
• Testers know nothing happens by magic, yet want to believe test tools will solve all testing problems.
• Tool use must be taught on an ongoing basis. The benefits and requirements of each tool need to be understood by everyone.
• Training must be followed up with assistance and support. Help should be available by phone.
• Tools must be integrated into routine procedures and processes. This includes simplified job control, software interfaces, etc.
Page 63
9 ― Item Pass/Fail Criteria
Just as every test case needs an expected result, each test item needs defined pass/fail criteria. Typically, pass/fail criteria are expressed in terms of:
• Percentage of test cases passed/failed
• Number, type, severity, and location of defects
• Usability
• Reliability
• Stability
The exact criteria used will vary from level to level and organization to organization. If
you’ve never tried to do this before, you may find it a little frustrating the first time or
two. However, trying to specify “what is good enough” in advance can really help
crystallize the thinking of the various test planners and reduce contention later. If the
software developer is a contractor, this section of the MTP can even have legal
ramifications.
An extreme example of a design pass/fail criterion: if the number of bugs reaches a certain predefined level, the entire design is scrapped and a new design is developed from scratch.
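Criteria like these can be made executable so that “good enough” is checked the same way every time. A sketch, assuming two invented example thresholds (95% of test cases passed, no open severity-1 defects):

```python
# Sketch: evaluate item pass/fail criteria for a test level.
# The thresholds (95% passed, zero open severity-1 defects) are
# example values, not recommendations.

def item_passes(cases_passed, cases_run, open_defects_by_severity,
                min_pass_rate=0.95, max_sev1_open=0):
    """Return True if the test item meets its pass criteria."""
    if cases_run == 0:
        return False                       # nothing run, nothing proved
    pass_rate = cases_passed / cases_run
    sev1_open = open_defects_by_severity.get(1, 0)
    return pass_rate >= min_pass_rate and sev1_open <= max_sev1_open

print(item_passes(97, 100, {1: 0, 2: 3}))  # True
print(item_passes(97, 100, {1: 1}))        # False: an open sev-1 defect
```

Each test level would typically supply its own thresholds, reflecting the point that the exact criteria vary from level to level.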
Page 64
10 ― Suspension and Resumption Criteria
The purpose of this MTP section is to identify any conditions that warrant a temporary
suspension of all or some of the testing. Because test execution time is often so
hurried, testers have a tendency to surge forward no matter what happens.
Unfortunately, this often can lead to additional work and a great deal of frustration. For
example, if a group is testing some kind of communications network or switch, there
may come a time when it is no longer useful to continue testing a particular interface if
the protocol to be used is undefined or in flux.
Sometimes, metrics are established to flag a condition that warrants suspending
testing. For example, if a certain predefined number of total defects or defects of a
certain severity are encountered, testing may be halted until a determination can be
made whether to redesign part of the system or try an alternate approach, etc.
Sometimes, suspension criteria are displayed in the form of a Gantt chart (a bar chart that illustrates a project schedule, including dependencies).
Examples of suspension criteria include:
• The Web server hosting the Web site under test becomes unavailable
• The software license for a key testing tool expires
• Sample production data to be used for test data is unavailable
• Key end-user personnel are unavailable
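Suspension criteria like these can be expressed as simple predicates over the current test status. A sketch, where the status fields and the defect threshold are invented for illustration:

```python
# Sketch: decide whether to suspend testing, returning the reasons.
# The status fields and the severity-1 threshold are invented examples.

def suspension_reasons(status):
    """Return a list of reasons to suspend testing (empty = keep going)."""
    reasons = []
    if not status.get("web_server_up", True):
        reasons.append("Web server hosting the site under test is down")
    if not status.get("tool_license_valid", True):
        reasons.append("license for a key testing tool has expired")
    if status.get("sev1_defects", 0) >= 5:   # predefined defect threshold
        reasons.append("too many severity-1 defects open")
    return reasons

status = {"web_server_up": True, "tool_license_valid": False,
          "sev1_defects": 2}
print(suspension_reasons(status))
# ['license for a key testing tool has expired']
```

Returning the reasons, rather than a bare yes/no, gives the team the information it needs to decide whether to redesign, wait, or try an alternate approach.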
Page 65
11 ― Testing Deliverables
This is a listing of all documents, tools, and other elements that are to be developed
and maintained in support of the testing effort. Examples include: test plans, test
design specifications, test cases, custom tools, defect reports, test summary reports,
and simulators. The software to be tested is not a test deliverable; that is listed under
“Test Items.”
Page 66
12 ― Testing Tasks
The IEEE defines this section of the Master Test Plan as:
Identify the set of tasks necessary to prepare for and perform testing. Identify all inter-
task dependencies and any special skills required.
This section can be used to keep a tally of tasks that need to be completed. It is useful
to assign responsibilities/support duties as well.
TIP: Once a task is complete, don’t delete the task from the list. Instead, cross it off to
indicate to anyone unfamiliar with the project that the task has been completed and
not missed.
TIP: Embedding the test names and/or test IDs into the Master Test Plan will allow a
word processor to find where a particular test case is referenced much faster than a
manual “eyeball” search.
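Inter-task dependencies like these can also be checked mechanically. A sketch (the task names and dependencies are invented) that orders testing tasks so every task comes after its prerequisites, using Python's standard-library topological sorter:

```python
# Sketch: order testing tasks so each comes after its prerequisites.
# Task names and dependencies are invented for illustration.
from graphlib import TopologicalSorter  # Python 3.9+

tasks = {
    "write test plan":        set(),
    "build test environment": {"write test plan"},
    "write test cases":       {"write test plan"},
    "execute tests":          {"build test environment", "write test cases"},
    "write summary report":   {"execute tests"},
}

order = list(TopologicalSorter(tasks).static_order())
print(order)
```

A side benefit: the sorter raises an error on circular dependencies, catching planning mistakes before they reach the schedule.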
Page 67
13 ― Environmental Needs
• Hardware Configuration: An attempt should be made to make the platform as similar to the real-world system as possible. If the system is destined to be run on multiple platforms, a decision must be made whether to replicate all of these configurations or only targeted ones (e.g., the riskiest, the most common). When you’re determining the hardware configuration, don’t forget the system software as well.
• Data: Again, it is necessary to identify where the data will come from to populate the test database/files. Choices might include production data, purchased data, user-supplied data, generated data, and simulators. It will be necessary to determine how to validate the data; you should not assume that even production data is totally accurate. You must also assess the fragility of the data so you know how often to refresh it!
• Interfaces: When planning the test environment, it is very important to determine and define all interfaces. Occasionally the systems you must interface with already exist; in other instances, they may not yet be ready and all you have to work with is a design specification or some type of protocol. If the interface does not already exist, building a realistic simulator may be part of your testing job.
• Facilities, Publications, Security Access, etc.: This may seem trivial, but you must ensure that you have somewhere to test, the appropriate security clearances, and so forth.
Page 68
14 ― Responsibilities
Using a matrix in this section of the MTP quickly shows major responsibilities such as
establishment of the test environment, configuration management, unit testing, and so
forth.
TIP: It is a good idea to specify the responsible parties by name or by organization.
Page 69
15 ― Staffing and Training Needs
While the actual number of staff required is, of course, dependent on the scope of the
project, schedule, etc., this section of the MTP should be used to describe the number
of people required and what skills they need. You may simply want to say that you
need fifteen journeymen testers and five apprentice testers. Often, however, you will
have to be more specific. It is certainly acceptable to state that you need a special
person: “We must have Jane Doe to help establish a realistic test environment.”
Examples of training needs might include learning about:
• How to use a tool
• Testing methodologies
• Interfacing systems
• Management systems, such as defect tracking
• Configuration management
• Basic business knowledge (related to the system under test), etc.
Page 70
16 ― Schedule
The schedule should be built around the milestones contained in the project plan, such
as delivery dates of various documents and modules, availability of resources, and
interfaces. Then, it will be necessary to add all of the testing milestones. These testing
milestones will differ in level of detail depending on the level of the test plan being
created. In a master test plan, milestones will be built around major events such as
requirements and design reviews, code delivery, completion of user manuals, and
availability of interfaces. In a unit test plan, most of the milestones will be based on the
completion of various programming specs and units.
Initially, it may be necessary to build a generic schedule without calendar dates. This
will identify the time required for various tasks and dependencies without specifying
start and finish dates. Normally, the schedule will be portrayed graphically using a
Gantt chart to show dependencies.
TIP: While doing the initial planning, use a start day of day zero, rather than a specific
date (e.g., May 14). Unfortunately, when specific dates are used, many reviewers
focus on the start and end dates and ignore the middle (i.e., the schedule).
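The day-zero technique amounts to planning milestones as offsets and binding them to calendar dates only at the end. A sketch, where the milestone names, offsets, and start date are invented:

```python
# Sketch: plan milestones as day-zero offsets, then bind a start date.
# Milestone names, offsets, and the start date are invented examples.
from datetime import date, timedelta

milestones = {                 # milestone -> days after day zero
    "requirements review": 0,
    "test cases complete": 10,
    "code delivery":       15,
    "test execution done": 30,
}

def bind_schedule(milestones, start):
    """Convert day-zero offsets into concrete calendar dates."""
    return {name: start + timedelta(days=offset)
            for name, offset in milestones.items()}

schedule = bind_schedule(milestones, date(2013, 9, 30))
print(schedule["code delivery"])  # 2013-10-15
```

Because the offsets are reviewed before any calendar date exists, reviewers are forced to critique the durations and dependencies, not just the start and end dates.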
Page 72
17 ― Approvals
The approver should be the person (or persons) who can say that the software is
ready to move to the next stage. For example, the approver on a unit test plan might
be the development manager. The approvers on a system test plan might be the
person in charge of the system test and whoever is going to receive the product next
(which may be the customer, if they are going to be doing the acceptance testing). In
the case of the master test plan, there may be many approvers: developers, testers,
customers, QA, configuration management, etc.
You should try to avoid the situation in which you seek the appropriate signatures after
the plan has been completed. If you do get the various parties to sign at that time, all
you have is their autograph (which is fine if they ever become famous and you’re an
autograph collector). Instead, your goal should be to get agreement and commitment,
which means that the approvers should have been involved in the creation and/or
review of the plan during its development. It is part of your challenge as the test
planner to determine how to involve all of the approvers in the test planning process.
TIP: If you have trouble getting the right people involved in writing the test plan,
consider inviting them to a test planning meeting and then publish the minutes of the
meeting as the first draft of the plan.
Page 73
The purpose of the Test Summary Report is to summarize the results of the
designated testing activities and to provide evaluations based on these results.
The IEEE defines a Test Summary Report as being made up of the following sections:
• Report Identifier:
Specify the unique identifier assigned to the Test Summary Report.
• Summary:
Summarize the evaluation of the test items. Identify the items tested, indicating their version/revision level. Indicate the environment in which the testing activities took place. For each item, supply references to the following documents (if they exist): test plan, test design specifications, test procedure specifications, test item transmittal reports, test logs, and test incident reports.
• Variances:
Report any variances of the test items from their design specifications. Indicate any variances from the test plan, test designs, or test procedures. Specify the reason(s) for each variance.
• Comprehensiveness Assessment:
Evaluate the comprehensiveness of the testing process against the comprehensiveness criteria specified in the test plan, if the plan exists. Identify features or feature combinations that were not sufficiently tested and explain the reasons.
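These sections can double as a completeness checklist for report authors. A minimal sketch that renders the four sections into a report skeleton (the section contents are placeholders, not a real report):

```python
# Sketch: render the Test Summary Report sections as a skeleton,
# with "TBD" marking anything not yet written. Contents are placeholders.

SECTIONS = ("Report Identifier", "Summary", "Variances",
            "Comprehensiveness Assessment")

def report_skeleton(contents):
    """Return the report as text, one heading line then its content."""
    lines = []
    for section in SECTIONS:
        lines.append(section)
        lines.append(contents.get(section, "TBD"))
    return "\n".join(lines)

text = report_skeleton({"Report Identifier": "TSR-2013-001"})
print(text.splitlines()[0])  # Report Identifier
```

Searching the rendered report for "TBD" then gives a quick check that no required section was skipped.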