Page 1: Software test management

Software Test Management

By: Kaushik Raghavan

Page 2: Software test management

Test Strategy

A strategy for software testing integrates software test case design methods into a well-planned series of steps that result in the successful construction of software.

The strategy provides a road map that describes the steps to be conducted as part of testing, when these steps are planned and then undertaken, and how much effort, time, and resources will be required.

Testing is a set of activities that can be planned in advance and conducted systematically. For this reason, a template for software testing—a set of steps into which we can place specific test case design techniques and testing methods—should be defined for the software process.

Page 3: Software test management

Generic Characteristics of a Test Strategy

Testing begins at the component level and works "outward" toward the integration of the entire computer-based system.

Different testing techniques are appropriate at different points in time.

Testing is conducted by the developer of the software and an independent test group.

Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.

Page 4: Software test management

Preparing a Test Strategy

Any test strategy should consist of the following:

Scope

Out of Scope

Test Levels

Roles and Responsibilities

Testing Tools

Risks and Mitigation

Regression Test Approach

Test Groups

Page 5: Software test management

Preparing a Test Strategy

Milestones

Test Priorities

Escalation Mechanism

Test Summary Reports

Status Reporting Mechanism

Test Records

Page 6: Software test management

Some Best Practices for Developing Test Strategies

• Specify product requirements in a quantifiable manner long before testing commences.

• Understand the users of the software and develop a profile for each user category.

• State testing objectives explicitly.

• Develop a testing plan that emphasizes “rapid cycle testing.”

• Use effective formal technical reviews as a filter prior to testing.

• Develop a continuous improvement approach for the testing process.

• A good test strategy is specific, practical, and justified.

• The purpose of a test strategy is to clarify the major tasks and challenges of the test project.

Page 7: Software test management

Web Testing Check List

• User Interface Testing
A. Easy to use
B. Instructions are simple and clear.

• Site map or navigation bar
A. Is the site map correct?
B. Does each link on the map actually exist?
C. Is the navigation bar present on every screen?

• Site Content

Page 8: Software test management

Contd..

• Application Specific Functionality

Correctness of the functionality of the website.
No internal or external broken links (see the link-checker sketch below).
User-submitted information through forms needs to work properly.
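
A minimal broken-link check, using only the standard library; START_URL is a hypothetical placeholder, and a real checker would crawl recursively and parse HTML properly:

# Sketch: flag broken links on one page. Not a complete crawler.
import re
import urllib.request
from urllib.parse import urljoin

START_URL = "https://example.com/"  # hypothetical site under test

def fetch(url):
    req = urllib.request.Request(url, headers={"User-Agent": "link-check"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status, resp.read().decode("utf-8", errors="replace")

status, html = fetch(START_URL)
# Crude href extraction; an HTML parser would be more robust.
links = {urljoin(START_URL, href) for href in re.findall(r'href="([^"#]+)"', html)}

for link in sorted(links):
    try:
        code, _ = fetch(link)
        print(f"{code}  {link}")
    except Exception as exc:  # 4xx/5xx raise HTTPError here
        print(f"BROKEN  {link}  ({exc})")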

• Cookies

If the system uses cookies, make sure the cookies work. If cookies store login information, make sure the information is encrypted in the cookie file. If the cookie is used for statistics, make sure those cookies are encrypted too; otherwise people can edit their cookie information.
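
A minimal sketch of the login-cookie check, using the third-party requests library; the endpoint and form field names are hypothetical placeholders:

# Sketch: verify a login cookie does not carry credentials in plain text.
import requests

session = requests.Session()
session.post("https://example.com/login",  # hypothetical endpoint
             data={"user": "alice", "password": "s3cret"})

for cookie in session.cookies:
    # The raw username or password appearing in a cookie value is a red flag.
    assert "alice" not in cookie.value, f"plaintext user in {cookie.name}"
    assert "s3cret" not in cookie.value, f"plaintext password in {cookie.name}"
    print(cookie.name, "looks opaque:", cookie.value[:16], "...")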

Page 9: Software test management

Contd..

• Error Handling

• Compatibility Testing

• Load/Stress Testing

• Log Files

Page 10: Software test management

Strategy for Test Automation

What can be automated and what cannot be automated.

Identifying the test cases to automate (a scoring sketch follows this list).

Test Automation Architecture

Defining a framework for automation.

Test Automation Process.
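
One way to make "identifying the test cases to automate" concrete is to score candidates by how often they run, how stable the feature is, and how expensive they are to execute manually. The fields and figures below are illustrative assumptions, not a standard:

# Sketch: a simple heuristic for picking automation candidates.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    runs_per_release: int   # how often the case is executed
    stability: float        # 0..1, how stable the feature under test is
    manual_minutes: int     # manual execution cost

def automation_score(tc: TestCase) -> float:
    # Frequently run, stable, manually expensive cases score highest.
    return tc.runs_per_release * tc.stability * tc.manual_minutes

cases = [
    TestCase("login smoke", runs_per_release=20, stability=0.9, manual_minutes=5),
    TestCase("one-off data fix", runs_per_release=1, stability=0.3, manual_minutes=30),
]
for tc in sorted(cases, key=automation_score, reverse=True):
    print(f"{automation_score(tc):7.1f}  {tc.name}")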

Page 11: Software test management

Strategy for Performance Testing

• Choosing the tool.

• Performance Test Objective.

• Performance test scenarios.

• Defining the SLA (see the latency sketch after this list).

• Defining Performance Counters.

• Performance Test Architecture.

• Performance Test Process.
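
As a minimal illustration of checking a response-time SLA, the sketch below times repeated requests and asserts a 95th-percentile bound, using only the standard library. The URL, sample count, and SLA value are assumptions:

# Sketch: tiny latency check against an assumed SLA.
import time
import statistics
import urllib.request

TARGET_URL = "https://example.com/api/health"   # hypothetical endpoint
SLA_P95_MS = 500                                # assumed SLA: p95 under 500 ms

samples = []
for _ in range(50):
    start = time.perf_counter()
    urllib.request.urlopen(TARGET_URL, timeout=5).read()
    samples.append((time.perf_counter() - start) * 1000)

p95 = statistics.quantiles(samples, n=20)[18]   # 95th percentile
print(f"p95 = {p95:.1f} ms (SLA {SLA_P95_MS} ms)")
assert p95 <= SLA_P95_MS, "SLA violated"

A real performance test would add ramp-up, concurrency, and think time; this only shows where the SLA assertion fits.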

Page 12: Software test management

Test Strategy for A Maintenance Project

• Defining the features to test

• Analyzing the scope of the specific maintenance project (e.g., upgrade, minor implementation, support packs)

• Identifying missing test scenarios and documentation

• Performing the priority analysis from an IT perspective rather than from the business perspective as in the previous phase. The IT point of view should reflect an analysis of the impact of the changes introduced to the system.

Page 13: Software test management

Items for a maintenance Project

• A description of the test assignment and scope

• A high-level description of the features to test and not to test

• A list of item pass/fail criteria

• A list of the suspension and resumption requirements

• Strategy to load test data into the test environment

Page 14: Software test management

What to Test

• Complex

• New

• Changed

• Dependency

• Critical

• Precise

• Popular

• Third-party

• Buggy

Page 15: Software test management

Test Approach

• The test approach describes the types of tests performed and the sequence of tests as executed in the test lab

• In the test approach, the following three items have to be clearly specified:

Test Types

Test Sequence

Test Cycle

Page 16: Software test management

Test Types

• Build verification tests.

• Functional tests.

• Security tests.

• Technical writing tests.

Page 17: Software test management

BVT

• The following goals are adopted for build verification testing:

• Clarity: The steps should clearly describe the options that must be selected to enable the service.

• Completeness: All configuration options required to configure the service are described.

• Presentation: The process for configuring the service is linearly presented and represents the process that must be followed when initially deploying the solution.

• Consistency: The configuration options presented in the solution should always use the tools that are best suited for configuring the service.

Page 18: Software test management

Functional Tests

• Atomicity: The test verifies the individual units of functionality such as the configuration of a particular functional option for a service.

• Integration: The functional test verifies the overall function of the services considering all the options that were configured in the solution.

• Completeness: The functional test cases ensure that the technical goals of the solution are achieved.

Page 19: Software test management

Technical Writing Tests

• Prerequisites: Documented hardware and software prerequisites should be verified.

• References: References between documents or chapters of the solution should be validated.

• Links: Links to web pages within the solution documents or to other approved sites should be verified as valid.

Page 20: Software test management

Security Tests

• Principle of least privilege

• Availability

• Integrity

• Confidentiality

• Spoofing

• Repudiation

Page 21: Software test management

Test Sequence

• Build phase

• Build verification testing phase

• Functional testing phase

• Technical writing testing phase

• Security testing phase

Page 22: Software test management

Test Life Cycle

Page 23: Software test management

Test Life Cycle

• Requirements Analysis: Testing should begin in the requirements phase of the software development life cycle (SDLC).

• Design Analysis: During the design phase, testers work with developers to determine what aspects of a design are testable and under what parameters those tests work.

• Test Planning: Test Strategy, Test Plan(s), Test Bed creation.

• Test Development: Test Procedures, Test Scenarios, Test Cases, Test Scripts to use in testing software.

• Test Execution: Testers execute the software based on the plans and tests and report any errors found to the development team.

• Test Reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.

• Retesting the Defects

Page 24: Software test management

Test Methodologies

• When planning the methodology, the following things have to be considered:

• Where will the testing take place?

• Who will perform the tests?

• How will you communicate with and involve participants?

• How will you schedule the testing?

• How will you manage application problems?

Page 25: Software test management

Contd..

• Do you have staff available for testing?

• Does your staff have the appropriate level of expertise?

• What are the internal costs compared to the outsourcing costs?

• What is your time frame? Can the testing get done faster if you outsource it?

• What are your security requirements? Would you need to provide confidential data to an external organization?

Page 26: Software test management

Task List for Application Testing

• Inventory the applications used for business tasks.

• Develop a system for prioritizing business tasks.

• Prioritize the applications by how critical they are to running your business.

• Write a test plan, including test methodology, lab and test resource requirements, and schedule.

• Develop a test tracking system for capturing and reporting test results.

• Promote the testing methodology.

• Schedule test events.

• Test applications and record results.

• Report on testing progress.

Page 27: Software test management

Automation Test methodology

• Functional Decomposition:

The Fundamental Areas Include:

Navigation

Availability

Specific Business Function

Page 28: Software test management

A Hierarchical Architecture Model

• Start-Up Script

• Main Script (module, use case)

• User-Defined Functions (utility functions)

• Reusable Actions (business functions)

• Reports (log reports, status reports)

• Recovery Scenarios

• Test Batches

Page 29: Software test management

Various Test Methodologies

• Regression Testing

• Progression Testing

• Vertical Testing

• Horizontal testing

• State Transition testing

• Agile testing

• Ad hoc Testing

Page 30: Software test management

Test Estimation Process

• Based on the risk and criticality associated with the AUT, the project team should establish a coverage goal during the test planning.

• Test scope is an important factor to be considered.

• The objective of test estimation is to ensure that the estimate covers every aspect of the application.

• Effort estimation is the most critical part of the process.

Page 31: Software test management

Test Effort Estimation - Approaches

• Experiences in similar/previous projects

• Historical data

• Predefined budget

• Intuition of the experienced tester

• Extrapolation

• Testing best practice

Page 32: Software test management

Testing best practice

Page 33: Software test management

Estimation Methods

• Use Case Point

• Function Point

• Test Case Point

Page 34: Software test management

Use Case Point

• Use Case Point Analysis is an estimation method for new technology projects.

• Widely Used

• Can be used for estimation from the requirements phase itself

• The complexity of use cases is considered

• A quantified weighting factor is used for adjustments

• Independent of technology being used

Page 35: Software test management

Contd..

• Ensures that all requirements are met.

• The Use Case Approach captures requirements from the perspective of how the user will actually use the system.

Page 36: Software test management

Steps For Use Case Point Analysis

• Define use cases

• Count the unadjusted use case weight (UUCW)

• Count the unadjusted actor weight (UAW)

• Calculate the unadjusted use case points (UUCP)

• Calculate the technical complexity factor (TCF)

• Calculate the environmental complexity factor (ECF)

• Calculate the use case points (UCP)

• Convert use case points into person effort (a worked sketch follows this list)
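
The case study on the next page supplies its own details and formulae; as a preview, here is a worked sketch of the steps above using Karner's standard UCP formulation. All counts and factor sums are invented for illustration:

# Worked Use Case Point sketch (Karner's standard formulation).
# Every number below is an invented example, not the case study's data.

# Steps 1-2: unadjusted use case weight (simple=5, average=10, complex=15)
uucw = 4 * 5 + 6 * 10 + 2 * 15   # 4 simple, 6 average, 2 complex -> 110
# Step 3: unadjusted actor weight (simple=1, average=2, complex=3)
uaw = 2 * 1 + 1 * 2 + 3 * 3      # -> 13
# Step 4: unadjusted use case points
uucp = uucw + uaw                # -> 123

# Steps 5-6: the factor sums come from rating 13 technical and 8
# environmental factors (0-5) times their standard weights; assume the sums.
tcf = 0.6 + 0.01 * 30            # assumed technical factor sum = 30 -> 0.90
ecf = 1.4 - 0.03 * 17            # assumed environmental factor sum = 17 -> 0.89

# Step 7: adjusted use case points
ucp = uucp * tcf * ecf           # 123 * 0.90 * 0.89 = ~98.5

# Step 8: convert to effort (20 person-hours per UCP is a common default)
effort_hours = ucp * 20
print(f"UCP = {ucp:.1f}, effort = {effort_hours:.0f} person-hours")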

Page 37: Software test management

Case Study.

A case study on Use Case Point estimation. All the details and formulae will be given. A sample estimation will be shown before the case study.

Page 38: Software test management

Function Point Analysis

What is Function Point Analysis (FPA)?

• It is designed to estimate and measure the time, and thereby the cost, of developing new software applications and maintaining existing software applications.

• It is also useful in comparing and highlighting opportunities for productivity improvements in software development.

• It was developed by A.J. Albrecht of the IBM Corporation in the late 1970s.

• The other main approach used for measuring the size, and therefore the time required, of a software project is lines of code (LOC), which has a number of inherent problems.

Page 39: Software test management

Function Point Analysis

How is Function Point Analysis done?

Working from the project design specifications, the following system functions are measured (counted):

• Inputs

• Outputs

• Files

• Inquiries

• Interfaces

Page 40: Software test management

Function Point Analysis

These function-point counts are then weighted (multiplied) by their degree of complexity:

             Simple   Average   Complex
Inputs          2        4         6
Outputs         3        5         7
Files           5       10        15
Inquiries       2        4         6
Interfaces      4        7        10

Page 41: Software test management

Function Point Analysis

A simple example:

Inputs:      3 simple × 2 = 6;   4 average × 4 = 16;   1 complex × 6 = 6
Outputs:     6 average × 5 = 30;  2 complex × 7 = 14
Files:       5 complex × 15 = 75
Inquiries:   8 average × 4 = 32
Interfaces:  3 average × 7 = 21;  4 complex × 10 = 40

Unadjusted function points = 240
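
A short sketch that reproduces the count above from the table's weights:

# Computes the unadjusted function point count from the slides' own
# weights and counts; reproduces the 240 shown above.
WEIGHTS = {  # (simple, average, complex) weights from the previous page
    "inputs": (2, 4, 6), "outputs": (3, 5, 7), "files": (5, 10, 15),
    "inquiries": (2, 4, 6), "interfaces": (4, 7, 10),
}
COUNTS = {  # (simple, average, complex) counts from the example
    "inputs": (3, 4, 1), "outputs": (0, 6, 2), "files": (0, 0, 5),
    "inquiries": (0, 8, 0), "interfaces": (0, 3, 4),
}
ufp = sum(c * w for k in WEIGHTS for c, w in zip(COUNTS[k], WEIGHTS[k]))
print("Unadjusted FP =", ufp)   # -> 240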

Page 42: Software test management

Function Point Analysis

In addition to these individually weighted function points, there are factors that affect the project and/or system as a whole. There are 14 of these general system characteristics, and each is ranked from “0” (no influence) to “5” (essential).

The following are some examples of these factors:

• Is high performance critical?

• Is the internal processing complex?

• Is the system to be used in multiple sites and/or by multiple organizations?

• Is the code designed to be reusable?

• Is the processing to be distributed?

• and so forth . . .

Page 43: Software test management

Function Point Analysis

Continuing our example . . .

Complex internal processing = 3
Code to be reusable = 2
High performance = 4
Multiple sites = 3
Distributed processing = 5

Project adjustment factor = 17

Adjustment calculation:
Adjusted FP = Unadjusted FP × [0.65 + (adjustment factor × 0.01)]
            = 240 × [0.65 + (17 × 0.01)]
            = 240 × [0.82]
            = 196.8 ≈ 197 adjusted function points

Page 44: Software test management

Function Point Analysis

But how long will the project take and how much will it cost?

• As previously measured, programmers in our organization average 18 function points per month. Thus:

197 FP ÷ 18 FP per month ≈ 11 man-months

• If the average programmer is paid $5,200 per month (including benefits), then the labor cost of the project will be:

11 man-months × $5,200 = $57,200
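
The whole example fits in a few lines; the productivity and salary figures are the slides' own:

# Adjusted FP, duration, and labor cost for the running example.
import math

unadjusted_fp = 240
adjustment_factor = 17                  # sum of the five ratings above
adjusted_fp = unadjusted_fp * (0.65 + 0.01 * adjustment_factor)  # 196.8

months = math.ceil(adjusted_fp / 18)    # 196.8 / 18 = 10.9 -> 11 man-months
cost = months * 5200
print(f"{adjusted_fp:.0f} adjusted FP, {months} man-months, ${cost:,}")
# -> 197 adjusted FP, 11 man-months, $57,200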

Page 45: Software test management

Function Point Analysis

Because function point analysis is independent of the language used, development platform, etc., it can be used to identify the productivity benefits of . . .

• One programming language over another

• One development platform over another

• One development methodology over another

• One programming department over another

• Before-and-after gains from investing in programmer training

• And so forth . .

Page 46: Software test management

Function Point Analysis

But there are problems and criticisms:

• Function point counts are affected by project size

• Difficult to apply to massively distributed systems or to systems with very complex internal processing

• Difficult to define logical files from physical files

• The validity of the weights that Albrecht established, and the consistency of their application, has been challenged

• Different companies will calculate function points slightly differently, making intercompany comparisons questionable

Page 47: Software test management

Test Case Point

• Analyze the RFP

• Count the number of features

• Estimate the number of test cases using a rule of thumb based on the historical data

• Break the verification points into GUI, functional, database, exception, and navigation

• Add a weighting factor to all of these

• Consider the affecting factors

• Arrive at the test case points

• Calculate the person effort hours (see the sketch below)
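
Test Case Point weights are organization-specific; the sketch below only shows the shape of the calculation, with invented weights, counts, and conversion figures:

# Illustrative Test Case Point sketch; every figure is an assumption.
WEIGHTS = {"gui": 1, "functional": 3, "database": 4,
           "exception": 3, "navigation": 1}               # assumed weights

verification_points = {"gui": 40, "functional": 25, "database": 10,
                       "exception": 8, "navigation": 30}  # counted from cases

tcp = sum(verification_points[k] * WEIGHTS[k] for k in WEIGHTS)
tcp *= 1.2                   # assumed uplift for the affecting factors
effort_hours = tcp * 0.5     # assumed 0.5 person-hours per test case point
print(f"TCP = {tcp:.0f}, effort = {effort_hours:.0f} person-hours")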

Page 48: Software test management

Estimating the % of testing time for the project

Page 49: Software test management

Contd..

Page 50: Software test management

Contd..

Page 51: Software test management

Contd..

Page 52: Software test management

Contd..

Page 53: Software test management

Project Risk Indicator

Page 54: Software test management

Risk Analysis

Risks to the project, with an emphasis on testing:

Lack of personnel resources when testing is to begin

Lack of availability of required hardware, software, or tools

Late delivery of software, hardware, or tools

Delay in training

Changes in requirements

Complexities involved in testing the application

Page 55: Software test management

RISK ANALYSIS AND MANAGEMENT

• Risk concerns future happenings. By changing our actions today, we create an opportunity for a different, and hopefully better, situation tomorrow.

• Risk involves change, such as changes of mind, opinion, actions, or places.

• How will changes in customer requirements, development technologies, target computers, and all other entities connected to the project affect timeliness and overall success?

Page 56: Software test management

Risk Analysis

• Risk analysis and management are a series of steps that help a software team to understand and manage uncertainty.

• A risk is a potential problem—it might happen, it might not.

• Regardless of the outcome, it’s a really good idea to identify it, assess its probability of occurrence, estimate its impact, and establish a contingency plan should the problem actually occur.

Page 57: Software test management

Risk Analysis

• Who does it? Everyone involved in the software process—managers, software engineers, and customers—participates in risk analysis and management.

• Understanding the risks and taking proactive measures to avoid or manage them is a key element of good software project management.

Page 58: Software test management

Risk Analysis - Steps

• Recognizing what can go wrong is the first step, called “risk identification.”

• Each risk is analyzed to determine the likelihood that it will occur and the damage that it will do if it does occur.

• Once this information is established, risks are ranked by probability and impact (see the ranking sketch after this list).

• Finally, a plan is developed to manage those risks with high probability and high impact.
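
As a small illustration of the ranking step, the sketch below orders risks by exposure, RE = P × C (probability times cost of impact); the entries are invented examples:

# Rank risks by exposure RE = probability x impact, highest first.
risks = [
    ("Key tester leaves mid-project",          0.30, 8),
    ("Test environment delivered late",        0.60, 5),
    ("Requirements change after test design",  0.50, 7),
]

for name, prob, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"RE={prob * impact:4.1f}  P={prob:.0%}  impact={impact}  {name}")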

Page 59: Software test management

REACTIVE VS. PROACTIVE RISK STRATEGIES

• A reactive strategy monitors the project for likely risks. Resources are set aside to deal with them, should they become actual problems. More commonly, the software team does nothing about risks until something goes wrong. Then, the team flies into action in an attempt to correct the problem rapidly.

Page 60: Software test management

REACTIVE VS. PROACTIVE RISK STRATEGIES

• A considerably more intelligent strategy for risk management is to be proactive. A proactive strategy begins long before technical work is initiated. Potential risks are identified, their probability and impact are assessed, and they are ranked by importance. Then, the software team establishes a plan for managing risk. The primary objective is to avoid risk, but because not all risks can be avoided, the team works to develop a contingency plan that will enable it to respond in a controlled and effective manner.

Page 61: Software test management

SOFTWARE RISKS

• Uncertainty

• Loss

• Project risks

• Technical risks

• Business risks

Page 62: Software test management

RISK IDENTIFICATION

• Product size—risks associated with the overall size of the software to be built or modified.

• Business impact—risks associated with constraints imposed by management or the marketplace.

• Customer characteristics—risks associated with the sophistication of the customer and the developer's ability to communicate with the customer in a timely manner.

• Process definition—risks associated with the degree to which the software process has been defined and is followed by the development organization.

• Development environment—risks associated with the availability and quality of the tools to be used to build the product.

• Technology to be built—risks associated with the complexity of the system to be built and the "newness" of the technology that is packaged by the system.

• Staff size and experience—risks associated with the overall technical and project experience of the software engineers who will do the work.

Page 63: Software test management

Assessing Overall Project Risk

1. Have top software and customer managers formally committed to support the project?

2. Are end-users enthusiastically committed to the project and the system/product to be built?

3. Are requirements fully understood by the software engineering team and their customers?

4. Have customers been involved fully in the definition of requirements?

5. Do end-users have realistic expectations?

6. Is project scope stable?

Page 64: Software test management

Assessing Overall Project Risk

7. Does the software engineering team have the right mix of skills?

8. Are project requirements stable?

9. Does the project team have experience with the technology to be implemented?

10. Is the number of people on the project team adequate to do the job?

11. Do all customer/user constituencies agree on the importance of the project and on the requirements for the system/product to be built?

Page 65: Software test management

Risk Drivers

• Performance risk—the degree of uncertainty that the product will meet its requirements and be fit for its intended use.

• Cost risk—the degree of uncertainty that the project budget will be maintained.

• Support risk—the degree of uncertainty that the resultant software will be easy to correct, adapt, and enhance.

• Schedule risk—the degree of uncertainty that the project schedule will be maintained and that the product will be delivered on time.

Page 66: Software test management

Risk Classification

Page 67: Software test management

Risk Table
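
The risk table itself is not reproduced in this transcript. As an illustration of the format described on the preceding slides, a risk table lists each risk with its category, probability, and impact, sorted so the highest exposures appear first (the rows below are invented examples):

Risk                                     Category      Probability   Impact
Test environment delivered late          Development       60%       Critical
Requirements change after test design    Project           50%       Critical
Key tester leaves mid-project            Staff             30%       Marginal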