XYZ Test Strategy

Document Version 1.0

© Copyright 2012 Sapient Corporation | Confidential


Table of Contents

Glossary and Abbreviations
1  Project Overview
2  Purpose
3  Scope
4  Test Strategy
   4.1  Smoke Testing
   4.2  Functional Testing
   4.3  Cross Browser Testing
   4.4  Usability Testing
   4.5  Accessibility Testing
   4.6  System Integration Testing (SIT)
   4.7  Performance Testing
   4.8  Database Testing
   4.9  Security Testing
5  MDM testing approach
   5.1  Important aspects of testing an MDM application
   5.2  Size and Complexity of the project
6  Defect Management Process
   6.1  Defect Status Meetings
   6.2  Defect Priority Guidelines
   6.3  Defect Reporting Workflow
   6.4  Different stages of a defect
7  Test Environment
8  Test Tools
9  Suspension/Resumption Criteria
10  Test Deliverables
11  Assumptions
12  Communication approach
   12.1  Status Reporting
   12.2  Defect Review Session
13  Roles and Responsibilities
14  Client Responsibilities
15  References
16  Approvals


Glossary and Abbreviations

No.  Term  Description
1.   MDM   Master Data Management


1 Project Overview

The project provides an internal website to access distributed data from various standalone applications and to present consolidated information. This is achieved by creating and managing a central repository of data and accessing it through the UI.


2 Purpose

The purpose of this document is to provide a test strategy for XYZ Corporation based on different implementations. Specifically, it aims to:

- Define a test strategy for MDM
- Describe the different testing types that would be relevant based on different criteria
- Describe the testing approach
- Define and identify roles and responsibilities in the testing process


3 Scope

Sapient is responsible for testing the MDM application, whether developed by Sapient itself or by another vendor.

The major testing scope items are:

- Test the customizations and configurations made in the MDM
- Test the non-customisable, non-configurable requirements based on the project needs
- Test integration within the application and with third party applications
- Test the non-functional requirements, based on what is agreed between Sapient and the client

In general, the scope of testing types includes:

- Smoke Testing
- Functional Testing
- Cross Browser Testing
- Usability Testing
- Accessibility Testing
- Regression Testing
- System Integration Testing
- Localization Testing
- Performance Testing
- Third Party Integration Testing
- Database Testing
- Security Testing


4 Test Strategy

As mentioned in the scope above, the following testing types should be considered while testing an MDM implementation:

- Smoke Testing
- Functional Testing
- Cross Browser Testing
- Usability Testing
- Accessibility Testing
- Regression Testing
- System Integration Testing
- Localization Testing
- Performance Testing
- Third Party Integration Testing
- Database Testing
- Security Testing

4.1 Smoke Testing

Smoke testing is designed to quickly exercise the application and surface any potential issues that could lead to failure of major functionality. The testing team will know which functionalities are being released in a particular iteration. Only when smoke testing completes successfully does the testing team pick up the rest of the functionality.

Smoke Testing approach and activities:

- The smoke testing suite will consist of all high priority scenarios for the requirements delivered in a given iteration.
- Before the QA environment is formally accepted for testing by the testing team, the high priority scenarios identified above are executed.
- Any build that does not meet the exit criteria for smoke testing will be rejected, and testing will continue on the previous base-lined build until a new successful build is accepted by the testing team.

Owner: Testing team
Activity: Stability of the application build
Environment: QA Environment
Entry Criteria: Build is ready
Exit Criteria: All smoke testing scenarios have passed
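To make suite selection concrete, here is a minimal, illustrative sketch using JUnit 4 categories (JUnit 4.10 is listed in the Test Tools section). The marker interface, test class and login scenario are invented placeholders, not part of this project.

import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

import static org.junit.Assert.assertTrue;

// Marker interface used to tag high priority smoke scenarios.
interface Smoke {}

// An invented scenario: logging in is assumed to be a high priority flow.
class LoginScenarioTest {

    @Category(Smoke.class)   // tags this test as part of the smoke suite
    @Test
    public void userCanLogIn() {
        assertTrue(login("qa_user", "qa_password"));
    }

    // Placeholder for the real UI or API call.
    private boolean login(String user, String password) {
        return true;
    }
}

// Runs only the smoke-tagged tests; the build is accepted for full
// functional testing only when this suite passes.
@RunWith(Categories.class)
@Categories.IncludeCategory(Smoke.class)
@Suite.SuiteClasses({ LoginScenarioTest.class })
public class SmokeSuite {}

Tagging the high priority scenarios this way lets the same test classes serve both the smoke run and the full functional run without duplication.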

4.2 Functional Testing

The testing team will create functional test cases and ensure that these test cases are mapped to all the test scenarios, which in turn are mapped to the requirements document. (Recommendation: to save time, use the high level scenarios created by the QA Capability team as your base and then create the residual test scenarios.) Functional testing focuses on verification of all the functionality described in the use cases.

Functional Testing approach and activities:

Owner Testing teamActivity Stability of the application buildEnvironment QA EnvironmentEntryCriteria

Build is ready

Exit Criteria All smoke testing scenarios have passed

Owner Testing team

Activity Execution of functional test casesEnvironment QA EnvironmentEntry Criteria All smoke testing scripts are passedExit Criteria No open P1s and P2 defects

Page 8: Test Strategy

5/16/2018 Test Strategy - slidepdf.com

http://slidepdf.com/reader/full/test-strategy-55ab5685c14db 8/28

 

QA Capability – MDM -test strategy 

Testing team will create functional test scripts which will be mapped to Test scenarios and subsequently

they will be mapped to requirements

Test Scenarios should be validated with all the stakeholders

Test scripts should be peer reviewed

Use stubs for testing 3rd

party integration points

The functional test scripts will be executed and results shared with stakeholders
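As a hedged illustration of the stubbing bullet above, the sketch below shows one common way to stub a third party integration point in Java with JUnit. The TaxService interface, its rate and the checkout arithmetic are all invented for illustration.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Invented third party dependency.
interface TaxService {
    double taxFor(double amount);
}

// Stub with fixed, predictable behaviour, used until the real
// integration point is available.
class StubTaxService implements TaxService {
    @Override
    public double taxFor(double amount) {
        return amount * 0.20;   // assumed flat 20% rate for test purposes
    }
}

public class CheckoutTest {

    @Test
    public void totalIncludesTaxFromStub() {
        TaxService tax = new StubTaxService();
        double total = 100.0 + tax.taxFor(100.0);
        assertEquals(120.0, total, 0.001);
    }
}

Because the stub's behaviour is fixed, functional tests stay deterministic even when the real third party service is unavailable or unstable.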

4.3 Cross Browser Testing

Cross browser testing should be done to confirm that the functionality and the look and feel of pages are consistent across browsers. The browsers (and plugin) that can be used for testing are:

- Internet Explorer 9.0
- Internet Explorer 8.0
- Internet Explorer 7.0
- Firefox 3.6
- Chrome 7.0
- Opera 10
- Safari 5.0
- Flash 10.0

Cross Browser testing approach: open the main browser and the browser to be tested at the same time, and execute the scripts in parallel. This helps execute the scripts faster (see the sketch below).

Owner: Testing team
Activity: Execute test scripts based on closed scope (Recommended: execute High and Medium priority test scripts)
Environment: Testing Environment
Entry Criteria: There are no open P1s and P2s on the main browser
Exit Criteria: There are no open P1s and P2s on any of the browsers
Scope: Page layout and GUI are consistent across all the browsers and adhere to the wireframe documents, with no high priority defects from a business point of view. There should be no P1 defects from a functionality or business point of view.
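A minimal sketch of this approach, assuming Selenium WebDriver (version 2.19 is listed in the Test Tools section): the same checkpoint is executed against the main browser and the browser under test side by side. The URL is invented.

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;

public class CrossBrowserCheck {

    public static void main(String[] args) {
        WebDriver main = new FirefoxDriver();            // main (baseline) browser
        WebDriver other = new InternetExplorerDriver();  // browser under comparison
        try {
            // Execute the same step in both browsers side by side.
            main.get("http://mdm.example.internal/home");
            other.get("http://mdm.example.internal/home");

            // Compare the same checkpoint in both browsers.
            if (!main.getTitle().equals(other.getTitle())) {
                System.out.println("Title differs between the two browsers");
            }
        } finally {
            main.quit();
            other.quit();
        }
    }
}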

4.4 Usability Testing

Usability testing is primarily done to ensure that the end user has a good experience while traversing the website. Important aspects to be tested in an MDM application are:

- Whether the user can effectively accomplish the desired tasks
- The effort needed to efficiently accomplish the desired tasks
- Whether the experience encourages users to return

In this type of testing, the application is tested for user interface items such as colour, text, font, alignment of buttons, alignment of fields, status of buttons, and size of fields and buttons. This is a visual comparison of the application pages against the wireframes/graphic designs.

This testing will be done for all the requirements that involve a GUI component change or new development.

Owner: Testing team
Activity: Testing of the user interface as per wireframes and designs
Environment: Testing Environment
Entry Criteria: Smoke testing scripts are 100% passed
Exit Criteria: Page layout and GUI are consistent and adhere to the wireframe documents, with no high priority (P1) defects from a business point of view


4.5 Accessibility Testing

The testing team will carry out accessibility testing to ensure the application meets a minimum level of accessibility through markup, scripting or other technologies that interact with or enable access through user agents, including assistive technologies.

Accessibility testing will follow the guidelines specified in W3C Web Content Accessibility Guidelines 2.0, Conformance Level A.

This will be performed by ensuring that:

- Text alternatives are provided for any non-text content so that it can be changed into other forms people need, such as large print, Braille, speech, symbols or simpler language (a partial automation sketch follows the criteria table below).
- Content can be presented in different ways (for example, a simpler layout) without losing information or structure.
- All the functionality of the application is available from the keyboard.
- Users have enough time to read and use content wherever a time-based action is involved.

Owner: Testing team
Activity: Test the application for user accessibility
Environment: Testing Environment
Entry Criteria: The application has been successfully tested for functional requirements
Exit Criteria: The application meets the minimum accessibility criteria, with no P1s, as specified in W3C Web Content Accessibility Guidelines 2.0, Conformance Level A
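The text-alternatives bullet above can be partially automated. Below is an illustrative Selenium sketch that flags img elements missing a text alternative, one of the WCAG 2.0 Level A checks; the URL is invented, and a real audit would cover far more than alt attributes.

import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class AltTextCheck {

    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://mdm.example.internal/home");

            // Flag every image that has no usable text alternative.
            List<WebElement> images = driver.findElements(By.tagName("img"));
            for (WebElement img : images) {
                String alt = img.getAttribute("alt");
                if (alt == null || alt.trim().isEmpty()) {
                    System.out.println("Missing alt text: " + img.getAttribute("src"));
                }
            }
        } finally {
            driver.quit();
        }
    }
}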

4.6 System Integration Testing (SIT)

The primary purpose of SIT is to execute end-to-end test scenarios along with all the test cases that have been added over the course of the different iterations.

The following strategy will be followed:

- The testing team will use the test cases created for the testing iterations and will also create end-to-end scenarios for testing the integrated code.
- Test data will be provided by the team as agreed.

Owner: Testing Team
Activity: Interface testing for all interfaces where there is an interaction with Sapient-responsible systems, verifying whether the response to each request is correct
Environment: SIT environment
Entry Criteria: Smoke testing scripts are 100% passed; modules have undergone unit testing and passed functional testing; interfaces and interactions between the various systems are operational
Exit Criteria: There are no P1 defects; defects identified during SIT have either been fixed or accepted as known issues by the client


4.7 Performance Testing

Performance testing needs to be covered by the respective teams in consultation with the performance testing team.

4.8 Database Testing

The testing team will create database test cases and ensure that these test cases are mapped to all the test scenarios that pick values from the database. (Recommendation: to save time, use the high level scenarios created by the QA Capability team as your base and then create the database test scenarios based on project need.) In database testing, the test engineer should test data integrity, data access, query retrieval, modifications, updates and deletions.

Database Testing approach and activities:

- The testing team will create database test scenarios and scripts
- Test scenarios should be validated with all the stakeholders
- Test scripts should be peer reviewed
- The database test scripts will be executed and the results shared with stakeholders

Activities included:

a) Data validity testing
b) Data integrity testing (see the sketch below)
c) Testing of procedures, triggers and functions

Owner: Testing team
Activity: Execution of database test cases
Environment: QA Environment
Entry Criteria: All smoke testing scripts have passed
Exit Criteria: No open P1 and P2 defects
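As an illustration of the integrity checks above, the following sketch uses plain JDBC to look for child records that reference a missing master record. The JDBC URL, credentials and table names are invented; substitute the project's actual QA database details and driver.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OrphanRecordCheck {

    public static void main(String[] args) throws Exception {
        // Every child record must reference an existing master record.
        String query =
            "SELECT COUNT(*) FROM customer_address a " +
            "WHERE NOT EXISTS (SELECT 1 FROM customer c WHERE c.id = a.customer_id)";

        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@qa-db:1521:MDMQA", "qa_user", "qa_password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(query)) {
            rs.next();
            int orphans = rs.getInt(1);
            System.out.println(orphans == 0
                ? "PASS: no orphan records"
                : "FAIL: " + orphans + " orphan records");
        }
    }
}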

4.9 Security Testing

Security testing is a process to determine that an information system protects data and maintains functionality as intended.

The basic security concepts that need to be covered by security testing are confidentiality, integrity, authentication, availability, authorization and non-repudiation, exercised using techniques such as parameter tampering, cookie poisoning, stealth commanding and forceful browsing.



Authentication - Testing the authentication schema means understanding how the authentication process works and using that information to circumvent the authentication mechanism. Authentication allows a receiver to have confidence that the information it receives originated from a specific known source.

Authorization - Determining that a requester is allowed to receive a service or perform an operation.

Confidentiality - A security measure which protects against the disclosure of data or information to parties other than the intended ones.

Integrity - Whether the intended receiver receives information or data that has not been altered in transmission.

Non-repudiation - Interchange of authentication information with some form of provable time stamp, e.g. with a session id.

The strategy should include the following:

- The testing team should validate the details/scope for security testing
- The testing team should check session maintenance (see the sketch below)
- The testing team should validate security in the various modules using the testing techniques mentioned above
- The testing team should validate the entry and exit criteria
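One concrete session maintenance check, offered as a hedged illustration rather than the project's prescribed method: after a user logs out, replaying the old session cookie should not return a protected page. The sketch uses only the JDK's java.net classes; the URL, path and cookie value are invented.

import java.net.HttpURLConnection;
import java.net.URL;

public class SessionReplayCheck {

    public static void main(String[] args) throws Exception {
        // Session cookie captured before logout (hypothetical value).
        String staleCookie = "JSESSIONID=abc123";

        HttpURLConnection con = (HttpURLConnection)
            new URL("http://mdm.example.internal/admin/users").openConnection();
        con.setRequestProperty("Cookie", staleCookie);
        con.setInstanceFollowRedirects(false);

        // Expect a redirect to login (302) or 401/403 - never 200 with data.
        int status = con.getResponseCode();
        System.out.println(status == 200
            ? "FAIL: stale session was accepted"
            : "PASS: stale session rejected (HTTP " + status + ")");
    }
}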


5 MDM testing approach

Consider MDM as made up of a front end (the human-computer interface), a back end (a centralised database) and some middleware (interfaces that integrate the database with the standalone applications). In order to fully test the system, it is important to test the different aspects both in isolation and together.

5.1 Important aspects of testing an MDM application

Browser compatibility

Though relatively simple to do, it pays to spend enough time testing in this area. Decide on the lowest level of compatibility and test that the system does indeed work without problems on the earliest supported as well as the latest browser versions.

Even with the same release version, browsers behave differently on different platforms and when used with different language options. Testing should cover at least the main platforms (Unix, Windows, Mac and Linux) and the expected language options.

Session Management

Most applications and Web servers configure sessions so that they expire after a set time. Attempting to access a session object that has expired causes an error, which must be handled within the code. Testing of session expiration is often overlooked, largely because under normal operational circumstances session expiration is unlikely to occur.

Usability

Site navigation is crucial for attracting customers and retaining them. Sophisticated Web sites, such as travel booking sites, need to pay particular attention to navigation issues.

Large entity catalogs are central to many trading systems. Clients should be able to quickly browse and search through catalogs. Developers can define tests to measure the effectiveness of entity navigation mechanisms. For example, you could test that a search on particular keywords brings up the correct entities (see the sketch below).
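As an illustrative example of such a test, the sketch below drives a keyword search with Selenium WebDriver and checks that the expected entity appears in the results. The URL, element locators and entity name are invented.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CatalogSearchCheck {

    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://mdm.example.internal/catalog");

            // Search for a known entity and check it is listed in the results.
            driver.findElement(By.name("q")).sendKeys("ACME Corp");
            driver.findElement(By.name("search")).click();

            boolean found = driver.getPageSource().contains("ACME Corp");
            System.out.println(found
                ? "PASS: expected entity listed"
                : "FAIL: expected entity missing");
        } finally {
            driver.quit();
        }
    }
}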

Availability

Before going live, predicted business usage patterns should indicate maximum stress levels. You should test system availability against the maximum stress levels, plus a safety margin, for a defined period of time.

Internationalization

Does the site offer the option to view non-English pages? If the choice of language is based on browser preferences, does it work on all the desired browsers? Many older browsers do not support language customization. All of these aspects need to be tested.

Test that words are correctly displayed and that sentences are grammatically correct. Use a native speaker to verify that this is the case.

System Integration

The data interface defines the format of data exchanged by the front-end and back-end systems. Tools such as XML (Extensible Markup Language) alleviate data interface problems by providing document type definitions.

The processing between front-end and back-end systems may be time dependent. For example, a back-end system may need to process a data transmission from the front-end system immediately, or within a defined period. Tests should ascertain whether a system actually observes its timeliness constraints and whether it activates data transmissions at the correct time (see the sketch below).


One system must often update information in another system. Verify that batch programs and remote procedures perform the necessary update operations without side effects.

5.2 Size and Complexity of the project

The size and complexity of an MDM implementation would drive the testing strategy for the project.

Low complexity MDM implementation

A low complexity MDM implementation would contain only the basic features of MDM. Integration with third party vendors would be very limited, and there would not be integration with multiple vendors. Changes to the application would be straightforward.

The testing of such systems would include:

- Testing of the interface
- Testing of integration with standalone applications

Test data management is simpler in these implementations. Since the teams are smaller and the content on the sites is not heavy, the tester will have to manage smaller sized test data.

Medium complexity MDM implementation

A medium complexity MDM implementation would contain all the basic features as well as some advanced features of e-commerce. There would be multiple touch points with other vendors. The information in these systems is not hardcoded, and backend systems are used for publishing data on the website.

The testing of such systems would include everything mentioned under the low complexity implementation, plus the items below:

- Testing of data flow from the backend
- Testing of information flowing between third party vendors

Test data management would require pre-planning and coordination with other vendors. There should be a communication channel wherein the data coming into the system and going out of it is coordinated with the third party vendors.

Large complexity MDM implementation

A large complexity MDM implementation would contain all the advanced features and would involve multiple integrations with different third party systems. The implementation would be complex and would require a huge testing effort. The data from one system would flow to multiple systems.

The testing of such systems would include everything mentioned in the low and medium complexity implementations, plus the items below:

- Security and integrity of data
- Security of personal information
- Internationalization
- Performance

Test data management would require proper planning and coordination with other vendors. The ownership of test data needs to be defined, and a central repository needs to be in place from which everyone can pick up data and use it.

Approach for testing a large complexity MDM implementation:

Test Global and Test Distributed: An MDM system is global in spirit and structure. The different underlying systems may be on different continents, but they appear to integrate seamlessly over large, distributed and non-homogeneous networks and other communication channels. The testing team has to validate that changes in one system do not impact other systems.

Consider User Profile: The user profile varies by role (Admin or Normal user). While testing the application, ensure that all the access rights of each user profile are taken care of.


6 Defect Management Process

A defect is a flaw in any aspect of a system, including the requirements, the design or the code, that contributes, or may potentially contribute, to the occurrence of one or more failures. This process is essentially a workflow that defines how defects are captured, fixed, retested and closed. Defects that are discovered must be logged, tracked and managed to resolution in order to ensure they are not propagated to the production environment.

6.1 Defect Status Meetings

In order to ensure defects are on a path to resolution, daily sessions (TAR sessions) will be conducted to review defects with the Client during the SIT and UAT phases. These sessions are critical to the success of testing, so it is essential that key lead resources attend. During these sessions, we review the outstanding issues (New, Open and Reopen) by the priorities identified below. Some issues may require breakout sessions with other teams in order to resolve; these defects will be discussed briefly in the meeting, and follow-up meetings will be held immediately after the session.

The duration of the meeting will vary based upon the number of open defects and the speed at which defects are being worked.

6.2 Defect Priority Guidelines

Priority levels will be assigned for the defects as below:

Priority: P1 - Showstopper / Critical

Definition:
- Testing cannot continue until the issue is resolved
- Prevents execution of major (core) functionality and has no workaround
- Has major implications for the business

Examples:
- After clicking the Search button, the application hangs
- Clicking a link leads to a system exception error
- Submission of data leads to a system exception or error page
- From a usability and accessibility perspective: flickering of the screen cannot be paused or stopped

Priority: P2 - High

Definition:
- Major functionality produces wrong results
- A workaround exists
- The resulting system has reduced usability for the end user

Examples:
- Search by specific text doesn't work
- Clicking on Search from the Home page opens the wrong search page
- The system responds incorrectly to invalid data; error handling is not implemented
- Navigation within fields results in an error message pop-up saying the value entered is incorrect
- From a usability and accessibility perspective: moving content cannot be frozen

Priority: P3 - Medium / Significant

Definition:
- Minor functionality produces wrong results or is missing

Examples:
- Clicking on a link takes the user to the wrong location on the same page
- A comments field is not getting updated in the database
- Field validation is missing
- Scroll bar not working; incorrect labels, instructional text, headings or sub-headings
- From a usability and accessibility perspective: tab order is not logical through links, forms and objects

Priority: P4 - Low / Minor

Definition:
- A defect that does not affect the functionality of the system
- Only minor cosmetic issues

Examples:
- Spelling or grammatical mistakes on web pages
- Font type or colour used is not according to the specified format


6.3 Defect Reporting Workflow

[Flowchart: Defect Reporting Workflow. Start: the tester logs a defect with status "New", attaching detailed steps to reproduce and a screenshot, and assigns it to the QA TL. The defect is discussed in the TAR session. If it is not a valid defect (not a requirement; an enhancement, issue or training requirement), the reason is recorded in the Description field, the appropriate lifecycle is followed, and the status is set to "Reject - Invalid Defect". Valid defects are assigned to the Dev TL with a priority, then to a developer with status "Open". Duplicates become "Reject - Duplicate" (the developer records the duplicate defect ID); defects that cannot be reproduced become "Reject - Can't be Reproduced"; rejected defects are assigned to the QA TL and eventually "Closed". Otherwise the developer fixes the defect and sets it to "Fixed"; the build is released to the testing environment, the defect is assigned to the QA TL as "Ready to Retest" and then to the tester, who retests the fix. If fixed, the status changes to "Closed" (stop); if not, the status changes to "Reopen", the tester adds comments, and the defect goes back to the developer.]


The diagram above captures the defect tracking process:

- The tester will enter all defects into the defect tracking system and assign them to the QA TL in New status. A defect will contain:
  - Title/Summary
  - Description with steps to reproduce and a screenshot or other relevant information
  - Status
  - Priority
  - Module Name
  - Discovered In release

- The defect is then discussed in the TAR session.
- If the defect is acknowledged to be "Invalid", i.e. the defect raised is invalid as per the current requirement, the defect is marked as Reject-Invalid Defect.
- If the defect is acknowledged to be "Valid", it is assigned to the Development TL, who assigns it to the concerned developer.
- When a developer acknowledges a defect, they will change the status to Open and start working on the defect.
- If the developer finds a defect to be a duplicate of an existing open defect, they will assign it back to the testers, mentioning the exact duplicate defect id. The testers re-verify whether it is actually a duplicate. If yes, the defect is marked as Reject-Duplicate; otherwise it is reassigned back to the developers with a proper comment, in Open status.
- If the developer cannot reproduce the defect, they will change the status to Reject-Can't be Reproduced and assign it back to the tester. The tester again tries to reproduce it; if it is reproducible, the tester assigns it back to the developer with an appropriate comment, in Open state; otherwise it is Closed.
- If the defect is reproducible, the developer fixes the defect, changes its status to Fixed, and keeps it until the next build is provided to QA.
- Before every release, the developers set the status to Ready to Retest, mention the build in which the defect has been fixed and assign it back to the QA TL, who assigns it to the tester.
- Once the defect fixes are released, the tester will re-test to verify the defect. If the defect has been fixed, the status will be changed to Closed. If the defect has not been fixed, the tester will assign it back to the developer with status Reopen.
- If the review team or developer requires further information about the issue, they will change the status to More Info Required and assign it back to the testers. The testers provide the required information and assign the defect back to the developer in Open status.
- Before every release, if there are defects already known to the developers, they log them with status Known Issue.
- When a defect is raised and the review team acknowledges it as a known limitation of the system but decides against fixing it, releasing with the defect in place, the status will be set to Known Issue.
- If Testing has any suggestion about the functionality or UI which may result in a requirement change, a Suggestion will be logged. Suggestion is a category of defect, and these artifacts will not be counted in the QA defect report.

The QA team will work only on those defects which are assigned to QA team members and are in Re-Test, Cannot Reproduce, More Info Required or Duplicate status.

6.4 Different stages of a defect

1. New - A new defect is logged in the defect tracking tool along with the severity of the defect. All "New" defects are assigned to the QA TL.

2. Open - A defect is given "Open" status when it is assigned to a developer for a fix, along with the priority assigned against it.

3. Fixed - The developer, after fixing the defect, changes its status to "Fixed" and does not assign it to anyone.

4. Ready to Retest - After being fixed by the developer, the defect is released to the QA environment for retest once the new build containing the fix is deployed. The defect status is then changed to "Ready to Retest" and assigned to the QA TL, who in turn assigns it to the testers for retest.

5. Reopen - On retesting, if the tester finds that the defect is not fixed, its status is changed to "Reopen" and it is assigned back to the developer along with comments.

6. Reject-Duplicate - A defect found to be a duplicate of a defect already logged in the defect tracking tool is rejected by the developer with status "Reject-Duplicate", along with the duplicate defect ID, and is assigned to the QA TL.

7. Reject-Can't be Reproduced - If a defect assigned to a developer for a fix cannot be reproduced by the developer, it is rejected with status "Reject-Can't be Reproduced", along with comments, and is assigned to the QA TL. The QA TL will discuss the defect with the tester and developer. If the defect can be reproduced, it is assigned back to the developer with added comments and a screenshot, with status Reopen; otherwise it remains in "Reject-Can't be Reproduced" status.

8. Reject-Invalid Defect - A defect can be labelled invalid when it was logged by the tester but is not a requirement, or was logged due to a misinterpretation of a requirement.

9. Closed - Defects that have been retested and are working fine are "Closed" by the tester.

10. More Info Required - If the review team or developer requires further information about the issue, the status is changed to "More Info Required".

11. Known Issue - A known limitation of the system, or a defect already known to the developers before a release.

12. Suggestion - If Testing has any suggestion about the functionality or UI which may result in a requirement change, a "Suggestion" will be logged.


7 Test Environment

The following test environments will be made available for testing:

- Sapient Test QA Environment
- Client Test Environment
- Staging Environment
- Production Environment

In normal scenarios, testing will be done only in the QA environment. If the QA environment is inaccessible due to technical issues, testing will be done in the Dev environment.

We will use the following client environment setup for our testing:

Software/OS   Version
Windows XP    SP-4
Linux         -
Mac           -


8 Test Tools

The following table lists the test tools required for this project.

Requirement       Tool      Vendor       Version
Test Management   Selenium  Open Source  2.19
Defect Tracking   Bugzilla  Open Source  4.2
Unit Testing      JUnit     Open Source  4.10


9 Suspension/Resumption Criteria

Testing will be suspended if:

- The test environment and backup environment are unavailable
- Smoke testing for the build fails
- An incorrect version of the code is deployed
- Functionality is unstable, i.e. too many non-reproducible defects are encountered

Testing will be resumed if:

- A stable test environment is available with a stable build and the correct version of the code
- Smoke testing for the build has passed
- Showstoppers are fixed


10 Test Deliverables

The table below describes the testing deliverables, responsibilities and details for the project.

S. No.  Deliverable Name        Details
1       Test Strategy Document  Created once; outlines the scope and types of testing, the testing approach and key activities, the entry and exit criteria, and the defect reporting workflow.
2       Test Plan               Contains the planned testing activities.


11 Assumptions

- MDM would be accessed through LAN or VPN
- Proper permissions have been taken to access the real-time data from the standalone applications
- All the legal formalities have been carried out


12 Communication approach

12.1 Status Reporting

The following status reports will be generated only during the UAT phase to track testing and provide visibility into the status of testing, outstanding issues and risks:

- Test execution plan and status - module-wise / overall
- Active, resolved and closed defects
- Defects by severity and priority
- Percentage of test cases passed vs. failed vs. remaining to run
- After UAT starts, a defect triaging report for issues logged by the UAT team

12.2 Defect Review Session

Defect review sessions will be held during test script execution. Their task is to review testing activities and to prioritize and assign any defects raised during UAT. These TAR sessions will be held every day during the UAT phase, or as agreed with the Client.

The expected participants are Sapient and Client stakeholders.


13 Roles and Responsibilities

Role: Test Lead (Sapient)
Responsibilities:
- Responsible for drafting and executing the testing strategy as a whole
- Works with the development team to ensure bugs are fixed in a timely manner
- Reviews and ensures test scripts/cases are as per the agreed standards
- Creates/updates test scripts/cases
- Assists the tester(s) in understanding the application and writing effective test cases
- Coordinates test activities
- Participates in TAR sessions/defect prioritization meetings during SIT/UAT
- Prepares and publishes test status reports at the end of each testing cycle/iteration
- Escalates and tracks risks and issues
- Assists the CO test lead in SIT planning
- Provides test data created by the Hybris system

Role: Test Lead (Client)
Responsibilities:
- Plans and coordinates test activities for SIT/UAT testing in the Client test environment
- Ensures testers have access to the System Integration Testing environment
- Provides test data required from Client internal and external systems
- Participates in TAR sessions/defect prioritization meetings during SIT/UAT

Role: Tester(s) (Client)
Responsibilities:
- Responsible for security testing (Cybercom for external security testing), migration testing (migration imports performed by Sapient are verified by Sapient, whereas overall migration testing is done by the Client) and user acceptance testing
- Carries out system integration testing to verify that the Client's internal and external systems work correctly after integration

Role: Tester(s) (Sapient)
Responsibilities:
- Understands the requirements/stories and the application
- Creates and updates test cases
- Executes test scripts/cases as per the plan
- Captures defects in the defect tracking tool
- Retests fixed defects and closes them


14 Client Responsibilities

The Client's responsibilities are as follows:

- Provide the testing team access to the Client environments, i.e. the Client Test Environment, Staging Environment and Production Environment
- Provide the test data required from standalone applications
- Provide the UAT test plan for the testing
- Provide the performance test data
- Carry out User Acceptance Testing (by the Client QA team)
- Provide the resources to be used for UAT
- Sign off on the high level scenarios
- Sign off on the test cases
- Sign off on each iteration


15 References

- Specifications of all standalone applications
- SRS


16 Approvals

Approval of the Test Strategy document. By signing below, I confirm my approval of the Test Strategy.

Name    Role          Signature  Date
Vikas   Test Manager
Shachi  Test Lead
XYZ     Client