Page 1: Software Development Life Cycle

Software Development Life Cycle
Objectives
This tutorial consists of:

• Principles and concepts of software development.
• What is software?
• What are software characteristics?
• What are the phases of software development, and how are test efforts built into every phase at critical points?
• Models of the Software Development Life Cycle (SDLC).
• The importance of adopting a life cycle approach to the testing process.

What is Software?
Software is:
• Instructions (computer programs) which, when executed, provide desired function and performance.
• Data structures that help the programs adequately manipulate information.
• Documents that describe the operation and use of the programs.

Software Characteristics
What are software characteristics?

• Software is logical, unlike hardware, which is physical (chips, circuit boards, power supplies, etc.). Hence its characteristics are entirely different.
• Software is developed, not manufactured.
• Software does not "wear out" as hardware components do from dust, abuse, temperature and other environmental factors. Hence its failure curve is idealized.
• Although the industry is moving towards component-based assembly, most systems continue to be custom built.
• A software component should be built so that it can be reused in many different programs.

Software Development Life Cycle

[Figure: Different kinds of software and their users: applications (business, embedded/real-time), system software (utilities, user tools, development tools) and infrastructure.]

Page 2: Software Development Life Cycle

What is the 'software life cycle'?
• The various activities which are undertaken when developing software.
• The life cycle begins when an application is first conceived and ends with the formal validation of the software against requirements.
• It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, and phase-out.

Software Development Life Cycle
• There are a number of different models for the software development life cycle.
• One thing all models have in common is that at some point in the life cycle, software has to be tested.
• This session outlines some of the more commonly used SDLCs, with particular emphasis on the testing activities in each model.
• Traditionally, the models used for the software development life cycle progress through a number of well-defined phases. They are:
• Requirements and Analysis
• Design
• Coding
• Testing
• Installation, Operation and Maintenance

Traditionally, the models used for the software development life cycle have been sequential, with development progressing through a number of well-defined phases.

Examples of Requirements and Specifications Documentation
The following list describes the various kinds of formal documents that belong to the body of requirements and specifications documentation. These are not all mandatory for each and every software project, but they all provide important information to the developers, designers and engineers tasked with implementing a project, and to the quality assurance people and testers responsible for evaluating the implementation of the project. These topics may also be combined as sections of larger, inclusive requirements and specifications documents.

User Requirements
User requirements typically describe the needs, goals, and tasks of the user. I say "typically" here because often these user requirements don't reflect the actual person who will be using the software; projects are often tailored to the needs of the project requestor, and not the end-user of the software. I strongly recommend that any user requirements document define and describe the end-user, and that any measurements of quality or success be taken with respect to that end-user. User requirements are usually defined after the completion of task analysis, the examination of the tasks and goals of the end-user.

System Requirements
The term system requirements has two meanings. First, it can refer to the requirements that describe the capabilities of the system with which, through which, and on which the product will function. For example, the web site may need to run on a dual-processor box, and may need to have the latest database software. Second, it can refer to the requirements that describe the product itself, with the meaning that the product is a system. This second meaning is used by the authors of Constructing Superior Software (part of the Software Quality Institute Series): there are two categories of system requirements. Functional requirements specify what the system must do. User requirements specify the acceptable level of user performance and satisfaction with the system (p. 64).

Page 3: Software Development Life Cycle

For this second meaning, I prefer to use the more general term "requirements and specifications" over the more opaque "system requirements".

Functional Requirements
Functional requirements describe what the software or web site is supposed to do by defining functions and high-level logic.

Functional Specifications
Functional specifications describe the necessary functions at the level of units and components; these specifications are typically used to build the system exclusive of the user interface. With respect to a web site, a unit is the design for a specific page or category of page, and the functional specification would detail the functional elements of that page or page type. For example, the design for the page may require the following functions: email submission form, search form, context-sensitive navigation elements, logic to drop and/or read a client-side cookie, etc. These aren't "look" issues so much as they are "functionality" issues. A component is a set of page states or closely related forms of a page. For example, a component might include a page that has a submission form, the acknowledgement page (i.e., "thanks for submitting"), and the various error states (e.g., "you must include your email address", "you must fill in all required fields"). The functional specifications document might have implications for the design of the user interface, but these implications are typically superseded by a formal design specification and/or prototype.

Design Specifications
The design specifications address the "look and feel" of the interface, with rules for the display of global and particular elements.

Flow or Logic Diagrams
Flow diagrams define the end-user's paths through the site and site functionality. For example, a flow diagram for a commerce site would detail the sequence of pages necessary to gather the information required by the commerce application in order to complete an order. Logic diagrams describe the order in which logic decisions are made during the transmission, gathering, or testing of data. For example, upon submission of a form, information may be reviewed by the system for field completeness before being reviewed for algorithmic accuracy; in other words, the system may verify that required fields have in fact been completed before verifying that the format of the email address is correct or that the credit card number is algorithmically valid. Another example would be the logic applied to a search query, detailing the steps involved in query cleanup and expansion, and the application of Boolean operators.

System Architecture Diagram
A system architecture diagram illustrates the way the system hardware and software must be configured, and the way the database tables should be defined and laid out.

Prototypes and Mock-ups
A prototype is a model of the system delivered in the medium of the system. For example, a web site prototype would be delivered as a web site, using the standard web protocols, so that it could be interacted with in the same medium as the project's product. Prototypes don't have to be fully functioning; they merely have to be illustrative of what the product should look and feel like. In contrast, a mock-up is a representation in a different medium. A web site mock-up might be a paper representation of what the pages should look like. The authors of Constructing Superior Software describe several categories of prototypes: low fidelity prototypes, which correspond to what I've labeled "mock-ups", and high fidelity prototypes. Low fidelity prototypes are limited function and limited interaction prototypes. They are constructed to depict concepts, design alternatives, and screen layouts rather than to

Page 4: Software Development Life Cycle

model the user interaction with the system. ... There are two forms of low fidelity prototype: abstract and concrete. ... The visual designer works from the abstract prototype and produces drawings of the interface as a concrete low fidelity prototype. ... High fidelity prototypes are fully interactive (pp. 70-71).

Prototypes and mock-ups are important tools for defining the visual design, but they can be problematic from a quality assurance and testing point of view because they are a representation of a designer's idea of what the product should look and feel like. The issue is not that the designers may design incorrectly, but that the prototype or mock-up will become the de facto design by virtue of being a representation. The danger is that the design will become final before it has been approved; this is known as "premature concretization" or "premature crispness of representation", where a sample becomes the final design without a formal decision. If you have ever tried to get a page element removed from a design, you have an idea what this problem is like. The value of prototypes is that they provide a visual dimension to the written requirements and specifications; they are both a proof of concept and the designers' sketchpad wrapped up in one package.

Technical Specifications
Technical specifications are typically written by the developers and coders, and describe how they will implement the project. The developers work from the functional specifications, and translate the functions into their actual coding practices and methodologies.

Software Development Life Cycle
What are the four components of the software development process?

a. Plan
b. Do
c. Check
d. Act

Phases of SDLC
• Requirements and Analysis
1. Specific requirements of the software to be built are gathered and documented.
2. The adequacy of the requirements is determined.
3. Requirements are documented in the form of a System Requirements Specification (SRS).
4. The SRS acts as a bridge between the customer and the designers.

The Requirements phase, in which the requirements for the software are gathered and analyzed to produce a complete and unambiguous specification of what the software is required to do.

The Design phase, where a software architecture for the implementation of the requirements is designed and specified, identifying the components within the software and the relationships between them.

What's the big deal about 'requirements'? One of the most reliable methods of ensuring problems, or failure, in a complex software project is to have poorly documented requirements specifications. Requirements are the details describing an application's externally-perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). A testable requirement would be something like 'the user must enter their previously-assigned password to access the application'. Determining and organizing requirements details in a useful and efficient way can be a difficult effort; different

Page 5: Software Development Life Cycle

methods are available depending on the particular project. Many books are available that describe various approaches to this task. Care should be taken to involve ALL of a project's significant 'customers' in the requirements process. 'Customers' could be in-house or outside personnel, and could include end-users, customer acceptance testers, customer contract officers, customer management, future software maintenance engineers, salespeople, etc. Anyone who could later derail the project if their expectations aren't met should be included if possible. Organizations vary considerably in their handling of requirements specifications. Ideally, the requirements are spelled out in a document with statements such as 'The product shall.....'. 'Design' specifications should not be confused with 'requirements'; design specifications should be traceable back to the requirements. In some organizations requirements may end up in high-level project plans, functional specification documents, design documents, or other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by testers in order to properly plan and execute tests. Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly.
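To make the distinction concrete, a testable requirement can usually be expressed directly as an automated check. The sketch below is illustrative only; the authenticate() helper and its fixture data are hypothetical stand-ins for an application under test, not part of any real system.

```python
# Minimal sketch: turning the testable requirement
# "the user must enter their previously-assigned password to access the application"
# into an automated check. The authenticate() helper is a hypothetical stand-in.

def authenticate(username: str, password: str) -> bool:
    """Hypothetical application entry point: grants access only on the correct password."""
    assigned_passwords = {"alice": "s3cret"}            # assumed test fixture data
    return assigned_passwords.get(username) == password

def test_access_requires_previously_assigned_password():
    assert authenticate("alice", "s3cret") is True      # correct password -> access granted
    assert authenticate("alice", "wrong") is False      # wrong password -> access denied
```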

• Requirement Specifications: This specifies the objectives of the system/product, what is to be accomplished, how it fits into the needs of the business and, finally, how it is used on a day-to-day basis. This is not simple.

A few questions posed during requirements analysis are:
1. Is each requirement consistent with the overall objectives for the system?
2. Is each requirement bounded and clear?
3. Do any requirements conflict with other requirements?
4. Have all requirements been specified at the proper level of abstraction?

Phases of the Software System:

Requirements
i. Determine the verification approach.
ii. Determine the adequacy of the requirements.
iii. Generate the functional test data.
iv. Determine the consistency of the design with the requirements.

Design
i. Determine the adequacy of the design.
ii. Generate the structural and functional test data.
iii. Determine consistency with the design.

Role of Testers in the Software Life Cycle:

Concept Phase
o Evaluate the concept document
o Learn as much as possible about the product and project
o Analyze hardware/software requirements
o Strategic planning

Requirement Phase
o Analyze the requirements
o Verify the requirements
o Prepare the test plan
o Identify and develop requirement-based test cases

Design Phase

Page 6: Software Development Life Cycle

o Analyze design specifications
o Verify design specifications
o Identify and develop function-based test cases
o Begin performing usability tests

Coding Phase
o Analyze the code
o Verify the code
o Code coverage
o Unit test

Integration & Test Phase
o Integration test
o Function test
o System test
o Performance test
o Review user manuals

Operation/Maintenance Phase
o Monitor acceptance test
o Develop new validation tests for confirming problems

• Design
1. This phase figures out how to satisfy the requirements enumerated in the SRS.
2. Its input is the SRS, and it maps the requirements to a design that can drive the coding.
3. An architectural design is evolved, followed by a high-level design (HLD) and a low-level design (LLD).
4. Requirements get translated into features.
5. Each feature is designed to meet one or more of the requirements.
6. The design acts as the blueprint for the actual construction of the software.

• Coding
1. The coding phase comprises coding the programs in the chosen programming language.
2. It produces the software that meets the requirements the design was meant to satisfy.
3. This phase also involves creation of product documentation.

• Testing
• Test application systems.

Coding
i. Determine the adequacy of the implementation.
ii. Generate the structural and functional test data for programs.

Testing
i. Test application systems.

Installation, Operation and Maintenance
i. Place the tested system into production.
ii. Modify and retest.

• Analysis: This is to understand the software in the system context and to review the software scope that is used to generate planning estimates. It allows the analyst to refine the software scope and to build models of the data, functional and behavioral domains that will be treated by the software.

Page 7: Software Development Life Cycle

• Design: Once software requirements have been analyzed and specified, design is the first technical activity required to build and verify the software. It is a process through which the requirements are translated into a "blueprint" for constructing the software.

• Testing : A verification process in which the code is executed with test data to assess the presence (or absence) of required features.

• Installation, Operation and Maintenance
1. Place the tested system into production.
2. Fix product defects and retest.
3. Satisfy the changes that arise from customer expectations, environmental changes, etc.
4. Corrective maintenance
5. Adaptive maintenance
6. Preventive maintenance

Corrective maintenance: fixing customer-reported problems.

Adaptive maintenance: making the software run on a new version of an OS or database.

Preventive maintenance: changing the application program code to avoid a potential security hole in the OS code.

Problems in the Software Development Process
What are common problems in the software development process?

• Poor requirements
• Unrealistic schedule
• Inadequate testing
• Features
• Miscommunication

Poor requirements - If requirements are unclear, incomplete, too general, or not testable, there will be problems.
Unrealistic schedule - If too much work is crammed into too little time, problems are inevitable.
Inadequate testing - No one will know whether or not the program is any good until the customer complains or systems crash.
Features - Requests to pile on new features after development is underway; extremely common.
Miscommunication - If developers don't know what's needed or customers have erroneous expectations, problems are guaranteed.

Solutions

What are common solutions to software development problems?

Page 8: Software Development Life Cycle

• Solid requirements
• Realistic schedules
• Adequate testing
• Stick to initial requirements as much as possible
• Communication

Solid requirements - Clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down requirements.
Realistic schedules - Allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.
Adequate testing - Start testing early on, re-test after fixes or changes, and plan adequate time for testing and bug-fixing.
Stick to initial requirements as much as possible - Be prepared to defend against changes and additions once development has begun, and be prepared to explain the consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, use rapid prototyping during the design phase so that customers can see what to expect. This will give them a higher comfort level with their requirements decisions and minimize changes later on.
Communication - Require walkthroughs and inspections when appropriate; make extensive use of group communication tools (e-mail, groupware, networked bug-tracking and change management tools, intranet capabilities, etc.); ensure that documentation is available and up-to-date, preferably electronic, not paper; promote teamwork and cooperation; use prototypes early on so that customers' expectations are clarified.

Life Cycle Testing
• Life cycle testing means performing testing in parallel with systems development.
• While the system is being developed, a test plan and test conditions are developed and executed.
• At predetermined points during the life cycle, the system is tested to ensure that it is being developed properly and that defects are detected at the earliest possible points in the life cycle.
• Life cycle testing involves continuous testing of the system during the development process.

Each development phase in the software development process creates a work product (such as requirements or specifications) that can be tested to see how successful the translation is. In the early stages the work products are requirements and specifications; they are available to be read and compared with other documents. Later in the process the work product is code. The early stages are where many of the important errors are introduced; this point is reinforced by the fact that more than 50% of all defects are usually introduced in the requirements stage alone.

How can we reduce the cost of testing?
Identifying defects in the system in the early stages of the life cycle reduces the cost of testing.

Life Cycle Testing Concept
• At predetermined points, the results of the development process are inspected to determine the correctness of the implementation.

Page 9: Software Development Life Cycle

• These inspections identify defects at the earliest possible points.
• Test early and prevent defect migration.

What is the Life Cycle Testing Concept?
Life cycle testing involves continuous testing of the system during the development process. At predetermined points, the results of the development process are inspected to determine the correctness of the implementation. These inspections identify defects at the earliest possible points.

Test early and prevent defect migration. Every time there is an opportunity to find a defect and we do not find it, it is allowed to migrate to the next stage, where it will cost much more to fix when we do find it. At the next stage it can cost an order of magnitude more, and this magnitude keeps increasing as the stages go on.


Life Cycle Testing Concept
[Figure: Life cycle testing concept: defect prevention, test planning and test design, and defect detection activities mapped across the Requirements, Design, Code, Test and Installation phases.]

Page 10: Software Development Life Cycle


SDLC
Software is developed based on specific process models, which are chosen:
• based on the nature of the project and the methods and tools to be used;
• based on the controls and deliverables required.

A few of the models are:
• Waterfall Model
• Rapid Application Development (RAD)
• Spiral Model
• Progressive Development Life Cycle
• Iterative Life Cycle Models

Sequential SDLC Models
• The sequential phases are usually represented by a waterfall diagram.
• There are many variations of waterfall lifecycle models, introducing different phases to the lifecycle and creating different boundaries between phases.
• Here the project is divided into a set of phases (or activities).
• On completion of each phase, the project moves to the next phase, and so on. The phases are strictly time-sequenced.
• Each phase communicates with the next phase through pre-specified outputs.
• When an error is detected, it is traced back one phase at a time until it gets resolved at some earlier phase.

The sequential phases are usually represented by a V or waterfall diagram. There are in fact many variations of V and waterfall lifecycle models, introducing different phases to the lifecycle and creating different boundaries between phases. The following set of lifecycle phases fits in with the practices of most professional software developers.

Requirements Specification - which specifies what the software is required to do, and may also specify constraints on how this may be achieved.

Page 11: Software Development Life Cycle

Architectural Design - which describes the architecture of a design that implements the requirements. The components within the software and the relationships between them are described in this document.
Detailed Design - which describes how each component of the software, down to individual units, is to be implemented.
Code and Unit Test - in which each unit (basic component) of the software is coded and tested to verify that the detailed design for the unit has been correctly implemented.
Integration - in which progressively larger groups of tested software components, corresponding to elements of the architectural design, are integrated and tested until the software works as a whole.
System Testing - in which the software is integrated into the overall product and tested to show that all the requirements are met.
Acceptance Testing - upon which acceptance of the completed software is based. This will often use a subset of the system tests, witnessed by the customers for the software or system.

Different Phases of the Sequential Model
• Requirements phase
In which the requirements for the software are gathered and analyzed, to produce a complete and unambiguous specification of what the software is required to do.
• Detailed Design phase
Where the detailed implementation of each component is specified.
• Code and Unit Test phase
In which each component of the software is coded and tested to verify that it faithfully implements the detailed design.

Note: Software specifications will be products of the first three phases of this lifecycle model. The remaining four phases all involve testing the software at various levels, requiring test specifications against which the testing will be conducted as an input to each of these phases.

• Software Integration phase
In which progressively larger groups of tested software components are integrated and tested until the software works as a whole.
• System Integration phase
In which the software is integrated into the overall product and tested.
• Acceptance Testing phase
Where tests are applied and witnessed to validate that the software faithfully implements the specified requirements.

Page 12: Software Development Life Cycle

Describes a process of stepwise refinement. Takes a static view of requirements. Unrealistic separation of specification from design.

Advantages/Disadvantages:
• Most widely used process model.
• Controls schedules, budgets and documentation.
• Tends to favor well-understood system aspects over poorly understood system components.
• Does not detect development areas that fall behind schedule early in the life cycle stages.

V-Model

• The V-model is an internationally recognized standard for Testing systems.

• It uniformly and bindingly lays down what has to be done in testing, how the tasks are to be performed and what is to be used to carry this out.

• This model is based on the observation that a corresponding test applies at each phase of development, and different types of testing apply at different levels.

Waterfall Method

In this model the application is developed as a series of sequential steps.

Here the project is broken into a series of increments, each of which delivers a portion of the functionality of the overall project.

[Figure: Waterfall model: Requirement Analysis, Architectural Design, Detailed Design, Coding/Implementation, Testing and Maintenance, flowing sequentially.]

Page 13: Software Development Life Cycle

The HLD (High-Level Design) breaks the system into subsystems with identified interfaces; this then gets translated into a more detailed design. The LLD (Low-Level Design) or Detailed Design phase is where the detailed implementation of each component is specified. The detailed design goes into issues like data structures, algorithm choices (a procedure in simple language, e.g. for adding two numbers: step 1, get the two numbers; step 2, add the numbers; step 3, display the result), table layouts, processing logic, exception conditions and so on. It results in the identification of a number of components realized by program code written in appropriate programming languages.
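As a toy illustration of how such a detailed-design procedure maps to program code, the snippet below implements the three steps described above; it is only a sketch of the idea, not part of any real design document.

```python
# Toy illustration of the detailed-design steps described above:
# step 1: get the two numbers, step 2: add them, step 3: display the result.

def add_two_numbers(a: float, b: float) -> float:
    """Step 2: add the numbers."""
    return a + b

if __name__ == "__main__":
    # Step 1: get the two numbers (here from user input).
    x = float(input("Enter the first number: "))
    y = float(input("Enter the second number: "))
    # Step 3: display the result.
    print("Sum:", add_two_numbers(x, y))
```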

Verification - Does the solution solve the stated problem?
Validation - Did we solve the right problem?

V-Model
[Figure: V-model: verification phases on the left (Requirement Specifications, High Level Design, Detailed Design, Program Specification, Coding) paired with validation test levels on the right (Unit Testing, Integration Testing, System Testing, User Acceptance Testing).]

Page 14: Software Development Life Cycle

Rapid Application Development (RAD)
• RAD is a variation of the prototyping model.
• It relies on feedback from and interaction with the customers to gather the initial requirements.
• It adapts itself to rapid changes, since changes are unavoidable.
• The actual product is produced and reviewed; the customers' feedback is taken and changes are made.
• A CASE tool is used throughout the life cycle.
• This method has wider applicability, even for general-purpose products.
• Automatic generation of design and programs by a CASE tool makes this model more attractive.
• The cost of a CASE tool is a factor in deciding whether to use this model for development of a product.
• CASE tools and this model are generally more suited to application projects than to systems projects.

CASE tool: Computer-Aided Software Engineering tool. In order to ensure formalism in capturing the requirements and proper reflection of the requirements in the design and subsequent phases, a CASE tool is used. Such CASE tools have:
• methodologies to elicit requirements;
• repositories to store the gathered requirements and all downstream entities such as design objects; and
• a mechanism to automatically translate the requirements stored in the repositories into a design and to generate code in the chosen programming environment.

V-Model
[Figure: V-model (detailed): development phases (Requirements, Architectural Design/HLD, LLD, Coding) on the verification side, paired with test levels (Unit/Component Testing, Integration Testing, System Testing, User Acceptance Testing) on the validation side.]

The methodologies provided by a CASE tool can provide inbuilt means of verification and validation. For example, the tool may be able to automatically detect and resolve inconsistencies in data types or dependencies. Since the design (and perhaps even

Page 15: Software Development Life Cycle

the program code) can be automatically generated from the requirements, the validation can be very complete, extending to all the downstream phases, unlike the prototype model.

Spiral Model
• Incorporates prototyping and risk analysis, but cannot cope with unforeseen changes (e.g. new business objectives), and it is not clear how to analyze risk.
• A combination of prototyping with iterative development; subsumes most other models.
• Seeks feedback from the customer, iterates over requirements, and builds prototypes; processes can run concurrently.
• The prototype model comprises the following activities:
1. Interaction with customers to understand their requirements.
2. A prototype is produced to show how the eventual system would look.
3. The prototype is reviewed with the customer frequently.

[Figure: Spiral model.]

Page 16: Software Development Life Cycle

4. Based on the feedback and the prototype that is produced, the SRS is produced.

5. Often the prototype makes its way to becoming the product itself. The prototype shows how the eventual system would look; it would have models of how the input screens and output reports would look, in addition to some "empty can" functionality to demonstrate the workflow and processing logic.

Each spiral consists of four main activities:
• Planning: setting project objectives; defining alternatives; further planning for the next spiral; etc.
• Risk Analysis: analysis of alternatives and the identification and resolution of risks.
• Development: designing, coding and testing, etc., in increments.
• Evaluation: user evaluation of each spiral and then of the final product.

Progressive Development Lifecycle Models
The sequential V and waterfall lifecycle models represent an idealized model of software development. Other lifecycle models may be used for a number of reasons, such as volatility of requirements, or a need for an interim system with reduced functionality when long timescales are involved. As examples of other lifecycle models, let us look at progressive development and iterative lifecycle models. A common problem with software development is that software is needed quickly, but it will take a long time to fully develop. The solution is to form a compromise between timescales and functionality, providing "interim" deliveries of software with reduced functionality, but serving as stepping stones towards the fully functional software. It is also possible to use such a stepping-stone approach as a means of reducing risk.

[Figure: Progressive SDLC model.]

Page 17: Software Development Life Cycle

The usual names given to this approach to software development are progressive development or phased implementation. The corresponding lifecycle model is referred to as a progressive development lifecycle. Within a progressive development lifecycle, each individual phase of development will follow its own software development lifecycle, typically using a V or waterfall model. The actual number of phases will depend upon the development. Each delivery of software will have to pass acceptance testing to verify that the software fulfils the relevant parts of the overall requirements. The testing and integration of each phase will require time and effort, so there is a point at which an increase in the number of development phases will actually become counterproductive, giving an increased cost and timescale, which will have to be weighed carefully against the need for an early solution.

The software produced by an early phase of the model may never actually be used; it may just serve as a prototype. A prototype will take short cuts in order to provide a quick means of validating key requirements and verifying critical areas of design. These short cuts may be in areas such as reduced documentation and testing. When such short cuts are taken, it is essential to plan to discard the prototype and implement the next phase from scratch, because the reduced quality of the prototype will not provide a good foundation for continued development.

• The progressive model is used for reasons such as volatility of requirements, or a need for an interim system with reduced functionality when long timescales are involved.

• The solution is to form a compromise between timescales and functionality, providing "interim" deliveries of software with reduced functionality that serve as stepping stones towards the fully functional software.

• The stepping-stone approach is also a means of reducing risk.

• Each individual phase of development follows its own SDLC, typically using a waterfall model. The actual number of phases will depend upon the development.

• Each delivery of software has to pass acceptance testing to verify that it fulfils the relevant parts of the overall requirements.

• The testing and integration of each phase requires time and effort.
• Hence, an increase in the number of development phases can actually become counterproductive.
• The consequence is increased cost and timescale, which have to be weighed carefully against the need for an early solution.

Iterative Model
• An iterative lifecycle model does not attempt to start with a full specification of requirements.
• Development begins by specifying and implementing just part of the software, which is then reviewed in order to identify further requirements.
• The above process is repeated, producing a new version of the software for each cycle of the model.
• It consists of repeating the following four phases in sequence until the requirements are met:
• Requirements phase - requirements are gathered and analyzed.

Page 18: Software Development Life Cycle

Iterative Lifecycle Models
A Requirements phase, in which the requirements for the software are gathered and analyzed. Iteration should eventually result in a requirements phase that produces a complete and final specification of requirements. A Design phase, in which a software solution to meet the requirements is designed; this may be a new design, or an extension of an earlier design. An Implementation and Test phase, when the software is coded, integrated and tested. A Review phase, in which the software is evaluated, the current requirements are reviewed, and changes and additions to the requirements are proposed.

For each cycle of the model, a decision has to be made as to whether the software produced by the cycle will be discarded, or kept as a starting point for the next cycle (sometimes referred to as incremental prototyping). Eventually a point will be reached where the requirements are complete and the software can be delivered, or it becomes impossible to enhance the software as required and a fresh start has to be made.

The iterative lifecycle model can be likened to producing software by successive approximation. Drawing an analogy with mathematical methods that use successive approximation to arrive at a final solution, the benefit of such methods depends on how rapidly they converge on a solution. Continuing the analogy, successive approximation may never find a solution: the iterations may oscillate around a feasible solution or even diverge, or the number of iterations required may become so large as to be unrealistic. We have all seen software developments which have made this mistake!

The key to successful use of an iterative software development lifecycle is rigorous validation of requirements, and verification (including testing) of each version of the software against those requirements within each cycle of the model. The first three phases of the example iterative model are in fact an abbreviated form of a sequential V or waterfall lifecycle model. Each cycle of the model produces software which requires testing at the unit level, for software integration, for system integration and for acceptance. As the software evolves through successive cycles, tests have to be repeated and extended to verify each version of the software.

• Design phase - a solution to meet the requirements is designed; it may be a new design or an extension of an earlier design.
• Implementation and Test phase - the software is coded, integrated and tested.
• Review phase - the software is evaluated, current requirements are reviewed, and changes and additions to the requirements are proposed.
• A decision is made as to whether the software produced by the cycle will be discarded or kept as a starting point for the next cycle.
• The key to success is rigorous validation of requirements and verification (including testing).

Page 19: Software Development Life Cycle

Iterative Lifecycle Models
The first three phases of the example iterative model are in fact an abbreviated form of a sequential V or waterfall lifecycle model. Each cycle of the model produces software which requires testing at the unit level, for software integration, for system integration and for acceptance. As the software evolves through successive cycles, tests have to be repeated and extended to verify each version of the software.

• Divide the project into builds
  – each build adds new functions
  – each build is integrated and the product tested as a whole
• Advantages?
  – an operational product in weeks
  – less traumatic to the organization
  – smaller capital outlay
• Disadvantages?
  – need an open architecture (a big advantage come maintenance!)
  – too few builds → build-and-fix
  – too many builds → overhead

For each build: perform detailed design, implementation and integration; test; deliver to the client.

Maintenance Phase

• Successfully developed software eventually becomes part of a product and enters a maintenance phase.

[Figure: Iterative model.]

Page 20: Software Development Life Cycle

• During the maintenance phase, the software undergoes modification to correct errors and to comply with changes to requirements.

• Like the initial development, modifications also follow a development lifecycle, but not necessarily using the same lifecycle model as the initial development.

• Throughout the maintenance phase, software tests have to be repeated, modified and extended.

Maintenance
Successfully developed software will eventually become part of a product and enter a maintenance phase, during which the software will undergo modification to correct errors and to comply with changes to requirements. Like the initial development, modifications will follow a software development lifecycle, but not necessarily using the same lifecycle model as the initial development. Throughout the maintenance phase, software tests have to be repeated, modified and extended. The effort to revise and repeat tests consequently forms a major part of the overall cost of developing and maintaining software. The term regression testing is used to refer to the repetition of earlier successful tests in order to make sure that changes to the software have not introduced side effects.

• The effort to revise and repeat tests consequently forms a major part of the overall cost of developing and maintaining software.

• Regression testing is the repetition of earlier successful tests to make sure that changes to the software have not introduced side effects.

Summary and Conclusion
• Irrespective of the lifecycle model used for software development, software has to be tested.

• Efficiency and quality are best served by testing software as early in the lifecycle as practical, with full regression testing whenever changes are made.

• Such practices become even more critical with progressive development and iterative lifecycle models

• Regression testing is a major part of software maintenance.

• The ease with which tests can be repeated has a major influence on the cost of maintaining software.

Summary and Conclusion
Irrespective of the lifecycle model used for software development, software has to be tested. Efficiency and quality are best served by testing software as early in the lifecycle as practical, with full regression testing whenever changes are made. Such practices become even more critical with progressive development and iterative lifecycle models. Regression testing is a major part of software maintenance. The ease with which tests can be repeated has a major influence on the cost of maintaining software.

Page 21: Software Development Life Cycle

A common mistake in the management of software development is to start by badly managing a development within a V or waterfall lifecycle model, which then degenerates into an uncontrolled iterative model. This is another situation which we have all seen causing a software development to go wrong.

• A common mistake in the management of software development is to start by badly managing a development within a V or waterfall lifecycle model, which then degenerates into an uncontrolled iterative model.

• This is another situation which we have all seen causing a software development to go wrong.

Exercises
1. Which SDLC model would be most suitable for the following?
• The product is for a specific customer, who is always available to give feedback.
• The same as above, except that we also have access to a CASE tool.
• A product that is made up of a number of features that become available sequentially and incrementally.
2. List three or more challenges from the testing perspective for each of the following models:
• Spiral model
• Waterfall model
• Iterative model

Software Test Life Cycle
Objectives
This session consists of:

• The software testing life cycle
• The various activities involved in the testing process.

STLC
• The different stages involved in testing and certifying a product are called the Software Test Life Cycle (STLC).

• The STLC is a part of the SDLC (Software Development Life Cycle).

• Since the cost of fixing defects is lower during the early phases of development, testing should start as early as possible.

[Figure: Software test life cycle: Test Requirements Analysis → Test Planning → Test Case Development → Test Execution → (if a defect is found, fix the defect and re-execute) → Test Cycle Closure.]

Page 22: Software Development Life Cycle

Independent Testing Lifecycle
[Figure: Independent testing lifecycle: the same cycle of Test Requirements Analysis, Test Planning, Test Case Development, Test Execution (defects found are fixed and retested) and Test Cycle Closure.]

Page 23: Software Development Life Cycle

Requirements Management in Testing

• Testability: can we define the acceptance criteria for this requirement?
• Completeness of the requirement
• Analysis and classification of requirements
• Build in traceability
  • Eliminates redundancy
  • Helps in change management

Requirement Analysis (Requirements Capture and Analysis)

Activities:
• Interview stakeholders to understand the different types of testable requirements
• Study the software requirements and the existing test process
• Understand the domain, hardware and architecture
• Understand the timelines available and the budget allocated
• Prepare the Requirement Traceability Matrix (RTM)
• Identify the scope of automation and the tools required

Entry Criteria:
• Requirements gathered, including functional and non-functional (performance, security, usability, compatibility)

Exit Criteria:
• Test requirements and automation feasibility report, signed off by the client
• RTM

Page 24: Software Development Life Cycle

Test Planning

Activities:
• Preparation of the test plan
• Test tool selection
• Test effort estimation

Entry Criteria:
• Test requirements and automation feasibility report signed off by the client

Exit Criteria:
• The following artifacts signed off: test plan, test estimation

Test Case Development

Activities:
• Create test cases and automation scripts (where applicable)
• Create test data
• Set up the test environment and test it
• Baseline the test cases and scripts

Entry Criteria:
• Requirements document
• Test plan
• Test estimation
• RTM
• Automation analysis report

Exit Criteria:
• Test cases, test scripts and test data reviewed and signed off by the client

Page 25: Software Development Life Cycle

Test Execution

Activities***:
• Perform a smoke test on the build
• Accept or reject the build depending on the smoke test result
• Execute tests as per the plan
• Update test plans, if necessary
• Document test results, and log defects for failed cases
• Map defects to test cases in the RTM

Entry Criteria:
• Build to be tested
• Unit/integration test report for the build to be tested
• Test environment
• Test cases, scripts and test data
• RTM

Exit Criteria:
• Tests executed
• Defects logged
• Completed RTM

Test Cycle Closure

Activities:
• Evaluate cycle completion criteria based on time, test coverage, cost and software quality
• Prepare the test cycle completion report
• Document best practices
• Identify weak areas and plan for improvements

Entry Criteria:
• All tests for the cycle executed
• Defect logs
• Updated RTM

Exit Criteria:
• Application and test result report delivered to the customer

*** This is an iterative process. Once the defects raised are fixed, the test execution will repeat.

Page 26: Software Development Life Cycle

Test Cycle Closure
• The test cycle can be closed when:
  • all the test cases for that cycle have been executed;
  • all the outstanding defects have been resolved;
  • the number of defects exceeds a previously defined threshold;
  • the severity of defect(s) impairs further progress of testing; or
  • all work products have been accepted by the customer.

Basic Forms of Testing

Verification: "Am I building the product right?" Complies with the process to yield a right product.
Validation: "Am I building the right product?" Validates the correctness of the software with respect to user needs and requirements.

[Figure: V-model mapping of development phases to test levels: Requirement Analysis ↔ User Acceptance Testing (User Acceptance Test Plan), Functional Specification ↔ System Testing (System Test Plan), High Level Design ↔ Integration Testing (Integration Test Plan), Detailed Design / Program Specification ↔ Unit Testing (Unit Test Plan), with Code at the base. Static testing (inspections, reviews and code walkthroughs; functional and non-functional specifications verified with the customer) applies on the left-hand side, and dynamic testing (white box and black box testing) on the right-hand side.]

Page 27: Software Development Life Cycle

Test Deliverables
The following are the test deliverables:

• RTM
• Estimation
• Test plan
• Test cases
• Test data
• Defect report
• Closure report

RTM
• Provides a mapping from test cases to business scenarios to requirements.
• Helps in impact analysis during requirement changes and enhancements.
• Helps in prioritization of test cases during crunch times.
• Test scenario: a set of test cases evaluating a business requirement for a specific business scenario/condition.
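As an illustration of the mapping an RTM captures, the sketch below models a tiny traceability matrix in code; the requirement IDs, scenario names and test-case IDs are invented for the example and do not come from any real project.

```python
# Minimal sketch of a Requirement Traceability Matrix (RTM).
# All IDs and names below are invented for illustration.

rtm = {
    "REQ-001": {                               # requirement
        "scenario": "Customer login",          # business scenario it supports
        "test_cases": ["TC-101", "TC-102"],    # test cases that cover it
    },
    "REQ-002": {
        "scenario": "Funds transfer",
        "test_cases": ["TC-201"],
    },
}

def impact_of_change(requirement_id: str) -> list[str]:
    """Impact analysis: which test cases must be re-run if this requirement changes?"""
    return rtm.get(requirement_id, {}).get("test_cases", [])

print(impact_of_change("REQ-001"))   # ['TC-101', 'TC-102']
```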

Estimation
• Provides information regarding the number of resources required, along with the number of days.

Test Plan
• Gives information regarding the 'why' and 'how' of software testing.
• Provides information regarding the objectives, scope, approach and focus of the software testing effort.
• The test strategy, which describes the approach for testing, is part of the test plan.

Test Case
• A step-by-step procedure to test the system, with expected results documented for each step.

Test Data

SDLC vs. STLC
[Figure: SDLC activities (Business Analysis, Detailed Requirements*, High Level Design, Detailed Design & Development, Unit and Integration Testing, Testing (System, SI & UAT), Transition/Rollout) mapped against the corresponding Software Testing Life Cycle activities performed by testers (user interviews, requirements testability review, test planning and test bed setup, RTM, unit/integration/functional/non-functional test planning and scripting, test data generation, functional and non-functional testing, results reviews, and defect tracking).]

* Requirements could be defined along many dimensions, e.g. functional, usability, system, performance, quality and technical requirements.

Page 28: Software Development Life Cycle

• Input data that is fed to the system under test.

Defect Report
• Contains the information regarding the defect.
• Should have enough detail that the developer is able to reproduce the defect.
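To make "enough detail to reproduce" concrete, here is a hedged sketch of the fields a defect record might carry; the field names and example values are illustrative assumptions, not a prescribed format.

```python
# Illustrative sketch of a defect report record.
# Field names and example values are assumptions, not a mandated template.

from dataclasses import dataclass

@dataclass
class DefectReport:
    defect_id: str                     # unique identifier, e.g. from the tracking tool
    summary: str                       # one-line description of the failure
    steps_to_reproduce: list[str]      # exact steps the developer can replay
    expected_result: str
    actual_result: str
    severity: str = "Medium"           # e.g. Critical / High / Medium / Low
    environment: str = ""              # build number, OS, browser, test data used
    linked_test_case: str = ""         # traceability back to the RTM

bug = DefectReport(
    defect_id="DEF-042",
    summary="Login succeeds with an incorrect password",
    steps_to_reproduce=["Open login page", "Enter valid user, wrong password", "Click Login"],
    expected_result="Access denied message shown",
    actual_result="User is logged in",
    severity="Critical",
    linked_test_case="TC-101",
)
print(bug.summary)
```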

Types of Testing
Objective

This session explains:

• Types of testing
  – depending on the lifecycle stage
  – depending on the testing objective

Types of Testing

• Depending on the lifecycle stage:
  – Unit testing
  – Integration testing
  – System testing
  – User acceptance testing
• Depending on the testing objective:
  – White box testing (structural testing)
  – Black box testing (functional testing)

Classification based on lifecycle stages

Unit Testing
• Testing performed on a single, standalone module or unit of code to ensure the correctness of that particular module.
• Focuses on implementation logic.
• The goal of unit testing is to isolate each part of the program and show that the individual parts are correct.
• This type of testing is mostly done by the developers.

Among the main benefits of this isolated testing are:
• flexibility when changes are required;
• documentation of the code.
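For instance, a unit test for a single function might look like the following sketch, assuming a pytest-style test runner; the calculate_discount function is a hypothetical unit under test, not part of any real application.

```python
# Minimal unit-test sketch for a single, standalone unit of code.
# calculate_discount() is a hypothetical unit under test.

def calculate_discount(price: float, is_member: bool) -> float:
    """Unit under test: members get 10% off, non-members pay full price."""
    return round(price * 0.9, 2) if is_member else price

def test_member_gets_ten_percent_discount():
    assert calculate_discount(100.0, is_member=True) == 90.0

def test_non_member_pays_full_price():
    assert calculate_discount(100.0, is_member=False) == 100.0

# Run with: pytest test_discount.py
```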

Integration Testing

• The phase of software testing which follows unit testing and precedes system testing, in which individual software modules are combined and tested as a group.

• It takes as its input modules that have been checked by unit testing, groups them, applies tests defined in an integration test plan, and delivers as its output the integrated system.

• The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together.

Page 29: Software Development Life Cycle

Types Of Integration Testing

• Incremental Integration
  • Incremental integration can be defined as continuous testing of an application by constructing and testing small components.
• Big-bang Integration (non-incremental)
  • All components are combined in advance.
  • Correction is difficult because isolation of causes is complicated.
• Incremental integration can be classified as:
  • Top-Down Strategy
    • An incremental approach which can proceed depth-first or breadth-first.
    • Stubs are used until the actual program is ready.
  • Bottom-Up Strategy
    • The process starts with the low-level modules; critical modules are built first.
    • A cluster approach and test drivers are used.

Stubs And Drivers

• Stubs and drivers are dummy module interfaces used in integration testing. Stubs are used in the top-down approach, while drivers are used in the bottom-up approach.

• A stub is a piece of code emulating a called function.

Integration Testing (contd.)

[Figure: Banking system example: a banking system composed of modules such as CRM, HR and Financial (built on different technologies, e.g. Java, C/C++, VB), with units such as online banking, loans and account opening. Integration testing exercises the inter-module interfaces (between the various modules of the system), the intra-module interfaces (between units within a module), and the interfaces to external systems.]

Page 30: Software Development Life Cycle

• A driver is a piece of code emulating a calling function.
• Both return values that are sufficient for testing, without doing the actual processing/calculation.
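As a sketch of the idea (the module and function names are hypothetical): a stub stands in for a called module that is not yet ready, while a driver stands in for the caller of the module under test.

```python
# Hedged sketch of a stub and a driver for integration testing.
# The interest-rate service and the loan module below are hypothetical examples.

# Stub: emulates a CALLED function that is not yet implemented (top-down integration).
def get_interest_rate_stub(account_type: str) -> float:
    """Stand-in for the real interest-rate service; returns a canned value."""
    return 0.05   # good enough for testing the caller, no real calculation

def monthly_interest(balance: float, account_type: str,
                     rate_provider=get_interest_rate_stub) -> float:
    """Module under test (top-down): uses the stub until the real service exists."""
    return balance * rate_provider(account_type) / 12

# Driver: emulates a CALLING function to exercise a low-level module (bottom-up integration).
def loan_emi(principal: float, annual_rate: float, months: int) -> float:
    """Low-level module under test."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

def driver_for_loan_emi():
    """Temporary driver: calls the module with test values and checks the result."""
    emi = loan_emi(10_000, 0.12, 12)
    assert 880 < emi < 890, f"unexpected EMI: {emi}"

driver_for_loan_emi()
print(monthly_interest(1200.0, "savings"))   # 5.0 with the stubbed 5% rate
```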

Interfaces are the source of many errors, as there is often a misunderstanding of the way components and subsystems should work together, because they are developed by different people. Interface testing focuses specifically on the way components and subsystems are linked and work together. Interface testing can be applied to internal interfaces between subcomponents of a system (between separate modules).

System testing is also done for end-to-end scenarios. This can be explained using a simple example of a chain of events starting with opening an account in a bank, making deposits into it, then withdrawing from it, editing it and finally deleting it. The focus needs to be on the fact that an entire end-to-end scenario is tested.

End-to-end Scenario in System Testing
The ATM cash withdrawal example:

• Check whether the ATM accepts the debit card.
• The ATM must display the user name and prompt for the PIN.
• Upon entering a valid PIN, the system is ready for transactions.
• The system should check the available balance before continuing the transaction.
• The valid transaction should take place.
• The user gets the required amount.
• Further transactions - optional.
• The user signs out.
• The user gets the transaction receipt - optional.
• The user gets back the debit card.
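A hedged sketch of how such an end-to-end scenario might be scripted against a simulated ATM is shown below; the AtmSimulator class and its methods are invented for illustration and do not refer to any real system.

```python
# Illustrative end-to-end system test for the ATM cash withdrawal scenario.
# AtmSimulator and its methods are hypothetical stand-ins for the system under test.

class AtmSimulator:
    def __init__(self, pin: str, balance: float):
        self._pin, self._balance, self.card_inserted = pin, balance, False

    def insert_card(self) -> bool:
        self.card_inserted = True
        return True

    def enter_pin(self, pin: str) -> bool:
        return self.card_inserted and pin == self._pin

    def withdraw(self, amount: float) -> bool:
        if amount <= self._balance:          # balance check before the transaction
            self._balance -= amount
            return True
        return False

    def eject_card(self) -> bool:
        self.card_inserted = False
        return True

def test_atm_cash_withdrawal_end_to_end():
    atm = AtmSimulator(pin="1234", balance=500.0)
    assert atm.insert_card()                 # ATM accepts the debit card
    assert atm.enter_pin("1234")             # valid PIN -> ready for transactions
    assert atm.withdraw(200.0)               # sufficient balance, cash dispensed
    assert not atm.withdraw(1000.0)          # insufficient balance is rejected
    assert atm.eject_card()                  # user gets the debit card back
```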

System Testing

System testing is a black-box testing technique which is primarily aimed at end-to-end testing of the application. It is carried out by a non-development team.

System testing includes live/simulated user data. The whole system, including the functional and non-functional requirements, is tested.

[Figure: Bank application example: the end-to-end flow Open Account → Deposit Amount → Withdraw Amount → Close Account, exercised through unit testing and integration testing, and then through sanity, regression, performance and user acceptance testing at the system level.]

Page 31: Software Development Life Cycle

Functional Testing - Sanity Test
• A very basic, minimal set of tests to verify the product.
• Typically an initial testing effort to determine whether a new software version is performing well enough to accept it for a major testing effort.

Functional Testing - Regression Testing

• Regression testing refers to the continuous testing of an application for each new release.

• Regression testing is done to ensure that the application still behaves properly after fixes or modifications have been applied to the software or its environment, and that no additional defects have been introduced by the fix.

• The regression testing scope increases with new builds.

Performance Testing

• This testing is carried out to analyze/measure the behavior of the system in terms of time, stability and scalability.

• Parameters generally used are response time, transaction rates etc.
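As a rough illustration of those parameters, the sketch below measures response time and transaction rate for a hypothetical `process_transaction` function. Real performance testing would use dedicated load-generation tools and a production-like environment; everything here is invented for the example.

```python
# Rough sketch: measuring response time and transactions per second for a
# hypothetical operation.
import time

def process_transaction():
    time.sleep(0.002)        # stand-in for the real work being measured

def measure(n=500):
    start = time.perf_counter()
    times = []
    for _ in range(n):
        t0 = time.perf_counter()
        process_transaction()
        times.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    print(f"transactions/second:   {n / elapsed:.1f}")
    print(f"average response time: {sum(times) / n * 1000:.2f} ms")
    print(f"worst response time:   {max(times) * 1000:.2f} ms")

if __name__ == "__main__":
    measure()
```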

Acceptance Testing

• Acceptance testing is one of the last phases of testing and is typically done at the customer's site. Testers perform tests which are ideally derived from the User Requirements Specification, to which the system should conform. The focus is on a final verification of the required business functions and flow of the system.

• For software developed under contract, acceptance testing involves evaluating the software against the acceptance criteria defined in the contract.

• For software not developed under contract, ALPHA Testing and BETA Testing form a part of acceptance testing.

Note that acceptance testing is typically done by the customer, while system testing is done by the development team or an independent test team.

System Testing Contd…

Types of System Testing: Functional Testing, Compatibility Testing, Comparison Testing, Disaster and Recovery Testing, Exploratory Testing, Usability Testing, Performance Testing, Ad-hoc Testing, Security Testing.

Page 32: Software Development Life Cycle

For software developed under contract (which is the usual case in Infosys), this means evaluating the software against the acceptance criteria defined in the contract.

For software not developed under contract (typically products), ALPHA and BETA testing is resorted to.

Alpha & Beta Testing
• Forms of acceptance testing.
• Testing in the production environment.
• Alpha testing is performed by end users within the company but outside the development group.
• Beta testing is performed by a sub-set of actual customers outside the company.

Other Types of Testing
Other types of testing are:
• Disaster and recovery testing
• Security/Penetration testing
• Usability testing
• Compatibility testing
• Exploratory testing
• Ad-hoc testing
• Comparison testing
• Installation testing

Disaster and Recovery testing
• Uses test cases designed to examine how easily and completely the system can recover from events like power shutdown, disk crash, insufficient memory, etc.

• Desirable to have a system capable of recovering quickly with minimal human intervention.

• Testing that the backup and recovery mechanisms of a system are stable and sufficient.

Recovery scenarios should be taken care of at the development stage itself. These recovery scenarios handle system recovery at the time of crashes and improper shutdowns.

Security/Penetration Testing
• Covers application-level security, including access to data or business functions, and system-level security.

• Testing to ensure that unauthorized persons or systems are not given access to the system.

Access to the system via passwords is tested, along with the organization's established security procedures.

Page 33: Software Development Life Cycle

Session hijacking: stealing session information through sniffing and intercepting session data, then using it.
Session replay: using stolen session information and replaying it to initiate multiple transactions.
SQL injection: an attack on a web application's login page in which, instead of entering a password, a hacker enters a partial SQL command.
Hidden field manipulation: viewing the page source as HTML and changing hidden data before submitting the page to the server.
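To make the SQL-injection idea concrete, the sketch below uses an in-memory SQLite table as a stand-in for a web application's login check; the table, function names and data are all invented for illustration, not taken from the course material.

```python
# Illustrative sketch of the SQL-injection attack described above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

def login_vulnerable(name, password):
    # Builds the SQL by string concatenation: a partial SQL command such as
    # "' OR '1'='1" entered as the password bypasses the check.
    query = ("SELECT COUNT(*) FROM users WHERE name = '" + name +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchone()[0] > 0

def login_safe(name, password):
    # Parameterized query: the input is treated as data, not as SQL.
    query = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone()[0] > 0

print(login_vulnerable("alice", "' OR '1'='1"))   # True  -> injection succeeds
print(login_safe("alice", "' OR '1'='1"))         # False -> injection blocked
```

A security test would include such malicious inputs among its test data to confirm the application rejects them.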

Usability Testing
• The ability of the end-user to utilize the system as defined.
• Testing for user-friendliness.
• This testing considers the way in which information is presented to the user, such as the layout of data, buttons, colors used to highlight information, messages, etc.

Compatibility Testing
• Testing how well software performs in a particular environment:
• Hardware/Software
• Browsers
• Databases
• Testing a Web site, or Web-delivered application, for compatibility with a range of leading browsers and desktop hardware platforms.

Exploratory Testing
• Informal test that is not based on formal test plans or test cases.
• Testers will not know much about the software and will be learning it as they test it.
Testing software without following any specifications and without knowing the functionality.

Ad-hoc Testing
• Informal test that is not based on formal test plans or test cases.
• Testers will have significant understanding of the software before they test it.
Testing the software without following any specifications, but knowing the functionality.

Comparison Testing
Comparison testing compares a software product's weaknesses and strengths to those of competitors' products.

Installation Testing
• Basic installation
• Installation on various platforms
• Regression testing of basic functionality

Classification based on objective

Page 34: Software Development Life Cycle

Black Box Testing Techniques

• Equivalence partitioning
• Boundary value analysis
• Error guessing

Black-box testing techniques are used to reduce the problem that it is not possible to test every input value.

White Box Testing

Tests how the system was implemented

Testing based on knowledge of internal structure and logic

Usually logic driven. E.g. Unit testing, Integration testing.

Black Box Testing
Tests that validate business requirements. Tests what the system is supposed to do. Based on external specifications, without knowledge of how the system is constructed. Usually process driven. E.g. System testing, User Acceptance testing.

(Figure: black-box view of the system, showing input, events and output.)

Page 35: Software Development Life Cycle

ASCII codes for digits (character – ASCII code): '/' – 47, '0' – 48, '1' – 49, '2' – 50, '3' – 51, '4' – 52, '5' – 53, '6' – 54, '7' – 55, '8' – 56, '9' – 57, ':' – 58; 'A' to 'Z' – 65 to 90; 'a' to 'z' – 97 to 122.
A byte (8 bits) can store numbers between 0 and 255 (two bytes can store 0 to 65535 unsigned, or -32768 to +32767 signed). If the program stores only positive numbers, it can store anything from 0 to 255 in a byte. If negative numbers are to be handled, then one of the 8 bits is used as a sign bit; instead of 0 to 255, a byte then holds numbers between -128 and +127, and the program will fail with sums greater than 127. This condition is interesting because it depends on how the programmer defines the memory storage requirements for a piece of data.
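A small sketch of both points above is shown below. Python integers do not overflow, so `ctypes.c_int8`/`c_uint8` are used here as stand-ins for an 8-bit storage location; the digit check at the end uses the boundary characters from the ASCII table ('/' and ':' sit just outside the digit range).

```python
# Demonstration of the signed-byte limit and of boundary characters
# around the ASCII digit range.
from ctypes import c_int8, c_uint8

print(c_uint8(200).value)      # 200: an unsigned byte holds 0..255
print(c_int8(100).value)       # 100: fits in a signed byte (-128..+127)
print(c_int8(100 + 50).value)  # -106: 150 overflows a signed byte (sum > 127 fails)
print(c_uint8(100 + 50).value) # 150: fine when the byte is treated as unsigned

# Boundary values around the digit range from the ASCII table above:
for ch in "/09:":              # ASCII 47, 48, 57, 58
    print(ch, ord(ch), "digit" if "0" <= ch <= "9" else "not a digit")
```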

Equivalence Partitioning• Divides the input domain of a program into classes of data .

• Assumes that similar inputs will evoke similar responses.

• Each partition shall contain a set or range of values.

• Both valid and invalid values are to be partitioned

• Result of testing a single value from an equivalence partition is considered representative of all other values in the partition.

Calculator example: open the Calculator and give commands for copying. Inputs: click the Copy menu item, type 'c', Ctrl+C, Ctrl+Shift+C. Each input copies the current number into the clipboard; they perform the same operation and produce the same result. If the job is to test the copy command, these four inputs can be partitioned down to two inputs, or even one.

EXAMPLE
• Consider a component 'City-name' with the following specs:
• The City name has a minimum of 5 characters and a maximum of 12 characters.
• All inputs are passed as alphabets, including special characters.

Page 36: Software Development Life Cycle

• Here the equivalence partitions are identified and then the test cases derived to exercise the partitions.

Identifying Test cases
1. Assign a unique number to each EC.
2. Until all valid ECs have been covered by test cases, write a new test case covering as many of the uncovered valid ECs as possible.
3. Until all invalid ECs have been covered by test cases, write a test case that covers one, and only one, of the uncovered invalid ECs.
4. If multiple invalid ECs are tested in the same test case, some of those tests may never be executed, because the first invalid condition detected may mask the checking of the others.

• Here the valid partitions can be described by:
– City name: all alphabets and special characters
– City name: length greater than or equal to 5 and less than or equal to 12
• Invalid partitions are:
– All integers
– Fewer than 5 or more than 12 characters
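One possible way to turn these partitions into executable checks is sketched below, taking one representative value per partition. The `validate_city` function and the sample values are invented purely so the test cases have something to exercise.

```python
# Sketch: one representative value per equivalence partition of the
# 'City-name' component described above.
def validate_city(name):
    return (5 <= len(name) <= 12) and not any(ch.isdigit() for ch in name)

test_cases = [
    ("Chennai",         True),   # valid: alphabets, length within 5..12
    ("New-Delhi",       True),   # valid: special character allowed
    ("Pune",            False),  # invalid partition: fewer than 5 characters
    ("Thiruvannamalai", False),  # invalid partition: more than 12 characters
    ("12345",           False),  # invalid partition: integers
]

for value, expected in test_cases:
    actual = validate_city(value)
    print(f"{value!r:20} expected={expected} actual={actual} "
          f"{'PASS' if actual == expected else 'FAIL'}")
```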

Boundary Value Analysis

• Boundary Value analysis complements Equivalence partitioning.

• Rather than selecting any element in an equivalence class, select those at the “edge/boundary" of the class

• Both valid and invalid values are partitioned in this way.

• The boundaries of both valid and invalid partitions are considered.

BVA leads to the selection of test cases at the edges of the class. Each partition contains a set or range of values chosen such that all the values can reasonably be expected to be treated by the component in the same way (i.e. they may be considered 'equivalent').

Example 1
• A field is required to accept amounts of money between Rs.1 and Rs.10.
• What are the checks to be done using BVA? The following tests can be executed:
• 0 = rejected
• 1 = accepted (this is on the boundary)
• 2 = accepted
• 9 = accepted
• 10 = accepted
• 11 = rejected
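These boundary checks could be encoded roughly as below; the `accepts_amount` function is an invented stand-in for the real field validation.

```python
# Sketch of the boundary-value checks listed above for a field that must
# accept amounts between Rs.1 and Rs.10.
def accepts_amount(amount):
    return 1 <= amount <= 10

boundary_tests = [
    (0, False),   # just below the lower boundary -> rejected
    (1, True),    # on the lower boundary -> accepted
    (2, True),    # just above the lower boundary -> accepted
    (9, True),    # just below the upper boundary -> accepted
    (10, True),   # on the upper boundary -> accepted
    (11, False),  # just above the upper boundary -> rejected
]

for amount, expected in boundary_tests:
    assert accepts_amount(amount) == expected, amount
print("all boundary-value checks passed")
```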

Page 37: Software Development Life Cycle

Error Guessing
• Based on experience, the test designer guesses the types of errors that could occur in a particular type of software and designs test cases to uncover them.

• A checklist should be maintained, based on experience gained in earlier tests, to improve the effectiveness of error guessing.

• There is a high probability that the kinds of defects that have been there in the past are the kinds that are going to be there now.

1. Error guessing is based mostly upon experience, with some assistance from other techniques such as BVA.
2. Based on experience, the test designer guesses the types of errors that could occur in a particular type of software and designs test cases to uncover them. For example, if any type of resource is allocated dynamically, a good place to look for errors is in the deallocation of resources. Are all resources correctly deallocated, or are some lost as the software executes?
3. To make maximum use of available experience and to add some structure to this test case design technique, it is a good idea to build a checklist of types of errors. This checklist can then be used to help "guess" where errors may occur within a unit.
4. The checklist should be maintained with the benefit of experience gained in earlier unit tests, helping to improve the overall effectiveness of error guessing.

Black Box
Exercise 1

A payroll program may, as part of its function, calculate tax payments. Tax is payable on net income, i.e. after certain deductions have been made. Assuming these deductions have been correctly calculated, the amount of tax payable will depend upon the net income remaining, e.g. net income at or under Rs.3000/- is taxed at 20%; income over this amount but at or below Rs.20700/- is taxed at 25%; income over Rs.20700/- is taxed at 40%.

Identify the equivalence classes and representative values for test cases (boundary values).

Exercise 2
Testing Scenario:

ABX Corporation is an e-retailer. They have just ventured into the IT Business. For this two teams have been formed at the vendor’s side: Development and Testing Team. The Dev Team has developed two modules in the application used for managing the client’s e-retailing business. There are two modules of the application: Order Management and Inventory Management.

i. What type of testing is required for testing each of these two modules in the application?

ii. The two modules in the application have been tested. Now it is required to combine the two modules together and test if they work in sync with each other. What type of testing is needed to be done at this stage?

iii. After integrating the two modules in the application, it is needed to test whether the application performs its intended purpose when used. What type of testing would be needed?

Page 38: Software Development Life Cycle

iv. During the Functional Testing the Testing Team found a major bug in the Order Management module of the application. The application is sent back to the Development Team for fixing the bug. As a result of fixing this bug in the Order Management module, the functioning of the Inventory Management module might also have been affected. What type of testing is needed to ensure that the bug fix in one module does not have an effect on the other module?

v. Now a set of core tests of basic GUI functionality are performed on the functionality to demonstrate connectivity to the database, application servers, printers, etc. This testing is performed as a cursory testing to determine if the application can be accepted from the development for detailed end to end testing. What type of testing is this?

vi. A software product has been developed and functionally tested. It is expected that the product can be deployed and run on both Windows and Unix environment. Given this requirement, what additional testing is required to ensure that the product is usable in both the environments?

vii. As per the client’s requirements, the application should be able to bear a load of 1000 users at one time. After the application is installed at the client’s side, it crashes with a load of only 500 users. What type of testing was required for the application?

viii. Now the actual users of the application test a completed information system. This testing is done to validate that the software meets a set of agreed criteria. What type of testing will this be called?

ix. For a product, the customer is invited into the vendor's facilities for testing to ensure the product meets the client’s requirements. This testing is a subset of the acceptance testing. What kind of testing is this?

x. The vendor of the application software offers some of its customers / end users an opportunity to use the new release, with the intent that they provide feedback, before it is launched to all its customers / end users. What kind of testing is this?

TEST DESIGN, TEST STRATEGY AND ELEMENTS OF TEST PLAN
Session Objectives
• Understand test design and strategy.
• Provide guidance on preparation of the test plan and understand each element of the test plan.
• Understand what a test case is, how to write a test case and how to execute it.

Test Design
• The design of tests is subject to the same basic engineering principles as the design of software.
• The design of the tests is driven by the specification of the software.
• Test design is decomposing the task of testing a system into smaller manageable activities, ultimately to a level that corresponds to the establishment of individual test cases.
• Good test design consists of a number of stages which progressively elaborate the design of tests:
• Test strategy
• Test planning
• Test specification

Page 39: Software Development Life Cycle

• Test procedure

Basic steps of detailed test design:
1. Identifying the items that should be tested.
2. Assigning priorities to these items, based on risk.
3. Developing high-level test designs for groups of related test items.
4. Designing individual test cases from the high-level designs.
Designing what test cases are required is a labor-intensive activity. No tool can automatically determine what test cases are needed for your system, as each system is different and test tools do not know what constitutes correct or incorrect operation. Test design requires the tester's experience, reasoning and intuition.

• The four stages of test design apply to all levels of testing.
• For example, for unit testing, tests are designed to verify that an individual unit implements all design decisions made in the unit's design specification.

Test Design: states what types of tests must be conducted, in what sequence and in how much time.
One master test plan is prepared to provide an overview of the entire testing effort. One or more detailed plans for each validation activity, such as unit testing or integration testing, can also be prepared.

(Figure: the testing process flow — Test Requirements Gathering, Test Strategizing, Test Planning, Scenario and Use-case design, Test case development, Test Execution, Defect Fixing, Bug Reporting and Tracking, Test Process Analysis, Test Cycle Closure — with Test Strategizing as the stage discussed next.)

Page 40: Software Development Life Cycle

Test Strategy
• A basic principle of the testing process is to plan for testing early in the SDLC.
• For large projects, this starts in the analysis phase.
• The Testing Strategy provides overall guidance and direction for the rest of the testing process.
• Preparing a Testing Strategy helps the project team think through the work at a high level, concentrating on the requirements of the testing events, the approach that will be used, and the resources necessary to complete the testing.

1. One of the basic principles of the testing process is to plan for testing early in the development life cycle.
2. For large projects, this starts in the analysis phase with the formulation of the Testing Strategy, which you later translate into the lower-level Testing Plan.
3. The Testing Strategy provides overall guidance and direction for the rest of the testing process.

Testing Strategies:
• The objective of testing is to reduce the risks inherent in the computer system; the strategy must address those risks and present a process that can reduce them.
• Two components of the testing strategy are:
o Test Factor: the risk or issue that needs to be addressed as part of the test strategy. The strategy will select those factors that need to be addressed in the testing of a specific application.
o Test Phase: the phase of the system development life cycle in which the testing will occur.
• A strategy for software testing integrates the software test case design methods into a well-planned series of steps that result in the successful construction of the software.
• It provides a road map for:
o Software developers,
o The quality assurance organizations,
o The customer.
• The road map describes the steps to be undertaken while testing, and the effort, time and resources required for the testing.
• The test strategy should incorporate test planning, test case design, test execution, resultant data collection and data analysis.
• In designing a test strategy, the risk factors become the basis or objective of the testing.
• A strategy must provide guidance for the tester and a set of milestones for the manager.
• Developing a Testing Strategy:
o Select and rank the test factors,
o Identify the system development phases,
o Identify the business risks associated with the system under development,
o Place the risks in the Test Factor / Test Phase matrix.

A strategy for software testing integrates software test case design methods into a well-planned series of steps that result in the successful construction of software.
A test strategy answers the following questions:
• What is the objective, time and cost of testing?

Page 41: Software Development Life Cycle

• What is the technical architecture of the application/product?
• What are the various types of testing to be performed?
• What is the development methodology for the application/product?
• What should be tested and how? What should not be tested and why?
• Should the entire product/application be tested as a whole, or should tests be run only on a certain part of it?
• As new components are added to a large system, should the tests that have already been conducted be re-run?
• What are the tools that can be used?
• What should the resources be trained on?
• When should the end-user be involved?

Why is Test Strategizing and Planning essential?
• To define the objectives, timelines and approach for the testing effort.
• To list the various testing activities and define roles and responsibilities.
• To identify the test environment and data requirements before the start of the testing phase.
• To communicate the testing plans to stakeholders and obtain buy-in from business clients.
• To minimize firefighting during the testing phase by identifying the risks earlier (good planning is half the job done).
• To optimize the use of resources (people, machines, time, cost).

The callouts are parts of the objective that need to be met during test strategy design. The complete objective is defined in the yellow-coloured oblong of the figure below.

Test planning/Strategy
• Includes:
• Testing objectives and goals
• Test strategy/approach based on customer priorities
• Test environment (hardware, software, network, communication, etc.)

(Figure: objectives to be met while designing a test strategy: "To provide an optimal solution, striking a balance between time, cost and quality in order to meet stakeholders' expectations", with callouts for an optimal solution in a limited timeframe, effort estimation to maximize usage of resources, identifying the types of testing, the scope of testing, and identification of tools.)

Page 42: Software Development Life Cycle

• Features to test, with priority/criticality
• Test deliverables
• Test procedure – activities and tools
• Test entry and exit criteria
• Test organization and scheduling
• Testing resources and infrastructure
• Test measurements/metrics

• Benefits
• Sets clear and common objectives
• Helps prioritize tests
• Facilitates technical tasks
• Helps improve coverage
• Provides structure to activities
• Improves communication
• Streamlines tasks, roles and responsibilities
• Improves test efficiency
• Improves test measurability

Test Optimization
• Categorize the entire gamut of testing into Sanity / Regression / Performance and Stress.
• Efficient review process.
• Use tools to do the testing (both automation and simulators).
• Look out for automation if it helps (leverage 24 hours a day).
• Use a traceability matrix.
• Tracking and reporting of bugs and defects – define a process.
• Revisit the test coverage and categorization on an ongoing basis.
• A strategy needs to be formulated for mitigating risk.
• A strategy for re-testing, essentially regression testing, also needs to be in place.

1. The amount of testing to be performed is directly related to the amount of risk involved.
The 4 methods for conducting risk analysis? Judgment and instinct; cost estimate (cost of failure); identification and weighting of risk attributes; software risk assessment packages.
Name 3 primary testing risks: budget, test environment, number of qualified test resources.

Name 3 strategic risks associated with the development and installation of a computer system: incorrect results produced, unauthorized transactions accepted by the system, loss of computer file integrity.

Test Strategy
• A test strategy balances the corporate and technical requirements of the project.
• A test strategy must consider:
1. What risks are most critical to your system.
2. Where in the development process defects are introduced that contribute to these risks.

Page 43: Software Development Life Cycle

3. Propose testing approaches to detect these defects.

Testing Strategies:
The objective of testing is to reduce the risks inherent in the computer system; the strategy must address those risks and present a process that can reduce them.
Two components of the testing strategy are:
Test Factor: the risk or issue that needs to be addressed as part of the test strategy. The strategy will select those factors that need to be addressed in the testing of a specific application.
Test Phase: the phase of the system development life cycle in which the testing will occur.
Risks:
1. Making a parachute jump
2. Investing in the stock market
3. Piloting a new aircraft
4. Loaning to a friend
5. Disarming a bomb
Risk is the probability that undesirable things will happen, such as loss of life or large financial loss.

We need to work out how risky they are before we do them.

The systems we develop, when they don't work properly, have consequences that can vary from mildly irritating to catastrophic. Testing these systems should involve informed, conscious risk management. We can't do everything. We have to make compromises. But we don't want to take risks that are high. Key questions must be asked: Who is going to use the product? What is it being used for? What is the danger of it going wrong? What are the consequences if it does go wrong? Is it loss of money? Loss of customer satisfaction?

So decisions have to be made based on the risks involved. When risk is being used as a basis for testing choices, we are doing the rational thing by choosing the parts of the system that have the most serious consequences and focusing our attention on them.
Another basis for choosing the testing focus is frequency of use. If a part of the system is used often, and it has an error in it, its frequent use alone makes the chances of a failure emerging much higher. It is also rational to focus on those parts of the system or program that are most likely to have errors in them.
There are many reasons why there may be extra risks involved in a particular piece of software, but the point is to look at them and make informed decisions based on that observation. The reason may be inherent in how that part of the system was created: it may be because the development team is constructed in a certain way and a junior person has written a complex piece of code, so that there's an extra risk in that part of the product; it could have to do with the schedule (too tight); it could have to do with the development resources (insufficient); it could have to do with the budget (too tight again).

Page 44: Software Development Life Cycle

• How do you create a test strategy?
• The test strategy is a formal description of how a software product will be tested.
• A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team.

• The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.

• The objective of testing is to reduce the risks inherent in the system; the strategy must address these risks and present a process that can reduce them.

• Components of the testing strategy are:
• Test Factor: attributes of the software that, if not present, pose a risk to the success of the software. The risk or issue that needs to be addressed as part of the test strategy. The strategy will select those factors that need to be addressed in the testing of a specific application.
• Test Phase: the phase of the system development life cycle in which the testing will occur.
• Test Tactics: the test plans, test criteria, techniques and tools used to assess the software system.


Page 45: Software Development Life Cycle


• Defect: a specific cause of a failure, e.g. incorrect parameters in a write-pay-cheques function call.
• Failure: inability of the system to perform its intended function, e.g. prints pay cheques with the wrong value.
• Risk: the chance of incurring a certain cost over a given period resulting from a failure.

• Risk based Testing
• As pointed out earlier, it is impossible to test everything. Furthermore, testing must be carried out within limited budgets and schedules.
• Whenever there is not enough time and resources to test everything, decisions must be made to prioritize what is tested.
• To prioritize, we must decide what the most important parts of the system are (those parts that have the greatest impact if they do not operate correctly or behave unexpectedly).
• To prioritize testing we compare the risks of system failures: the likelihood of incurring damage if a defect in the system causes the system to operate incorrectly or unexpectedly.

Risk is the product of two attributes of a failure:
• Severity of impact
• Likelihood of occurrence
For example, a 50% chance of issuing cheques that total more than Rs. 1 million in overpayment during the lifetime of the system.

• To prioritize testing of the system based on risk, the tester must break the system into smaller components and then determine the risk of each component.

• Those components with higher risk receive higher priority for testing, those of lower risk receive lower priority.
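A minimal sketch of this prioritization, assuming invented components and ratings, is shown below; in practice the severity and likelihood values come from the project's own risk analysis.

```python
# Sketch of risk-based prioritization: risk score = severity of impact
# multiplied by likelihood of occurrence. All data here is hypothetical.
components = [
    # (component,          severity 1-5, likelihood 1-5)
    ("Payment posting",    5,            4),
    ("Cheque printing",    5,            2),
    ("Report formatting",  2,            3),
    ("Help screens",       1,            2),
]

ranked = sorted(components, key=lambda c: c[1] * c[2], reverse=True)
for name, severity, likelihood in ranked:
    print(f"{name:20} risk score = {severity * likelihood:2}  "
          f"(severity {severity} x likelihood {likelihood})")
# Components at the top of the ranking receive the most testing attention.
```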

Page 46: Software Development Life Cycle

• It provides a road map for:
• Software developers
• The quality assurance organizations
• The customer
• The road map describes the steps to be undertaken while testing, and the effort, time and resources required for the testing.
• The Test Strategy should incorporate test planning, test case design, test execution, resultant data collection and data analysis.
• A strategy must provide guidance for the tester and a set of milestones for the manager.


(Figure: the testing process flow — Test Requirements verification, Test Strategizing, Test Planning, Scenario and Use-case design, Test case development, Test Execution, Defect Fixing, Bug Reporting and Tracking, Test Process Analysis, Test Cycle Closure — with Test Planning as the stage discussed next.)

Page 47: Software Development Life Cycle

Test Plan

What is a Test plan?
• A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. It also describes the tasks, schedules, approach, resources and tools for integrating and testing the software application.

• The Test Plan describes the design of the testing process, which should fall within the overall umbrella of the Testing Strategy.

• The Test Plan includes the details of:
• The testing process, including the types of tests to be performed.
• The testing environment, testing tools, and the people doing the testing.
• What the test scenarios look like.
• The expected results.
• How to track errors.
• Who will test and where the testing will take place, etc.

Test Plan:
• What we are going to do,
• How we are going to do it,
• What testing methods we are going to use,
• What documents we are referring to,
• What resources are required,
• How the work is distributed,
• How long it will take,
• What the test completion criteria are,
• How we are measuring testing effectiveness.
Developing the test plan
The purpose of the test plan is to communicate the intent of the testing activities. It is critical that this document be created as early as possible. It may be desirable to develop the test plan iteratively, adding sections as the information becomes available, with constant updates as changes occur. The test plan is a document that describes the overall approach to testing and its objectives, and contains the following headers:
• Test Scope
• Test Objectives
• Assumptions
• Risk Analysis
• Test Design
• Roles and Responsibilities
• Schedules and Resources
• Test Data Management
• Test Environment
• Communication Approach
• Test Tools

Page 48: Software Development Life Cycle

Test Planning (contd)

• The Test Plan should include:
• Scope of the application being tested
• Approach to be taken (in detail)
• A tentative go-live date
• A list of functions and procedures to be tested and their priority
• A list of who is responsible for testing each function and procedure
• Various milestones and timelines
• Details of the resources
• List of all deliverables
• Risks and mitigation
• Roles/responsibilities of the various players: business analysts, technical analysts, development team, testing team, etc.
• In the process, the Test Plan answers such questions as:
• What is being tested?
• What are the pass/fail criteria?
• When will each test occur?
• What hardware and software environment is required?
• What features must be tested?
• What features will not be tested?
• What are the responsibilities of individuals and organizations involved in the project?
• Identify the various modules and decide which modules/functionalities will be tested. Clearly mention which items are out of scope for testing.

• List down the priority of testable items (conflict-resolution between the various players is of importance)

• The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product.

• The completed document will help people outside the test group understand the 'why' and 'how' of product validation.

• It should be thorough enough to be useful but not so thorough that no one outside the test group will read it.

Page 49: Software Development Life Cycle

Test Strategy is a high-level document at an overall project level. The strategy will address all factors to be taken into consideration for execution of a testing-project. This will take into account critical factors for all types of testing (that affect the whole project), but not every detail about each type of testing.

Test Plan is prepared for EACH different type of testing and is more detailed for that particular type of testing. It will include the execution details for EACH type of testing - team composition, individual roles/responsibilities, etc.

A test plan is basically derived from a test strategy document.

Differentiate between test strategy and test plan:

• Test Strategy should define “What and Why” of testing, while the Test Plan should describe “How”.

• Test Strategy talks about the entire project involving all types of testing, but at a higher level. This is more of a high-level doc.

• Test Plan is prepared for each type of testing. This talks about the intricacies involved for each type of testing. This is a detailed plan for execution of each testing type.


(Figure: Test Strategy vs. Test Plan — the Test Strategy sits at the project level; the Functional Test Plan, Performance Test Plan and other test plans are specific to each type of testing.)

Page 50: Software Development Life Cycle


Test Plan
The following are some of the items that might be included in the test plan, depending upon the project.

1. Brief history of the project
• In this section, briefly discuss the project background and objectives. Write a brief history of what the project is all about and what the project can do.
• The focus area of the project.
• What reports can be generated.
• The platform on which the application is running.

Software Description: Provide a chart and briefly describe the inputs, outputs and functions of the software being tested, as a frame of reference for the test descriptions.

2. Objectives and purpose
• Describes the testing strategy and approach to testing.
• Lists the various resources required for the successful completion of the project.
• Defines the activities to prepare for the various tests and the UAT.
• Defines the deliverables and responsible parties.
• Communicates the various dependencies and risks.
• Lists the intended audience for this document.

3. Scope
• Identifies which portions of the application are covered under the test and the features being tested, such as installation, transactions, reports, etc.

Test Objective: States the goal of the testing, i.e. what the tester is expected to accomplish or validate during the testing. This guides the development of test cases, procedures and test data, and enables the tester and manager to gauge testing progress and success.

Test Scope: Answers two questions - What is covered in the test? And what is not covered in the test?

4. Test Coverage
4.1 Features to be tested. Here list out the features to be tested, such as:
4.1.1 User interfaces
4.1.2 Generic test conditions
4.1.3 Browser type
4.1.4 Software interfaces to be tested
4.1.5 Security
4.1.6 Communication interface
4.1.7 Installation
4.1.8 Transactions
4.1.9 Login details
4.1.10 Master details


Page 51: Software Development Life Cycle

The Test Plan and Strategy utilize a balance of testing techniques to cover a representative sample of the system. The test planning process is a critical step in the testing process. Since exhaustive testing is impractical, and testing each and every scenario with each variable is time consuming and expensive, the project team establishes a test coverage goal based on the risks and criticality associated with the application under test. For projects where the application supports critical functions, like air traffic control, military defense systems, nuclear power plants or space technology, the coverage goal may be 100% at all stages of testing. One way to leverage a dynamic analyzer during system test is to begin by generating test cases based on functional or black-box test techniques. When functional testing provides a diminishing rate of additional coverage for the effort expended, use the coverage results to conduct additional white-box or structural testing on the remaining parts of the application until the coverage goal is achieved.

4.2 Features not to be tested
• Here list out the features not to be tested, such as:
• Reports, if not ready
• Utilities, if not ready

4.3 Test criteria
• This sub-section identifies the testing coverage that will be followed, e.g.:
4.3.1 Functionality testing
4.3.2 Volume testing
4.3.3 Usability testing
4.3.4 Security testing
4.3.5 System testing
4.3.6 Storage testing
4.3.7 Performance testing
4.3.8 Automated testing
4.3.9 Configuration testing
4.3.10 Compatibility testing
4.3.11 Recovery testing

Page 52: Software Development Life Cycle

This section contains a summary of the overall testing schedule. It is not at the activity level, but it should contain more detail than the major milestones included as a part of the Testing Strategy.

6. Control procedures
6.1 Reviews
The project team performs reviews for each phase, e.g.:
Requirements review
Design review
Code review
Test plan review

6.2 Bug review meetings
Regular weekly meetings are held to discuss reported bugs. The development department will provide status/updates on all the bugs reported. All members of the project team will participate. The test department will provide additional defect information if required.

6.3 Change Request
If functional changes are required, the proposed changes are discussed with the Change Control Board (CCB). The CCB will determine the impact of the change and whether or when it should be implemented.

6.4 Defect Reporting

When defects are found, the testers will complete a defect report on the defect tracking system.

5. Test Schedule
This section contains a summary of the overall testing schedule. The schedule of the various activities and their milestones are to be detailed here.

Testing Milestone/Event | Start Date | End Date
Testing strategy | dd/mm/yy | dd/mm/yy
Testing plan | dd/mm/yy | dd/mm/yy
System familiarization | dd/mm/yy | dd/mm/yy
System tests | dd/mm/yy | dd/mm/yy
UAT | dd/mm/yy | dd/mm/yy

Test Plan

Page 53: Software Development Life Cycle

The defect tracking system is accessible by testers, developers and all members of the project team.

• When a defect has been fixed or more information is needed, the developer will change the status of the defect to indicate the current state.

• Once a defect is verified as FIXED by the testers, the testers will close the defect report.

 

7.0 Resources and Responsibilities
7.1 Resources

7.1.1 Personnel
The list of personnel required for the formation of the test team is detailed here, e.g. a Project Manager, Test Lead and Testers.

7.1.2 Training
The type and duration of training required for the testing team is detailed here, e.g. domain knowledge, knowledge transfer, automation tools, etc.

7.1.3 Testing environment
The list of hardware and software requirements, communication, security level, automation tools, etc.

Roles and Responsibilities: States who is responsible for each stage of testing.

Test Schedule and Planned Resources: States the major test activities, their sequence, dependence on other project activities, and an initial estimate for each activity. Resource planning includes the people, tools, facilities, etc.

Test Environment: States the environment required for each stage of the testing.
Tools: States the tools needed for the testing in the different phases.
Testing Tools:
• Tools are needed to help the testing.
• The kind of tools needed depends upon the kind of testing to be performed and the environment in which the tests will be performed.
• Tool selection depends upon the following criteria:
o Test Phase,
o Test Objective,
o Test Targets or Deliverables,
o Test Techniques,
o Software Category,
o Test History (error/defect history).

7.2 Responsibilities
The responsibilities of each member of the team are detailed below.

Project Manager: Responsible for project schedules and the overall success of the project. Participates in the CCB. Coordinates with the customer.

Lead Developer: Serves as the primary contact/liaison between the development department and the project team. Participates in the CCB.

Page 54: Software Development Life Cycle

Test Lead: Ensures the overall success of the test cycles. Coordinates weekly meetings and communicates the testing status to the project team. Participates in the CCB.

Testers: Responsible for performing the actual system testing.

User: Will perform the Beta and User Acceptance testing.

8.0 System Test Procedure
8.1 Activities, techniques and tools

• Test scenarios and/or use cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures.

• Test procedures define test conditions, data to be used for testing and expected results, including database updates, file outputs, report results. Generally speaking...

• Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.

• Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.

• It is the test team who, with assistance of developers and clients, develops test cases and scenarios for integration and system testing.

• Test scenarios are executed through the use of test procedures or scripts.   

Scenario and Use-case Design
Use Cases:
• A use case is a sequence of actions performed by a system, which together produce results required by users of the system.

(Figure: the testing process flow within project execution — Test Requirements verification, Test Strategizing, Test Planning, Scenario and Use-case design, Test case development, Test Execution, Defect Fixing, Bug Reporting and Tracking, Test Process Analysis, Test Cycle Closure — with Scenario and Use-case design as the stage discussed next.)

Page 55: Software Development Life Cycle

• It defines process flows through a system based on its likely use.

• Tests derived from use cases help uncover defects in process flows during actual use of the system.

• Use cases also involve the interaction of different features and functions of the system. For this reason, tests derived from use cases will help uncover integration errors.

Scenarios and their importance in testing
The pre-requisite to test-scenario design is that the requirements gathering phase is completed and a requirements document has been prepared. Scenarios are generated based on the understanding of users' needs (as perceived in requirements gathering and analysis). Scenario design maximizes success in communicating intentions, requirements and constraints among team members, and to the client. Testers have greater chances of uncovering most of the alternative scenarios, including cases of failure. Scenarios serve as a ground for negotiating application features, from general application characteristics to implementation details.

Use Cases:
Use cases tell the customer what to expect, the developer what to code, the technical writer what to document, and the tester what to test. Each use case represents a big chunk of functionality that will be implemented. Use cases have actors, which represent someone or something outside the system that interacts with it.

Scenario Testing: The Use-case method
Scenario and Use-case Design

A use case fully describes a sequence of actions performed to provide an observable result of value to a person or another system using the software-under-test.

OR

A use case describes a scenario in which a user interacts with the system being defined to achieve a specific goal or accomplish a particular task.

Page 56: Software Development Life Cycle

Test Plan

8.2 Test execution procedure
• Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.

The most important part of a use case for generating test cases is the flow of events

The two main parts of the flow of events are:
• Basic flow of events: covers what "normally" happens when the use case is performed.
• Alternate flows of events: cover behavior of an optional or exceptional character relative to the normal behavior, and also variations of the normal behavior.

A use-case scenario is an instance of a use case, or a complete "path" through the use case

Scenario and Use-case Design (contd)

Note: The terms “scenario” and “use case” are sometimes used as synonyms and sometimes they are distinguished by defining a scenario to be a specific realization of a use case.

(Figure: Use Case, Scenario and Test Case — each requirement maps to a use case, each use case expands into one or more scenarios, and each scenario is covered by one or more test cases.)
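One way to picture the hierarchy in the figure is sketched below: the basic and alternate flows of a use case each give a scenario, and each scenario gets at least one test case. The use case content and identifiers are invented for illustration.

```python
# Sketch of requirement -> use case -> scenarios -> test cases.
use_case = {
    "requirement": "Customer can withdraw cash from the ATM",
    "name": "Withdraw Cash",
    "basic_flow": ["insert card", "enter valid PIN", "request amount <= balance",
                   "dispense cash", "eject card"],
    "alternate_flows": {
        "invalid PIN": ["insert card", "enter wrong PIN", "show error", "eject card"],
        "insufficient balance": ["insert card", "enter valid PIN",
                                 "request amount > balance", "show error"],
    },
}

# One scenario per complete path through the use case.
scenarios = {"basic flow": use_case["basic_flow"], **use_case["alternate_flows"]}

for i, (scenario, steps) in enumerate(scenarios.items(), start=1):
    print(f"TC-{i:02}  scenario: {scenario:22} steps: {len(steps)}")
```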

Page 57: Software Development Life Cycle

• Test procedures or scripts include the specific data that will be used for testing the process or transaction.
• Test procedures or scripts may cover multiple test scenarios.
• Test scripts are mapped back to the requirements, and traceability matrices are used to ensure each test is within scope (see the sketch after this list).
• Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.
• Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing.
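A minimal sketch of such a traceability matrix is shown below; the requirement and test-case identifiers are invented, and a real matrix would usually be kept in a spreadsheet or test-management tool.

```python
# Minimal traceability matrix: each requirement is mapped to the test
# scripts that cover it, so untested requirements stand out.
traceability = {
    "REQ-001 Open account":    ["TC-101", "TC-102"],
    "REQ-002 Deposit amount":  ["TC-103"],
    "REQ-003 Withdraw amount": ["TC-104", "TC-105"],
    "REQ-004 Close account":   [],            # gap: no test case yet
}

for requirement, test_cases in traceability.items():
    status = ", ".join(test_cases) if test_cases else "NOT COVERED"
    print(f"{requirement:28} -> {status}")
```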

Contd...
Test Procedure: recommended steps in the test process:
a. Test Criteria: the questions to be answered by the test team.
b. Assessment: the test team's evaluation of the test criteria.
c. Recommended Tests: the recommended tests to be conducted.
d. Test Techniques: the recommended test techniques to be used in evaluating the test criteria.
e. Test Tools: the tools to be used to accomplish the test techniques.

Test Data Management: States the data set required for the testing and the infrastructure required to maintain the data. Includes the methods for preparing the test data.

9.0 Completion Criteria
• Completion means knowing when to stop. You can test forever and still not cover every possibility.
• The goal is to guarantee a quality product that is satisfactory to the users in a finite amount of time.
• Risk analysis and equivalence analysis can help to reduce the amount of testing.
• Metrics can determine the release threshold.
• Function points covered, time between failures, and number of tests completed can all be used to determine a reasonable stopping point.
• The product should meet all the requirements as mentioned in the SRS, and the application should perform as expected in production.
• Defects should be minimal and should not exceed the limits mentioned in the SPP.

Test completion criteria:
• Wrong criteria:
o Stop when the scheduled time for testing expires.
o Stop when all the test cases execute without detecting errors.
• Correct criteria:
o The test manager is able to report, with some confidence, that the application will perform as expected in production.
o This can be decided based on whether the quality goals defined at the start of the project have been met.
o Whether there are any open defects, and their severity level.
o The risks associated with the product moving to production.

Completion Criteria

Page 58: Software Development Life Cycle

There are a number of different ways to determine the test phase of the software life cycle is complete. Some common examples are: 

• All black-box test cases are run
• White-box test coverage targets are met
• Rate of fault discovery goes below a target value
• Target percentage of all faults in the system are found
• Measured reliability of the system achieves its target value (mean time to failure)
• Test phase time or resources are exhausted

When we begin to talk about completion criteria, we move naturally into a discussion of software testing metrics.

The test results should be reviewed by PL and approved by PM or Customer.

10. Test start criteria

11. Test stop criteria

12. Suspension criteria
• Identifies the conditions under which testing can be suspended and what testing activities are to be repeated if testing resumes:
Cannot proceed further with testing
Run-time error
Yellow page (link not available in a Web-based application)
Interface errors (specific to database connectivity)

13. Resumption Criteria
When the above problems are fixed.

Entrance Criteria: defines the required conditions and standards for product quality that must be present or met for entry into the next stage of the development stage. Exit Criteria: defines standards for product quality, which block the promotion of defects to subsequent stages of the development process. Examples: System test entrance criteria:

• Successful execution of the integration test plan
• No open severity 1 or severity 2 defects
• 75-80% of total system functionality and 90% of major functionality delivered
• System stability for 48-72 hours prior to start of test

 System test exit criteria:

• Successful execution of the system test plan
• System meets pre-defined quality goals
• 100% of total system functionality delivered

14. Test Deliverables

The list of deliverables and the persons responsible for them:

• Test plan
• Test case review

Page 59: Software Development Life Cycle

• Requirements validation matrix
• Final test summary reports
• Test coverage
• Developed and executed test cases
• Test results & defect reports
• UAT

15. Dependencies
15.1 Personnel Dependencies
The test team requires experienced testers to develop, perform and validate tests.

15.2 Software Dependencies
The source code must be unit tested and provided within the scheduled time outlined in the project schedule.

15.3 Hardware Dependencies
The required hardware needs to be available within normal working hours; any downtime will affect the test schedule.

15.4 Test Data and Database
Test data (mock information) and the database should also be available to the testers for use during testing.

16. Risks
Testing risks are circumstances and events that may occur before or during testing that would have an adverse impact on the testing process. For each high- and medium-level risk, determine what activities you will do to ensure that the risk does not occur. These risk plan activities should also be moved to the project work plan.

16.1 Schedule
The schedule for each phase is very aggressive and could affect testing. A slip in the schedule in one or other of the phases could result in a subsequent slip in the test phase. Close project management is crucial to meeting the forecasted completion date.

16.2 Technical
In the event of failure of the application, the old system can be used. Tests will be run in parallel with the production system so that there is no downtime of the current system.

16.3 Requirements

The test plan and test schedule are based on the current requirements document. Any changes in the requirement could affect the test schedule and will need to be approved by the CCB.

Risk Analysis: This documents the risks associated with the testing and their possible impact on the test effort. The possible risks are system integration, regression testing, new tools used, new technology used, skill level of the testers, testing techniques used, etc.
Risk Analysis
We cannot eliminate risks, but we can reduce their occurrence and/or their impact or loss. Too little testing is a crime; too much testing is a sin.
Risk Identification

Page 60: Software Development Life Cycle

Risk is identified as the product of the probability of the negative event occurring and the impact or potential loss associated with the negative event. Testing is driven by the impact of the risk. Low-risk projects with minimal functionality require the least amount of testing, while high-risk projects like airport air traffic control, nuclear plants and defense systems require rigorous and thorough testing.

FIVE dimensions of risk:
1. Technology integration
2. Size and complexity
3. System environment and stability
4. Criticality / mission impact
5. Reliability and integrity

FOUR methods for conducting risk analysis:
1. Judgment and instinct
2. Estimate of the cost of failure
3. Identification and weighing of the risk attributes
4. Software risk assessment packages

16.4 Personnel
It is very important to have experienced testers on the project. Unexpected turnover can impact the schedule. If attrition does happen, all efforts must be made to replace the experienced individual.

Example risk-plan entry:
Testing risk: The team is not familiar with the Web testing tools we will be using on this project.
Level (H/M/L): H
Risk plan: Try to find at least one person who has used the tools before. Send at least two team members to formal training. Set up follow-up training sessions to cross-train the rest of the developers.

17. Metrics
The Test Report, containing the details of the test runs and defect analysis, will be prepared by each tester.

The data from all the Test Reports shall be consolidated in the defect log.

Describe any metrics you will capture as part of the testing process, such as total components tested, total defects per testing event, average time for defect correction, total number of hours spent on testing, total cost of testing, etc.

18. Documentation
The following documentation should be available at the end of the test phase:
• Test plan
• Test cases
• Test case review
• Requirements validation matrix
• Defect reports
• Final test summary reports

Testing Approach/Specification
• Describe the testing process at a high level, including how you will conduct unit testing, integration testing, system testing, and acceptance testing.
• This is where fundamental decisions are made about the type of testing that makes sense for your project.
• If you are doing iterative development cycles, the testing approach will reflect this overall development life cycle.
• For system testing, define the major testing events such as stress testing, security testing, disaster-recovery testing, usability testing, and response-time testing.
• A unit test specification should include positive testing, that the unit does what it is supposed to do, and also negative testing, that the unit does not do anything that it is not supposed to do (see the sketch below).
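To make the positive/negative distinction concrete, here is a minimal sketch using Python's unittest; the withdraw function and its rules are hypothetical stand-ins for the unit under test:

```python
# Minimal sketch of positive and negative unit tests, assuming a hypothetical
# function `withdraw(balance, amount)` that returns the new balance and
# raises ValueError when the amount exceeds the balance.
import unittest

def withdraw(balance, amount):
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class WithdrawTests(unittest.TestCase):
    def test_positive_valid_withdrawal(self):
        # Positive test: the unit does what it is supposed to do.
        self.assertEqual(withdraw(100, 40), 60)

    def test_negative_overdraft_rejected(self):
        # Negative test: the unit does not do what it is not supposed to do.
        with self.assertRaises(ValueError):
            withdraw(100, 150)

if __name__ == "__main__":
    unittest.main()
```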

Purpose of a Test Specification
To specify refinements of the test approach and to identify the features to be covered by the design and its associated tests. It also identifies the test cases and test procedures.

This is where fundamental decisions are made about the type of testing that makes sense for your project. For instance, if you are implementing a packaged solution, the approach may start in system testing, with the vendor providing close support.

It states what types of tests must be conducted, in what sequence, and how much time is allotted to each.

Test approach
How do we organise the tests? Are they requirements-based, function-based, or internal structure-based tests? How do we categorise them? What is the logical grouping of tests we intend to execute together?


A test case is a unit of testing activity. Test cases basically have three parts, described below.

[Test life cycle diagram: Test Planning, Test Strategizing, Test Requirements Verification, Scenario and Use-case Design, Test Case Development, Test Execution, Bug Reporting and Tracking, Defect Fixing, Test Process Analysis, Test Cycle Closure - current stage: Test Case Development]

Test Case Development – Definition of a Test Case

A test case is a set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

A test case should contain particulars such as test case identifier, test case name, objective, test conditions / setup, input data requirements, steps, and expected results.

A good test case is one that has a high probability of finding an as-yet undiscovered error.

A successful test case is one that uncovers an as-yet undiscovered error.

A good test case is also economical (no unnecessary steps) and traceable (to a requirement).


• Goal – the aspect of the system being tested.
• Input and System State – the data provided to the system.
• Expected behavior – the output or action the system should take according to its requirements.

(System State: the state of the system (data, active screen, etc.) that needs to exist just before the test case is executed; in other words, the prerequisite for the test case to be executed.)
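The three-part structure above, together with the particulars listed earlier (identifier, name, objective, setup, input data, steps, expected results, traceability), could be captured in a simple record type. The following is a minimal sketch; the field names and the login example are illustrative assumptions, not a prescribed format:

```python
# A minimal sketch of the test case structure described above.
# Field names and example values are illustrative, not a mandated template.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    identifier: str                 # unique test case ID
    name: str                       # short descriptive name
    goal: str                       # the aspect of the system being tested
    system_state: str               # prerequisite state before execution
    input_data: dict = field(default_factory=dict)   # data provided to the system
    steps: list = field(default_factory=list)        # ordered actions to perform
    expected_behavior: str = ""     # output/action required by the requirements
    traces_to: str = ""             # requirement ID this case is traceable to

# Hypothetical example:
tc_login_001 = TestCase(
    identifier="TC-LOGIN-001",
    name="Valid login",
    goal="Verify that a registered user can log in",
    system_state="User account 'alice' exists and is active",
    input_data={"username": "alice", "password": "correct-password"},
    steps=["Open the login screen", "Enter credentials", "Press <OK>"],
    expected_behavior="The home screen is displayed",
    traces_to="REQ-AUTH-01",
)
```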

Incomplete, incorrect and missing test cases can cause incomplete and erroneous test results. All required test cases should be identified so that all system functionality requirements are tested. Test cases can be developed with system users and designers as the use cases are being developed. Using the use-case approach ensures that not only requirements but also expectations are met.

Generating Test Cases From Use Cases
A three-step process can be described for generating test cases from a fully-detailed use case:
• Step One: Generate Scenarios (see the sketch after this list)
o For each use case, generate a full set of use-case scenarios.
o Read the use-case textual description and identify each combination of main and alternate flows -- the scenarios -- and create a scenario matrix.
• Step Two: Identify Test Cases
o For each scenario, identify at least one test case and the conditions that will make it "execute".
o Reread the use-case textual description and find the conditions or data elements required to execute the various scenarios.
• Step Three: Identify Data Values to Test
o For each test case, identify the data values with which to test and determine the expected results.
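Step One can be illustrated with a small script that enumerates combinations of the basic flow with the alternate flows to form a simple scenario matrix; the use-case flows shown are hypothetical:

```python
# A minimal sketch of "Step One: Generate Scenarios": enumerate combinations of
# the basic flow with the alternate flows to build a simple scenario matrix.
# The use-case flows below are hypothetical.
from itertools import combinations

basic_flow = "Basic flow: place an order"
alternate_flows = [
    "A1: item out of stock",
    "A2: payment declined",
    "A3: delivery address invalid",
]

scenarios = [("Scenario 1", [basic_flow])]  # the happy path first
counter = 2
# One scenario per single alternate flow, then pairs of alternate flows, etc.
for size in range(1, len(alternate_flows) + 1):
    for combo in combinations(alternate_flows, size):
        scenarios.append((f"Scenario {counter}", [basic_flow, *combo]))
        counter += 1

for name, flows in scenarios:
    print(name, "->", " + ".join(flows))
```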

Test Case Development – Characteristics of a Good Test Case

A good test case is characterized by the 4 E's:
• Effective - it has a high probability of finding an error (finds faults).
• Exemplary - it represents other test cases that have a similar intent, given time and resource limitations (represents others).
• Evolvable - it is not redundant and is easy to maintain.
• Economic - it is neither too simple nor too complex, and is inexpensive to run.


There are two ways to select test cases systematically:

• Without regard to the internal structure of the system: testing to specifications, data-driven testing, input/output-driven testing, functional testing, or black-box testing.
• Based on the internal structure of the system: testing to code, path-oriented testing, logic-driven testing, structural testing, glass-box testing, or white-box testing.

Test Case - how to write a test case, step by step:

Step 1: Enter new name and address. Press <OK>. Result: Displays screen 008, new name details.
Step 2: Fill all blanks with natural data. Make a screen grab. Press <OK>. Result: Displays screen 005, maintenance.
Step 3: Click on the <Inquiry> button. Result: Displays screen 009, inquiry details.
Step 4: Enter the name from the screen grab. Press <OK>. Result: Displays screen 010, record detail.
Step 5: Compare the record detail with the screen grab. Result: All details match exactly.

A simple test case template:
• Test Category: type of test (screen layout, data input, etc.)
• Test No.: unique serial number for the sub-test case required to test the individual test category
• Input Condition: state of the class before the test
• Event: sequence of operations to be carried out
• Expected Result: output to be produced
• Actual Result: actual result after executing the test case


Test Data - Classification of Data Types
• Environmental data - Environmental data tells the system about its technical environment. It includes communications addresses, directory trees and paths, and environmental variables. The current date and time can be seen as environmental data.

• Setup data - Setup data tells the system about the business rules. It might include a cross reference between country and delivery cost or method, or methods of debt collection from different kinds of customers.

• Transitional data - Transitional data is data that exists only within the program, during processing of input data. Transitional data is not seen outside the system (arguably, test handles and instrumentation make it output data), but its state can be inferred from actions that the system has taken. Typically held in internal system variables, it is temporary and is lost at the end of processing.

• Input data - Input data is the information input by day-to-day system functions. Accounts, products, orders, actions, documents can all be input data. For the purposes of testing, it is useful to split the categorization once more:

• FIXED INPUT DATA - Fixed input data is available before the start of the test, and can be seen as part of the test conditions.

• CONSUMABLE INPUT DATA - Consumable input data forms the test input

• Output data - Output data is all the data that a system outputs as a result of processing input data and events. It generally has a correspondence with the input data, and includes not only files, transmissions, reports and database updates, but can also include test measurements. A subset of the output data is generally compared with the expected results at the end of test execution. As such, it does not directly influence the quality of the tests.
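As a rough illustration only, the classification above might map onto a concrete test setup as follows; every name and value here is a hypothetical example:

```python
# A minimal sketch mapping the data categories above onto a concrete test
# setup. All names and values are hypothetical.
import os

# Environmental data: the technical environment the system runs in.
environmental = {
    "SERVICE_URL": os.environ.get("SERVICE_URL", "http://localhost:8080"),
    "TMP_DIR": "/tmp/test-run",
}

# Setup data: business rules / reference data loaded before the test.
setup_data = {("UK", "standard"): 4.99, ("UK", "express"): 9.99}  # delivery costs

# Fixed input data: present before the test starts (part of the test conditions).
fixed_input = [{"customer_id": 1, "name": "Alice", "country": "UK"}]

# Consumable input data: forms the test input itself.
consumable_input = [{"customer_id": 1, "item": "book", "delivery": "express"}]

# Output data: collected during the run and compared with expected results.
actual_output = []
expected_output = [{"customer_id": 1, "charge": 9.99}]

print(f"Environment under test: {environmental['SERVICE_URL']}")
```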

Test Data

Testing consumes and produces large amounts of data. Data describes the initial conditions for a test, forms the input, and is the medium through which the tester influences the software.

Data is manipulated, extrapolated, summarized and referenced by the functionality under test, which finally spews forth yet more data to be checked against expectations.

Data is a crucial part of most functional testing.

Test data is data developed in support of a specific test case. The test data resulting from a test execution may serve as input to a subsequent test. Test data may be manually generated or extracted from an existing source such as production data. Recording of user input using capture/playback tools may also be a source of test data.


Test Execution - Activities in the Test Execution Cycle
• Resolution of setup issues
• Verification of entry criteria and suspension criteria
• Perform testing based on test cases and test scripts
• Continuous and timely updating of result sheets, checklists, etc.
• Regular status reporting, meetings and tracking against project schedules
• Review and rework of test cases (after bug-fixing or document reviews)
• Bug reporting and tracking
• Verification of exit criteria (checklists, acceptance criteria, etc.)
• Preparation of output data (bugs, observations, etc.) for post-execution analysis
• Regular interaction with the project manager (change requests, status updates, etc.)

Resolution of issues: this could happen within the team, with the development team, or with the environment-setup/production-support team.

Test Execution - How do you execute tests?

• Execution of tests is completed by following the test documents in a methodical manner.

• Checkpoint meetings are held throughout the execution phase.
• The output from the execution of test procedures is known as test results.

[Test life cycle diagram, as above - current stage: Test Execution]


• All discrepancies/anomalies are logged, discussed with the software team lead, hardware test lead, programmers and software engineers, and documented for further investigation and resolution.
• Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a test summary report.
• The severity of a problem found during system testing is defined in accordance with the customer's risk assessment and recorded in their selected tracking tool.

• Proposed fixes are delivered to the testing environment, based on the severity of the problem.

• Fixes are regression tested and flawless fixes are migrated to a new baseline.

• Following completion of the test, members of the test team prepare a summary report.

• The summary report is reviewed by the Project Manager, Software QA (SWQA) Manager and/or Test Team Lead.

Test Execution:
• The test plan should be updated throughout the project to ensure that the true expected results have been documented for each planned test. The test plan should contain the procedure, environment and tools necessary to implement an orderly, controlled process for test execution, defect tracking, coordination of rework, and configuration and change control.
• Executing the integration test plan should begin once unit testing for the components to be integrated is completed. The objective is to validate the application design and to show that the application components can successfully be integrated to perform one or more application functions.
• Depending on the sequence and design of the integration test build, the application may be ready for system test once the pre-defined exit criteria have been met.
• Executing the system test should begin as soon as a minimal set of components has been integrated and has successfully passed integration testing. Testing for standard backup and recovery operations is conducted, and the application is subjected to performance, load and stress tests measuring the response time under peak load with multiple users exercising multiple functionalities at the same time.
• System test ends when the test team has measured the application capabilities and each reported defect has been thoroughly regression tested, generating enough confidence that the application will operate successfully in the production environment.

Test Execution
• After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan.

• The software is only migrated to the production environment after the Project Manager's formal acceptance.

• The test team reviews test document problems identified during testing and updates documents where appropriate.

Inputs for this process:
• Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
• Test tools, including automated test tools, if applicable.
• Developed scripts.
• Changes to the design, i.e. Change Request Documents.
• Test data.
• Availability of the test team and project team.
• General and Detailed Design Documents, i.e. Requirements Document, Software Design Document.
• Software that has been migrated to the test environment, i.e. unit-tested code, via the Configuration/Build Manager.
• Test Readiness Document.
• Document Updates.

Test Data:
• The input data used for testing is known as test data.
• The test data set contains both valid and invalid data for the test case.
• The test data are generated during the design and analysis phase for all test cases identified during the requirements analysis phase.
• The test cases, together with the test data, assure the test team that all the requirements are in a testable form; if not, the requirements are rewritten in a testable form.
• Exhaustive testing with all possible test data is impracticable, so techniques such as equivalence partitioning and boundary value analysis are used to select the test data (see the sketch below).
• The test data should check for form, format, value and unit types.
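As an illustration of the selection techniques mentioned above, here is a minimal sketch of equivalence partitioning and boundary value analysis for a hypothetical input field that accepts ages from 18 to 65:

```python
# A minimal sketch of equivalence partitioning and boundary value analysis for
# selecting test data, assuming a hypothetical field with valid range 18..65.
LOW, HIGH = 18, 65

# Equivalence partitions: one representative value from each class is enough.
partitions = {
    "invalid: below range": LOW - 10,
    "valid: within range":  (LOW + HIGH) // 2,
    "invalid: above range": HIGH + 10,
}

# Boundary values: test on and immediately around each boundary.
boundaries = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

def is_valid_age(age):
    """Hypothetical rule under test: accept ages from 18 to 65 inclusive."""
    return LOW <= age <= HIGH

for label, value in partitions.items():
    print(f"{label:24} age={value:3} -> accepted={is_valid_age(value)}")
for value in boundaries:
    print(f"{'boundary':24} age={value:3} -> accepted={is_valid_age(value)}")
```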

Testing Tools:
• Tools are needed to help with the testing.
• The kind of tools needed depends upon the kind of testing to be performed and the environment in which the test will be performed.
• Tool selection depends upon the following criteria:
o Test phase
o Test objective
o Test targets or deliverables
o Test techniques
o Software category
o Test history (error/defect history)

Outputs for this process:

• Log and summary of the test results. Usually this is part of the Test Report. This needs to be approved and signed-off with revised testing deliverables.

• Changes to the code, also known as test fixes.
• Test document problems uncovered as a result of testing. Examples are Requirements Document and Design Document problems.
• Reports on software design issues, given to software developers for correction. Examples are bug reports on code issues.
• Formal record of test incidents, usually part of problem tracking.
• Base-lined package, also known as tested source and object code, ready for migration to the next level.

Bug Reporting and Tracking
• Once a potential bug is found, it needs to be confirmed as a definite bug.
• Requirement documents (logical design, functional specs, etc.) are cross-checked, and the requirements traceability matrix is verified.
• The test engineer needs to confirm with the Team Leader whether it is a valid bug or has been reported previously.
• Once the defect is confirmed, substantial data (steps to reproduce, severity, priority, module affected, etc.) is provided to the development team through the defect tracking tool.
• The defects will be reproduced, analyzed and fixed by the dev team, and reassigned to the testing team.

[Test life cycle diagram, as above - current stage: Bug Reporting and Tracking]


• The bug goes through various phases like “New”, “Open”, “Fixed”, etc., and needs to be tracked to closure.

• Analysis of bugs

A Bug Report should:
• Be an accurate and concise technical document.
• Be clear to management and actionable by development.
• Enhance credibility, standing and resources.
• Help developers be effective and reduce arguments.
• Support increased product quality.

Contents of a typical (good) Bug Report:
• Statement of condition (short one-line summary)
• Expected behavior and actual behavior
• Exhaustive, step-by-step instructions to recreate the defect
• Supporting documents (screenshots, log files, etc.)
• Probable cause of the defect
• Severity and priority of the defect
• Additional information
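To tie the report contents and the defect phases together, here is a minimal sketch of a bug report record with a simple status workflow; the field names, status set and example values are assumptions made for illustration:

```python
# A minimal sketch of a bug report record and a simple defect life cycle,
# following the report contents and the "New"/"Open"/"Fixed" phases described
# above. Field names and the allowed status transitions are illustrative.
from dataclasses import dataclass, field

ALLOWED_TRANSITIONS = {
    "New": {"Open", "Rejected"},
    "Open": {"Fixed"},
    "Fixed": {"Closed", "Reopened"},   # closed after retest, reopened if it fails
    "Reopened": {"Fixed"},
}

@dataclass
class BugReport:
    summary: str                        # statement of condition (one line)
    expected_behavior: str
    actual_behavior: str
    steps_to_recreate: list = field(default_factory=list)
    attachments: list = field(default_factory=list)   # screenshots, log files
    probable_cause: str = ""
    severity: str = "Medium"
    priority: str = "P3"
    status: str = "New"

    def move_to(self, new_status):
        # Track the bug to closure through the allowed phases only.
        if new_status not in ALLOWED_TRANSITIONS.get(self.status, set()):
            raise ValueError(f"Cannot move from {self.status} to {new_status}")
        self.status = new_status

bug = BugReport(
    summary="Inquiry screen 009 shows stale address after update",
    expected_behavior="Record detail matches the data entered on screen 008",
    actual_behavior="Old address is displayed",
    steps_to_recreate=["Enter new name and address", "Press <OK>", "Open inquiry"],
    severity="High",
    priority="P1",
)
bug.move_to("Open")
print(bug.summary, "->", bug.status)
```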