Software Development Life Cycle

The Software Development Life Cycle (SDLC) is a step-by-step process followed in the development of a software product. The process is carried out as a set of stages, which together explain how the product is taken from concept to delivery.

The classification of the Software Development Life Cycle process is as follows:

1. Planning
2. Analysis
3. Design
4. Software Development
5. Implementation
6. Software Testing
7. Deployment
8. Maintenance

Software testing is an important factor in a product's life cycle, as the product will have a long life only when it works correctly and efficiently according to the customer's requirements.

Introduction to Software Testing

Before moving further towards introduction to software testing, we need to know a few concepts that will simplify the definition of software testing.

Error: An error or mistake is a human action that produces an incorrect result.

Defect (Bug, Fault): A flaw in the system or product that can cause a component to fail or malfunction.

Failure: The deviation between the actual and the expected result.

Risk: A factor that could result in a chance of loss or damage.

Thus, software testing is the process of finding defects/bugs in the system that occur due to errors in the application, which could lead to failure of the resulting product and an increased probability of risk. In short, software testing has different goals and objectives, which often include:

1. finding defects;
2. gaining confidence in and providing information about the level of quality;
3. preventing defects.
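
To make the error/defect/failure terms above concrete, here is a minimal Python sketch (the average function is a made-up example, not taken from the article): the programmer's mistake (error) leaves a flaw in the code (defect), and running the code against an expected result exposes the failure.

    # A made-up example: the programmer's mistake (error) introduces a defect.
    def average(numbers):
        # Defect: dividing by a hard-coded 2 instead of len(numbers).
        return sum(numbers) / 2

    # Executing the code against an expected result exposes the failure.
    expected = 2.0
    actual = average([1, 2, 3])          # returns 3.0, not 2.0
    if actual != expected:
        print("Failure: expected", expected, "but got", actual)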

If you are new to the field of software testing, then the article software testing for beginners will be of great help.

Scope of Software Testing

The primary function of software testing is to detect bugs so that they can be uncovered and corrected. The scope of software testing includes execution of the code in various environments, and also examination of the aspects of the code: does the software do what it is supposed to do and function according to the specifications? As we move further, we come across questions such as "When to start testing?" and "When to stop testing?" It is recommended to start testing from the initial stages of software development. This not only helps in rectifying a large number of errors before the last stage, but also reduces the rework of finding bugs again and again in later stages, and it lowers the cost of finding and fixing each defect. Software testing is an ongoing process, which is potentially endless but has to be stopped somewhere due to the lack of time and budget. The aim is to achieve maximum profit with a good quality product, within the limitations of time and money. The tester has to follow some procedural way through which he can judge whether he has covered all the points required for testing or missed out any. To help testers carry out these day-to-day activities, a baseline has to be set, which is done in the form of checklists. Read more on checklists for software testers.

Software Testing Key Concepts

o Defects and Failures: As discussed earlier, defects are not caused only by coding errors, but most commonly by requirement gaps in the non-functional requirements, such as usability, testability, scalability, maintainability, performance and security. A failure is caused by the deviation between an actual and an expected result, but not all defects result in failures. A defect can turn into a failure due to a change in the environment and/or a change in the configuration of the system requirements.

o Input Combination and Preconditions: Testing all combinations of inputs and initial states (preconditions) is not feasible. This means that finding the large number of infrequent defects is difficult.

o Static and Dynamic Analysis: Static testing does not require execution of the code to find defects, whereas in dynamic testing the software code is executed to demonstrate the results of running tests (a short illustration follows this list).

o Verification and Validation: Software testing is done considering these two factors. Verification checks whether the product is built according to the specification; validation checks whether the product meets the customer's requirements.

o Software Quality Assurance: Software testing is an important part of software quality assurance. Quality assurance is an activity that proves the suitability of the product by taking care of its quality and ensuring that the customer requirements are met.
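
As a rough, simplified illustration of the static/dynamic distinction (standard-library Python only; the divide snippet is hypothetical): the first check inspects the source code without running it, while the second executes it and observes its runtime behaviour.

    import ast

    source = "def divide(a, b):\n    return a / b\n"

    # Static analysis: parse and inspect the code without executing it.
    tree = ast.parse(source)
    print("Static check: parsed", len(tree.body), "top-level definition(s) without running the code")

    # Dynamic analysis: execute the code and observe what actually happens.
    namespace = {}
    exec(source, namespace)
    try:
        namespace["divide"](1, 0)
    except ZeroDivisionError:
        print("Dynamic check: running the code revealed a division-by-zero failure")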

Software Testing Types:

A software test type is a group of test activities aimed at testing a component or system with a focus on a specific test objective, such as a non-functional requirement like usability, testability or reliability. The various types of software testing are used with the common objective of finding defects in that particular component.

Software testing is classified according to two basic types of software testing: Manual Scripted Testing and Automated Testing.

Manual Scripted Testing:

Black Box Testing
White Box Testing
Gray Box Testing

The levels of the software testing life cycle include:

Unit Testing
Integration Testing
System Testing
Acceptance Testing
  1. Alpha Testing
  2. Beta Testing


Other types of software testing are:

Functional Testing
Performance Testing
  1. Load Testing
  2. Stress Testing
Smoke Testing
Sanity Testing
Regression Testing
Recovery Testing
Usability Testing
Compatibility Testing
Configuration Testing
Exploratory Testing

For further explanation of these concepts, read more on types of software testing.

Automated Testing: Manual testing is a time-consuming process. Automation testing involves automating a manual process. Test automation is the process of writing a computer program, in the form of scripts, to do testing that would otherwise need to be done manually. Some of the popular automation tools are WinRunner, Quick Test Professional (QTP), LoadRunner, SilkTest, Rational Robot, etc. The automation tools category also includes test management tools such as TestDirector and many others.
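
The commercial tools named above each have their own scripting languages. As a tool-neutral sketch only, an automated test can be as simple as a script written with Python's built-in unittest module; the login_allowed function below is a hypothetical stand-in for the application under test.

    import unittest

    def login_allowed(user_id, password):
        # Hypothetical stand-in for the application logic being tested.
        return user_id == "demo" and password == "secret123"

    class LoginTests(unittest.TestCase):
        def test_valid_credentials_are_accepted(self):
            self.assertTrue(login_allowed("demo", "secret123"))

        def test_invalid_password_is_rejected(self):
            self.assertFalse(login_allowed("demo", "wrong"))

    if __name__ == "__main__":
        unittest.main()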

Software Testing Methodologies

The software testing methodologies, or processes, include various models that shape the way the work is carried out for a particular product. These models are as follows:

Waterfall Model
V Model
Spiral Model
Rational Unified Process (RUP)
Agile Model
Rapid Application Development (RAD)

These models are elaborated briefly in software testing methodologies.

Software Testing Artifacts

Software testing process can produce various artifacts such as:


Test Plan: A test specification is called a test plan. A test plan is documented so that it can be used to verify and ensure that a product or system meets its design specification.

Traceability Matrix: This is a table that correlates requirements or design documents to test documents. It is used to verify that the test results are correct, and also to change tests when the source documents are changed.

Test Case: Test cases and software testing strategies are used to check the functionality of the individual components that are integrated to give the resulting product. These test cases are developed with the objective of judging the application for its capabilities and features.

Test Data: When multiple sets of values or data are used to test the same functionality of a particular feature in the test case, the test values and changeable environmental components are collected in separate files and stored as test data.

Test Script: A test script is the combination of a test case, test procedure and test data.

Test Suite: A test suite is a collection of test cases.
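
The relationship between these artifacts can also be sketched in code (a hypothetical Python example, not a prescribed format): each test method is a test case, the TEST_DATA list plays the role of externalized test data, and the suite collects the cases for a runner to execute.

    import unittest

    # Test data kept separate from the test logic, as described above.
    TEST_DATA = [
        ("valid@example.com", True),
        ("not-an-email", False),
        ("", False),
    ]

    def is_valid_email(text):
        # Hypothetical function under test.
        return "@" in text and "." in text.split("@")[-1]

    class EmailFieldTestCase(unittest.TestCase):
        def test_email_validation_against_test_data(self):
            for value, expected in TEST_DATA:
                self.assertEqual(is_valid_email(value), expected)

    # A test suite is simply a collection of test cases.
    suite = unittest.TestLoader().loadTestsFromTestCase(EmailFieldTestCase)

    if __name__ == "__main__":
        unittest.TextTestRunner().run(suite)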

Software Testing Process

The software testing process is carried out in the following sequence, in order to find faults in the software system:

1. Create Test Plan
2. Design Test Cases
3. Write Test Cases
4. Review Test Cases
5. Execute Test Cases
6. Examine Test Results
7. Perform Post-mortem Reviews
8. Budget after Experience

Here is a sample Test Case for you:

Software Test Case for Login Page

Purpose: The user should be able to go to the Home page.

Pre-requisites:
1. The software should be compatible with the operating system.
2. The Login page should appear.
3. User Id and Password textboxes should be available with appropriate labels.
4. Submit and Cancel buttons with appropriate captions should be available.

Test Data: The required list of variables and their values should be available, e.g. User Id: {Valid UserId, Invalid UserId, empty}, Password: {Valid, Invalid, empty}.


Each entry below gives the test case Id, name, steps/actions and expected results.

TC1 - Checking user interface requirements
Steps/Action: The user views the page to check whether it includes UserId and Password textboxes with appropriate labels, and whether Submit and Cancel buttons are available with appropriate captions.
Expected Result: The screen displays the user interface as per the requirements.

TC2 - Checking the UserId textbox, which should: i) allow only alphabetic characters {a-z, A-Z}; ii) not allow special characters like {'$', '#', '!', '~', '*', ...}; iii) not allow numeric characters {0-9}
Steps/Action: i) The user types numbers into the textbox. ii) The user types alphabetic data into the textbox.
Expected Result: i) An error message is displayed for numeric data. ii) The text is accepted when the user enters alphabetic data into the textbox.

TC3 - Checking functionality of the Password textbox, which should: i) accept more than six characters; ii) display the data in encrypted format
Steps/Action: i) The user enters only two characters in the password textbox. ii) The user enters more than six characters in the password textbox. iii) The user checks whether the data is displayed in encrypted format.
Expected Result: i) An error message is displayed when the user enters fewer than six characters. ii) The system accepts the data when the user enters more than six characters. iii) The system displays the data in encrypted format, else it displays an error message.

TC4 - Checking functionality of the 'SUBMIT' button
Steps/Action: i) The user checks whether the 'SUBMIT' button is enabled or disabled. ii) The user clicks the 'SUBMIT' button and expects to view the 'Home' page of the application.
Expected Result: i) The system displays the 'SUBMIT' button as enabled. ii) The system redirects to the 'Home' page of the application as soon as the user clicks the 'SUBMIT' button.

TC5 - Checking functionality of the 'CANCEL' button
Steps/Action: i) The user checks whether the 'CANCEL' button is enabled or disabled. ii) The user checks whether the UserId and Password textboxes are reset to blank when the 'CANCEL' button is clicked.
Expected Result: i) The system displays the 'CANCEL' button as enabled. ii) The system clears the data in the UserId and Password textboxes when the user clicks the 'CANCEL' button.

An automated sketch of the checks in TC2 and TC3 follows.
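
Checks such as TC2 and TC3 can also be automated once the validation rules are wrapped in code. The sketch below is hypothetical: validate_user_id and validate_password merely stand in for the real login page logic, and the assertions mirror the expected results in the table above.

    import unittest

    def validate_user_id(value):
        # Hypothetical rule from TC2: only alphabetic characters are allowed.
        return value.isalpha()

    def validate_password(value):
        # Hypothetical rule from TC3: more than six characters required.
        return len(value) > 6

    class LoginPageFieldTests(unittest.TestCase):
        def test_user_id_rejects_numbers(self):
            self.assertFalse(validate_user_id("12345"))

        def test_user_id_accepts_letters(self):
            self.assertTrue(validate_user_id("testuser"))

        def test_short_password_is_rejected(self):
            self.assertFalse(validate_password("ab"))

        def test_long_password_is_accepted(self):
            self.assertTrue(validate_password("longsecret"))

    if __name__ == "__main__":
        unittest.main()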

Fault Finding Techniques in Software Testing

Finding a defect or fault in the earlier phases of the software not only saves time and money, but is also efficient in terms of security and profitability. As we move forward through the different levels of the software, it becomes difficult and tedious to go back and find problems in the earlier versions of the components, and the cost of finding the defect also increases. Thus it is recommended to start testing from the initial stages of the life cycle.

There are various techniques involved along with the types of software testing. There is a procedure to be followed for finding a bug in the application. This procedure is combined into the life cycle of the bug in the form of the contents of a bug, depending upon the severity and priority of that bug. This life cycle is called the bug life cycle, which helps the tester in answering the question: how to log a bug?

Measuring Software Testing

There arises a need to measure the software, both when the software is under development and after the system is ready for use. Though it is difficult to measure such an abstract attribute, it is essential to do so; the elements that cannot be measured need to be controlled. There are some important uses of measuring the software:

Software metrics help in:

1. avoiding pitfalls such as cost overruns,
2. identifying where a problem has arisen,
3. clarifying goals.

They answer questions such as:

1. What is the estimate for each process activity?
2. How good is the quality of the code that has been developed?
3. How can the code under development be improved?

It helps in judging the quality of the software, cost and effort estimation, collection of data, productivity and performance evaluation.

Some of the common software metrics are listed below (a small worked example follows the list):

Code Coverage
Cyclomatic Complexity
Cohesion
Coupling
Function Point Analysis
Execution Time
Source Lines of Code
Bugs per Line of Code
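
As one worked example of these metrics, the cyclomatic complexity of a single function can be approximated as the number of decision points plus one (equivalently V(G) = E - N + 2 for a connected control-flow graph). The classify function below is a made-up illustration:

    def classify(score):
        # Two decision points ("if" and "elif"), so cyclomatic complexity = 2 + 1 = 3:
        # the three independent paths are score < 0, 0 <= score < 50, and score >= 50.
        if score < 0:
            return "invalid"
        elif score < 50:
            return "fail"
        return "pass"

    # A test suite aiming for full path coverage needs at least one case per path.
    assert classify(-1) == "invalid"
    assert classify(10) == "fail"
    assert classify(90) == "pass"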

In short, measurement of software is done for understanding, controlling and improving the software system. Software is subject to change with respect to changing environmental conditions, varying user requirements, and configuration and compatibility issues. This gives rise to the development of newer and updated versions of the software. However, there should be some way of getting back to older versions easily and working on them efficiently. Testers play a vital role in this, and this is where change management comes into the picture.

Software Testing as a Career

Software testing is a good career opportunity for those who are interested in the software industry. Video game testing is an offshoot of software testing. There are many industries specializing in this field. Believe it or not, you can actually get paid to test video games. You can read more on how to become a video game tester.


Software Testing Interview Questions

I hope this article has helped you gain a deeper insight into software testing. If you are planning to choose the software testing industry as your career ground, you might like to go through this extensive list of software testing interview questions. Before you step out for a job in the testing field, or before you take your first step towards becoming a software tester, you can acquire these software testing certifications.

Software Testing Certifications

Software testing certifications will not only boost one's knowledge, but also prove beneficial for one's academic and professional record. There are some software testing certification programs that can support the professional aspirations of software testers and quality assurance specialists.

ISTQB - International Software Testing Qualifications Board
CSTE - Certified Software Tester
CSTP - Certified Software Test Professional
CTM - Certified Test Manager
CSPM - Certified Software Project Manager
CSPE - Certified Software Process Engineer
CAST - Certified Associate in Software Testing

Quality Assurance Certifications:

CSQA - Certified Software Quality Analyst
Software Quality Assurance Certification
CSQE - Certified Software Quality Engineer
CQIA - Certified Quality Improvement Associate

Software testing is indeed a vast field, and accurate knowledge is crucial to ensure the quality of the software developed. I hope this software testing tutorial has given you a clearer idea of the various software testing types, methodologies and strategies.


Software Development Life Cycle

What is the Software Development Life Cycle?

The Software Development Life Cycle is a step-by-step process involved in the development of a software product. It is also denoted as Software Development process in certain parts of the world. The whole process is generally classified into a set of steps and a specific operation will be carried out in each of the steps.

Classification

The basic classification of the whole process is as follows:

Planning
Analysis
Design
Development
Implementation
Testing
Deployment
Maintenance

Each of the steps of the process has its own importance and plays a significant part in the product development. The description of each of the steps can give a better understanding.

Planning

This is the first and foremost stage in the development and one of the most important stages. The basic motive is to plan the total project and to estimate its merits and demerits. The planning phase includes the definition of the intended system, development of the project plan, and parallel management of the plan throughout the development.

A good and matured plan can create a very good initiative and can positively affect the complete project.

Analysis

The main aim of the analysis phase is to gather and analyze the requirements. Based on the analysis of the project, and influenced by the results of the planning phase, the requirements for the project are decided and gathered.

Once the requirements for the project are gathered, they are prioritized and made ready for further use. The decisions taken in the analysis phase are driven entirely by the requirements analysis, and the proceedings after this phase are defined here.

Design

Once the analysis is over, the design phase begins. The aim is to create the architecture of the total system. This is one of the important stages of the process and serves as a benchmark stage, since the errors made up to and during this stage can be cleared here.

Most developers have the habit of developing a prototype of the entire software, representing it as a miniature model. Flaws, both technical and in design, can be found and removed, and the entire process can be redesigned.

Development and Implementation

The development and implementation phase is the most important phase, since it is where the main part of the project is done. The basic work includes the design of the technical architecture and the maintenance of the database records and programs related to the development process.

One of the main scenarios is the implementation of the prototype model into a full-fledged working environment, which is the final product or software.

Testing

The testing phase is one of the final stages of the development process, and this is where the final adjustments are made before presenting the completely developed software to the end user.

In general, the testers work on finding and removing logical errors and bugs. The test conditions decided in the analysis phase are applied to the system, and if the output obtained matches the intended output, the software is ready to be provided to the user.

Maintenance

The toughest job is encountered in the maintenance phase, which normally accounts for the highest amount of money. The maintenance team is chosen to monitor changes in the organization of the software and report to the developers whenever a need arises.

An information desk is also provided in this phase, which serves to maintain the relationship between the user and the creator.


Software Testing - Checklists for Software Testers

I would like to note that the following checklists are defined in the most generic form and do not promise to cover all the processes that you may be required to go through and follow during your work. There may be some processes that are completely missing from the lists, and the lists may also contain processes that you do not need to follow in your form of work.

First Things First

Check the scripts assigned to you: This is the first and foremost process in the list. There is no specific logic used to assign scripts to the testers who should execute them, but you may come across practices where you are assigned scripts based on your workload for the day, or on your skill to understand and execute them in the least possible time.

Check the status/comments of the defect in the test reporting tool: Once you unveil a bug, it is very important to keep track of its status, as you will have to re-test the bug once it is fixed by a developer. Most of the time, the general practice is to confirm that any fix to a bug is successful, as this also ensures that the tester can proceed with other tests involving deeper areas of that particular functionality. Sometimes it also addresses issues related to understanding the functionality of the system, for example if a tester registered a defect which is not an actual bug as per the programming/business logic. In that case, a comment from the developer might help in understanding the mistake committed by the tester.

Checks while executing scripts:

Update the test data sheet with all the values that are required, such as user name, functionality, test code etc.

Use the naming conventions defined as testing standards to define a bug appropriately.

Take screen prints for the executed script using the naming conventions, and provide the test data that you used for the testing. The screen prints will help other testers and developers understand how the test was executed, and they will also serve as proof for you. If possible, try to explain the procedure you followed, your choice of data, your understanding, etc.

If your team is maintaining any type of tracking sheet, do not forget to update all the tracking sheets for the bug: its status, the time and date found, severity etc.

If you are using a test reporting tool, do not forget to execute the script in the tool. Many test reporting tools require scripts to be executed in order to initiate the life cycle of a bug. For example, TestDirector needs the script to be executed up to the step where the test script failed; the test steps before the failed step are declared as passed.

Update the tracking sheets with current status, status in reporting tools etc. if it is required to be updated after you execute the script in the reporting tool.

Check if you have executed all the scripts properly and updated the test reporting tool.

After you complete your day's work, it is better to do a peer-to-peer review. This step is very important and often helps in finding missing steps/processes.

Checks while logging defects

First of all, confirm with your test lead whether the defect is valid.

Follow the appropriate naming conventions while logging defects.

Before submitting the defect, get it reviewed by your Work Lead/Team Lead.

Give an appropriate description and names in the defect screen prints as per the naming conventions.

After submitting the defect, attach the screen prints for the defect in the test reporting tool.

Note down the defect number/unique identifier and update the test tracking sheet with the appropriate information.

Maintain a defect log, a defect tracking sheet, a screen prints dump folder etc. as a backup.

Checks for blocking and unblocking scripts

Blocking or unblocking a script relates to a bug which affects the execution of that script. For example, if there is a bug on the login screen which does not allow anyone to enter the account after entering a valid username and password and pressing the OK button, there is no way you can execute any test script which requires the account screen that comes after the login screen.

Confirm with your test lead/work lead whether the scripts are really blocked due to an existing bug.

Block scripts against an active defect (defect status: New/Assigned/Fixed/Reopen).

Update the current script/defect in the test reporting tool and tracking sheets with the defect number/unique identifier which is blocking the execution of the script or the testing of the defect.

If a defect is retested successfully, then unblock all scripts/defects blocked by it.

At the end of the day, send an update mail to your Team Lead/Work Lead which should include the following:

Scripts executed (number)
Defects raised/closed (number)
Any comments added on defects
Issues/queries, if any


Software Testing - Black Box Testing Strategy

What is a Black Box Testing Strategy?

Black box testing is not a type of testing in itself; it is a testing strategy which does not need any knowledge of internal design or code. As the name "black box" suggests, no knowledge of the internal logic or code structure is required. The types of testing under this strategy are totally based on, and focused on, testing the requirements and functionality of the work product/software application. Black box testing is sometimes also called "Opaque Testing", "Functional/Behavioral Testing" or "Closed Box Testing".

The base of the black box testing strategy lies in selecting appropriate data as per the functionality and testing it against the functional specifications, in order to check for normal and abnormal behavior of the system. Nowadays, it is becoming common to route the testing work to a third party, as the developer of the system knows too much about its internal logic and coding, which makes the developer unfit to test the application.

In order to implement the black box testing strategy, the tester needs to be thorough with the requirement specifications of the system and, as a user, should know how the system should behave in response to a particular action.

Various testing types that fall under the Black Box Testing strategy are: functional testing, stress testing, recovery testing, volume testing, User Acceptance Testing (also known as UAT), system testing, Sanity or Smoke testing, load testing, Usability testing, Exploratory testing, ad-hoc testing, alpha testing, beta testing etc.

These testing types are again divided into two groups: a) testing in which the user plays the role of the tester, and b) testing in which the user is not required.

Testing method where user is not required:

Functional Testing:

In this type of testing, the software is tested for the functional requirements. The tests are written in order to check if the application behaves as expected.

Stress Testing:

The application is tested against heavy load, such as complex numerical values, a large number of inputs, a large number of queries etc., which checks the stress/load the application can withstand.

Load Testing:

The application is tested against heavy loads or inputs such as testing of web sites in order to find out at what point the web-site/application fails or at what point its performance degrades.


Ad-hoc Testing:

This type of testing is done without any formal test plan or test case creation. Ad-hoc testing helps in deciding the scope and duration of the various other kinds of testing, and it also helps testers learn the application prior to starting any other testing.

Exploratory Testing:

This testing is similar to the ad-hoc testing and is done in order to learn/explore the application.

Usability Testing:

This testing is also called 'testing for user-friendliness'. It is done when the user interface of the application is an important consideration and needs to be specific to a particular type of user.

Smoke Testing:

This type of testing is also called sanity testing and is done in order to check whether the application is ready for further, major testing and is working properly at least to the minimum expected level, without failing.

Recovery Testing:

Recovery testing is basically done in order to check how fast and how well the application can recover from any type of crash or hardware failure. The type or extent of recovery is specified in the requirement specifications.

Volume Testing:

Volume testing checks the efficiency of the application. A huge amount of data is processed through the application under test in order to check the extreme limits of the system.

Testing where user plays a role/user is required:

User Acceptance Testing:

In this type of testing, the software is handed over to the user in order to find out if the software meets the user expectations and works as it is expected to.

Alpha Testing:

In this type of testing, the users are invited to the development center, where they use the application and the developers note every particular input or action carried out by the user. Any abnormal behavior of the system is noted and rectified by the developers.

Beta Testing:

In this type of testing, the software is distributed as a beta version to the users, and the users test the application at their sites. As the users explore the software, any exception or defect that occurs is reported to the developers.


Software Testing - White Box Testing Strategy

What is a White Box Testing Strategy?

The white box testing strategy deals with the internal logic and structure of the code. White box testing is also called glass box, structural, open box or clear box testing. The tests written based on the white box testing strategy cover the written code, its branches, paths, statements and internal logic.

In order to implement white box testing, the tester has to deal with the code and hence needs to possess knowledge of coding and logic, i.e. the internal working of the code. White box testing also needs the tester to look into the code and find out which unit/statement/chunk of the code is malfunctioning.

Advantages of White box testing are:

i) As knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out which type of input/data can help in testing the application effectively.

ii) Another advantage of white box testing is that it helps in optimizing the code.

iii) It helps in removing the extra lines of code, which can bring in hidden defects.

Disadvantages of white box testing are:

i) As knowledge of code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases the cost.

ii) And it is nearly impossible to look into every bit of code to find out hidden errors, which may create problems, resulting in failure of the application.

Types of testing under White/Glass Box Testing Strategy:

Unit Testing:

The developer carries out unit testing in order to check if the particular module or unit of code is working fine. The Unit Testing comes at the very basic level as it is carried out as and when the unit of the code is developed or a particular functionality is built.

Static and dynamic Analysis:

Static analysis involves going through the code in order to find out any possible defect in the code. Dynamic analysis involves executing the code and analyzing the output.

Statement Coverage:


In this type of testing the code is executed in such a manner that every statement of the application is executed at least once. It helps in assuring that all the statements execute without any side effect.

Branch Coverage:

No software application can be written as one continuous flow of code; at some point we need to branch the code in order to perform a particular piece of functionality. Branch coverage testing helps in validating all the branches in the code and making sure that no branch leads to abnormal behavior of the application.
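
A small sketch of the difference between the two coverage criteria, using a hypothetical apply_discount function: a single test can execute every statement, yet still leave the untaken side of the if unexercised; branch coverage requires both outcomes.

    def apply_discount(price, is_member):
        # Hypothetical function used only to illustrate coverage.
        if is_member:
            price = price * 0.9
        return price

    # Statement coverage: this single call executes every statement above.
    assert abs(apply_discount(100, True) - 90.0) < 1e-9

    # Branch coverage additionally requires the case where the "if" is not taken.
    assert apply_discount(100, False) == 100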

Security Testing:

Security Testing is carried out in order to find out how well the system can protect itself from unauthorized access, hacking – cracking, any code damage etc. which deals with the code of application. This type of testing needs sophisticated testing techniques.

Mutation Testing:

A kind of testing in which the application's code is deliberately modified (mutated) in small ways and the existing tests are run again, to check whether they detect the change. It helps in judging how effective the tests are, and in finding out which code and which coding strategy can help in developing the functionality effectively.
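
A tiny illustration of the idea (the mutant here is introduced by hand, whereas real mutation tools generate such changes automatically): if the test distinguishes the original from the mutated code, the mutant is said to be killed and the test is judged effective.

    def is_adult(age):
        return age >= 18              # original code

    def is_adult_mutant(age):
        return age > 18               # hand-made mutant: ">=" changed to ">"

    # A boundary-value check distinguishes the two: it passes on the original
    # but fails on the mutant, so the mutant is "killed" and the test is useful.
    print("original passes:", is_adult(18) is True)
    print("mutant killed:  ", is_adult_mutant(18) is False)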

Besides all the testing types given above, there are some more types which fall under both the black box and white box testing strategies, such as: functional testing (which checks the functional performance of the code), incremental integration testing (which deals with the testing of newly added code in the application), and performance and load testing (which help in finding out how the particular code manages resources and delivers performance, etc.).


Software Testing - Acceptance Testing

Acceptance testing (also known as user acceptance testing) is a type of testing carried out in order to verify if the product is developed as per the standards and specified criteria and meets all the requirements specified by customer. This type of testing is generally carried out by a user/customer where the product is developed externally by another party.

Acceptance testing falls under the black box testing methodology, where the user is not very interested in the internal working/coding of the system, but evaluates the overall functioning of the system and compares it with the requirements they specified. User acceptance testing is considered one of the most important types of testing performed by the user before the system is finally delivered or handed over to the end user.

Acceptance testing is also known as validation testing, final testing, QA testing, factory acceptance testing and application testing etc. And in software engineering, acceptance testing may be carried out at two different levels; one at the system provider level and another at the end user level (hence called user acceptance testing, field acceptance testing or end-user testing).

Acceptance testing in software engineering generally involves the execution of a number of test cases which together constitute a particular functionality, based on the requirements specified by the user. During acceptance testing, the system has to operate in a computing environment that imitates the user's actual operating environment. The user may choose to perform the testing in an iterative manner or with a set of varying parameters (for example, missile guidance software can be tested under varying payloads, different weather conditions etc.).

The outcome of the acceptance testing can be termed as success or failure based on the critical operating conditions the system passes through successfully/unsuccessfully and the user’s final evaluation of the system.

The test cases and test criteria in acceptance testing are generally created by the end user and cannot be produced without business scenario input from the user. This type of testing and test case creation involves the most experienced people from both sides (developers and users), such as business analysts, specialized testers, developers, end users etc.

Process involved in Acceptance Testing (a simplified code sketch of steps 2 and 3 follows the list)

1. Test cases are created with the help of business analysts, business customers (end users), developers, test specialists etc.
2. Test case suites are run against the input data provided by the user, for the number of iterations that the customer sets as the base/minimum required test runs.
3. The outputs of the test case runs are evaluated against the criteria/requirements specified by the user.
4. Depending upon whether the outcome is as desired by the user, consistent over the number of test suites run, or inconclusive, the user may call it successful/unsuccessful or suggest some more test case runs.
5. Based on the outcome of the test runs, the system may get rejected or accepted by the user, with or without specific conditions.
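
A highly simplified sketch of steps 2 and 3 above, with hypothetical data and criteria (real acceptance runs are driven by the user's own business scenarios):

    # Hypothetical user-supplied inputs and the results the user expects.
    user_test_data = [
        {"order_total": 50,  "expected_shipping": 5},
        {"order_total": 200, "expected_shipping": 0},
    ]

    def shipping_cost(order_total):
        # Stand-in for the delivered system's behaviour.
        return 0 if order_total >= 100 else 5

    # Run the acceptance cases and evaluate outputs against the user's criteria.
    results = []
    for case in user_test_data:
        actual = shipping_cost(case["order_total"])
        results.append(actual == case["expected_shipping"])

    print("acceptance run passed:", all(results))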


Acceptance testing is done in order to demonstrate the ability of the system/product to perform as per the expectations of the user and to build confidence in the newly developed system/product. A sign-off on the contract stating that the system is satisfactory is possible only after successful acceptance testing.

Types of Acceptance Testing

User Acceptance Testing: User acceptance testing in software engineering is considered to be an essential step before the system is finally accepted by the end user. In general terms, user acceptance testing is a process of testing the system before it is finally accepted by user.

Alpha Testing & Beta Testing: Alpha testing is a type of acceptance testing carried out at developer’s site by users (internal staff). In this type of testing, the user goes on testing the system and the outcome is noted and observed by the developer simultaneously.

Beta testing is a type of testing done at user’s site. The users provide their feedback to the developer for the outcome of testing. This type of testing is also known as field testing. Feedback from users is used to improve the system/product before it is released to other users/customers.

Operational Acceptance Testing: This type of testing is also known as operational readiness/preparedness testing. It is a process of ensuring all the required components (processes and procedures) of the system are in place in order to allow user/tester to use it.

Contract and Regulation Acceptance Testing: In contract and regulation acceptance testing, the system is tested against the criteria specified in the contract document, and is also tested to check whether it meets all government and local authority regulations and laws, as well as all the basic standards.


Software Testing - Stress Testing

Stress testing has a different meaning in each industry where it is used. In the financial sector, stress testing means a process of testing financial instruments to find out how robust they are and what level of accuracy they can maintain under extreme conditions, such as a sudden or continuous market crash of a certain magnitude, or sudden or extreme changes in various parameters, for example interest rates or repo and reverse repo rates, or a sudden rise or decline in the price of materials that can affect financial projections. For the manufacturing industry, stress testing may include different parameters and operating processes for testing different systems. For the medical industry, stress testing means a process that can help understand a patient's condition, and so on.

Stress Testing in IT Industry

Stress testing in IT industry (hardware as well as software sectors) means testing of software/hardware for its effectiveness in giving consistent or satisfactory performance under extreme and unfavorable conditions such as heavy network traffic, heavy processes load, under or over clocking of underlying hardware, working under maximum requests for resource utilization of the peripheral or in the system etc.

In other words, stress testing helps find out the level of robustness and the consistency of performance even when the limits of normal operation of the system (software/hardware) are crossed.

The most important use of stress testing is in testing software and hardware that are supposed to operate in critical or real-time situations: for example, a website that must always be online, whose hosting server must be able to handle the traffic in all possible ways (even if the traffic increases manifold), or mission-critical software or hardware that works in a real-time scenario. Stress testing of websites or software is considered an effective process for determining the limit up to which the system/software/hardware/website remains robust, is always available to perform its task, manages loads beyond the normal scenario effectively, and even shows effective error management under extreme conditions.

Need for Stress Testing

Stress testing is considered to be important for the following reasons:

1. Almost 90% of software/systems are developed with the assumption that they will operate under a normal scenario, and even where it is accepted that the limits of normal operating conditions will be crossed, the margin considered is not as high as it really could be.
2. The cost or effect of a critical software/system/website failing under extreme conditions in real time can be huge (or even catastrophic for the organization or entity owning it).
3. It is always better to be prepared for extreme conditions rather than letting the system/software/website crash when the limit of normal operation is crossed.
4. Testing carried out by the developer of the system/software/website may not be sufficient to unveil the conditions that will lead to a crash when it is actually submitted to the operating environment.
5. It is not always possible to unveil possible problems or bugs in a system/software unless it is subjected to this type of testing.


Stress testing also helps in overcoming problems such as denial-of-service attacks on the web servers of a website; security-breach problems arising from spamming, hacking and viruses; situations where the software/system/website has to handle requests for resource allocation when all the required resources are already allocated to another process that itself needs more resources to complete its work (a deadlock situation); memory leaks; race conditions; and so on.

This type of testing is mostly done with the help of the various stress testing tools available in the market. These tools are configured to automate the process of increasing the stress on a system/software/website (i.e. creating and increasing the degree of an adverse environment) and capturing the values of various parameters that help confirm the robustness, availability and performance of the system/software/website being tested. A few of the actions involved in stress testing are bombarding a website with a huge number of requests, running many resource-hungry applications on a computer, and making numerous attempts to access the ports of a computer in order to hack it and use it for purposes such as spamming or spreading viruses.
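
As a toy sketch of the "bombard a website with requests" idea, using only the Python standard library (the URL is a placeholder and the request count is far below what dedicated stress testing tools generate):

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://localhost:8000/"      # placeholder for the system under test
    REQUESTS = 100                      # toy volume; real stress runs go far higher

    def hit(_):
        start = time.time()
        try:
            with urlopen(URL, timeout=5) as response:
                ok = response.status == 200
        except Exception:
            ok = False
        return ok, time.time() - start

    # Fire many concurrent requests and record failures and response times.
    with ThreadPoolExecutor(max_workers=20) as pool:
        results = list(pool.map(hit, range(REQUESTS)))

    failures = sum(1 for ok, _ in results if not ok)
    slowest = max(elapsed for _, elapsed in results)
    print(f"{failures} failed responses, slowest took {slowest:.2f}s")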

Intensity of the adverse conditions is increased slowly while measuring all the parameters till the point where the system/software/website crashes. The collected data (observation and parameter values) are used for further improvement of the system/software/website.


Software Testing - An Introduction To Usability Testing

Usability Testing:

As the term suggests, usability means how well something can be used for the purpose it has been created for. Usability testing is a way to measure how easy, moderate or hard the intended/end users find it to interact with and use the system, keeping its purpose in mind. It is a standard statement that "usability testing measures the usability of the system".

Why Do We Need Usability Testing?

Usability testing is carried out in order to find out whether any change needs to be made in the developed system (whether in its design or any specific procedural or programmatic change) to make it more user-friendly, so that the intended/end user, who is ultimately going to buy and use it, receives a system that he or she can understand and use with ease.

The changes suggested by the tester at the time of usability testing are the most crucial points that can change how the system stands in the intended/end user's view. The developer/designer of the system needs to incorporate the feedback from usability testing (which can be a very simple change in look and feel, or a complex change in the logic and functionality of the system) into the design and code of the system (where the system may be a single object or an entire package consisting of more than one object) in order to make the system more presentable to the intended/end user.

Developers often try to make the system as good-looking as possible while also fitting in the required functionality; in this endeavor they may overlook some error-prone conditions which are uncovered only when the end user uses the system in real time.

Usability testing helps the developer study the practical situations in which the system will be used in real time. The developer also gets to know the areas that are error-prone and the areas for improvement.

In simple words, usability testing is an in-house dummy release of the system before the actual release to the end users, where the developer can find and fix all possible loopholes.

How Is a Usability Test Carried Out?

A usability test, as mentioned above, is an in-house dummy release before the actual release of the system to the intended/end user. Hence, a setup is required in which the developer and testers try to replicate situations that are as realistic as possible, to project the real-time usage of the system. The testers try to use the system in exactly the same manner as any end user can and will. Please note that in this type of testing, too, all the standard instructions of testing are followed to make sure that testing is done in all directions, such as functional testing, system integration testing, unit testing etc.

The outcome/feedback is noted down based on observations of how the user is using the system and all the other possible ways of using it that may come into the picture, and also based on the behavior of the system and how easy or hard it is for the user to operate and use it. The user is also asked for feedback on what he or she thinks should be changed to improve the interaction between the system and the end user.

Usability testing measures various aspects such as:

How much time do the tester/user and the system take to complete the basic flow?

How much time do people take to understand the system (per object), and how many mistakes do they make while performing any process or flow of operation?

How fast does the user become familiar with the system, and how fast can he/she recall the system's functions?

And the most important: how do people feel when they are using the system?

Over time, many people have formulated various measures and models for performing usability testing. Any of these models can be used to perform the test.

Advantages of Usability Testing

A usability test can be modified to cover many other types of testing, such as functional testing, system integration testing, unit testing, smoke testing etc. (while keeping the main objective of usability testing in mind), in order to make sure that testing is done in all possible directions.

Usability testing can be very economical if planned properly, yet highly effective and beneficial.

If proper resources (experienced and creative testers) are used, a usability test can help in fixing all the problems that the user may face even before the system is finally released. This may result in better performance and a standard system.

Usability testing can help in uncovering potential bugs and potholes in the system which are generally not visible to developers and which even escape the other types of testing.

Usability testing is a very wide area of testing, and it needs a fairly high level of understanding of this field along with a creative mind. People involved in usability testing are required to possess skills such as patience, the ability to listen to suggestions, openness to welcoming any idea, and, most important of all, good observation skills to spot and fix problems on the fly.


Software Testing - Compatibility Testing

Compatibility testing is a non-functional software testing that helps evaluate a system/application's performance in connection with the operating environment. Read on to know more about compatibility testing.

Compatibility testing is one of several types of software testing performed on a system that is built according to certain criteria and has to perform specific functionality in an already existing setup/environment. The compatibility of the system/application being developed with, for example, other systems/applications, the OS or the network decides many things, such as the use of the system/application in that environment and the demand for it. Many times, users prefer not to opt for an application/system simply because it is not compatible with some other system/application, network, hardware or OS they are already using; this leads to a situation where the development efforts prove to be in vain.

What is Compatibility Testing

Compatibility testing is a type of testing used to ensure the compatibility of the system/application/website with various other objects, such as web browsers, hardware platforms, users (in the case of a very specific type of requirement, such as a user who speaks and can read only a particular language), operating systems etc. This type of testing helps find out how well a system performs in a particular environment that includes hardware, network, operating system, other software etc.

Compatibility testing can be automated using automation tools or can be performed manually and is a part of non-functional software testing.

Developers generally look for the evaluation of the following elements in a computing environment (an environment in which the newly developed system/application is tested and which has a similar configuration to the actual environment in which the system/application is supposed to fit and start working); a small sketch of such an environment matrix follows the list.

Hardware: Evaluation of the performance of system/application/website on a certain hardware platform. For example: If an all-platform compatible game is developed and is being tested for hardware compatibility, the developer may choose to test it for various combinations of chipsets (such as Intel, Macintosh GForce), motherboards etc.

Browser: Evaluation of the performance of system/website/application on a certain type of browser. For example: A website is tested for compatibility with browsers like Internet Explorer, Firefox etc. (usually browser compatibility testing is also looked at as a user experience testing, as it is related to user’s experience of the application/website, while using it on different browsers).

Network: Evaluation of the performance of system/application/website on network with varying parameters such as bandwidth, variance in capacity and operating speed of underlying hardware etc., which is set up to replicate the actual operating environment.


Peripherals: Evaluation of the performance of system/application in connection with various systems/peripheral devices connected directly or via network. For example: printers, fax machines, telephone lines etc.

Compatibility between versions: Evaluation of the performance of system/application in connection with its own predecessor/successor versions (backward and forward compatibility). For example: Windows 98 was developed with backward compatibility for Windows 95 etc.

Software: Evaluation of the performance of the system/application in connection with other software. For example: software compatibility with network operating tools, web servers, messaging tools etc.

Operating System: Evaluation of the performance of system/application in connection with the underlying operating system on which it will be used.

Databases: Many applications/systems operate on databases. Database compatibility testing is used to evaluate an application/system’s performance in connection to the database it will interact with.
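
One common way to organize such checks is to run the same small suite across an environment matrix. The sketch below is hypothetical: run_smoke_suite is a placeholder for whatever checks apply to the real system, and the browser/OS names are only examples.

    # Hypothetical environment matrix for a compatibility run.
    browsers = ["Internet Explorer", "Firefox"]
    operating_systems = ["Windows", "Linux"]

    def run_smoke_suite(browser, os_name):
        # Placeholder: in a real run this would drive the application
        # in the given browser/OS combination and return pass/fail.
        return True

    results = {}
    for browser in browsers:
        for os_name in operating_systems:
            results[(browser, os_name)] = run_smoke_suite(browser, os_name)

    for combo, passed in results.items():
        print(combo, "PASS" if passed else "FAIL")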

How helpful is it?

Compatibility testing can help developers understand the criteria that their system/application needs to attain and fulfill, in order to get accepted by intended users who are already using some OS, network, software and hardware etc. It also helps the users to find out which system will better fit in the existing setup they are using.

The most important use of the compatibility testing is as already mentioned above: to ensure its performance in a computing environment in which it is supposed to operate. This helps in figuring out necessary changes/modifications/additions required to make the system/application compatible with the computing environment.


Software Testing - Brief Introduction To Exploratory Testing

Exploratory Software Testing, even though disliked by many, has found its place in the Software Testing world. Exploratory testing is the only type of testing that can help in uncovering bugs that stand more chance of being ignored by other testing strategies.

What is an Exploratory Testing?

Bach’s Definition: ‘Any testing to the extent that the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.’

This can simply be put as: a type of testing where we explore the software and write and execute the test scripts simultaneously.

Exploratory testing is a type of testing where the tester does not have specifically planned test cases; instead, he/she does the testing with a view to exploring the software's features and tries to break the software in order to find unknown bugs.

A tester who does exploratory testing does it with the aim of understanding the software better and appreciating its features. During this process, he/she also tries to think of all the possible scenarios in which the software may fail and a bug can be revealed.

Why do we need exploratory testing?

At times, exploratory testing helps in revealing many unknown and undetected bugs, which are very hard to find through normal testing.

As exploratory testing covers almost all the normal types of testing, it helps in improving productivity in terms of covering both the scenarios in scripted testing and those which are not scripted at all.

Exploratory testing is a learn-and-work type of testing activity where the tester can at least learn more and understand the software, even if he/she was not able to reveal any potential bug.

Exploratory testing, even though disliked by many, helps testers learn new methods and test strategies, think out of the box, and become more and more creative.

Who Does Exploratory Testing?

Any software tester knowingly or unknowingly does it!

While testing, if a tester comes across a bug, as a general practice the tester registers that bug with the programmer. Along with registering the bug, the tester also tries to make sure that he/she has understood the scenario and functionality properly and can reproduce the bug condition. Once the programmer fixes the bug, the tester runs a test case with the same scenario in which the bug had occurred previously. If the tester finds that the bug is fixed, he/she again tries to find out whether the fix can handle other similar scenarios with different inputs.

For example, consider a tester who finds a bug related to an input text field on a form. The field is supposed to accept any number other than the numbers from 1 to 100, yet it accepts the number 100. The tester logs this bug with the programmer and waits for the fix. Once the programmer fixes the bug, it is sent back to the tester to be retested. The tester now tests the bug with the same input value (100, the value that caused the application to fail) in the field. If the application rejects the number 100, he/she can safely close the defect.

Now, along with the test input value that revealed the bug, the tester tries to check whether any other value from this range (1 to 100), or any other kind of input, can cause the application to fail. He/she may try values in and around this range, or perhaps characters or a combination of characters and numbers in any order. All these test cases are thought up by the tester as variations of the value he/she had entered previously, and they represent a single test scenario. This is exploratory testing: the tester explores every possible way of revealing a bug.
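To make the scenario above concrete, here is a minimal, hypothetical sketch in Python (using pytest). The `validate_input` function is an assumed stand-in for the form field's validation logic, not something defined in this document; it shows how the variations a tester explores can later be captured as scripted checks:

```python
# Hypothetical sketch: capturing the explored variations of the "1 to 100" field
# as scripted checks. validate_input() is an assumed stand-in for the form
# field's validation logic; per the example, it should reject values 1-100.
import pytest


def validate_input(value):
    """Assumed validation rule: accept anything except the numbers 1 to 100."""
    try:
        number = int(value)
    except (TypeError, ValueError):
        return True  # non-numeric input is assumed acceptable in this sketch
    return not (1 <= number <= 100)


# The original bug: 100 was accepted even though it lies inside the banned range.
@pytest.mark.parametrize("value", [1, 50, 99, 100])
def test_banned_range_is_rejected(value):
    assert validate_input(value) is False


# Variations the exploratory tester tries next: boundaries just outside the
# range, characters, and mixed character/number input.
@pytest.mark.parametrize("value", [0, 101, -5, "abc", "12a"])
def test_other_inputs_are_accepted(value):
    assert validate_input(value) is True
```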

What qualities do I need to possess to be able to perform exploratory testing?

As mentioned above, any software tester can perform exploratory testing. The only limit to the extent to which you can perform exploratory testing is your imagination and creativity: the more ways you can think of to explore and understand the software, the more test cases you will be able to write and execute simultaneously.

Advantages of Exploratory Testing

Exploratory testing can uncover bugs that are normally missed (or hard to find) by other testing strategies.
It helps testers learn new strategies and expand the horizon of their imagination, which helps them understand and execute more test cases and ultimately improves their productivity.
Exploratory testing helps the tester confirm that he/she understands the application and its functionality properly and has no confusion about the working of even the smallest part of it, hence covering the most important part of requirement understanding.
Because test cases are written and executed simultaneously in exploratory testing, it helps in collecting result-oriented test scripts and shedding the load of unnecessary test cases that do not yield any result.
Exploratory testing covers almost all types of testing, so the tester can be sure of covering various scenarios once exploratory testing is performed at the highest level (i.e. if the exploratory testing performed can ensure that all the possible scenarios and test cases are covered).


Waterfall Model in Testing

The waterfall model in testing is one of the most widely used and popular process models for developing and designing software programs to meet specific customer needs. The linear, sequential nature of the model has made it a universally accepted software development process model.

Every piece of software needs to undergo a rigorous testing procedure before it can be released for general use. The waterfall model in testing is one such procedure that has gained immense popularity over the years, owing to its understandability and elementary design. The development of software is a long and arduous process, and without proper testing and checking it cannot be sold anywhere. The stages of software development are defined by software development process models, which cover everything from the initial idea conception to the final usage.

The waterfall model in testing embodies a design concept that is also known as the Software Development Life Cycle (SDLC) model, or the linear sequential model. Thus, the waterfall model in software engineering defines the various stages that a software developer must go through in order to ensure that the software meets customer requirements and works glitch-free.

Waterfall Model Life-cycle

A lot of research and development goes into the various growth stages of any particular piece of software. The idea of a standardized testing model (like the waterfall model in testing) is to ensure that a software engineer follows the correct sequence of process development and does not get too far ahead too soon. Each line of the program needs to be checked and double-checked, and each stage of the waterfall model is required to follow a standard protocol. The various waterfall model phases are as follows.

A waterfall model diagram illustrates these stages flowing sequentially from one to the next.


Requirements Gathering

The first and most obvious step in software development is the gathering of all the customer's requirements. The primary purpose of the final program is to serve the user, so all of the user's needs and requirements must be known in detail before the development process actually begins. A basic specifications and requirements chart, along with the purpose of the model, is drawn up after careful consultation with the user and incorporated into the development process. The waterfall model in testing thus begins with the gathering of all pertinent and necessary data from the customer.

Requirements Analysis

Next, these requirements are studied and analyzed closely, and the developer decides which platform, which programming language, and what kind of databases are necessary for the development process. A feasibility study is then carried out to ensure that all resources are available and that the actual programming of the software is possible. A projected blueprint of the software is created.

Designing and Coding

This is where the real work begins, and the algorithms and the flowcharts of the software are devised. Based on the data collected and the feasibility study carried out, the actual coding of the program commences. Without the information gathered in the previous two stages, the design of the program would be impossible. This is the most important stage of the model, and the use of the waterfall model in testing would be impossible without something to actually test. It goes without saying that the final design has to meet all the necessary requirements of the customer.

Testing

Now comes the litmus test of the code developed. This stage marks the actual transition of the program from a mere hypothesis to real, usable software. Without testing the functionality of the code, all the possible bugs cannot be detected. Moreover, use of the waterfall model in testing also ensures that all the requirements of the customer are satisfactorily met and that there are no loose ends anywhere in the code. If any flaws or bugs are detected, the software is reverted to the design stage and the deficiencies are fixed.

The coded program is divided into smaller parts known as units, and unit testing is carried out for each of these units individually. Once the units are declared flaw-free, they are integrated into the final system, which is then tested to ensure proper integration and compatibility between the various units. Testing in the waterfall model can only be done by dividing the coded program into manageable parts. Thus, the importance of the testing phase in the waterfall model is universally acknowledged.


Final Acceptance

Once the design has been tried and tested by the testing team, the customers are given a demo version of the final program. They must now use the program and indicate whether they are satisfied with the product. If they accept that the software is satisfactory and meets their demands and requirements, the process is complete. On the other hand, if they are dissatisfied with certain aspects of the software, or feel that an integral component is missing, the design team proceeds to solve the problem. The benefit of dividing the work into these various stages is that everyone knows what they are doing and is specifically trained to carry out their responsibility.

The waterfall model in testing ensures that a high degree of professionalism is maintained within the development process and that all the parties involved are specialists in their respective fields.

Advantages of Waterfall Model in Testing

The primary advantage is that it is a linear model and follows a proper sequence and order. This is a crucial factor in the model's effectiveness and suitability. Also, since the process follows a linear sequence and documentation is produced at every stage, it is easy to track down mistakes, deficiencies, and any other problems that arise. The cost of resources at each stage is also minimized by the linear sequencing.

Disadvantages of Waterfall Model in Testing

As with all other models, if the customer is ambiguous about his or her needs, the design process can go horribly wrong. This is compounded by the fact that if a mistake made in one stage is not detected, all the subsequent stages will go wrong, so the need for testing is very intense. Customers often complain that if they could get a sample of the software in the early stages, they could tell whether it is suitable or not; since they do not receive the program until it is almost complete, it becomes more complicated for them to offer feedback. Thus, complete trust from the client is essential.


Software Verification & Validation Model - An Introduction

An introduction to the ‘Verification & Validation Model’ used to improve the software project development life cycle.

A software product is built well when every step is taken with full consideration that ‘the right product is developed in the right manner’. ‘Software Verification & Validation’ is one such model; it helps system designers and test engineers confirm that the right product is built the right way throughout the development process, and it improves the quality of the software product.

The ‘Verification & Validation Model’ makes sure that certain rules are followed during the development of a software product and that the product developed fulfills the required specifications. This reduces the risk associated with any software project to a certain extent by helping in the detection and correction of errors and mistakes that are unknowingly made during the development process.

What is Verification?

The standard definition of verification is: "Are we building the product RIGHT?" i.e. verification is a process that makes sure the software product is developed the right way. The software should conform to its predefined specifications; as product development goes through different stages, an analysis is done to ensure that all the required specifications are met.


The methods and techniques used in verification and validation shall be designed carefully, and their planning starts right from the beginning of the development process. The verification part of the ‘Verification and Validation Model’ comes before validation and incorporates software inspections, reviews, audits, walkthroughs, buddy checks, etc. in each phase of verification (every phase of verification is a phase of the testing life cycle).

During verification, the work product (the completed part of the software being developed, along with various documentation) is reviewed and examined personally by one or more persons in order to find and point out defects in it. This process helps in the prevention of potential bugs which could cause the project to fail.

A few terms involved in verification:

Inspection: Inspection involves a team of about 3-6 people, led by a leader, which formally reviews the documents and work product during various phases of the product development life cycle. The work product and related documents are presented to the inspection team, whose members bring different perspectives to the review. The bugs detected during the inspection are communicated to the next level so that they can be taken care of.

Walkthroughs: A walkthrough can be considered the same as an inspection but without formal preparation (of any presentation or documentation). During the walkthrough meeting, the presenter/author introduces the material to all the participants in order to make them familiar with it. Although walkthroughs can help in finding potential bugs, they are mainly used for knowledge sharing and communication.

Buddy Checks:

This is the simplest type of review activity used to find bugs in a work product during verification. In a buddy check, one person goes through the documents prepared by another person in order to find mistakes, i.e. bugs which the author could not find previously.

The activities involved in the verification process are: requirement specification verification, functional design verification, internal/system design verification, and code verification (these phases can also be subdivided further). Each activity makes sure that the product is developed the right way and that every requirement, specification, design, and piece of code is verified.

What is Validation?

Validation is the process of finding out whether the right product is being built, i.e. whatever software product is being developed, it should do what the user expects it to do. The software product should functionally do what it is supposed to and satisfy all the functional requirements set by the user. Validation is done during or at the end of the development process in order to determine whether the product satisfies the specified requirements.

Validation and verification processes go hand in hand, but typically the validation process starts after the verification process ends (after coding of the product ends). Each verification activity (such as requirement specification verification, functional design verification, etc.) has its corresponding validation activity (such as functional validation/testing, code validation/testing, system/integration validation, etc.).


All types of testing methods are basically carried out during the validation process. Test plans, test suites, and test cases are developed and used during the various phases of the validation process. The phases involved in the validation process are: code validation/testing, integration validation/integration testing, functional validation/functional testing, and system/user acceptance testing/validation.

Terms used in Validation process:

Code Validation/Testing:

Developers as well as testers do code validation. Unit code validation, or unit testing, is a type of testing which the developers conduct in order to find bugs in the code unit/module they have developed. Code testing other than unit testing can be done by either testers or developers.
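As an illustration, here is a minimal, hypothetical unit test sketch using Python's built-in unittest framework. The `calculate_discount` function is an assumed example unit under test, not something from this document:

```python
# Minimal unit-test sketch. calculate_discount() is a hypothetical unit
# under test, used only to illustrate what unit code validation looks like.
import unittest


def calculate_discount(amount, percent):
    """Return the discounted amount; reject invalid percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return amount * (1 - percent / 100)


class TestCalculateDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertAlmostEqual(calculate_discount(200.0, 10), 180.0)

    def test_invalid_percentage_is_rejected(self):
        with self.assertRaises(ValueError):
            calculate_discount(200.0, 150)


if __name__ == "__main__":
    unittest.main()
```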

Integration Validation/Testing:

Integration testing is carried out in order to find out whether different (two or more) units/modules work together properly. This test helps in finding defects in the interfaces between the different modules.
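A minimal, hypothetical sketch of such an interface check between two assumed modules (a `CartModule` and a `PricingModule`); only the interaction between them is verified:

```python
# Integration-test sketch (pytest style): verifies the interface between two
# assumed, illustrative modules, CartModule and PricingModule.
class PricingModule:
    def price_of(self, item):
        # Assumed price table, for illustration only.
        return {"pen": 2.0, "notebook": 5.0}[item]


class CartModule:
    def __init__(self, pricing):
        self.pricing = pricing
        self.items = []

    def add(self, item):
        self.items.append(item)

    def total(self):
        # Interface under test: CartModule relies on PricingModule.price_of().
        return sum(self.pricing.price_of(item) for item in self.items)


def test_cart_and_pricing_integrate():
    cart = CartModule(PricingModule())
    cart.add("pen")
    cart.add("notebook")
    assert cart.total() == 7.0
```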

Functional Validation/Testing:

This type of testing is carried out to find out whether the system meets the functional requirements. In this type of testing, the system is validated for its functional behavior. Functional testing does not deal with the internal coding of the project; instead, it checks whether the system behaves as per the expectations.

User Acceptance Testing or System Validation:

In this type of testing, the developed product is handed over to the users/paid testers in order to test it in a real-time scenario. The product is validated to find out whether it works according to the system specifications and satisfies all the user requirements. As the users/paid testers use the software, previously undiscovered bugs may come up; these are communicated to the developers to be fixed, which helps improve the final product.


Spiral Model - A New Approach Towards Software Development

The Waterfall model is the simplest and most widely accepted and followed software development model, but like any other system, the Waterfall model has its own pros and cons. The Spiral model for software development was designed in order to overcome the disadvantages of the Waterfall model.

In the last article we discussed the Waterfall model, which is one of the oldest and simplest models designed and followed during the software development process. But the Waterfall model has its own disadvantages: there is no fair division of phases in the life cycle, and not all the errors and problems related to a phase are resolved during that same phase. Instead, problems from one phase are carried forward into the next phase and have to be resolved there, which takes up much of the next phase's time. The risk factor is the most important aspect affecting the success rate of software developed by following the Waterfall model.

In order to overcome the cons of the Waterfall model, it was necessary to develop a new software development model which could help in ensuring the success of a software project. One such model was developed which incorporated the common methodologies followed in the Waterfall model but also eliminated almost every possible known risk factor from it. This model is referred to as the Spiral model, or Boehm's model.

There are four phases in the "Spiral Model" which are: Planning, Evaluation, Risk Analysis and Engineering. These four phases are iteratively followed one after other in order to eliminate all the problems, which were faced in "The Waterfall Model". Iterating the phases helps in understating the problems associated with a phase and dealing with those problems when the same phase is repeated next time, planning and developing strategies to be followed while iterating through the phases. The phases in "Spiral Model" are:

Plan: In this phase, the objectives, alternatives and constraints of the project are determined and are documented. The objectives and other specifications are fixed in order to decide which strategies/approaches to follow during the project life cycle.

Risk Analysis: This phase is the most important part of the Spiral model. In this phase, all possible (and available) alternatives which can help in developing a cost-effective project are analyzed, and strategies for using them are decided. This phase has been added specifically in order to identify and resolve all possible risks in the project development. If the risks indicate any kind of uncertainty in the requirements, prototyping may be used to proceed with the available data and find a possible solution for dealing with potential changes in the requirements.

Engineering: In this phase, the actual development of the project is carried out. The output of this phase is passed through all the phases iteratively in order to obtain improvements in the same.

Customer Evaluation: In this phase, the developed product is passed on to the customer in order to receive the customer's comments and suggestions, which help in identifying and resolving potential problems and errors in the software. This phase is very similar to the testing phase.

The process progresses in a spiral, indicating the iterative path followed; progressively more complete software is built as we keep iterating through all four phases. The first iteration in this model is considered to be the most important, as almost all possible risk factors, constraints, and requirements are identified in it, and in subsequent iterations all known strategies are used to build a complete software system. The radial dimension indicates the evolution of the product towards a complete system.

However, as every system has its pros and cons, the Spiral model has them too. Because this model was developed to overcome the disadvantages of the Waterfall model, following the Spiral model requires people who are highly skilled in planning, risk analysis and mitigation, development, customer relations, and so on. This, along with the fact that the process needs to be iterated more than once, demands more time and makes it a somewhat expensive approach.


Rational Unified Process (RUP) Methodology

The Rational Unified Process (RUP) is a software process product designed as an object-oriented and web-enabled program development methodology by Rational Software Corporation, which has been a division of IBM since 2003. This article provides a brief overview of the Rational Unified Process (RUP) methodology.

The Rational Unified Process (RUP) methodology is a software engineering approach which combines development artifacts such as manuals, documents, code, and models with the procedural aspects of development, such as techniques, mechanics, defined stages, and practices, within a unified framework.

What is RUP?

Rational Unified Process (RUP) methodology is fast becoming a popular software development approach for mapping business processes and practices. Development is phased into four stages. The RUP methodology is highly flexible in its developmental path, as any stage can be revisited at any time. The first stage, inception, centers on assessing the needs, requirements, viability, and feasibility of the program or project. The second stage, elaboration, measures the appropriateness of the system's architecture based on the project needs. The third stage is the construction phase, wherein the actual software system is built by developing components and features; this phase also includes the first release of the developed software. The final stage is that of transition, and marks the end of the development cycle if all objectives are met. This phase deals with the training of the end users, beta testing, and the final implementation of the system.

Understanding RUP: Six Best Industry Practices of RUP

RUP is designed to incorporate six of the best software industry practices for software development, while placing a strong emphasis on object-oriented design. These are six ideas which, when followed while designing any software project, will reduce errors and faults and ensure optimal productivity. The practices are listed below:

Develop Iteratively

Loops (iterations) are created to add extra information or to accommodate processes that are added later in the development.

Requirements

Gathering requirements is essential to the success of any project. The end users' needs have to be built into the system completely.

Components

Large projects, when split into components, are easier to test and can be integrated more methodically into a larger system. Components allow code reuse through object-oriented programming.

Visual Design Models

Many projects use Unified Modeling Language (UML) to perform object-oriented analysis and designs, which consist of diagrams to visually represent all major components.


Quality and Defects Management

Testing for quality and defects is an integral part of software development. There are also a number of testing patterns that should be developed, to gauge the readiness of the project for its release.

Synchronized Changes

All components created by separate teams, either from different locations or on different platforms need to be synchronized and verified constantly.

The Rational Unified Process (RUP) methodology's developmental approach has proved to be very resourceful and successful for a number of reasons. The entire development process takes changing requirements into account and integrates them. Risks and defects can not only be discovered but also addressed, and reduced or eliminated, in the middle of the integration process. As defects are detected along the way, errors and performance bottlenecks can be rectified over the several iterations (loops). RUP produces a prototype at the completion of each iteration, which makes it easier for the developers to synchronize and implement changes.

The Rational Unified Process (RUP) methodology is designed to work as an online help resource that provides content, guidelines, process templates, and examples for all stages of program development. To be a certified solution designer authorized to use this methodology, one needs to score a minimum of 62% in the IBM RUP certification examination.


What is Rational Unified Process (RUP)

The Rational Unified Process (RUP) is a comprehensive software engineering process. It features a disciplined approach to industry-tested practices for designing software and systems within a development organization. Continue reading if you want to know what the Rational Unified Process (RUP) is.

The concept of the Rational Unified Process (RUP) came from the Rational Software Corporation, a division of IBM (International Business Machines Corporation). It keeps a check on effective project management and high-quality production of software. The basic methodology followed in RUP is based on comprehensive, web-enabled program development and an object-oriented approach. The Rational Unified Process adopts the Unified Modeling Language and provides best-practice guidelines, templates, and illustrations of all aspects of program development. Here is a simple breakdown of all the aspects related to this concept, so as to give you a brief understanding of what the Rational Unified Process (RUP) is.

There are primarily four phases or stages of development in RUP, each of which is concluded with a release. Here is a quick review of all four stages or cycles.

Inception Phase

In the inception phase, the goal is to develop the parent idea into a product vision by defining its scope and the business case. The business case includes business context, factors influencing success, risk assessment and financial forecast. This is to get an understanding of the business drivers and to justify the launch of the project. This phase is to identify the work flows required by the project.

Elaboration Phase

Here the architectural foundation, project plan, and high-risk factors of the project are determined after analyzing the problem domain. Establishing these objectives requires an in-depth knowledge of the system; in other words, the performance requirements, scope, and functionality of the system influence the architectural concept of the project. Architectural and planning decisions are governed by the most critical use cases, so a thorough understanding of the use cases and an articulated vision are what the elaboration phase aims to achieve. This is an important phase, because after it the project is carried to a level where changes might cause a disastrous outcome for the entire operation.

Construction Phase

As the name suggests, this phase involves the construction of the software system or project. Here, the remaining components and application features are developed and integrated into the product, which is moved from an architectural baseline to a completed system. In short, the source code and the application design are created in preparation for the software's transition to the user community. The construction phase ends with the first external release of the software, wherein adequate quality is achieved rapidly with optimal use of resources.

Transition Phase

The transition phase marks the transition of the project from development to production. This stage ensures that the user requirements have been satisfied and met by the product. This begins with testing the product before its release as a beta version. The beta version is then enhanced through bug fixing, site preparation, completion of manuals, defect identification, and improvements in performance and usability. Other objectives are also taken up, including:

Training users and maintainers for the successful operation of the system

Purchasing hardware

Converting data from old systems to new ones

Arranging activities for the successful launch of the product

Holding lessons-learned sessions for improving future processes and the tool environment.

Rational Unified Process mentions six best practices, which have to be kept in mind when designing any software. These practices help prevent flaws in the project development and create more scope for efficient productivity. These six practices are as follows.

1. An iterative approach (executing the same set of instructions repeatedly until a specified result is obtained) towards software development.

2. Managing user requirements.

3. Using and testing individual components before they are integrated into a larger system.

4. Using the 'Unified Modeling Language' to get a visual model of the components, the users, and their interactions in the project.

5. Constant testing of the software quality, which is considered one of the best practices in any software development.

6. Monitoring, tracking, and controlling changes made to the system, which is essential for successful iterative development and for the team to work together as a single unit.

The concept of the Rational Unified Process has endless explanations and descriptions; every important and essential consideration in software development has been defined down to its roots. RUP results in reduced IT costs, improved IT business, higher quality, higher service levels, sharper adaptability and, most importantly, higher ROI (return on investment), among many other benefits. The above is just a brief theoretical explanation of what RUP is; a clearer and more elaborate idea can be gained once the process is put into practical use.


Quality Assurance Certification

The process of quality assurance helps in testing the products and services as per the desired standards and the needs of customers. The quality assurance certifications for the software industry, organic food, and many other products are discussed in the following article.

In short, the activity or process that proves the suitability of a product for the intended purpose could be described as quality assurance. The quality assurance process takes care of the quality of the products and ensures that customer requirements pertaining to the products are met. The certifications used to assess different products have different parameters which should be understood thoroughly. Total quality management is vital for the survival and profitability of business nowadays.

Quality Assurance Certification (Software Industry)

Certifications like the ISO and CMMi, P-CMM, etc., are some of the most sought after quality certifications in the IT-ITES sector. The quality assurance procedures are implemented in software testing.

International Organization for Standardization (ISO): ISO standards are widely used for quality assurance. The ISO 9000 family makes use of different documents for quality assurance, namely ISO 9001, ISO 9002, and ISO 9003, with ISO 9000 providing supporting guidelines. ISO 9001 covers design, production, installation, and maintenance/servicing, while ISO 9002 is used for production and installation only. Final inspection and testing are covered by the ISO 9003 model.

Capability Maturity Model Integration (CMMi): CMMi acts as a guiding force in the improvement of an organization's processes. The management of development, acquisition, and maintenance of a company's products and services is also improved with the help of CMMi. A set of proven practices is placed in a structure used to assess a company's process area capability and organizational maturity. With the help of CMMi, priorities for improvement are established and it is ensured that these priorities are implemented properly.

People Capability Maturity Model (P-CMM): The P-CMM model is similar to the SW-CMM (Software Capability Maturity Model) in its approach. The objective of P-CMM is to improve an organization's software development capability by attracting, developing, motivating, organizing, and retaining the required talent. The management and development of a company's workforce is guided by the P-CMM model, which makes use of the best current practices in organizational and human resource development to achieve its objectives.

eServices Capability Model (eSCM): The eSCM model serves the needs of the BPO/ITES industries. It is used to assist customers in measuring a service provider's capability, a measurement that is needed for establishing and managing outsourcing relationships that improve continually.

BS 7799: This is a security standard which originated in the mid-nineties and, by the year 2000, had evolved into a model known as BS EN ISO 17799. It is difficult to comply with the requirements of this model, since it covers security issues comprehensively and contains a significant number of control requirements.

QAI Certification Program

The Quality Assurance International (QAI) is an agency which awards the organic certification to the Producers, Private labelers, Processors, Retailers, Distributors and other 'links' involved in the production of organic food.

Food and Drug Administration Certification

The Food and Drug Administration (FDA) of the US awards quality assurance certification for the food products which comply with the performance and safety standards. The FDA certifies different types of products like the dietary supplements, drugs & vaccines, medical devices, animal drugs & food, cosmetic products, etc.

Canadian Standards Association (CSA) International

The various products certified under the CSA are building products, heating & cooling equipment, concrete products, home equipment, health care equipment, gas appliances, etc. Rigorous tests are conducted in order to award the quality assurance certificates.

The quality control and quality assurance certifications help in developing customers' trust in a particular product. The quality assurance certificates awarded by various agencies also act as a motivation for industries to maintain the required standards. This short account of the various certifying agencies should help the people concerned in these industries.


Software Testing - Test Cases

What are test cases in software testing, how are they designed, and why are they so important to the entire testing effort? Read on to know more.

What is a Test Case?

A test case is a set of conditions, variables, and inputs developed to achieve a particular goal or objective on a certain application, in order to judge its capabilities or features.

It might take more than one test case to determine the true functionality of the application being tested. Every requirement or objective to be achieved needs at least one test case. Some software development methodologies, like the Rational Unified Process (RUP), recommend creating at least two test cases for each requirement or objective: one for testing from a positive perspective and the other from a negative perspective, as the sketch below illustrates.
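As a hedged illustration of that recommendation, here is a hypothetical positive/negative pair of test cases for a single login requirement, written with Python's unittest; the `login` function is assumed purely for the example:

```python
# Hypothetical positive/negative test case pair for a single requirement:
# "a registered user can log in with valid credentials".
import unittest


def login(username, password):
    """Assumed system under test: returns True only for one known account."""
    return (username, password) == ("alice", "s3cret")


class TestLoginRequirement(unittest.TestCase):
    def test_login_succeeds_with_valid_credentials(self):  # positive perspective
        self.assertTrue(login("alice", "s3cret"))

    def test_login_fails_with_invalid_password(self):  # negative perspective
        self.assertFalse(login("alice", "wrong-password"))


if __name__ == "__main__":
    unittest.main()
```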

Test Case Structure

A formal written test case comprises three parts:

1. Information

Information consists of general information about the test case: the identifier, test case creator, test case version, name of the test case, purpose or brief description, and test case dependencies.

2. Activity

Activity consists of the actual test case activities: information about the test case environment, activities to be done at test case initialization, activities to be done after the test case is performed, step-by-step actions to be taken while testing, and the input data that is to be supplied for testing.

3. Results

Results are the outcomes of a performed test case. Results data consist of information about the expected results and the actual results.
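The three-part structure described above could be represented, for example, as a simple record. This is a hypothetical sketch, not a prescribed format:

```python
# Hypothetical sketch of a formal test case record with the three parts
# described above: Information, Activity, and Results.
from dataclasses import dataclass, field


@dataclass
class TestCase:
    # 1. Information
    identifier: str
    name: str
    creator: str
    version: str
    purpose: str
    dependencies: list = field(default_factory=list)

    # 2. Activity
    environment: str = ""
    setup_steps: list = field(default_factory=list)
    test_steps: list = field(default_factory=list)
    input_data: dict = field(default_factory=dict)
    teardown_steps: list = field(default_factory=list)

    # 3. Results
    expected_result: str = ""
    actual_result: str = ""


# Example usage with invented values.
tc = TestCase(
    identifier="TC-LOGIN-001",
    name="Valid login",
    creator="tester-a",
    version="1.0",
    purpose="Verify that a registered user can log in",
    test_steps=["Open login page", "Enter valid credentials", "Click Login"],
    input_data={"username": "alice", "password": "s3cret"},
    expected_result="User is taken to the home page",
)
```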

Designing Test Cases

Test cases should be designed and written by someone who understands the function or technology being tested. A test case should include the following information -

Purpose of the test

Software and hardware requirements (if any)

Specific setup or configuration requirements

Description of how to perform the test(s)

Expected results or success criteria for the test

Designing test cases can be time-consuming within a testing schedule, but it is time well spent, because good test cases can avoid, or at least reduce, unnecessary retesting and debugging. Organizations can apply the test case approach in their own context and according to their own perspectives: some follow a general stepwise approach, while others opt for a more detailed and complex one. It is very important to decide between the two extremes and judge what would work best for you. Designing proper test cases is vital for your software testing plans, as many bugs, ambiguities, inconsistencies, and slip-ups can be caught in time, and it saves you time otherwise spent on continuous debugging and retesting.


Software Testing - Contents of a Bug

Complete list of contents of a bug/error/defect that are needed at the time of raising a bug during software testing. These fields help in identifying a bug uniquely.

When a tester finds a defect, he/she needs to report a bug and enter certain fields which help in uniquely identifying the bug. The contents of a bug are given below:

Project: Name of the project under which the testing is being carried out.

Subject: A short description of the bug which helps in identifying it. This generally starts with the project identifier number/string. The string should be clear enough to help the reader anticipate the problem/defect for which the bug has been reported.

Description: Detailed description of the bug. This generally includes the steps that are involved in the test case and the actual results. At the end of the summary, the step at which the test case fails is described along with the actual result obtained and expected result.

Summary: This field contains some keyword information about the bug, which can help in minimizing the number of records to be searched.

Detected By: Name of the tester who detected/reported the bug.

Assigned To: Name of the developer who is supposed to fix the bug. Generally this field contains the name of the developer group leader, who then delegates the task to a member of his/her team and changes the name accordingly.


Test Lead: Name of leader of testing team, under whom the tester reports the bug.

Detected in Version: This field contains the version information of the software application in which the bug was detected.

Closed in Version: This field contains the version information of the software application in which the bug was fixed.

Date Detected: Date at which the bug was detected and reported.

Expected Date of Closure: Date at which the bug is expected to be closed. This depends on the severity of the bug.

Actual Date of Closure: As the name suggests, actual date of closure of the bug i.e. date at which the bug was fixed and retested successfully.

Priority: Priority of the bug fix. This depends specifically upon the functionality that the bug is hindering. Generally Low, Medium, High, and Urgent are the priority levels used.

Severity: This is typically a numerical field which indicates the severity of the bug. It can range from 1 to 5, where 1 is the highest severity and 5 is the lowest.

Status: This field displays current status of the bug. A status of ‘New’ is automatically assigned to a bug when it is first time reported by the tester, further the status is changed to Assigned, Open, Retest, Pending Retest, Pending Reject, Rejected, Closed, Postponed, Deferred etc. as per the progress of bug fixing process.

Bug ID: This is a unique ID i.e. number created for the bug at the time of reporting, which identifies the bug uniquely.

Attachment: Sometimes it is necessary to attach screenshots of the tested functionality. These help the tester explain the testing that was done, and also help the developers recreate similar testing conditions.

Test Case Failed: This field contains the test case that is failed for the bug.

Any of the above fields can be made mandatory, in which case the tester has to enter valid data for them at the time of reporting a bug. Making a field mandatory or optional depends on the company's requirements and can change at any point in a software testing project. A sketch of how these fields might be captured follows.
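As a hedged illustration, the fields listed above could be captured in a structure like the following sketch; the field names mirror the list above, and the values are invented purely for the example:

```python
# Hypothetical sketch of a bug record using the fields described above.
from dataclasses import dataclass, field


@dataclass
class BugReport:
    bug_id: str
    project: str
    subject: str
    description: str
    summary: str
    detected_by: str
    assigned_to: str
    test_lead: str
    detected_in_version: str
    date_detected: str
    priority: str          # e.g. Low, Medium, High, Urgent
    severity: int          # e.g. 1 (highest) to 5 (lowest)
    status: str = "New"
    closed_in_version: str = ""
    expected_date_of_closure: str = ""
    actual_date_of_closure: str = ""
    attachments: list = field(default_factory=list)
    test_case_failed: str = ""


# Example usage with invented values.
bug = BugReport(
    bug_id="PRJ-1024",
    project="Online Store",
    subject="PRJ: login button unresponsive on second click",
    description="Steps, actual result and expected result go here.",
    summary="login button unresponsive",
    detected_by="tester-a",
    assigned_to="dev-lead",
    test_lead="test-lead",
    detected_in_version="1.2.0",
    date_detected="2011-06-01",
    priority="High",
    severity=2,
)
```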


Software Testing - Bug Life Cycles

Various life cycles that a bug passes through during a software testing process.

What is a Bug Life Cycle?

The duration or time span between the first time a bug is found (status: ‘New’) and the time it is closed successfully (status: ‘Closed’), rejected, postponed, or deferred is called the ‘Bug/Error Life Cycle’.

(Right from the first time a bug is detected until the point when it is fixed and closed, it is assigned various statuses: New, Open, Postpone, Pending Retest, Retest, Pending Reject, Reject, Deferred, and Closed. For more information about the various statuses used for a bug during its life cycle, you can refer to the article ‘Software Testing – Bug & Statuses Used During A Bug Life Cycle’.)

There are seven different life cycles that a bug can pass through:

Cycle I:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid or not.
3) The Test Lead finds that the bug is not valid, and the bug is ‘Rejected’.

Cycle II:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid or not.
3) The bug is verified and reported to the development team with the status ‘New’.
4) The development leader and team verify whether it is a valid bug. The bug is invalid and is marked with the status ‘Pending Reject’ before being passed back to the testing team.
5) After getting a satisfactory reply from the development side, the Test Lead marks the bug as ‘Rejected’.

Cycle III:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid or not.
3) The bug is verified and reported to the development team with the status ‘New’.
4) The development leader and team verify whether it is a valid bug. The bug is valid, and the development leader assigns a developer to it, marking the status ‘Assigned’.
5) The developer solves the problem, marks the bug as ‘Fixed’, and passes it back to the development leader.
6) The development leader changes the status of the bug to ‘Pending Retest’ and passes it on to the testing team for retesting.
7) The Test Lead changes the status of the bug to ‘Retest’ and passes it to a tester for retesting.
8) The tester retests the bug and it is working fine, so the tester closes the bug and marks it ‘Closed’.

Cycle IV:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid or not.
3) The bug is verified and reported to the development team with the status ‘New’.
4) The development leader and team verify whether it is a valid bug. The bug is valid, and the development leader assigns a developer to it, marking the status ‘Assigned’.
5) The developer solves the problem, marks the bug as ‘Fixed’, and passes it back to the development leader.
6) The development leader changes the status of the bug to ‘Pending Retest’ and passes it on to the testing team for retesting.
7) The Test Lead changes the status of the bug to ‘Retest’ and passes it to a tester for retesting.
8) The tester retests the bug and the same problem persists, so after confirmation from the Test Lead the tester reopens the bug, marks it ‘Reopen’, and passes it back to the development team for fixing.

Cycle V:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid or not.
3) The bug is verified and reported to the development team with the status ‘New’.
4) The developer tries to verify whether the bug is valid but fails to replicate the scenario that existed at the time of testing, and asks the testing team for help.
5) The tester also fails to regenerate the scenario in which the bug was found, so the developer rejects the bug, marking it ‘Rejected’.

Cycle VI:
1) After confirmation that the data or certain functionality is unavailable, the solution and retesting of the bug are postponed indefinitely and the bug is marked ‘Postponed’.

Cycle VII:
1) If the bug is not important and can be (or needs to be) postponed, it is given the status ‘Deferred’.

This way, any bug that is found ends up with a status of Closed, Rejected, Deferred or Postponed.
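The cycles above can be summarized as a set of allowed status transitions. The sketch below is a simplified, assumed model of those transitions, not a complete workflow definition from any particular bug-tracking tool:

```python
# Simplified, assumed sketch of the bug status transitions implied by the
# seven cycles above. A real bug-tracking tool may define more transitions.
ALLOWED_TRANSITIONS = {
    "New":            {"Assigned", "Pending Reject", "Rejected", "Postponed", "Deferred"},
    "Pending Reject": {"Rejected"},
    "Assigned":       {"Fixed"},
    "Fixed":          {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest":         {"Closed", "Reopen"},
    "Reopen":         {"Assigned"},
}


def move(status, new_status):
    """Return the new status if the transition is allowed, else raise an error."""
    if new_status not in ALLOWED_TRANSITIONS.get(status, set()):
        raise ValueError(f"Cannot move bug from {status!r} to {new_status!r}")
    return new_status


# Example: the happy path of Cycle III.
status = "New"
for step in ["Assigned", "Fixed", "Pending Retest", "Retest", "Closed"]:
    status = move(status, step)
print(status)  # Closed
```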


Software Testing - How To Log A Bug (Defect)

A brief introduction to how a bug/defect/error is reported during software testing.

As we have already discussed the importance of software testing in any software development project (to summarize: software testing helps in improving the quality of software and delivering a cost-effective solution that meets customer requirements), it becomes necessary to log a defect in a proper way, track it, and keep a log of defects for future reference.

As a tester tests an application, if he/she finds any defect, the life cycle of the defect starts, and it becomes very important to communicate the defect to the developers in order to get it fixed, keep track of the current status of the defect, and find out whether any similar defect was found in earlier rounds of testing. For this purpose, manually created documents were previously used and circulated to everyone associated with the software project (developers and testers); nowadays many bug-reporting tools are available which help in tracking and managing bugs effectively.

How to report a bug?

It is good practice to take screenshots of the execution of every step during software testing. If any test case fails during execution, it needs to be marked as failed in the bug-reporting tool and a bug has to be reported/logged for it. The tester can choose to first report the bug and then fail the test case in the bug-reporting tool, or to fail the test case and then report the bug. In either case, the Bug ID that is generated for the reported bug should be attached to the failed test case.

At the time of reporting a bug, all the mandatory fields from the contents of a bug (such as Project, Summary, Description, Status, Detected By, Assigned To, Date Detected, Test Lead, Detected in Version, Closed in Version, Expected Date of Closure, Actual Date of Closure, Severity, Priority, and Bug ID) are filled in, and a detailed description of the bug is given along with the expected and actual results. The screenshots taken at the time of executing the test case are attached to the bug for the developer's reference.


After a bug is reported, the bug-reporting tool generates a unique Bug ID, which is then associated with the failed test case and links the bug to it.

After the bug is reported, it is assigned a status of ‘New’, which goes on changing as the bug fixing process progresses.

If more than one tester is testing the software application, it is possible that some other tester has already reported a bug for the same defect. In such a situation, it becomes very important for the tester to find out whether any bug has been reported for a similar defect. If yes, the test case has to be blocked against the previously raised bug (in this case, the test case has to be executed once that bug is fixed). If no such bug has been reported previously, the tester can report a new bug and fail the test case against the newly raised bug.

If no bug-reporting tool is used, the test case is written in a tabular format in a file with four columns: Test Step No, Test Step Description, Expected Result, and Actual Result. The expected and actual results are written for each step, and the test case is failed at the step at which it fails; a sketch of such a file follows.
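For example, such a file could be a simple CSV with the four columns mentioned. This is a hypothetical sketch of how a tester might record one failed test case; the file name and steps are invented:

```python
# Hypothetical sketch: writing a manual test case to a CSV file with the
# four columns mentioned above.
import csv

rows = [
    ["Test Step No", "Test Step Description", "Expected Result", "Actual Result"],
    ["1", "Open the login page", "Login page is displayed", "As expected"],
    ["2", "Enter valid credentials and click Login", "Home page opens", "As expected"],
    ["3", "Click the Logout link", "User is logged out", "FAIL: user stays logged in"],
]

with open("TC-LOGIN-002.csv", "w", newline="") as handle:
    csv.writer(handle).writerows(rows)
```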

This file containing the test case, along with the screenshots taken, is sent to the developers for reference. As the tracking process is not automated, it becomes important to keep the information about the bug up to date from the time it is raised until it is closed.


Software Testing Interview Questions

If you are looking for a job in software testing industry, it is imperative that along with a sound knowledge of the corresponding field, you must also be equipped with the answers for the most likely questions you'll be facing during an interview. We have compiled here a list of some common software testing interview questions. Have a look...

The software testing industry presents a plethora of career opportunities for candidates who are interested in pursuing a career in the software industry. If you are the kind of person who does not enjoy software development, yet is very keen on making a career in the software field, then software testing could be the right option for you. The software testing field offers several job positions in testing, Quality Assurance (QA), Quality Control, etc. However, you need to have your basics in place in order to improve your chances of acquiring a job in this industry.

Preparing for the Interview

Before applying for any IT job, it is imperative that you have a sound understanding of the field you are hoping to venture into. Besides being technically sound, you should also keep yourself abreast of the latest tools and trends in the software testing industry. Remember, software testing is a fast-changing field, so the things you learned in your curriculum may have become obsolete by the time you are ready for a job. There are several types of software testing and software testing methodologies which you must be thorough with before going for an interview. Typically, your set of software testing interview questions will depend upon the particular area of software testing you are interested in. Hence, we have divided the questions into five common categories.

Interview Questions for Software Testing


Software Testing Interview Questions on Product Testing

What will be the test cases for product testing? Give an example of a test plan template.
What are the advantages of working as a tester for a product-based company as opposed to a service-based company?
Do you know how product-based testing differs from project-based testing? Can you give a suitable example?
Do you know what exactly is meant by a Test Plan? Name its contents. Can you give a sample Test Plan for a Login Screen?
How do you differentiate between testing a product and testing a web-based application?
What is the difference between web-based testing and client-server testing?
How do you perform SOAP testing manually?
Explain the significance of the Waterfall model in developing a product.

Software Testing Interview Questions on Quality Assurance

How do you ensure the quality of the product?
What do you do when there isn't enough time for thorough testing?
What are the normal practices of QA specialists with respect to a piece of software?
Can you tell the difference between high-level design and low-level design?
Can you tell us how Quality Assurance differs from Quality Control?
You must have heard the term Risk. Can you explain the term in a few words? What are the major components of risk?
When do you say your project testing is complete? Name the factors.
What do you mean by a walkthrough and an inspection?
What is the procedure for testing the search buttons of a web application, both manually and using QTP 8.2?
Explain Release Acceptance Testing. Explain Forced Error Testing. Explain Data Integrity Testing. Explain System Integration Testing.
How does compatibility testing differ between testing in Internet Explorer and testing in Firefox?

Software Testing Interview Questions on Testing Scenarios

How do you know that all the scenarios for testing are covered?
Can you explain a testing scenario? Also explain scenario-based testing, and give an example to support your answer.
Consider a Yahoo application. What are the test cases you can write?
Differentiate between a test scenario and a test case.
Is it necessary to create a new software requirements document and test planning report if it is a 'migrating project'?
Explain the difference between smoke testing and sanity testing.
What are all the scenarios to be considered while preparing test reports?
What is an 'end to end' scenario?
Other than the requirements traceability matrix, what are the other factors that we need to check in order to exit a testing process?
What is the procedure for finding out the length of an edit box through WinRunner?

Software Testing Interview Questions on Automated Testing

What automated testing tools are you familiar with?
Describe some problems that you encountered while working with an automated testing tool.
What is the procedure for planning test automation?
What is your opinion on the question of whether test automation can improve test effectiveness?
Can you explain data-driven automation?
Name the main attributes of test automation.
Do you think automation can replace manual testing?
How is a tool for test automation chosen? How do you evaluate a tool for test automation?
What are the main benefits of test automation, according to you? Where can test automation go wrong?
Can you describe testing activities? What testing activities do you need to automate?
Describe common issues of test automation.
What types of scripting techniques for test automation are you aware of?
Name the principles of good testing scripts for automation.
What tools can you use to support testing during the software development life cycle?
Can you tell us whether the activities of test case design can be automated?
What are the drawbacks of automated software testing?
What skills are needed to be a good software test automator?

Software Testing Interview Questions on Bug Tracking

Can you have a defect with high severity and low priority, and vice versa, i.e. high priority and low severity? Justify your answer.
Can you explain the difference between a Bug and a Defect? Explain the phases of the bug life cycle.
What are the different types of bugs we normally see in projects? Also include their severity.
What is the difference between a Bug Resolution Meeting and a Bug Review Committee? Who participates in each?
Can you name some recent major computer system failures caused by software bugs?
What do you mean by 'reproducing a bug'? What do you do if the bug is not reproducible?
How can you tell whether a bug is reproducible or not?
On what basis do we give priority and severity to a bug? Provide an example of high priority and low severity, and of high severity and low priority.
Explain the Defect Life Cycle in manual testing.
How do you give a bug title and bug description for ODD Division?
Have you ever heard of a build interval period?

Software testing is a vast field and there is really no dearth of software testing interview questions. You can explore the Internet for more software testing interview questions and of course, the solutions. Hope this article helps you to get the job of your dreams. Good Luck!


Types of Software Testing

Software Testing is a process of executing software in a controlled manner. When the end product is given to the client, it should work correctly according to the specifications and requirements of the software. Defect in software is the variance between the actual and expected results. There are different types of software testing, which when conducted help to eliminate defects from the program.

Testing is a process of gathering information by making observations and comparing them to expectations. – Dale Emery

In our day-to-day life, when we go out shopping for any product such as vegetables, clothes, or pens, we check it before purchasing, for our satisfaction and to get the maximum benefit. For example, when we intend to buy a pen, we test it before actually purchasing it: does it write, does it break if it falls, does it work in extreme climatic conditions, and so on. So, whether it is software, hardware, or any other product, testing turns out to be mandatory.

What is Software Testing? Software testing is the process of verifying and validating that a program performs correctly, with no bugs. It is the process of analyzing or operating software for the purpose of finding bugs, and it also helps to identify the defects/flaws/errors that may appear in the application code and need to be fixed. Testing not only means fixing bugs in the code, but also checking whether the program behaves according to the given specifications and testing strategies. There are various software testing strategies, such as the white box, black box and grey box testing strategies.

Need of Software Testing Types

The types of software testing depend upon the different types of defects they target. For example:

Functional Testing is done to detect functional defects in a system.

Performance Testing is performed to detect defects that appear when the system does not perform according to the specifications.

Usability Testing is done to detect usability defects in the system.

Security Testing is done to detect defects in the security of the system.

The list goes on as we move on towards different layers of testing.

Types of Software Testing

Various software testing methodologies guide you through the different software testing types. If you are new to this subject, an introductory article on software testing for beginners will help. To determine the true functionality of the application being tested, test cases are designed to help the developers; they provide the guidelines for going through the process of testing the software. Software testing includes two basic types: Manual Scripted Testing and Automated Testing.

Manual Scripted Testing: This is considered to be one of the oldest types of software testing, in which test cases are designed and reviewed by the team before being executed manually.

Automated Testing: This software testing type applies automation to various parts of the testing process, such as test case management, test case execution, defect management and defect reporting. The bug life cycle helps the tester decide how to log a bug and guides the developer in deciding its priority, depending on its severity. Logging a bug describes the contents of the defect that is to be fixed, and can be done with the help of bug tracking tools such as Bugzilla and defect management tools such as Test Director.
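
As a minimal, hedged illustration of what an automated test looks like (the calculator class, its method and the test values are hypothetical and not taken from this document), a scripted check such as the following can be executed unattended by a test runner like JUnit, and a failure can then be logged in a bug tracking tool:

import org.junit.Assert;
import org.junit.Test;

public class DiscountCalculatorTest {

    // Hypothetical system under test: a simple discount calculator.
    static class DiscountCalculator {
        // Returns the price after applying a percentage discount.
        static double apply(double price, double discountPercent) {
            return price - (price * discountPercent / 100.0);
        }
    }

    @Test
    public void tenPercentDiscountIsApplied() {
        // Automated check: no human needs to inspect the result.
        Assert.assertEquals(90.0, DiscountCalculator.apply(100.0, 10.0), 0.0001);
    }

    @Test
    public void zeroDiscountLeavesPriceUnchanged() {
        Assert.assertEquals(100.0, DiscountCalculator.apply(100.0, 0.0), 0.0001);
    }
}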

Other Software Testing Types

The software testing life cycle is the process that explains the flow of tests to be carried out at each step of testing the product. The V-Model (Verification and Validation Model) is a widely used model for structuring a software project: it places the software development life cycle on one side and the software testing life cycle on the other. A checklist for the software tester sets a baseline that guides his day-to-day activities.

Black Box Testing

It is the process of giving input to the system and checking the output, without considering how the system generates that output. It is also called Behavioral Testing.

Functional Testing: In this type of testing, the software is tested for the functional requirements. This checks whether the application is behaving according to the specification.

Performance Testing: This type of testing checks whether the system performs properly according to the user's requirements. Performance testing is commonly carried out through Load and Stress Testing, applied internally or externally to the system.

1. Load Testing: In this type of performance testing, the load on the system is steadily increased in order to check how the system performs as higher loads are applied.

2. Stress Testing: In this type of performance testing, the system is tested beyond its normal expectations or operational capacity.

Usability Testing: This type of testing is also called 'Testing for User Friendliness'. It checks the ease of use of an application.

Regression Testing: Regression testing is one of the most important types of testing; it checks that a small change in one component of the application does not affect the unchanged components. Testing is done by re-executing previously run test cases against the new version of the application.

Smoke Testing: Smoke testing is used to check the testability of a build, i.e. whether the application is ready for further major testing and work, without dealing with the finer details. It is also called 'Build Verification Testing' or 'Link Testing'.

Sanity Testing: Sanity testing checks the behavior of the system after minor changes or bug fixes. This type of software testing is also called Narrow Regression Testing.

Parallel Testing: Parallel testing is done by comparing results from two different systems like old vs new or manual vs automated.

Recovery Testing: Recovery testing is very necessary to check how fast the system is able to recover from any hardware failure, catastrophic problem or any type of system crash.

Installation Testing: This type of software testing identifies the ways in which the installation procedure can lead to incorrect results.

Compatibility Testing: Compatibility testing determines whether an application performs as expected under the supported configurations, i.e. with various combinations of hardware and software packages.

Configuration Testing: This testing is done to test for compatibility issues. It determines minimal and optimal configuration of hardware and software, and determines the effect of adding or modifying resources such as memory, disk drives and CPU.

Compliance Testing: This type of testing checks whether the system was developed in accordance with standards, procedures and guidelines.

Error-Handling Testing: This software testing type determines the ability of the system to properly process erroneous transactions.

Manual-Support Testing: This type of software testing covers the manual processes and interfaces between people and the application system.

Inter-Systems Testing: This type of software testing covers the interfaces between two or more application systems.

Exploratory Testing: Exploratory testing is a type of software testing which is similar to ad-hoc testing and is performed to explore the software's features.

Volume Testing: This testing is done when huge amounts of data are processed through the application.

Scenario Testing: This type of software testing provides a more realistic and meaningful combination of functions, rather than artificial combinations that are obtained through domain or combinatorial test design.

User Interface Testing: This type of testing is performed to check how user-friendly the application is. The user should be able to use the application without any assistance from system personnel.

System Testing: System testing is the testing conducted on a complete, integrated system, to evaluate the system's compliance with the specified requirements. This type of software testing validates that the system meets its functional and non-functional requirements and is also intended to test beyond the bounds defined in the software/hardware requirement specifications.

User Acceptance Testing: Acceptance testing is performed to verify that the product is acceptable to the customer and fulfills the customer's specified requirements. It includes Alpha and Beta testing.

1. Alpha Testing: Alpha testing is performed at the developer's site by the customer in a closed environment. This testing is done after system testing.

2. Beta Testing: This type of software testing is done at the customer's site by the customer in an open environment. The presence of the developer while performing these tests is not mandatory. This is considered to be the last step in the software development life cycle, as the product is almost ready.

White Box Testing

It is the process of giving input to the system and checking how the system processes that input to generate the output. It is mandatory for the tester to have knowledge of the source code.

Unit Testing: This type of testing is done at the developer's site to check whether a particular piece/unit of code is working fine. Unit testing deals with testing the unit as a whole.

Static and Dynamic Analysis: In static analysis, the code is examined without being executed in order to find possible defects, whereas in dynamic analysis the code is executed and its output is analyzed.

Statement Coverage: This type of testing assures that the code is executed in such a way that every statement of the application is executed at least once.

Decision Coverage: This type of testing ensures that every decision (branch) in the code is executed at least once with both a true and a false outcome (see the worked sketch after this list).

Condition Coverage: In this type of software testing, each and every condition within a decision is evaluated as both true and false at least once.

Path Coverage: Each and every independent path within the code is executed at least once to get full path coverage, which is one of the important parts of white box testing.

Integration Testing: Integration testing is performed when various modules are integrated with each other to form a sub-system or a system. It mostly focuses on the design and construction of the software architecture. Integration testing is further classified into Bottom-Up Integration and Top-Down Integration Testing.

1. Bottom-Up Integration Testing: In this type of integration testing, the lowest level components are tested first; 'Drivers' are used to stand in for the higher level components that have not yet been integrated, and testing then moves up to those higher level components.

2. Top-Down Integration Testing: This is the opposite of the bottom-up approach: the top level modules are tested first, and the lower level branches of each module are tested step by step using 'Stubs' until the lowest level modules are reached.

Security Testing: Security testing confirms how well a system protects itself against unauthorized internal or external access and against willful damage to its code and data. It assures that the program is accessed by authorized personnel only.

Mutation Testing: In this type of software testing, small changes (mutations) are deliberately introduced into the application's code, and the existing test cases are re-run to check whether they detect the modified code.
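
The coverage criteria above are easiest to see on a concrete piece of code. The following sketch is illustrative only: the classify method and its inputs are hypothetical and not taken from this document.

public class CoverageSketch {

    // Hypothetical function with one decision built from two conditions.
    static String classify(int month, boolean leapYear) {
        if (month == 2 && leapYear) {
            return "29 days";
        }
        return "28 days or other";
    }

    public static void main(String[] args) {
        // classify(2, true)  -> decision is true (both conditions evaluated as true).
        // classify(2, false) -> decision is false (leapYear evaluated as false).
        // classify(1, false) -> decision is false (month == 2 is false; leapYear short-circuited).
        // Together these three calls execute every statement (statement coverage),
        // make the decision both true and false (decision coverage), and make each
        // individual condition true and false at least once (condition coverage).
        // With a single if/else they also exercise both paths through the method.
        System.out.println(classify(2, true));
        System.out.println(classify(2, false));
        System.out.println(classify(1, false));
    }
}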

Software testing methodologies and the different software testing strategies help you get through this software testing process. These various software testing methods, using the above mentioned software testing types, help you check whether the software satisfies the requirements of the customer. Software testing is indeed a vast subject, and one can make a successful career in this field. You could go through some software testing interview questions and tutorials to prepare yourself.


Software Testing - Brief Introduction To Security Testing

Security testing is an important process to ensure that the systems/applications your organization uses meet its security policies and are free from any loopholes that could cause the organization a big loss.

Security testing of any developed system (or a system under development) is all about finding out all the potential loopholes and weaknesses of the system, which might result in the loss/theft of highly sensitive information or the destruction of the system by an intruder/outsider. Security testing helps in finding out all the possible vulnerabilities of the system and helps developers fix those problems.

Need of Security Testing

Security testing helps in finding out loopholes that can cause loss of important information and allow an intruder to enter the systems.

Security testing helps in improving the current system and also helps in ensuring that the system will work for a longer time (or that it will work without hassles for the estimated time).

Security testing does not only check the resistance of the systems your organization uses; it also ensures that people in your organization understand and obey the security policies, thereby adding to organization-wide security.

If involved right from the first phase of the system development life cycle, security testing can help in eliminating flaws in the design and implementation of the system, and in turn help the organization block potential security loopholes at an early stage. This is beneficial to the organization in almost all aspects (financially, and from the security and effort points of view).

Who needs Security Testing?

Nowadays, almost all organizations across the world are equipped with hundreds of computers, connected to each other through intranets and various types of LANs inside the organization itself, and to the outer world through the Internet, along with data storage and handling devices. The information stored in these storage devices and the applications that run on the computers are highly important to the organization from the business, security and survival points of view.

Any organization, small or big, needs to secure the information it possesses and the applications it uses in order to keep its customers' information safe and prevent any possible loss of business.

Security testing ensures that the systems and applications used by the organizations are secure and not vulnerable to any type of attack.

What are the different types of Security Testing? Following are the main types of security testing:

Security Auditing: Security auditing includes direct inspection of the application developed and of the operating system(s) and any other system on which it is being developed. It also involves code walk-throughs.

Security Scanning: It is all about scanning and verification of the system and applications. During security scanning, auditors inspect and try to find out the weaknesses in the OS, applications and network(s).

Vulnerability Scanning: Vulnerability scanning involves scanning the application for all known vulnerabilities. It is generally done with the help of various vulnerability scanning tools.

Risk Assessment: Risk assessment is a method of analyzing and deciding the risk, which depends upon the type of loss and the possibility/probability of its occurrence. It is carried out in the form of interviews, discussions and analysis, and it helps in finding out and preparing possible backup plans for any type of potential risk, hence contributing towards security conformance.

Posture Assessment & Security Testing: This is a combination of security scanning, risk assessment and ethical hacking carried out to reach a conclusive point and help your organization know where it stands in the context of security.

Penetration Testing: In this type of testing, a tester tries to forcibly access and enter the application under test. A tester may try to enter the application/system with the help of some other application, or with the help of a combination of loopholes that the application has unknowingly left open. Penetration testing is highly important, as it is the most effective way to practically find out potential loopholes in the application.

Ethical Hacking: This is a forced intrusion by an external element into the system and applications that are under security testing. Ethical hacking involves a number of penetration tests over the wide network of the system under test.


Manual Testing Interview Questions

The following article takes us through some of the most common manual testing interview questions. Read to know more.

Manual testing is one of the oldest and most effective ways to carry out software testing. Whenever new software is developed, it needs to be tested for its effectiveness, and it is for this purpose that manual testing is required. Manual testing is a type of software testing that forms an important component of the IT job sector; it does not use any automation methods and is therefore tedious and laborious.

Manual testing requires a tester with certain qualities, because the job demands it: he needs to be observant, creative, innovative, speculative, open-minded, resourceful, patient and skillful, along with other qualities that will help him with his job. In the following article we shall not concentrate on what a tester is like, but on what some of the manual testing interview questions are. So if you have a doubt in this regard, read on to know some interview questions on manual testing.

Manual Testing Interview Questions for Freshers

The following are some of the interview questions for manual testing. This will give you a fair idea of what these questions are like.

What is Accessibility Testing?

What is Ad Hoc Testing?

What is Alpha Testing?

What is Beta Testing?

What is Component Testing?

What is Compatibility Testing?

What is Data Driven Testing?

What is Concurrency Testing?

What is Conformance Testing?


What is Context Driven Testing?

What is Conversion Testing?

What is Depth Testing?

What is Dynamic Testing?

What is End-to-End testing?

What is Endurance Testing?

What is Installation Testing?

What is Gorilla Testing?

What is Exhaustive Testing?

What is Localization Testing?

What is Loop Testing?

What is Mutation Testing?

What is Positive Testing?

What is Monkey Testing?

What is Negative Testing?

What is Path Testing?

What is Ramp Testing?

What is Performance Testing?

What is Recovery Testing?

What is Regression Testing?

What is Re-testing?

What is Stress Testing?

What is Sanity Testing?

What is Smoke Testing?

What is Volume Testing?

What is Usability Testing?

What is Scalability Testing?

What is Soak Testing?

What is User Acceptance Testing?


These were some of the manual testing interview questions for freshers; let us now move on to other forms of manual testing questions.

Software Testing Interview Questions for Freshers

Here are some software testing interview questions that will help you get into the more intricate and complex formats of this form of manual testing.

Can you explain the V model in manual testing?

What is the waterfall model in manual testing?

What is the structure of bug life cycle?

What is the difference between bug, error and defect?

How does one add objects into the Object Repository?

What are the different modes of recording?

What does 'testing' mean?

What is the purpose of carrying out manual testing for a background process that does not have a user interface and how do you go about it?

Explain with an example what test case and bug report are.

How does one go about reviewing a test case and what are the types that are available?

What is AUT?

What is compatibility testing?

What is alpha testing and beta testing?

What is the V model?

What is debugging?

What is the difference between debugging and testing? Explain in detail.

What is the fish model?

What is port testing?

Explain in detail the difference between smoke and sanity testing.

What is the difference between usability testing and GUI?

Why does one require object spy in QTP?

What is the test case life cycle?

Why does one save .vbs library files in QTP/WinRunner?

When do we use Update mode in QTP?

What is virtual memory?


What is visual source safe?

What is the difference between test scenarios and test strategy?

What is the difference between properties and methods in QTP?

Why do these manual testing interview questions help? They help you to prepare for what lies ahead. The career opportunities that an IT job provides are greater than what many other fields provide, and if you're from this field then you'll know what I'm talking about, right?


Extreme Programming (XP)

What is Extreme Programming and what's it got to do with testing?

Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with

unstable requirements. It was created by Kent Beck who described the approach in his book 'Extreme

Programming Explained' (See the Softwareqatest.com Books page.). Testing ('extreme testing') is a core aspect of

Extreme Programming. Programmers are expected to write unit and functional test code first - before writing the

application code. Test code is under source control along with the rest of the code. Customers are expected to be

an integral part of the project team and to help develop scenarios for acceptance/black box testing. Acceptance

tests are preferably automated, and are modified and rerun for each of the frequent development iterations. QA

and test personnel are also required to be an integral part of the project team. Detailed requirements

documentation is not used, and frequent re-scheduling, re-estimating, and re-prioritizing is expected.
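
As a minimal sketch of the test-first practice described above (the shopping cart class and its assertion are hypothetical, not taken from the XP literature), a programmer would write a failing unit test like this before writing the production code that makes it pass:

import org.junit.Assert;
import org.junit.Test;

public class ShoppingCartTest {

    // Written first: this test fails until the minimal ShoppingCart
    // implementation below is added afterwards, which is the XP rhythm.
    @Test
    public void totalOfTwoItemsIsTheirSum() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(250);   // prices in cents
        cart.add(175);
        Assert.assertEquals(425, cart.total());
    }

    // Production code added after the test, kept as simple as possible.
    static class ShoppingCart {
        private int total = 0;

        void add(int priceInCents) {
            total += priceInCents;
        }

        int total() {
            return total;
        }
    }
}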


Traceability Matrix

A method used to validate the compliance of a process or product with the requirements for that process or

product. The requirements are each listed in a row of the matrix and the columns of the matrix are used to

identify how and where each requirement has been addressed.

Contents:

Definition

Description

Requirements of Traceability Matrix

Baseline Traceability Matrix

Building a Traceability Matrix

Useful Traceability Matrices

Sample Traceability Matrix

Definition

In a software development process, a traceability matrix is a table that correlates any two baselined documents

that require a many to many relationship to determine the completeness of the relationship. It is often used with

high-level requirements (sometimes known as marketing requirements) and detailed requirements of the software

product to the matching parts of high-level design, detailed design, test plan, and test cases.

Common usage is to take the identifier for each of the items of one document and place them in the left column.

The identifiers for the other document are placed across the top row. When an item in the left column is related to

an item across the top, a mark is placed in the intersecting cell. The number of relationships is added up for each row and each column. This value indicates the mapping of the two items. Zero values indicate that no relationship exists, and it must be determined whether one should be made. Large values imply that the item is too complex and should be simplified.
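
As an illustrative sketch of this counting scheme (the requirement and test case identifiers below are invented for the example), the matrix can be held as a simple map and the per-row totals used to flag requirements with no relationship:

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TraceabilitySketch {
    public static void main(String[] args) {
        // Requirements in the left column, related test cases across the top.
        Map<String, List<String>> matrix = new LinkedHashMap<>();
        matrix.put("U1", List.of("TC-01", "TC-02"));
        matrix.put("U2", List.of("TC-03"));
        matrix.put("U3", List.of());   // zero relationships: decide whether one must be made

        for (Map.Entry<String, List<String>> row : matrix.entrySet()) {
            int count = row.getValue().size();
            String note = (count == 0) ? "  <-- no trace" : "";
            System.out.println(row.getKey() + " -> " + row.getValue() + " (" + count + ")" + note);
        }
    }
}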

To ease the creation of traceability matrices, it is advisable to add the relationships to the source documents

for both backward traceability and forward traceability. In other words, when an item is changed in one baselined

document, it's easy to see what needs to be changed in the other.


Description

A table that traces the requirements to the system deliverable component for that stage that responds to the

requirement.

Size and Format

For each requirement, identify the component in the current stage that responds to the requirement. The

requirement may be mapped to such items as a hardware component, an application unit, or a section of a design

specification.

Traceability Matrix Requirements

Traceability matrices can be established using a variety of tools including requirements management software,

databases, spreadsheets, or even with tables or hyperlinks in a word processor.

A traceability matrix is created by associating requirements with the work products that satisfy them. Tests are

associated with the requirements on which they are based and the product tested to meet the requirement.


Above is a simple traceability matrix structure. There can be more things included in a traceability matrix than

shown. In traceability, the relationship of driver to satisfier can be one-to-one, one-to-many, many-to-one, or

many-to-many.

Traceability requires unique identifiers for each requirement and product. Numbers for products are established in

a configuration management (CM) plan.

Traceability ensures completeness, that all lower level requirements come from higher level requirements, and

that all higher level requirements are allocated to lower level requirements. Traceability is also used to manage

change and provides the basis for test planning.

Baseline Traceability Matrix

Description

A table that documents the requirements of the system for use in subsequent stages to confirm that all

requirements have been met.

Size and Format

Document each requirement to be traced. The requirement may be mapped to such things as a hardware

component, an application unit, or a section of a design specification.

Building a Traceability Matrix

Use a Traceability Matrix to:

verify and validate system specifications

ensure that all final deliverable documents are included in the system specification, such as process

models and data models

improve the quality of a system by identifying requirements that are not addressed by configuration items

during design and code reviews and by identifying extra configuration items that are not required.

Examples of configuration items are software modules and hardware devices

provide input to change requests and future project plans when missing requirements are identified


provide a guide for system and acceptance test plans of what needs to be tested.

Need for Relating Requirements to a Deliverable

Taking the time to cross-reference each requirement to a deliverable ensures that a deliverable is consistent with

the system requirements. A requirement that cannot be mapped to a deliverable is an indication that something is

missing from the deliverable. Likewise, a deliverable that cannot be traced back to a requirement may mean the

system is delivering more than required.

Use a Traceability Matrix to Match Requirements to a Deliverable

There are many ways to relate requirements to the deliverables for each stage of the system life cycle.

One method is to:

create a two-dimensional table

allow one row for each requirements specification paragraph (identified by paragraph number from the

requirements document)

allow one column per identified configuration item (such as software module or hardware device)

put a check mark at the intersection of row and column if the configuration item satisfies the stated

requirement

Useful Traceability Matrices

Various traceability matrices may be utilized throughout the system life cycle. Useful ones include:

Functional specification to requirements document: It shows that each requirement (obtained from a

preliminary requirements statement provided by the customer or produced in the Concept Definition

stage) has been covered in an appropriate section of the functional specification.

Top level configuration item to functional specification: For example, a top level configuration item,

Workstation, may be one of the configuration items that satisfies the function Input Order Information.

On the matrix, each configuration item would be written down the left hand column and each function

would be written across the top.


Low level configuration item to top level configuration item: For example, the top level configuration

item, Workstation, may contain the low level configuration items Monitor, CPU, keyboard, and network

interface card.

Design specification to functional specification verifies that each function has been covered in the design.

System test plan to functional specification ensures you have identified a test case or test scenario for

each process and each requirement in the functional specification.

Although the construction and maintenance of traceability matrices may be time-consuming, they are a quick

reference during verification and validation tasks.

Sample Traceability Matrix

A traceability matrix is a report from the requirements database or repository. What information the report

contains depends on your need. Information requirements determine the associated information that you store

with the requirements. Requirements management tools capture associated information or provide the capability

to add it.

The examples show forward and backward tracing between user and system requirements. User requirement

identifiers begin with "U" and system requirements with "S." Tracing S12 to its source makes it clear this

requirement is erroneous: it must be eliminated, rewritten, or the traceability corrected.


For requirements tracing and resulting reports to work, the requirements must be of good quality. Requirements

of poor quality transfer work to subsequent phases of the SDLC, increasing cost and schedule and creating disputes

with the customer.

A variety of reports are necessary to manage requirements. Reporting needs should be determined at the start of

the effort and documented in the requirements management plan.


Boundary Value Analysis

Overview

Boundary value analysis is a software testing design technique to determine test cases covering off-by-one errors.

The boundaries of software component input ranges are areas of frequent problems.

Contents:

Introduction

What is Boundary Value Analysis?

Purpose

Applying Boundary Value Analysis

Performing Boundary Value Analysis

Introduction

Testing experience has shown that the boundaries of input ranges to a software component are especially liable to defects. A programmer who implements, for example, the range 1 to 12 at an input, which stands for the months January to December in a date, has in his code a line checking this range. This may look like:

if (month > 0 && month < 13)

But a common programming error may check a wrong range e.g. starting the range at 0 by writing:

if (month >= 0 && month < 13)

For more complex range checks in a program this may be a problem which is not so easily spotted as in the above

simple example.

Definition

Boundary value analysis is a methodology for designing test cases that concentrates software testing effort on cases near the limits of valid ranges. Boundary value analysis is a method which refines equivalence partitioning, and it generates test cases that highlight errors better than equivalence partitioning alone.


The trick is to concentrate software testing efforts at the extreme ends of the equivalence classes. At those points

where input values change from valid to invalid, errors are most likely to occur. As well, boundary value analysis

broadens the portions of the business requirement document used to generate tests. Unlike equivalence

partitioning, it takes into account the output specifications when deriving test cases.

Purpose

The purpose of boundary value analysis is to concentrate the testing effort on error prone areas by accurately pinpointing the boundaries of conditions (e.g., a programmer may specify >, when the requirement states >=).

Applying Boundary Value Analysis

To set up boundary value analysis test cases you first have to determine which boundaries you have at the

interface of a software component. This has to be done by applying the equivalence partitioning technique.

Boundary value analysis and equivalence partitioning are inevitably linked together. For the example of the month

in a date you would have the following partitions:

   ... -2  -1   0 |  1  2  ......  11  12 |  13  14  15 ...
   ---------------|-----------------------|----------------
    invalid       |    valid partition    |   invalid
    partition 1   |                       |   partition 2

Applying boundary value analysis you have to select now a test case at each side of the boundary between two

partitions. In the above example this would be 0 and 1 for the lower boundary as well as 12 and 13 for the upper

boundary. Each of these pairs consists of a "clean" and a "dirty" test case. A "clean" test case should give you a

valid operation result of your program. A "dirty" test case should lead to a correct and specified input error

treatment such as the limiting of values, the usage of a substitute value, or in case of a program with a user

interface, it has to lead to a warning and a request to enter correct data. Boundary value analysis can thus yield six test cases: n-1, n and n+1 for the lower limit, and n-1, n and n+1 for the upper limit.
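
A minimal sketch of these six boundary cases for the month example above (the checking function is hypothetical; only the valid range 1 to 12 comes from the text):

public class MonthBoundarySketch {

    // Range check from the example above: valid months are 1..12.
    static boolean isValidMonth(int month) {
        return month > 0 && month < 13;
    }

    public static void main(String[] args) {
        // n-1, n, n+1 around the lower limit (1) and the upper limit (12)
        // give the six boundary test cases 0, 1, 2 and 11, 12, 13.
        int[] testCases = {0, 1, 2, 11, 12, 13};
        boolean[] expected = {false, true, true, true, true, false};

        for (int i = 0; i < testCases.length; i++) {
            boolean actual = isValidMonth(testCases[i]);
            String verdict = (actual == expected[i]) ? "PASS" : "FAIL";
            System.out.println("month=" + testCases[i] + " expected=" + expected[i]
                    + " actual=" + actual + " -> " + verdict);
        }
    }
}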

A further set of boundaries has to be considered when you set up your test cases. A solid testing strategy also has

to consider the natural boundaries of the data types used in the program. If you are working with signed values

this is especially the range around zero (-1, 0, +1). Similar to the typical range check faults, programmers tend to

have weaknesses in their programs in this range. e.g. this could be a division by zero problem where a zero value


may occur although the programmer always thought the range started at 1. It could be a sign problem when a

value turns out to be negative in some rare cases, although the programmer always expected it to be positive.

Even if this critical natural boundary is clearly within an equivalence partition it should lead to additional test cases

checking the range around zero. A further natural boundary is the natural lower and upper limit of the data type

itself. E.g. an unsigned 8-bit value has the range of 0 to 255. A good test strategy would also check how the

program reacts at an input of -1 and 0 as well as 255 and 256.

The tendency is to relate boundary value analysis more to so-called black box testing, which is strictly checking a software component at its interfaces, without consideration of the internal structures of the software. But looking closer at the subject, there are cases where it also applies to white box testing.

After determining the necessary test cases with equivalence partitioning and subsequent boundary value analysis,

it is necessary to define the combinations of the test cases when there are multiple inputs to a software

component.

Performing Boundary Value Analysis

There are two steps:

1. Identify the equivalence classes.

2. Design test cases.

STEP 1: IDENTIFY EQUIVALENCE CLASSES

Follow the same rules you used in equivalence partitioning. However, consider the output specifications as well.

For example, if the output specifications for the inventory system stated that a report on inventory should indicate

a total quantity for all products no greater than 999,999, then you'd add the following classes to the ones you

found previously:

6. The valid class (0 <= total quantity on hand <= 999,999)

7. The invalid class (total quantity on hand < 0)

8. The invalid class (total quantity on hand > 999,999)

STEP 2: DESIGN TEST CASES

In this step, you derive test cases from the equivalence classes. The process is similar to that of equivalence

partitioning but the rules for designing test cases differ. With equivalence partitioning, you may select any test

case within a range and any on either side of it; with boundary analysis, you focus your attention on cases close to

the edges of the range.

Rules for Test Cases

1. If the condition is a range of values, create valid test cases for each end of the range and invalid test cases just

beyond each end of the range. For example, if a valid range of quantity on hand is -9,999 through 9,999, write test

cases that include:

1. the valid test case quantity on hand is -9,999

2. the valid test case quantity on hand is 9,999

3. the invalid test case quantity on hand is -10,000 and

4. the invalid test case quantity on hand is 10,000

You may combine valid classes wherever possible, just as you did with equivalence partitioning, and, once again,

you may not combine invalid classes. Don't forget to consider output conditions as well. In our inventory

example the output conditions generate the following test cases:

1. the valid test case total quantity on hand is 0

2. the valid test case total quantity on hand is 999,999

3. the invalid test case total quantity on hand is -1 and

4. the invalid test case total quantity on hand is 1,000,000

2. A similar rule applies where the condition states that the number of values must lie within a certain range: select two valid test cases, one for each boundary of the range, and two invalid test cases, one just below and one just above the acceptable range.


3. Design tests that highlight the first and last records in an input or output file.

4. Look for any other extreme input or output conditions, and generate a test for each of them.
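
Pulling the rules above together for the inventory example (the validator methods are hypothetical; the ranges -9,999 to 9,999 and 0 to 999,999 come from the text), the boundary test cases can be enumerated as follows:

public class InventoryBoundarySketch {

    // Hypothetical validators for the two ranges used in the inventory example.
    static boolean isValidQuantityOnHand(int quantity) {
        return quantity >= -9999 && quantity <= 9999;
    }

    static boolean isValidTotalQuantity(int total) {
        return total >= 0 && total <= 999999;
    }

    public static void main(String[] args) {
        // Rule 1 on the input range: valid cases at each end, invalid just beyond each end.
        int[] quantityCases = {-9999, 9999, -10000, 10000};
        // The output condition 0..999,999 treated the same way.
        int[] totalCases = {0, 999999, -1, 1000000};

        for (int q : quantityCases) {
            System.out.println("quantity on hand " + q + " valid? " + isValidQuantityOnHand(q));
        }
        for (int t : totalCases) {
            System.out.println("total quantity " + t + " valid? " + isValidTotalQuantity(t));
        }
    }
}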


Agile Testing

Introduction

Agile software development is a conceptual framework for software engineering that promotes development

iterations throughout the life-cycle of the project.

There are many agile development methods; most minimize risk by developing software in short amounts of time.

Software developed during one unit of time is referred to as an iteration, which may last from one to four weeks.

Each iteration is an entire software project: including planning, requirements analysis, design, coding, testing, and

documentation. An iteration may not add enough functionality to warrant releasing the product to market but the

goal is to have an available release (without bugs) at the end of each iteration. At the end of each iteration, the

team re-evaluates project priorities.

Agile methods emphasize face-to-face communication over written documents. Most agile teams are located in a

single open office sometimes referred to as a bullpen. At a minimum, this includes programmers and their

"customers" (customers define the product; they may be product managers , business analysts, or the clients). The

office may include testers, interaction designers, technical writers, and managers.

Agile methods also emphasize working software as the primary measure of progress. Combined with the

preference for face-to-face communication, agile methods produce very little written documentation relative to

other methods. This has resulted in criticism of agile methods as being undisciplined.

Contents:

History

Principles behind agile methods — The Agile Manifesto

Comparison with other methods

Suitability of agile methods

Agile data

Agile methods and method tailoring

Agile methods

Measuring agility


Criticism

Agile Principles

History


The modern definition of agile software development evolved in the mid 1990s as part of a reaction against

"heavyweight" methods, as typified by a heavily regulated, regimented, micro-managed use of the waterfall model

of development. The processes originating from this use of the waterfall model were seen as bureaucratic, slow,

demeaning, and inconsistent with the ways that software developers actually perform effective work. A case can

be made that agile and iterative development methods are a return to development practice seen early in the

history of software development. Initially, agile methods were called "lightweight methods." In 2001, prominent

members of the community met at Snowbird, Utah, and adopted the name "agile methods." Later, some of these

people formed The Agile Alliance, a non-profit organization that promotes agile development.

Methodologies similar to Agile created prior to 2000 include Scrum (1986), Crystal Clear, Extreme Programming

(1996), Adaptive Software Development, Feature Driven Development, and DSDM (1995).


Extreme Programming (usually abbreviated as "XP") was created by Kent Beck in 1996 as a way to rescue the

struggling Chrysler Comprehensive Compensation (C3) project. While that project was eventually canceled, the

methodology was refined by Ron Jeffries' full-time XP coaching, public discussion on Ward Cunningham's Portland

Pattern Repository wiki and further work by Beck, including a book in 1999. Elements of Extreme Programming

appear to be based on Scrum and Ward Cunningham's Episodes pattern language.

Principles

Agile methods are a family of development processes, not a single approach to software development. In 2001, 17

prominent figures in the field of agile development (then called "light-weight methodologies") came together at

the Snowbird ski resort in Utah to discuss ways of creating software in a lighter, faster, more people-centric way.

They created the Agile Manifesto, widely regarded as the canonical definition of agile development, and

accompanying agile principles.

Some of the principles behind the Agile Manifesto are

Customer satisfaction by rapid, continuous delivery of useful software

Working software is delivered frequently (weeks rather than months)

Working software is the principal measure of progress

Even late changes in requirements are welcomed

Close, daily cooperation between business people and developers

Face-to-face conversation is the best form of communication

Projects are built around motivated individuals, who should be trusted

Continuous attention to technical excellence and good design

Simplicity

Self-organizing teams

Regular adaptation to changing circumstances

The publishing of the manifesto spawned a movement in the software industry known as agile software

development.


In 2005, Alistair Cockburn and Jim Highsmith gathered another group of people — management experts, this time

— and wrote an addendum, known as the PM Declaration of Interdependence.


Comparison with other methods

Agile methods are sometimes characterized as being at the opposite end of the spectrum from "plan-driven" or

"disciplined" methodologies. This distinction is misleading, as it implies that agile methods are "unplanned" or

"undisciplined". A more accurate distinction is to say that methods exist on a continuum from "adaptive" to

"predictive". Agile methods exist on the "adaptive" side of this continuum.

Adaptive methods focus on adapting quickly to changing realities. When the needs of a project change, an adaptive

team changes as well. An adaptive team will have difficulty describing exactly what will happen in the future. The

further away a date is, the more vague an adaptive method will be about what will happen on that date. An

adaptive team can report exactly what tasks are being done next week, but only which features are planned for

next month. When asked about a release six months from now, an adaptive team may only be able to report the

mission statement for the release, or a statement of expected value vs. cost.

Predictive methods, in contrast, focus on planning the future in detail. A predictive team can report exactly what

features and tasks are planned for the entire length of the development process. Predictive teams have difficulty

changing direction. The plan is typically optimized for the original destination and changing direction can cause

completed work to be thrown away and done over differently. Predictive teams will often institute a change

control board to ensure that only the most valuable changes are considered.

Agile methods have much in common with the "Rapid Application Development" techniques from the 1980/90s as

espoused by James Martin and others.

Contrasted with other iterative development methods

Most agile methods share other iterative and incremental development methods' emphasis on building releasable

software in short time periods.

Agile development differs from other development models in that time periods are measured in weeks rather than months and work is performed in a highly collaborative manner; most agile methods also differ by treating their time period as a strict timebox.

Contrasted with the waterfall model

Agile development does not have much in common with the waterfall model. As of 2004, the waterfall model is

still in common use. The waterfall model is the most predictive of the methodologies, stepping through

requirements capture, analysis, design, coding, and testing in a strict, pre-planned sequence. Progress is generally


measured in terms of deliverable artifacts—requirement specifications, design documents, test plans, code

reviews and the like.

The main problem of the waterfall model is the inflexible nature of the division of a project into separate stages, so

that commitments are made early on, and it is difficult to react to changes in requirements. Iterations are

expensive. This means that the waterfall model is likely to be unsuitable if requirements are not well understood

or are likely to change radically in the course of the project.

Agile methods, in contrast, produce completely developed and tested features (but a very small subset of the

whole) every few weeks or months. The emphasis is on obtaining the smallest workable piece of functionality to

deliver business value early, and continually improving it/adding further functionality throughout the life of the

project.

Some agile teams use the waterfall model on a small scale, repeating the entire waterfall cycle in every iteration.

Other teams, most notably Extreme Programming teams, work on activities simultaneously.

Contrasted with Cowboy Coding

Cowboy coding is the absence of a defined method: team members do whatever they feel is right. Agile

development's frequent re-evaluation of plans, emphasis on face-to-face communication, and relatively sparse use

of documents sometimes causes people to confuse it with cowboy coding. Agile teams, however, do follow defined

(and often very disciplined and rigorous) processes.

As with all methodologies, the skill and experience of the users define the degree of success and/or abuse of such

activity. The more rigid controls systematically embedded within a process offer stronger levels of accountability of

the users. The degradation of well-intended procedures can lead to activities often categorized as cowboy coding.


Suitability of agile methods

Although agile methods differ in their practices, they share a number of common characteristics, including iterative

development, and a focus on interaction, communication, and the reduction of resource-intensive intermediate

artifacts. The suitability of agile methods in general can be examined from multiple perspectives. From a product

perspective, agile methods are more suitable when requirements are emergent and rapidly changing; they are less

suitable for systems that have high criticality, reliability and safety requirements, though there is no complete

consensus on this point. From an organizational perspective, the suitability can be assessed by examining three key

dimensions of an organization: culture, people, and communication. In relation to these areas a number of key

success factors have been identified (Cohen et al., 2004):

The culture of the organization must be supportive of negotiation

People must be trusted

Fewer staff, with higher levels of competency

Organizations must live with the decisions developers make

Organizations need to have an environment that facilitates rapid communication between team members

The most important factor is probably project size. As size grows, face-to-face communication becomes more

difficult. Therefore, most agile methods are more suitable for projects with small teams, with fewer than 20 to

40 people. Large scale agile software development remains an active research area.

Another serious problem is that initial assumptions or overly rapid requirements gathering up front may result

in a large drift from an optimal solution, especially if the client defining the target product has poorly formed

ideas of their needs. Similarly, given the nature of human behaviour, it's easy for a single "dominant"

developer to influence or even pull the design of the target in a direction not necessarily appropriate for the

project. Historically, the developers can, and often do, impose solutions on a client then convince the client of

the appropriateness of the solution, only to find at the end that the solution is actually unworkable. In theory,

the rapidly iterative nature should limit this, but it assumes that there's a negative feedback, or even

appropriate feedback. If not, the error could be magnified rapidly.

This can be alleviated by separating the requirements gathering into a separate phase (a common element of

Agile systems), thus insulating it from the developer's influence, or by keeping the client in the loop during

development by having them continuously trying each release. The problem there is that in the real world,


most clients are unwilling to invest this much time. It also makes QAing a product difficult since there are no

clear test goals that don't change from release to release.

In order to determine the suitability of agile methods individually, a more sophisticated analysis is required.

The DSDM approach, for example, provides a so-called ‘suitability-filter’ for this purpose.

The DSDM and Feature Driven Development (FDD) methods are claimed to be suitable for any agile software

development project, regardless of situational characteristics.

A comparison of agile methods will reveal that they support different phases of a software development life-cycle

to varying degrees. This individual characteristic of agile methods can be used as a selection criterion for selecting

candidate agile methods. In general a sense of project speed, complexity, and challenges will guide you to the best

agile methods to implement and how completely to adopt them.

Agile development has been widely documented (see Experience Reports, below, as well as Beck, and Boehm and Turner) as working well for small (<10 developers) co-located teams. Agile development is expected to be particularly

suitable for teams facing unpredictable or rapidly changing requirements.

Agile development's applicability to the following scenarios is open to question:

Large scale development efforts (>20 developers), though scaling strategies and evidence to the contrary

have been described.

Distributed development efforts (non-co-located teams). Strategies have been described in Bridging the

Distance and Using an Agile Software Process with Offshore Development

Mission- and life-critical efforts

Command-and-control company cultures

It is worth noting that several large scale project successes have been documented by organisations such as BT

which have had several hundred developers situated in the UK, Ireland and India, working collaboratively on

projects and using Agile methodologies. While questions undoubtedly still arise about the suitability of some Agile

methods to certain project types, it would appear that scale or geography, by themselves, are not necessarily

barriers to success.

Barry Boehm and Richard Turner suggest that risk analysis be used to choose between adaptive ("agile") and

predictive ("plan-driven") methods. The authors suggest that each side of the continuum has its own home ground:


Agile home ground:

Low criticality

Senior developers

Requirements change very often

Small number of developers

Culture that thrives on chaos

Plan-driven home ground:

High criticality

Junior developers

Requirements don't change too often

Large number of developers

Culture that demands order

Agile data

One of the most challenging parts of an agile project is being agile with data. Typically this is where projects hit

legacy systems and legacy requirements. Many times working with data systems requires lengthy requests to

teams of specialists who are not used to the speed of an agile project and insist on exact and complete

specifications. Typically the database world will be at odds with agile development. The agile framework seeks as

much as possible to remove these bottlenecks with techniques such as generative data models that make change fast. Models for data serve another purpose: often a change to one table column can be a critical issue requiring months to rebuild all the dependent applications.

An agile approach would try to encapsulate data dependencies to go fast and allow change. But ultimately

relational data issues will be important for agile projects and are a common blockage point. As such, agile projects are best suited to environments that do not contain big legacy databases. Even then it is not the end of the world: if you can build your data dependencies to be agile regardless of the legacy systems, you will start to prove the merit of the approach, as all other systems go through tedious changes to catch up with data changes while, in the protected agile data system, the change is trivial.
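
A minimal sketch of what encapsulating a data dependency can look like in practice (the interface, class and data shown are hypothetical and not from the text): application code depends only on a narrow interface, so a schema change is absorbed in one implementing class rather than rippling through every dependent application.

import java.util.List;

public class AgileDataSketch {

    // Application code depends only on this narrow interface, not on the table layout.
    interface CustomerRepository {
        List<String> findNamesByCity(String city);
    }

    // The only place that knows the current data layout; a renamed or split column
    // would be absorbed here instead of forcing changes in every caller.
    static class InMemoryCustomerRepository implements CustomerRepository {
        @Override
        public List<String> findNamesByCity(String city) {
            // A real implementation would run a query here; this sketch returns fixed data.
            return "Springfield".equals(city) ? List.of("A. Tester", "B. Developer") : List.of();
        }
    }

    public static void main(String[] args) {
        CustomerRepository repository = new InMemoryCustomerRepository();
        System.out.println(repository.findNamesByCity("Springfield"));
    }
}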


Agile methods and method tailoring

In the literature, different terms refer to the notion of method adaptation, including ‘method tailoring’, ‘method

fragment adaptation’ and ‘situational method engineering’. Method tailoring is defined as:

A process or capability in which human agents through responsive changes in, and dynamic interplays between

contexts, intentions, and method fragments determine a system development approach for a specific project

situation.

Potentially, almost all agile methods are suitable for method tailoring. Even the DSDM method is being used for

this purpose and has been successfully tailored in a CMM context. Situation-appropriateness can be considered as

a distinguishing characteristic between agile methods and traditional software development methods, with the

latter being relatively much more rigid and prescriptive. The practical implication is that agile methods allow

project teams to adapt working practices according to the needs of individual projects. Practices are concrete

activities and products which are part of a method framework. At a more extreme level, the philosophy behind the

method, consisting of a number of principles, could be adapted (Aydin, 2004).

In the case of XP the need for method adaptation is made explicit. One of the fundamental ideas of XP is that there

is no process that fits every project as such, but rather practices should be tailored to the needs of individual

projects. There are also no experience reports in which all the XP practices have been adopted. Instead, a partial

adoption of XP practices, as suggested by Beck, has been reported on several occasions.

A distinction can be made between static method adaptation and dynamic method adaptation. The key

assumption behind static method adaptation is that the project context is given at the start of a project and

remains fixed during project execution. The result is a static definition of the project context. Given such a

definition, route maps can be used in order to determine which structured method fragments should be used for

that particular project, based on predefined sets of criteria. Dynamic method adaptation, in contrast, assumes that

projects are situated in an emergent context. An emergent context implies that a project has to deal with

emergent factors that affect relevant conditions but are not predictable. This also means that a project context is

not fixed, but changing during project execution. In such a case prescriptive route maps are not appropriate. The

practical implication of dynamic method adaptation is that project managers often have to modify structured

fragments or even innovate new fragments, during the execution of a project (Aydin et al, 2005).


Agile Methods

Some of the well-known agile software development methods:

Agile Modeling

Agile Unified Process (AUP)

Agile Data

Daily kickoff and review of goals

Short release cycles

Responsive Development

Generalism - use of generic skill sets which are common across the team, rather than reliance on specific skill sets which are scarce

Test Driven Development (TDD)

Feature Driven Development (FDD)

Behavior Driven Development (BDD)

Essential Unified Process (EssUP)

Other approaches:

Software Development Rhythms

Agile Documentation

ICONIX Process

Microsoft Solutions Framework (MSF)

Agile Data Method

Database refactoring


Measuring agility

While agility is seen by many as a means to an end, a number of approaches have been proposed to quantify

agility. Agility Index Measurements (AIM) score projects against a number of agility factors to achieve a total. The similarly named Agility Measurement Index scores developments against five dimensions of a software project (duration, risk, novelty, effort, and interaction). Other techniques are based on measurable goals. Another study using fuzzy mathematics has suggested that project velocity can be used as a metric of agility.

While such approaches have been proposed to measure agility, the practical application of such metrics has yet to

be seen.
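As a purely illustrative sketch of what an index-style measure could look like, the snippet below rates a project on the five dimensions named above and combines the ratings with weights. The weights and the 0-10 ratings are invented for illustration; they are not the actual AIM or Agility Measurement Index definitions.

WEIGHTS = {"duration": 0.2, "risk": 0.2, "novelty": 0.2, "effort": 0.2, "interaction": 0.2}

def agility_score(ratings):
    """Weighted average of per-dimension ratings on a 0-10 scale."""
    return sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS)

example = {"duration": 8, "risk": 6, "novelty": 7, "effort": 5, "interaction": 9}
print(f"Agility score: {agility_score(example):.1f} / 10")   # Agility score: 7.0 / 10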

Criticism

Agile development is sometimes criticized as cowboy coding. Extreme Programming's initial buzz and controversial tenets, such as pair programming and continuous design, have attracted particular criticism from the likes of McBreen and of Boehm and Turner. Many of the criticisms, however, are believed by Agile practitioners to be misunderstandings of agile development.

In particular, Extreme Programming is reviewed and critiqued in Matt Stephens and Doug Rosenberg's Extreme Programming Refactored.

Criticisms include:

Lack of structure and necessary documentation

Only works with senior-level developers

Incorporates insufficient software design

Requires too much cultural change to adopt

Can lead to more difficult contractual negotiations

Can be very inefficient -- if the requirements for one area of code change through various iterations, the same programming may need to be done several times over, whereas if a plan were followed, a single area of code would be expected to be written only once.


Impossible to develop realistic estimates of work effort needed to provide a quote, because at the

beginning of the project no one knows the entire scope/requirements

Drastically increases the chances of scope creep due to the lack of detailed requirements documentation

The criticisms regarding insufficient software design and lack of documentation are addressed by the Agile

Modeling method which can easily be tailored into agile processes such as XP.

Agile software development has been criticized because it will not bring about the claimed benefits when

programmers of average ability use this methodology, and most development teams are indeed likely to be made

up of people with average (or below) skills.

Agile Principles

There are ten key principles of Agile Development:

Agile Principle #1: Active user involvement is imperative

It's not always possible to have users directly involved in development projects, particularly if the Agile

Development project is to build a product where the real end users will be external customers or consumers.

In this event it is imperative to have a senior and experienced user representative involved throughout.

Not convinced? Here are 16 reasons why!

Requirements are clearly communicated and understood (at a high level) at the outset

Requirements are prioritised appropriately based on the needs of the user and market

Requirements can be clarified on a daily basis with the entire project team, rather than resorting to

lengthy documents that aren't read or are misunderstood

Emerging requirements can be factored into the development schedule as appropriate with the impact

and trade-off decisions clearly understood


The right product is delivered

As iterations of the product are delivered, it can be confirmed that the product meets user expectations

The product is more intuitive and easy to use

The user/business is seen to be interested in the development on a daily basis

The user/business sees the commitment of the team

Developers are accountable, sharing progress openly with the user/business every day

There is complete transparency as there is nothing to hide

The user/business shares responsibility for issues arising in development; it’s not a customer-supplier

relationship but a joint team effort

Timely decisions can be made about features, priorities, issues, and when the product is ready

Responsibility is shared; the team is responsible together for delivery of the product

Individuals are accountable, reporting for themselves in daily updates that involve the user/business

When the going gets tough, the whole team - business and technical - work together!

Agile Principle #2: Agile Development teams must be empowered

An Agile Development project team must include all the necessary team members to make decisions, and make

them on a timely basis.

Active user involvement is one of the key principles to enable this, so the user or user representative from the

business must be closely involved on a daily basis.

The project team must be empowered to make decisions in order to ensure that it is their responsibility to deliver

the product and that they have complete ownership. Any interference with the project team is disruptive and

reduces their motivation to deliver.

The team must establish and clarify the requirements together, prioritise them together, agree to the tasks

required to deliver them together, and estimate the effort involved together.


It may seem expedient to skip this level of team involvement at the beginning. It’s tempting to get a subset of the

team to do this (maybe just the product owner and analyst), because it’s much more efficient. Somehow we’ve all

been trained over the years that we must be 100% efficient (or more!) and having the whole team involved in

these kick-off steps seems a very expensive way to do things.

However this is a key principle for me. It ensures the buy-in and commitment from the entire project team from

the outset; something that later pays dividends. When challenges arise throughout the project, the team feels a

real sense of ownership. And then it doesn't seem so expensive.

Agile Principle #3: Time waits for no man!

In Agile Development, requirements evolve, but timescales are fixed.

This is in stark contrast to a traditional development project, where one of the earliest goals is to capture all known

requirements and baseline the scope so that any other changes are subject to change control.

Traditionally, users are educated that it’s much more expensive to change or add requirements during or after the

software is built. Some organisations quote some impressive statistics designed to frighten users into freezing the

scope. The result: It becomes imperative to include everything they can think of – in fact everything they ever

dreamed of! And what’s more, it’s all important for the first release, because we all know Phase 2’s are invariably

hard to get approved once 80% of the benefits have been realised from Phase 1.

Ironically, users may actually use only a tiny proportion of any software product, perhaps as low as 20% or less, yet

many projects start life with a bloated scope. In part, this is because no-one is really sure at the outset which 20%

of the product their users will actually use. Equally, even if the requirements are carefully analysed and prioritised,

it is impossible to think of everything, things change, and things are understood differently by different people.

Agile Development works on a completely different premise: that requirements emerge and evolve, and that however much analysis and design you do, this will always be the case

because you cannot really know for sure what you want until you see and use the software. And in the time you

would have spent analysing and reviewing requirements and designing a solution, external conditions could also

have changed.


So if you believe that point – that no-one can really know what the right solution is at the outset when the

requirements are written – it’s inherently difficult, perhaps even practically impossible, to build the right solution

using a traditional approach to software development.

Traditional projects fight change, with change control processes designed to minimise and resist change wherever

possible. By contrast, Agile Development projects accept change; in fact they expect it. Because the only thing

that’s certain in life is change.

There are different mechanisms in Agile Development to handle this reality. In Agile Development projects,

requirements are allowed to evolve, but the timescale is fixed. So to include a new requirement, or to change a

requirement, the user or product owner must remove a comparable amount of work from the project in order to

accommodate the change.

This ensures the team can remain focused on the agreed timescale, and allows the product to evolve into the right

solution. It does, however, also presuppose that there are enough non-mandatory features included in the original timeframes to allow these trade-off decisions to occur without fundamentally compromising the end product.
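Here is a minimal sketch of that trade-off rule, with hypothetical story names and point estimates (the names, numbers and the add_story helper are illustrative only, not part of any defined agile practice): the timebox is fixed, so a new or changed requirement only goes in if a comparable amount of work comes out.

from typing import Dict, Optional

SPRINT_CAPACITY = 20                                   # fixed effort budget (points)
backlog: Dict[str, int] = {"login": 8, "search": 7, "reports": 5}

def add_story(backlog: Dict[str, int], name: str, points: int,
              drop: Optional[str] = None) -> bool:
    """Add a story only if the fixed capacity still holds; optionally trade one out first."""
    if drop is not None:
        backlog.pop(drop, None)                        # remove comparable work from scope
    if sum(backlog.values()) + points > SPRINT_CAPACITY:
        return False                                   # timescale is fixed; scope must give
    backlog[name] = points
    return True

print(add_story(backlog, "export", 5))                 # False: no room without a trade
print(add_story(backlog, "export", 5, drop="reports")) # True: comparable work was removed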

So what does the business expect from its development teams? Deliver the agreed business requirements, on time

and within budget, and of course to an acceptable quality. All software development professionals will be well

aware that you cannot realistically fix all of these factors and expect to meet expectations. Something must be

variable in order for the project to succeed. In Agile Development, it is always the scope (or features of the

product) that are variable, not the cost and timescale.

Although the scope of an Agile Development project is variable, it is acknowledged that only a fraction of any

product is really used by its users and therefore that not all features of a product are really essential. For this

philosophy to work, it’s imperative to start development (dependencies permitting) with the core, highest priority

features, making sure they are delivered in the earliest iterations.

Unlike most traditional software development projects, the result is that the business has a fixed budget, based on

the resources it can afford to invest in the project, and can make plans based on a launch date that is certain.

Agile Principle #4: Agile requirements are barely sufficient!

Agile Development teams capture requirements at a high level and on a piecemeal basis, just-in-time for each

feature to be developed.


Agile requirements are ideally visual and should be barely sufficient, i.e. the absolute minimum required to enable

development and testing to proceed with reasonable efficiency. The rationale for this is to minimise the time spent

on anything that doesn’t actually form part of the end product.

Agile Development can be mistaken by some as meaning there’s no process; you just make things up as you go

along – in other words, JFDI! That approach is not so much Agile but Fragile!

Although Agile Development is much more flexible than more traditional development methodologies, Agile

Development does nevertheless have quite a bit of rigour and is based on the fairly structured approach of lean

manufacturing as pioneered by Toyota.

However any requirements captured at the outset should be captured at a high level and in a visual format,

perhaps for example as a storyboard of the user interface. At this stage, requirements should be understood

enough to determine the outline scope of the product and produce high level budgetary estimates and no more.

Ideally, Agile Development teams capture these high level requirements in workshops, working together in a highly

collaborative way so that all team members understand the requirements as well as each other. It is not

necessarily the remit of one person, like the Business Analyst in more traditional projects, to gather the

requirements independently and write them all down; it’s a joint activity of the team that allows everyone to

contribute, challenge and understand what’s needed. And just as importantly, why.

XP (eXtreme Programming) breaks requirements down into small bite-size pieces called User Stories. These are

fundamentally similar to Use Cases but are lighter weight and simpler in nature.

An Agile Development team (including a key user or product owner from the business) visualises requirements in

whiteboarding sessions and creates storyboards (sequences of screen shots, visuals, sketches or wireframes) to

show roughly how the solution will look and how the user’s interaction will flow in the solution. There is no lengthy

requirements document or specification unless there is an area of complexity that really warrants it. Otherwise

the storyboards are just annotated and only where necessary.

A common approach amongst Agile Development teams is to represent each requirement, use case or user story,

on a card and use a T-card system to allow stories to be moved around easily as the user/business representative

on the project adjusts priorities.

Requirements are broken down into very small pieces in order to achieve this; and actually the fact it’s going on a

card forces it to be broken down small. The advantage this has over lengthy documentation is that it's extremely


visual and tangible; you can stand around the T-card system and whiteboard discussing progress, issues and

priorities.

The timeframe of an Agile Development is fixed, whereas the features are variable. Should it be necessary to

change priority or add new requirements into the project, the user/business representative physically has to

remove a comparable amount of work from scope before they can place the new card into the project.

This is a big contrast to a common situation where the business owner sends numerous new and changed

requirements by email and/or verbally, somehow expecting the new and existing features to still be delivered in

the original timeframes. Traditional project teams that don't control changes can end up with the dreaded scope

creep, one of the most common reasons for software development projects to fail.

Agile teams, by contrast, accept change; in fact they expect it. But they manage change by fixing the timescales

and trading-off features.

Cards can of course be backed up by documentation as appropriate, but always the principle of agile development

is to document the bare minimum amount of information that will allow a feature to be developed, and always

broken down into very small units.

Using the Scrum agile management practice, requirements (or features or stories, whatever language you prefer to

use) are broken down into tasks of no more than 16 hours (i.e. 2 working days) and preferably no more than 8

hours, so progress can be measured objectively on a daily basis.
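As a small sketch of that breakdown rule, the snippet below enforces the 16-hour limit and warns above 8 hours. The story, task names and numbers are hypothetical; the Story class is just one way to illustrate the rule, not part of Scrum itself.

from dataclasses import dataclass, field
from typing import Dict

MAX_TASK_HOURS = 16        # hard limit from the text (roughly 2 working days)
PREFERRED_HOURS = 8        # preferred limit (roughly 1 working day)

@dataclass
class Story:
    title: str
    tasks: Dict[str, int] = field(default_factory=dict)   # task name -> estimated hours

    def add_task(self, name: str, hours: int) -> None:
        if hours > MAX_TASK_HOURS:
            raise ValueError(f"'{name}' ({hours}h) is too big and must be split further")
        if hours > PREFERRED_HOURS:
            print(f"Warning: '{name}' is {hours}h; consider splitting it below {PREFERRED_HOURS}h")
        self.tasks[name] = hours

    def remaining_hours(self) -> int:
        return sum(self.tasks.values())

story = Story("As a user I can reset my password")
story.add_task("design the reset email", 4)
story.add_task("implement the reset endpoint", 12)   # allowed, but triggers the warning
print(story.remaining_hours())                       # 16

Keeping tasks this small is what makes the daily progress measure objective: each day, tasks are either done or not.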

Agile Principle #5: How d'you eat an elephant?

One bite at a time! Likewise, agile software development projects are delivered in small bite-sized pieces,

delivering small, incremental *releases* and iterating.

In more traditional software development projects, the (simplified) lifecycle is Analyse, Develop, Test - first

gathering all known requirements for the whole product, then developing all elements of the software, then

testing that the entire product is fit for release.

In agile software development, the cycle is Analyse, Develop, Test; Analyse, Develop, Test; and so on... doing each

step for each feature, one feature at a time.
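To make the contrast concrete, here is a toy sketch with hypothetical feature names and placeholder phase functions (none of this is a real lifecycle implementation): one function runs each phase across the whole product before the next phase begins, the other completes Analyse, Develop, Test for one feature at a time.

FEATURES = ["login", "search", "checkout"]

def analyse(feature): return f"{feature} spec"
def develop(spec): return f"build of {spec}"
def test(build): return f"{build} (verified)"

def phase_by_phase(features):
    """Simplified traditional lifecycle: each phase covers the whole product before the next."""
    specs = [analyse(f) for f in features]
    builds = [develop(s) for s in specs]
    return [test(b) for b in builds]            # nothing is verified until the very end

def feature_by_feature(features):
    """Simplified agile lifecycle: each feature goes through all phases before the next starts."""
    done = []
    for f in features:
        done.append(test(develop(analyse(f))))  # this feature is verified before moving on
    return done

print(phase_by_phase(FEATURES))
print(feature_by_feature(FEATURES))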

Advantages of this iterative approach to software development include:


Reduced risk: clear visibility of what's completed to date throughout a project

Increased value: delivering some benefits early; being able to release the product whenever it's deemed

good enough, rather than having to wait for all intended features to be ready

More flexibility/agility: can choose to change direction or adapt the next iterations based on actually

seeing and using the software

Better cost management: if, like all-too-many software development projects, you run over budget, some

value can still be realised; you don't have to scrap the whole thing if you run short of funds

For this approach to be practical, each feature must be fully developed, to the extent that it's ready to be shipped,

before moving on.

Another practicality is to make sure features are developed in *priority* order, not necessarily in a logical order by

function. Otherwise you could run out of time, having built some of the less important features - as in agile

software development, the timescales are fixed.

Building the features of the software "broad but shallow" is also advisable for the same reason. Only when you've

completed all your must-have features, move on to the should-haves, and only then move on to the could-haves.

Otherwise you can get into a situation where your earlier features are functionally rich, whereas later features of

the software are increasingly less sophisticated as time runs out.

Try to keep your product backlog or feature list expressed in terms of use cases, user stories, or features - not

technical tasks. Ideally each item on the list should always be something of value to the user, and always

deliverables rather than activities so you can 'kick the tyres' and judge their completeness, quality and readiness

for release.

Agile Principle #6: Fast but not so furious!

Agile software development is all about frequent delivery of products. In a truly agile world, gone are the days of

the 12 month project. In an agile world, a 3-6 month project is strategic!

Nowhere is this more true than on the web. The web is a fast moving place. And with the luxury of centrally

hosted solutions, there's every opportunity to break what would have traditionally been a project into a list of

features, and deliver incrementally on a very regular basis - ideally even feature by feature.


On the web, it's increasingly accepted for products to be released early (when they're basic, not when they're

faulty!). Particularly in the Web 2.0 world, it's a kind of perpetual beta. In this situation, why wouldn't you want to

derive some benefits early? Why wouldn't you want to hear real user/customer feedback before you build

'everything'? Why wouldn't you want to look at your web metrics and see what works, and what doesn't, before

building 'everything'?

And this is only really possible due to some of the other important principles of agile development: the iterative approach, requirements being lightweight and captured just-in-time, being feature-driven, testing integrated throughout the lifecycle, and so on.

So how frequent is *frequent*?

Scrum says break things into 30 day Sprints.

That's certainly frequent compared to most traditional software development projects.

Consider a major back-office system in a large corporation, with traditional projects of 6-12 months+, and all the implications of a big rollout and potentially training for hundreds of users. In that context, 30 days is a bit too frequent, I think. The overhead of releasing the software is just too large to be practical on such a regular basis.

Agile Principle #7: "done" means "DONE!"

In agile development, "done" should really mean "DONE!".

Features developed within an iteration (Sprint in Scrum), should be 100% complete by the end of the Sprint.

Too often in software development, "done" doesn't really mean "DONE!". It doesn't mean tested. It doesn't

necessarily mean styled. And it certainly doesn't usually mean accepted by the product owner. It just means

developed.

In an ideal situation, each iteration or Sprint should lead to a release of the product. Certainly that's the case on

BAU (Business As Usual) changes to existing products. On projects it's not always feasible to do a release after every Sprint; however, completing each feature in turn enables a very precise view of progress and of how complete the overall project really is (or isn't).


So, in agile development, make sure that each feature is fully developed, tested, styled, and accepted by the

product owner before counting it as "DONE!". And if there's any doubt about what activities should or shouldn't be

completed within the Sprint for each feature, "DONE!" should mean shippable.
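A minimal sketch of such a Definition-of-Done check, built from the activities listed above (the field names and the example feature are hypothetical, and a real team's checklist would likely contain more items):

from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    developed: bool = False
    tested: bool = False
    styled: bool = False
    accepted_by_product_owner: bool = False

    def is_done(self) -> bool:
        """'DONE!' here means shippable on the feature's own merit."""
        return all([self.developed, self.tested, self.styled,
                    self.accepted_by_product_owner])

feature = Feature("password reset", developed=True, tested=True, styled=True)
print(feature.is_done())   # False -- not yet accepted by the product owner, so not shippable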

The feature may rely on other features being completed before the product could really be shipped. But the

feature on its own merit should be shippable. So if you're ever unsure if a feature is 'done enough', ask one simple

question: "Is this feature ready to be shipped?".

It's also important to really complete each feature before moving on to the next...

Of course multiple features can be developed in parallel in a team situation. But within the work of each

developer, do not move on to a new feature until the last one is shippable. This is important to ensure the overall

product is in a shippable state at the end of the Sprint, not in a state where multiple features are 90% complete or

untested, as is more usual in traditional development projects.

In agile development, "done" really should mean "DONE!".


Agile Principle #8: Enough's enough!

Pareto's law is more commonly known as the 80/20 rule. The theory is about how results are distributed: many things follow a similar distribution curve, which means that *typically* 80% of your results may actually come from only 20% of your efforts!

Pareto's law can be seen in many situations - not literally 80/20, but certainly the principle that the majority of your

results will often come from the minority of your efforts.

So the really smart people are the people who can see (up-front, without the benefit of hindsight) *which* 20% to

focus on. In agile development, we should try to apply the 80/20 rule, seeking to focus on the important 20% of

effort that gets the majority of the results.
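Here is an entirely hypothetical worked example of that kind of prioritisation (the feature names, values and efforts are invented to make the arithmetic come out neatly): order candidate features by value per unit of effort and see how much of the total value a small slice of the total effort can capture.

features = {                       # name: (estimated value, estimated effort)
    "core workflow": (60, 12),
    "quick wins":    (20, 8),
    "reporting":     (10, 30),
    "integrations":  (7, 25),
    "admin polish":  (3, 25),
}

total_value = sum(v for v, _ in features.values())     # 100
total_effort = sum(e for _, e in features.values())    # 100
budget = 0.2 * total_effort                            # spend only ~20% of the effort

spent = gained = 0
for name, (value, effort) in sorted(features.items(),
                                    key=lambda item: item[1][0] / item[1][1],
                                    reverse=True):
    if spent + effort <= budget:
        spent += effort
        gained += value

print(f"{spent} of {total_effort} effort captured {gained} of {total_value} value")
# -> 20 of 100 effort captured 80 of 100 value (with these invented numbers)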

If defects in your application aren't life-threatening, if you have control over the scope, and if speed to market is of primary importance, why not seek to deliver the important 80% of your product in just 20% of the time? In fact, in that particular scenario, you could ask why you would ever bother doing the last 20% at all.

That doesn't mean your product should be fundamentally flawed, a bad user experience, or full of faults. It just

means that developing some features, or the richness of some features, is going the extra mile and has a

diminishing return that may not be worthwhile.

So does that statement conflict with Agile Principle #7: "done" means "DONE!"? Not really, because within each Sprint or iteration, what you *do* choose to develop *does* need to be 100% complete within the iteration.

Agile Principle #9: Agile testing is not for dummies!

In agile development, testing is integrated throughout the lifecycle; testing the software continuously throughout

its development.

Agile development does not have a separate test phase as such. Developers are much more heavily engaged in

testing, writing automated repeatable unit tests to validate their code.


Apart from being geared towards better quality software, this is also important to support the principle of small,

iterative, incremental releases.

With automated repeatable unit tests, testing can be done as part of the build, ensuring that all features are

working correctly each time the build is produced. And builds should be regular, at least daily, so integration is

done as you go too.

The purpose of these principles is to keep the software in releasable condition throughout the development, so it

can be shipped whenever it's appropriate.

The XP (eXtreme Programming) agile methodology goes further still. XP recommends test driven development,

writing tests before writing the software.
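A minimal sketch of that test-first idea follows. The discount function and its rules are hypothetical; the point is that the unit tests are written before the code they exercise, and because they are automated and repeatable they can run on every build. Python's standard unittest module is used here purely for illustration.

import unittest

def discount(price: float, percent: float) -> float:
    """Production code written only after the tests below existed (and initially failed)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_applies_percentage(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_rejects_invalid_percentage(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()    # wired into the (at least daily) build so every feature is re-checked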

But testing shouldn't only be done by developers throughout the development. There is still a very important role

for professional testers, as we all know "developers can't test for toffee!" :-)

The role of a tester can change considerably in agile development, into a role more akin to quality assurance than purely testing. There are considerable advantages in having testers involved from the outset.

This is compounded further by the lightweight approach to requirements in agile development, and the emphasis

on conversation and collaboration to clarify requirements more than the traditional approach of specifications and

documentation.

Although requirements can be clarified in some detail in agile development (as long as they are done just-in-time

and not all up-front), it is quite possible for this to result in some ambiguity and/or some cases where not all team

members have the same understanding of the requirements.

So what does this mean for an agile tester? A common concern from testers moving to an agile development

approach - particularly from those moving from a much more formal environment - is that they don't know

precisely what they're testing for. They don't have a detailed spec to test against, so how can they possibly test it?

Even in a more traditional development environment, I always argued that testers could test that software meets a

spec, and yet the product could still be poor quality, maybe because the requirement was poorly specified or

because it was clearly written but just not a very good idea in the first place! A spec does not necessarily make the

product good!

In agile development, there's a belief that sometimes - maybe even often - these things are only really evident

when the software can be seen running. By delivering small incremental releases and by measuring progress only


by working software, the acid test is seeing the software and only then can you really judge for sure whether or not

it's good quality.

Agile testing therefore calls for more judgement from a tester, the application of more expertise about what's

good and what's not, the ability to be more flexible and having the confidence to work more from your own

knowledge of what good looks like. It's certainly not just a case of following a test script, making sure the software

does what it says in the spec.

And for these reasons, agile testing is not for dummies!

Agile Principle #10: No place for snipers!

Agile development relies on close cooperation and collaboration between all team members and stakeholders.

Agile development principles include keeping requirements and documentation lightweight, and acknowledging

that change is a normal and acceptable reality in software development.

This makes close collaboration particularly important to clarify requirements just-in-time and to keep all team

members (including the product owner) 'on the same page' throughout the development.

You certainly can't do away with a big spec up-front *and* not have close collaboration. You need one of them, that's for sure. And in so many situations the latter can be more effective and is so much more rewarding for all

involved!

In situations where there is or has been tension between the development team and business people, bringing

everyone close in an agile development approach is akin to a boxer keeping close to his opponent, so he can't

throw the big punch! :-)

But unlike boxing, the project/product team is working towards a shared goal, creating better teamwork, fostering

team spirit, and building stronger, more cooperative relationships.

There are many reasons to consider the adoption of agile development, and in the near future I'm going to outline

"10 good reasons to go agile" and explain some of the key business benefits of an agile approach.

If business engagement is an issue for you, that's one good reason to go agile you shouldn't ignore.


Software Development Life Cycle Models

I was asked to put together this high-level and traditional software life cycle information as a favor for a friend of a friend, so I thought I might as well share it with everybody.

The General Model

Software life cycle models describe phases of the software cycle and the order in which those phases are executed.  There are tons of models, and many companies adopt their own, but all have very similar patterns.  The general, basic model is shown below:

General Life Cycle Model

Each phase produces deliverables required by the next phase in the life cycle.  Requirements are translated into design.  Code is produced during implementation that is driven by the design.  Testing verifies the deliverable of the implementation phase against requirements.

Requirements

Business requirements are gathered in this phase.  This phase is the main focus of the project managers and stakeholders.  Meetings with managers, stakeholders and users are held in order to determine the requirements.  Who is going to use the system?  How will they use the system?  What data should be input into the system?  What data should be output by the system?  These are general questions that get answered during a requirements gathering phase.  This produces a nice big list of functionality that the system should provide, which describes functions the system should perform, business logic that processes data, what data is stored and used by the system, and how the user interface should work.  The overall result describes the system as a whole and what it should do, not how it is actually going to do it.

Design

The software system design is produced from the results of the requirements phase.  Architects have the ball in their court during this phase and this is the phase in which their focus lies.  This is where the details of how the system will work are produced.  Architecture (including hardware and software), communication, and software design (UML is produced here) are all part of the deliverables of the design phase.

Implementation

Code is produced from the deliverables of the design phase during implementation, and this is the longest phase of the software development life cycle.  For a developer, this is the main focus of the life cycle because this is where the code is produced.  Implementation may overlap with both the design and testing phases.  Many tools exist (CASE tools) to actually automate the production of code using information gathered and produced during the design phase.


Testing

During testing, the implementation is tested against the requirements to make sure that the product is actually solving the needs addressed and gathered during the requirements phase.  Unit tests and system/acceptance tests are done during this phase.  Unit tests act on a specific component of the system, while system tests act on the system as a whole.

So in a nutshell, that is a very basic overview of the general software development life cycle model.  Now let's delve into some of the traditional and widely used variations.

 

Waterfall Model

This is the most common and classic of life cycle models, also referred to as a linear-sequential life cycle model.  It is very simple to understand and use.  In a waterfall model, each phase must be completed in its entirety before the next phase can begin.  At the end of each phase, a review takes place to determine if the project is on the right path and whether or not to continue or discard the project.  Unlike what I mentioned in the general model, phases do not overlap in a waterfall model.

Waterfall Life Cycle Model

Advantages

Simple and easy to use.

Easy to manage due to the rigidity of the model – each phase has specific deliverables and a review process.

Phases are processed and completed one at a time.

Works well for smaller projects where requirements are very well understood.

Disadvantages

Adjusting scope during the life cycle can kill a project


No working software is produced until late during the life cycle.

High amounts of risk and uncertainty.

Poor model for complex and object-oriented projects.

Poor model for long and ongoing projects.

Poor model where requirements are at a moderate to high risk of changing.

 

V-Shaped Model

Just like the waterfall model, the V-Shaped life cycle is a sequential path of execution of processes.  Each phase must be completed before the next phase begins.  Testing is emphasized in this model more so than the waterfall model though.  The testing procedures are developed early in the life cycle before any coding is done, during each of the phases preceding implementation.

Requirements begin the life cycle model just like the waterfall model.   Before development is started, a system test plan is created.  The test plan focuses on meeting the functionality specified in the requirements gathering.

The high-level design phase focuses on system architecture and design.  An integration test plan is created in this phase as well, in order to test the ability of the pieces of the software system to work together.

The low-level design phase is where the actual software components are designed, and unit tests are created in this phase as well.

The implementation phase is, again, where all coding takes place.  Once coding is complete, the path of execution continues up the right side of the V where the test plans developed earlier are now put to use.

V-Shaped Life Cycle Model

Advantages

Simple and easy to use.

Each phase has specific deliverables.

Higher chance of success over the waterfall model due to the development of test plans early on during the life cycle.


Works well for small projects where requirements are easily understood.

Disadvantages

Very rigid, like the waterfall model.

Little flexibility; adjusting scope is difficult and expensive.

Software is developed during the implementation phase, so no early prototypes of the software are produced.

Model doesn’t provide a clear path for problems found during testing phases.

 

Incremental Model

The incremental model is an intuitive approach to the waterfall model.  Multiple development cycles take place here, making the life cycle a “multi-waterfall” cycle.  Cycles are divided up into smaller, more easily managed iterations.  Each iteration passes through the requirements, design, implementation and testing phases.

A working version of software is produced during the first iteration, so you have working software early on during the software life cycle.  Subsequent iterations build on the initial software produced during the first iteration.

Incremental Life Cycle Model

Advantages

Generates working software quickly and early during the software life cycle.

More flexible – less costly to change scope and requirements.

Easier to test and debug during a smaller iteration.

Easier to manage risk because risky pieces are identified and handled during its iteration.

Each iteration is an easily managed milestone.

Disadvantages

Each phase of an iteration is rigid and does not overlap with the others.

Problems may arise pertaining to system architecture because not all requirements are gathered up front for the entire software life cycle.


 

Spiral Model

The spiral model is similar to the incremental model, with more emphasis placed on risk analysis.  The spiral model has four phases: Planning, Risk Analysis, Engineering and Evaluation.  A software project repeatedly passes through these phases in iterations (called spirals in this model).  In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed.  Each subsequent spiral builds on the baseline spiral.

Requirements are gathered during the planning phase.  In the risk analysis phase, a process is undertaken to identify risk and alternate solutions.  A prototype is produced at the end of the risk analysis phase.

Software is produced in the engineering phase, along with testing at the end of the phase.  The evaluation phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral.

In the spiral model, the angular component represents progress, and the radius of the spiral represents cost.

Spiral Life Cycle Model


Advantages

High amount of risk analysis.

Good for large and mission-critical projects.

Software is produced early in the software life cycle.

Disadvantages

Can be a costly model to use.

Risk analysis requires highly specific expertise.

Project’s success is highly dependent on the risk analysis phase.

Doesn’t work well for smaller projects.