The Software Development Life Cycle (SDLC) is a step-by-step process for developing a software product. It is carried out as a sequence of phases, which together describe how the product moves from idea to delivery.
The phases of the Software Development Life Cycle are as follows:
1. Planning
2. Analysis
3. Design
4. Software Development
5. Implementation
6. Software Testing
7. Deployment
8. Maintenance
Software testing is an important part of a product's life cycle, as the product will have a long life only if it works correctly and efficiently according to the customer's requirements.
Introduction to Software Testing
Before moving on to software testing itself, we need to know a few concepts that will simplify its definition.
Error: An error or mistake is a human action that produces a wrong or incorrect result.
Defect (Bug, Fault): A flaw in the system or product that can cause a component to fail or malfunction.
Failure: The variance between the actual and the expected result.
Risk: A factor that could result in a negative outcome, or a chance of loss or damage.
Thus, software testing is the process of finding defects/bugs in the system that occur due to errors in the application, which could lead to failure of the resulting product and an increased probability of loss. In short, software testing has different goals and objectives, which often include:
1. finding defects;
2. gaining confidence in and providing information about the level of quality;
3. preventing defects.
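These definitions can be made concrete with a small sketch. The discount function and figures below are invented for illustration: a human mistake while coding (the error) leaves a flaw in the code (the defect), and running the code exposes the variance between actual and expected results (the failure).

```python
# A developer mistake (error) leaves a defect in the code: the discount
# is subtracted as a fraction of 100 instead of as a fraction of the price.
def apply_discount(price, percent):
    return price - percent / 100   # defect: should be price * percent / 100

expected = 90.0                     # 10% off 100.0
actual = apply_discount(100.0, 10)

# The failure is the variance between the actual and expected results.
print("actual:", actual, "expected:", expected, "failure:", actual != expected)
```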
Scope of Software Testing
The primary function of software testing is to uncover bugs so that they can be corrected. The scope of software testing includes executing the code in various environments, and examining aspects of the code: does the software do what it is supposed to do and function according to the specifications? As we move further we come across questions such as "When to start testing?" and "When to stop testing?" It is recommended to start testing from the initial stages of software development. This not only helps in catching major errors before the final stage, but also reduces rework, since the cost of finding and fixing a defect grows the later it is found. Software testing is an ongoing process, which is potentially endless but has to be stopped somewhere, due to limits on time and budget. The aim is to achieve maximum profit with a good quality product, within the limitations of time and money. The tester has to follow some procedure to judge whether all the points required for testing have been covered or something has been missed. To help testers carry out these day-to-day activities, a baseline has to be set, which is done in the form of checklists.
Software Testing Key Concepts
o Defects and Failures: As discussed earlier, defects are not caused only by coding errors, but most commonly by gaps in the non-functional requirements, such as usability, testability, scalability, maintainability, performance and security. A failure is the deviation between an actual and an expected result, but not all defects result in failures. A defect can turn into a failure due to a change in the environment or in the configuration of the system.
o Input Combinations and Preconditions: Testing all combinations of inputs and initial states (preconditions) is not feasible, which means that finding every infrequent defect is difficult.
o Static and Dynamic Analysis: Static testing does not require execution of the code for finding defects, whereas in dynamic testing, software code is executed to demonstrate the results of running tests.
o Verification and Validation: Software testing is done considering these two factors. Verification checks whether the product is built according to the specification; validation checks whether the product meets the customer's requirements.
o Software Quality Assurance: Software testing is an important part of software quality assurance. Quality assurance is an activity that establishes the suitability of the product by taking care of its quality and ensuring that the customer's requirements are met.
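The input-combination point above is easy to quantify. A sketch, with made-up field values: even a handful of representative values per input multiplies into far more cases than can all be executed.

```python
from itertools import product

# Hypothetical login form: a few representative values per field, yet the
# number of exhaustive test cases grows multiplicatively with each field.
user_ids  = ["valid", "invalid", "empty"]
passwords = ["valid", "invalid", "empty"]
browsers  = ["chrome", "firefox", "edge", "safari"]
locales   = ["en", "de", "fr", "ja", "hi"]

combinations = list(product(user_ids, passwords, browsers, locales))
print(len(combinations))   # 3 * 3 * 4 * 5 = 180 cases for just four inputs
```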
Software Testing Types:
A software test type is a group of test activities aimed at testing a component or system with a specific test objective in focus, such as a non-functional requirement like usability, testability or reliability. Various types of software testing are used with the common objective of finding defects in the component concerned.
Software testing is classified into two basic types: Manual Scripted Testing and Automated Testing.
Manual Scripted Testing:
Black Box Testing
White Box Testing
Gray Box Testing
The levels of the software testing life cycle include:
Unit Testing
Integration Testing
Other types of software testing are:
Functional Testing
Performance Testing
1. Load Testing
2. Stress Testing
Smoke Testing
Sanity Testing
Regression Testing
Recovery Testing
Usability Testing
Compatibility Testing
Configuration Testing
Exploratory Testing
Automated Testing: Manual testing is a time-consuming process. Automated testing involves automating a manual process: test automation is the writing of computer programs, in the form of scripts, to do testing that would otherwise need to be done manually. Some of the popular automation tools are WinRunner, QuickTest Professional (QTP), LoadRunner, SilkTest, Rational Robot, etc. The automation tool category also includes test management tools such as TestDirector and many others.
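As a sketch of what such an automation script looks like, here is a unit-level example using Python's unittest in place of a commercial tool; the login function is a hypothetical stand-in for the application under test, which a real tool would drive through its UI.

```python
import unittest

# Hypothetical stand-in for the login page under test; tools such as
# WinRunner or QTP would exercise the actual user interface instead.
def login(user_id, password):
    if not user_id or not password:
        return "error"
    return "home"

class LoginTests(unittest.TestCase):
    def test_valid_credentials_reach_home(self):
        self.assertEqual(login("alice", "s3cret!"), "home")

    def test_empty_user_id_is_rejected(self):
        self.assertEqual(login("", "s3cret!"), "error")

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

Each test method is one scripted check; the framework reports passes and failures automatically, which is exactly the repetitive work automation removes from manual testing.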
Software Testing Methodologies
The software testing methodologies, or process models, structure the way of working for a particular product. These models are as follows:
Waterfall Model
V Model
Spiral Model
Rational Unified Process(RUP)
Agile Model
Rapid Application Development(RAD)
Software Testing Artifacts
Software testing process can produce various artifacts such as:
Test Plan: A test specification is called a test plan. A test plan is documented so that it can be used to verify and ensure that a product or system meets its design specification.
Traceability Matrix: A table that correlates requirements or design documents to test documents. It is used to verify that the test results are correct, and to change tests when the source documents change.
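A traceability matrix can be sketched as a simple mapping; the requirement IDs below are invented for illustration and refer to the sample login test cases later in this article.

```python
# A minimal traceability matrix: requirement IDs mapped to the test cases
# covering them (IDs are illustrative, not from any real specification).
traceability = {
    "REQ-001 login page shown":      ["TC1"],
    "REQ-002 user id validation":    ["TC2"],
    "REQ-003 password validation":   ["TC3"],
    "REQ-004 submit navigates home": ["TC4"],
    "REQ-005 cancel clears fields":  ["TC5"],
}

# Any requirement with no covering test is a gap in the test design.
uncovered = [req for req, tests in traceability.items() if not tests]
print(uncovered)   # an empty list means every requirement is traced
```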
Test Case: Test cases and software testing strategies are used to check the functionality of the individual components that are integrated to give the resulting product. Test cases are developed with the objective of judging the application's capabilities and features.
Test Data: When multiple sets of values or data are used to test the same functionality of a particular feature, the test values and changeable environmental components are collected in separate files and stored as test data.
Test Script: A test script is the combination of a test case, test procedure and test data.
Test Suite: A test suite is a collection of test cases.
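The relationship between test case, test data and test script can be sketched in code: one check (the test case), several data rows (the test data), combined into a script. check_login below is an invented stand-in for the system under test.

```python
# Data-driven sketch: one test case (credential validation) executed over
# several rows of test data; together they form a test script.
def check_login(user_id, password):
    # Stand-in for the real system under test (invented credentials).
    return user_id == "alice" and password == "s3cret!"

test_data = [
    ("alice", "s3cret!", True),    # valid id, valid password
    ("alice", "wrong",   False),   # valid id, invalid password
    ("",      "s3cret!", False),   # empty id
]

for user_id, password, expected in test_data:
    result = check_login(user_id, password)
    assert result == expected, (user_id, password)
print("all data rows passed")
```

Keeping the data rows separate from the check is what lets the same test case be re-run against new values without rewriting the script.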
Software Testing Process
The software testing process is carried out in the following sequence, in order to find faults in the software system:
1. Create Test Plan
2. Design Test Case
3. Write Test Case
4. Review Test Case
5. Execute Test Case
6. Examine Test Results
7. Perform Post-mortem Reviews
8. Budget after Experience
Here is a sample test case for you:

# Software Test Case for a Login Page
Purpose: The user should be able to reach the Home page.
Pre-requisites:
1. The software should be compatible with the operating system.
2. The login page should appear.
3. User Id and Password textboxes should be available with appropriate labels.
4. Submit and Cancel buttons with appropriate captions should be available.
Test Data: The required list of variables and their values should be available, e.g. User Id: {valid user id, invalid user id, empty}; Password: {valid, invalid, empty}.

1. TC1. Checking the user interface of the login page.
Step: The user views the page to check whether it includes User Id and Password textboxes with appropriate labels, and expects the Submit and Cancel buttons to be available with appropriate captions.
Expected result: The screen displays the user interface elements as required.

2. TC2. Checking the functionality of the User Id textbox. The textbox should: i) allow only alphabetic characters {a-z, A-Z}; ii) not allow special characters like {'$', '#', '!', '~', '*', ...}; iii) not allow numeric characters {0-9}.
Step i: The user types numbers into the textbox. Expected result: An error message is displayed for numeric data.
Step ii: The user types alphabetic data into the textbox. Expected result: The text is accepted.

3. TC3. Checking the functionality of the Password textbox: i) the textbox should accept more than six characters; ii) the data should be displayed in encrypted format.
Step i: The user enters only two characters in the password textbox. Expected result: An error message is displayed when fewer than six characters are entered.
Step ii: The user enters more than six characters in the password textbox. Expected result: The system accepts the data.
Step iii: The user checks whether the entered data is displayed in encrypted format. Expected result: The system displays the data in encrypted format, else it displays an error message.

4. TC4. Checking the functionality of the 'SUBMIT' button.
Step i: The user checks whether the 'SUBMIT' button is enabled or disabled. Expected result: The system displays the 'SUBMIT' button as enabled.
Step ii: The user clicks the 'SUBMIT' button and expects to view the 'Home' page of the application. Expected result: The system redirects to the 'Home' page as soon as the user clicks the 'SUBMIT' button.

5. TC5. Checking the functionality of the 'CANCEL' button.
Step i: The user checks whether the 'CANCEL' button is enabled or disabled. Expected result: The system displays the 'CANCEL' button as enabled.
Step ii: The user checks whether the User Id and Password textboxes are reset to blank by clicking the 'CANCEL' button. Expected result: The system clears the data in the User Id and Password textboxes when the user clicks the 'CANCEL' button.
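The validation rules in TC2 and TC3 above can also be automated. A sketch, reading TC2 as accepting alphabetic characters only (since its steps reject both digits and special characters); the function names are invented for illustration.

```python
import re

# Automated versions of TC2 and TC3 from the sample test case above.
def valid_user_id(user_id):
    # TC2: only alphabetic characters {a-z, A-Z}; digits and specials rejected.
    return re.fullmatch(r"[A-Za-z]+", user_id) is not None

def valid_password(password):
    # TC3: the password must be more than six characters long.
    return len(password) > 6

assert valid_user_id("JohnDoe")        # alphabetic data accepted
assert not valid_user_id("John99")     # numeric characters rejected
assert not valid_user_id("John$#")     # special characters rejected
assert valid_password("abcdefg")       # seven characters accepted
assert not valid_password("ab")        # too short, rejected
print("TC2 and TC3 checks passed")
```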
Fault Finding Techniques in Software Testing
Finding a defect or fault in the earlier stages of the software not only saves time and money, but is also beneficial in terms of security and profitability. As we move into the later levels of the software, it becomes difficult and tedious to go back and find problems in the initial components, and the cost of finding the defect also increases. Thus it is recommended to start testing from the initial stages of the life cycle.
There are various techniques involved along with the types of software testing, and there is a procedure to be followed for finding a bug in the application. This procedure is organized around the life cycle of the bug and the contents of a bug report, depending on the severity and priority of that bug. This life cycle is called the bug life cycle, and it helps the tester answer the question: how to log a bug?
Measuring Software Testing
A need arises to measure the software, both when the software is under development and after the system is ready for use. Though it is difficult to measure such an abstract attribute, it is essential to do so: what cannot be measured cannot be controlled. There are some important uses of measuring the software:
Software metrics help in:
1. avoiding pitfalls such as cost overruns,
2. identifying where a problem has arisen,
3. clarifying goals.
They answer questions such as:
1. What is the estimate for each process activity?
2. What is the quality of the code that has been developed?
3. How can under-developed code be improved?
They help in judging the quality of the software, in cost and effort estimation, in data collection, and in productivity and performance evaluation.
Some of the common software metrics are:
Code Coverage
Cyclomatic Complexity
Cohesion
Coupling
Function Point Analysis
Execution time
Source lines of code
Bugs per line of code
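Two of the metrics in this list combine into the familiar defect-density figure; the numbers below are made up for illustration.

```python
# Defect density from two simple metrics: source lines of code (SLOC)
# and the number of defects found. All figures are invented.
source_lines = 12_500
defects_found = 30

bugs_per_kloc = defects_found / (source_lines / 1000)
print(round(bugs_per_kloc, 2))   # 2.4 defects per thousand lines of code
```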
In short, the measurement of software is for understanding, controlling and improving the software system. Software is subject to change with respect to changing environmental conditions, varying user requirements, and configuration and compatibility issues. This gives rise to newer and updated versions of the software, but there should be some way of getting back to the older versions easily and working on them efficiently. Testers play a vital role in this; here is where change management comes into the picture.
Software Testing as a Career
Software testing is a good career opportunity for those who are interested in the software industry. Video game testing is an offshoot of software testing, and there are many companies specializing in this field. Believe it or not, you can actually get paid to test video games.
I hope this article has helped you gain a deeper insight into software testing. If you are planning to choose the software testing industry as your career ground, you might like to go through an extensive list of software testing interview questions. Before you step out for a job in the testing field, or before you take your first step towards becoming a software tester, you can acquire software testing certifications.
Software Testing Certifications
Software testing certifications will not only boost one's knowledge, but also prove beneficial for one's professional standing. There are software testing certification programs that can support the professional aspirations of software testers and quality assurance specialists.
ISTQB - International Software Testing Qualifications Board
CSTE - Certified Software Tester
Software testing is indeed a vast field, and accurate knowledge is crucial to ensure the quality of the software developed. I hope this software testing tutorial has given you a clearer idea of the various software testing types, methodologies and strategies.
Software Development Life Cycle
What is the Software Development Life Cycle?
The Software Development Life Cycle is a step-by-step process involved in the development of a software product. It is also referred to simply as the software development process. The whole process is generally classified into a set of steps, and a specific operation is carried out in each step.
Classification
The basic classification of the whole process is as follows:
Planning
Analysis
Design
Development
Implementation
Testing
Deployment
Maintenance
Each of the steps of the process has its own importance and plays a significant part in the product development. The description of each of the steps can give a better understanding.
Planning
This is the first and foremost stage in the development and one of the most important. The basic motive is to plan the total project and to estimate its merits and demerits. The planning phase includes the definition of the intended system, the development of the project plan, and the parallel management of the plan throughout the development.
A good and matured plan can create a very good initiative and can positively affect the complete project.
Analysis
The main aim of the analysis phase is statistics and requirements gathering. Based on the analysis of the project, and influenced by the results of the planning phase, the requirements for the project are decided and gathered.
Once the requirements for the project are gathered, they are prioritized and made ready for further use. The decisions taken in the analysis phase flow entirely from the requirements analysis, and the proceedings after this phase are defined.
Design
Once the analysis is over, the design phase begins. The aim is to create the architecture of the total system. This is one of the important stages of the process and serves as a benchmark stage, since the errors made up to and during this stage can be cleared here.
Many developers build a prototype of the entire software, representing it as a miniature model. Flaws, both technical and in the design, can be found and removed, and the process redesigned as needed.
Development and Implementation
The development and implementation phase is the most important phase, since it is where the main part of the project is done. The basic work includes the design of the technical architecture and the maintenance of the database records and programs related to the development process.
One of the main scenarios is the implementation of the prototype model into a full-fledged working environment, which is the final product or software.
Testing
The testing phase is one of the final stages of the development process; it is where the final adjustments are made before presenting the completely developed software to the end user.
In general, the testers work on removing logical errors and bugs. The test conditions decided in the analysis phase are applied to the system, and if the output obtained equals the intended output, the software is ready to be delivered to the user.
Maintenance
The toughest job is encountered in the maintenance phase, which normally accounts for the highest cost. The maintenance team monitors changes in the organization of the software and reports to the developers when a need arises.
An information desk is also provided in this phase, which serves to maintain the relationship between the user and the creator.
Software Testing - Check Lists For Software Tester
I would like to note that the following checklists are defined in their most generic form and do not promise to cover all the processes that you are required to go through and follow during your work. Some processes may be missing from the lists, and the lists may also contain processes that you do not need to follow in your own work.
First Things First
Check the scripts assigned to you: This is the first and foremost item on the list. There is no single logic used to assign scripts to testers, but you may come across practices where you are assigned scripts based on your workload for the day, or on your skill to understand and execute them in the least possible time.
Check the status/comments of the defect in the test reporting tool: Once you uncover a bug, it is very important to keep track of its status, as you will have to re-test the bug once it is fixed by a developer. The general practice is to confirm that a fix to a bug is successful, as this also makes sure that the tester can proceed with other tests involving the deeper side of that particular functionality. Sometimes this also addresses issues related to understanding the functionality of the system: for example, if a tester registered a defect which is not an actual bug as per the programming/business logic, then a comment from the developer might help in understanding the mistake committed by the tester.
Checks while executing scripts
Update the test data sheet with all required values such as user name, functionality, test code, etc.
Use the naming conventions defined as testing standards to describe a bug appropriately.
Take screen prints for the executed script using the naming conventions, and provide the test data that you used for the testing. The screen prints will help other testers and developers understand how the test was executed, and they will also serve as proof for you. If possible, try to explain the procedure you followed, your choice of data, your understanding, etc.
If your team is maintaining any type of tracking sheet, do not forget to update all the tracking sheets for the bug: its status, the time and date found, its severity, etc.
If you are using a test reporting tool, do not forget to execute the script in the tool. Many test reporting tools require scripts to be executed in order to initiate the life cycle of a bug. For example, TestDirector needs the script to be executed up to the step where the test script failed; the test steps before the failed step are declared as passed.
Update the tracking sheets with the current status, the status in the reporting tools, etc., if they need updating after you execute the script in the reporting tool.
Check whether you have executed all the scripts properly and updated the test reporting tool. After you complete your day's work, it is better to do a peer-to-peer review. This step is very important and often helps in finding missing steps or processes.
Checks while logging defects
First of all, confirm with your test lead that the defect is valid.
Follow the appropriate naming conventions while logging defects.
Before submitting the defect, get it reviewed by your work lead/team lead.
Give an appropriate description and names in the defect screen prints, as per the naming conventions.
After submitting the defect, attach the screen prints for the defect in the test reporting tool.
Note down the defect number/unique identifier and update the test tracking sheet with the appropriate information.
Maintain a defect log, defect tracking sheet, screen prints dump folder etc. for a backup.
Checks for blocking and unblocking scripts
Blocking or unblocking a script relates to a bug that affects the execution of that script. For example, if there is a bug on the login screen that prevents anyone from entering the account after entering a valid username and password and pressing the OK button, there is no way to execute any test script that requires the account screen that comes after the login screen.
Confirm with your test lead/work lead whether the scripts are really blocked by an existing bug.
Block scripts with an active defect (defect status: New/Assigned/Fixed/Reopen).
Update the current script/defect in the test reporting tool and tracking sheets with the defect number/unique identifier that is blocking the execution of the script or the testing of the defect.
If a defect is retested successfully, then unblock all scripts/defects blocked by it.
At the end of the day, send an update mail to your team lead/work lead which should include the following:
Scripts executed (number)
Defects raised/closed (number)
Any comments added on defects
Issues/queries, if any
Software Testing - Black Box Testing Strategy
What is a Black Box Testing Strategy?
Black box testing is not a type of testing; it is a testing strategy, one which does not need any knowledge of internal design or code. As the name "black box" suggests, no knowledge of the internal logic or code structure is required. The types of testing under this strategy are based entirely on testing for the requirements and functionality of the work product/software application. Black box testing is sometimes also called "opaque testing", "functional/behavioral testing" or "closed box testing".
The basis of the black box testing strategy lies in selecting appropriate data as per functionality and testing it against the functional specifications, in order to check for normal and abnormal behavior of the system. Nowadays, it is becoming common to route the testing work to a third party, as the developer of the system knows too much about its internal logic and coding, which makes the developer unsuited to testing the application.
In order to implement the black box testing strategy, the tester needs to be thorough with the requirement specifications of the system and, as a user, should know how the system should behave in response to particular actions.
Various testing types that fall under the Black Box Testing strategy are: functional testing, stress testing, recovery testing, volume testing, User Acceptance Testing (also known as UAT), system testing, Sanity or Smoke testing, load testing, Usability testing, Exploratory testing, ad-hoc testing, alpha testing, beta testing etc.
These testing types are further divided into two groups: a) testing in which the user plays the role of the tester, and b) testing in which the user is not required.
Testing method where user is not required:
Functional Testing:
In this type of testing, the software is tested for the functional requirements. The tests are written in order to check if the application behaves as expected.
Stress Testing:
The application is tested against heavy load, such as complex numerical values, a large number of inputs, a large number of queries, etc., which checks the stress/load the application can withstand.
Load Testing:
The application is tested against heavy loads or inputs, as in the testing of web sites, in order to find out at what point the web site/application fails or its performance degrades.
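A minimal load-test sketch: fire many concurrent requests and measure the elapsed time. The handler below is an invented stand-in for a real endpoint; real load tools such as LoadRunner drive actual servers instead.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the web site/application endpoint under load (invented).
def handle_request(i):
    time.sleep(0.01)        # simulated per-request processing time
    return "ok"

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(handle_request, range(100)))
elapsed = time.perf_counter() - start

print(len(results), "requests in", round(elapsed, 2), "seconds")
```

Raising the request count or worker count until the response time degrades is the essence of finding the failure point described above.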
Ad-hoc Testing:
This type of testing is done without any formal test plan or test case creation. Ad-hoc testing helps in deciding the scope and duration of other testing activities, and it also helps testers learn the application before starting any other testing.
Exploratory Testing:
This testing is similar to the ad-hoc testing and is done in order to learn/explore the application.
Usability Testing:
This testing is also called 'testing for user-friendliness'. It is done when the user interface of the application is an important consideration and needs to be tailored to a specific type of user.
Smoke Testing:
This type of testing is also called sanity testing and is done in order to check whether the application is ready for further major testing and is working properly, without failing, up to the minimum expected level.
Recovery Testing:
Recovery testing is basically done to check how fast and how well the application can recover from any type of crash, hardware failure, etc. The type or extent of recovery is specified in the requirement specifications.
Volume Testing:
Volume testing is done to gauge the efficiency of the application. A huge amount of data is processed through the application under test in order to check the extreme limitations of the system.
Testing where user plays a role/user is required:
User Acceptance Testing:
In this type of testing, the software is handed over to the user in order to find out if the software meets the user expectations and works as it is expected to.
Alpha Testing:
In this type of testing, the users are invited to the development center, where they use the application while the developers note every input or action carried out by the user. Any abnormal behavior of the system is noted and rectified by the developers.
Beta Testing:
In this type of testing, the software is distributed as a beta version to users, who test the application at their own sites. As the users explore the software, any exceptions or defects that occur are reported to the developers.
Software Testing - White Box Testing Strategy
What is a White Box Testing Strategy?
The white box testing strategy deals with the internal logic and structure of the code. White box testing is also called glass, structural, open box or clear box testing. Tests written using the white box testing strategy incorporate coverage of the code written: its branches, paths, statements and internal logic.
In order to implement white box testing, the tester has to deal with the code and hence needs to possess knowledge of coding and logic, i.e. the internal working of the code. White box testing also needs the tester to look into the code and find out which unit/statement/chunk of the code is malfunctioning.
Advantages of White box testing are:
i) As knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out which type of input/data can help in testing the application effectively.
ii) It helps in optimizing the code.
iii) It helps in removing the extra lines of code, which can harbor hidden defects.
Disadvantages of white box testing are:
i) As knowledge of the code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases the cost.
ii) It is nearly impossible to look into every bit of code to find hidden errors, so some defects may remain and later result in failures of the application.
Types of testing under White/Glass Box Testing Strategy:
Unit Testing:
The developer carries out unit testing in order to check whether a particular module or unit of code is working correctly. Unit testing comes at the very basic level, as it is carried out as and when a unit of the code is developed or a particular functionality is built.
Static and dynamic Analysis:
Static analysis involves going through the code in order to find out any possible defect in the code. Dynamic analysis involves executing the code and analyzing the output.
Statement Coverage:
In this type of testing, the code is executed in such a manner that every statement of the application is executed at least once. It helps in ensuring that all statements execute without any side effects.
Branch Coverage:
No software application can be written as a single continuous sequence of code; at some point we need to branch the code in order to perform a particular functionality. Branch coverage testing helps in validating all the branches in the code and making sure that no branch leads to abnormal behavior of the application.
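Statement and branch coverage can be contrasted on a tiny invented function: one call can execute every statement while still leaving a branch outcome untested.

```python
# Tiny invented function with a single branch.
def classify(n):
    label = "small"
    if n > 100:
        label = "large"
    return label

# classify(500) alone executes every statement (100% statement coverage),
# but only the taken side of the 'if'; the fall-through case (n <= 100)
# is never exercised. Branch coverage requires both calls:
print(classify(500))   # covers the true branch
print(classify(5))     # covers the false (fall-through) branch
```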
Security Testing:
Security testing is carried out in order to find out how well the system can protect itself from unauthorized access, hacking, cracking, any code damage, etc. that involves the code of the application. This type of testing needs sophisticated techniques.
Mutation Testing:
A kind of testing in which the application's code is deliberately modified (mutated) in order to check whether the existing tests detect the change. It also helps in finding out which code and which coding strategy can help in developing the functionality effectively.
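A hand-made illustration of the idea: a mutant is produced by one small change to the code, and a test that probes the boundary value detects (kills) it. Both functions below are invented for the example.

```python
# Original code and a hand-made mutant: the mutation replaces >= with >.
def is_adult(age):          # original
    return age >= 18

def is_adult_mutant(age):   # mutant produced by changing >= to >
    return age > 18

# A test suite that checks the boundary value 18 "kills" the mutant,
# because the original and the mutant disagree exactly there.
print(is_adult(18))         # original says True
print(is_adult_mutant(18))  # mutant says False -> mutant detected
```

A test suite that never checked age 18 would let this mutant survive, which is precisely the gap mutation testing exposes.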
Besides all the testing types given above, there are some more types which fall under both Black box and White box testing strategies, such as: functional testing (which deals with the code in order to check its functional performance), incremental integration testing (which deals with the testing of newly added code in the application), and performance and load testing (which helps in finding out how particular code manages resources and delivers performance).
Software Testing - Acceptance Testing
Acceptance testing (also known as user acceptance testing) is a type of testing carried out to verify whether the product has been developed as per the standards and specified criteria and meets all the requirements specified by the customer. This type of testing is generally carried out by a user/customer when the product is developed externally by another party.
Acceptance testing falls under the black box testing methodology, where the user is not much interested in the internal working/coding of the system, but evaluates the overall functioning of the system and compares it with the requirements they specified. User acceptance testing is considered one of the most important tests performed by the user before the system is finally delivered or handed over to the end user.
Acceptance testing is also known as validation testing, final testing, QA testing, factory acceptance testing, application testing etc. In software engineering, acceptance testing may be carried out at two different levels: one at the system provider level and another at the end user level (hence called user acceptance testing, field acceptance testing or end-user testing).
Acceptance testing in software engineering generally involves the execution of a number of test cases which together constitute a particular functionality, based on the requirements specified by the user. During acceptance testing, the system has to pass through, or operate in, a computing environment that imitates the actual operating environment existing with the user. The user may choose to perform the testing in an iterative manner or as a set of varying parameters (for example, missile guidance software can be tested under varying payloads, different weather conditions etc.).
The outcome of the acceptance testing can be termed as success or failure based on the critical operating conditions the system passes through successfully/unsuccessfully and the user’s final evaluation of the system.
The test cases and test criteria in acceptance testing are generally created by the end user and cannot be put together without the business scenario criteria supplied by the user. This type of testing and test case creation involves the most experienced people from both sides (developers and users), such as business analysts, specialized testers, developers and end users.
Process involved in Acceptance Testing 1. Test cases are created with the help of business analysts, business customers (end users), developers, test specialists etc. 2. Test case suites are run against the input data provided by the user, for the number of iterations that the customer sets as the base/minimum required test runs. 3. The outputs of the test case runs are evaluated against the criteria/requirements specified by the user. 4. Depending on whether the outcome is as desired by the user, consistent over the number of test suites run, or inconclusive, the user may call it successful/unsuccessful or suggest some more test case runs. 5. Based on the outcome of the test runs, the system may be rejected or accepted by the user, with or without conditions.
Acceptance testing is done in order to demonstrate the ability of the system/product to perform as per the expectations of the user and to build confidence in the newly developed system/product. A sign-off on the contract stating that the system is satisfactory is possible only after successful acceptance testing.
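The run-and-evaluate loop at the heart of this process can be sketched as follows. The order_total function, the acceptance cases and the iteration count are all hypothetical stand-ins for the user-supplied system, criteria and agreed minimum number of test runs.

```python
# Hypothetical system under acceptance: an order-total calculator.
def order_total(quantity, unit_price):
    return quantity * unit_price

# Acceptance criteria supplied by the customer: (inputs, expected output).
acceptance_cases = [
    ((3, 10.0), 30.0),
    ((0, 10.0), 0.0),
    ((2, 4.5), 9.0),
]

def run_acceptance(cases, iterations=2):
    """Run the suite the agreed number of times; every run must pass."""
    for _ in range(iterations):
        for args, expected in cases:
            if order_total(*args) != expected:
                return "rejected"
    return "accepted"

print(run_acceptance(acceptance_cases))  # prints "accepted"
```

In a real acceptance test the verdict would of course rest on the user's evaluation, not only on an automated comparison; the sketch only mirrors the mechanical part of steps 2 to 5.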
Types of Acceptance Testing
User Acceptance Testing: User acceptance testing in software engineering is considered an essential step before the system is finally accepted by the end user. In general terms, user acceptance testing is the process of testing the system before it is finally accepted by the user.
Alpha Testing & Beta Testing: Alpha testing is a type of acceptance testing carried out at the developer’s site by users (internal staff). In this type of testing, the user goes on testing the system while the outcome is noted and observed by the developer simultaneously.
Beta testing is a type of testing done at the user’s site. The users provide their feedback on the outcome of the testing to the developer. This type of testing is also known as field testing. Feedback from users is used to improve the system/product before it is released to other users/customers.
Operational Acceptance Testing: This type of testing is also known as operational readiness/preparedness testing. It is the process of ensuring that all the required components (processes and procedures) of the system are in place in order to allow the user/tester to use it.
Contract and Regulation Acceptance Testing: In contract and regulation acceptance testing, the system is tested against the criteria specified in the contract document, and is also tested to check whether it meets all government and local authority regulations and laws, as well as all the basic standards.
Software Testing - Stress Testing
Stress testing has different meanings in the different industries where it is used. For the financial industry, stress testing means a process of testing financial instruments to find out their robustness and the level of accuracy they can maintain under extreme conditions, such as a sudden or continuous market crash at a certain level, or a sudden or extreme change in various parameters, for example interest rates, or the repo and reverse repo rates used in the financial sector, or a sudden rise or decline in the price of materials that can affect financial projections. For the manufacturing industry, stress testing may involve different parameters and operating processes for testing different systems. For the medical industry, stress testing means a procedure that can help understand a patient’s condition, and so on.
Stress Testing in IT Industry
Stress testing in the IT industry (hardware as well as software sectors) means testing software/hardware for its effectiveness in giving consistent or satisfactory performance under extreme and unfavorable conditions, such as heavy network traffic, heavy process load, under- or over-clocking of the underlying hardware, or working under the maximum number of requests for resource utilization of peripherals or of the system.
In other words, stress testing helps find out the level of robustness and whether consistent or satisfactory performance is maintained even when the limits of normal operation for the system (software/hardware) are crossed.
The most important use of stress testing is found in testing software and hardware that are supposed to operate in critical or real-time situations: for example, a website that must always be online, whose hosting server must be able to handle the traffic in all possible ways (even if the traffic increases manifold), or mission-critical software or hardware that works in a real-time scenario. For websites and software, stress testing is considered an effective process for determining the limit up to which the system/software/hardware/website shows robustness, remains available to perform its task, manages load more effectively than in the normal scenario, and even shows effective error management under extreme conditions.
Need for Stress Testing
Stress testing is considered to be important for the following reasons: 1. Almost 90% of software/systems are developed with the assumption that they will be operating under a normal scenario. Even when it is considered that the limits of normal operating conditions may be crossed, the margin allowed for is rarely as high as it really could be.
2. The cost or effect of a very important (critical) software/system/website failure under extreme conditions in real time can be huge (or may be catastrophic for the organization or entity owning the software/system).
3. It is always better to be prepared for extreme conditions rather than letting the system/software/website crash, when the limit of normal operation is crossed.
4. Testing carried out by the developer of the system/software/website may not be sufficient to help unveil conditions which will lead to crash of the system/software when it is actually submitted to the operating environment.
5. It's not always possible to unveil possible problems or bugs in a system/software, unless it is subjected to such type of testing.
Stress testing also helps overcome problems such as denial of service attacks on the web servers of a website; security breaches caused by spamming, hacking, viruses etc.; situations where the software/system/website must handle requests for resource allocation while all the required resources are already allocated to another process that itself needs more resources to complete its work (a deadlock situation); memory leaks; race conditions; and so on.
This type of testing is mostly done with the help of the various stress testing tools available in the market. These tools are configured to automate the process of increasing stress (i.e. creating and increasing the degree of an adverse environment) on a system/software/website, and of capturing the values of various parameters that help confirm the robustness, availability and performance of the system/software/website being tested. A few of the actions involved in stress testing are bombarding a website with a huge number of requests, running many resource-hungry applications on a computer, and making numerous attempts to access the ports of a computer in order to hack it and use it for purposes such as spamming or spreading viruses.
The intensity of the adverse conditions is increased slowly while measuring all the parameters, up to the point where the system/software/website crashes. The collected data (observations and parameter values) are used for further improvement of the system/software/website.
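The shape of such a stress driver can be sketched in a few lines: many concurrent workers hammer a target while successes and failures are tallied. The target function here is a hypothetical stand-in for the system under stress; real tools (such as JMeter or Locust) automate this kind of load generation, ramp-up and measurement.

```python
import threading

results = {"ok": 0, "error": 0}
lock = threading.Lock()

def target_service(n):
    """Hypothetical stand-in for the system under stress."""
    return n * n

def worker(requests_per_worker):
    # Each worker fires a burst of requests and records the outcome.
    for i in range(requests_per_worker):
        try:
            target_service(i)
            with lock:
                results["ok"] += 1
        except Exception:
            with lock:
                results["error"] += 1

# 20 concurrent workers x 1000 requests each = 20000 requests in total.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # prints {'ok': 20000, 'error': 0}
```

A real driver would ramp the worker count up step by step, recording latency and error rates at each step, until the target degrades or crashes, which is exactly the slow increase in intensity described above.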
Software Testing - An Introduction To Usability Testing
Usability Testing:
As the term suggests, usability means how well something can be used for the purpose it has been created for. Usability testing is a way to measure how easy, moderate or hard people (the intended/end users) find it to interact with and use the system, keeping its purpose in mind. A standard statement is that "usability testing measures the usability of the system".
Why Do We Need Usability Testing?
Usability testing is carried out to find out whether any change needs to be made in the developed system (be it a design change or any specific procedural or programmatic change) in order to make it more and more user friendly, so that the intended/end user who is ultimately going to buy and use it receives a system that they can understand and use with utmost ease.
Any changes suggested by the tester at the time of usability testing are the most crucial points that can change the standing of the system in the intended/end user’s view. The developer/designer of the system needs to incorporate the feedback from usability testing (feedback can be a very simple change in look and feel or a complex change in the logic and functionality of the system) into the design and the developed code, in order to make the system more presentable to the intended/end user (where the word system may mean a single object or an entire package consisting of more than one object).
Developers often try to make the system as good-looking as possible while also fitting in the required functionality; in this endeavor they may overlook some error-prone conditions which are uncovered only when the end user is using the system in real time.
Usability testing helps the developer study the practical situations in which the system will be used in real time. The developer also gets to know the areas that are error prone and the areas of improvement.
In simple words, usability testing is an in-house dummy release of the system before the actual release to the end users, where the developer can find and fix all possible loopholes.
How Is a Usability Test Carried Out?
A usability test, as mentioned above, is an in-house dummy release before the actual release of the system to the intended/end user. Hence, a setup is required in which developers and testers try to replicate situations that are as realistic as possible, to project the real-time usage of the system. The testers try to use the system in exactly the same manner that any end user can or will. Please note that in this type of testing too, all the standard instructions of testing are followed, to make sure that the testing is done in all directions, such as functional testing, system integration testing, unit testing etc.
The outcome/feedback is noted down based on observations of how the user is using the system, what other possible ways of using it may come into the picture, the behavior of the system, and how easy or hard it is for the user to operate/use the system. The user is also asked for his/her feedback on what he/she thinks should be changed to improve the interaction between the system and the end user.
Usability testing measures various aspects such as:
How much time do the tester/user and the system take to complete a basic flow?
How much time do people take to understand the system (per object), and how many mistakes do they make while performing any process/flow of operation?
How fast does the user become familiar with the system, and how fast can he/she recall the system’s functions?
And the most important: how do people feel when they are using the system?
Over time, many people have formulated various measures and models for performing usability testing. Any of these models can be used to perform the test.
Advantages of Usability Testing A usability test can be modified to cover many other types of testing, such as functional testing, system integration testing, unit testing, smoke testing etc. (keeping the main objective of usability testing in mind), in order to make sure that testing is done in all possible directions.
Usability testing can be very economical if planned properly, yet highly effective and beneficial.
If proper resources (experienced and creative testers) are used, a usability test can help fix all the problems that users may face even before the system is finally released to them. This may result in better performance and a standard system.
Usability testing can help in uncovering potential bugs and pitfalls in the system which generally are not visible to developers and even escape the other types of testing.
Usability testing is a very wide area of testing, and it needs a fairly high level of understanding of the field along with a creative mind. People involved in usability testing are required to possess skills like patience, the ability to listen to suggestions, openness to welcome any idea, and, most important of all, good observation skills to spot and fix problems on the fly.
Software Testing - Compatibility Testing
Compatibility testing is a non-functional type of software testing that helps evaluate a system/application's performance in connection with the operating environment it runs in. Read on to know more about compatibility testing.
Compatibility testing is one of the several types of software testing performed on a system that is built based on certain criteria and has to perform specific functionality in an already existing setup/environment. The compatibility of a system/application being developed with, for example, other systems/applications, the OS, or the network decides many things, such as the use of the system/application in that environment, the demand for it, and so on. Many a time, users prefer not to opt for an application/system simply because it is not compatible with some other system/application, network, hardware or OS they are already using. This leads to a situation where the development effort of the developers proves to be in vain.
What is Compatibility Testing?
Compatibility testing is a type of testing used to ensure the compatibility of the system/application/website with various other objects, such as other web browsers, hardware platforms, users (in the case of a very specific type of requirement, such as a user who speaks and can read only a particular language), operating systems etc. This type of testing helps find out how well a system performs in a particular environment comprising hardware, network, operating system, other software etc.
Compatibility testing can be automated using automation tools or can be performed manually and is a part of non-functional software testing.
Developers generally look for the evaluation of the following elements in a computing environment (an environment in which the newly developed system/application is tested, and which has a configuration similar to the actual environment in which the system/application is supposed to fit and start working).
Hardware: Evaluation of the performance of the system/application/website on a certain hardware platform. For example, if an all-platform compatible game is developed and is being tested for hardware compatibility, the developer may choose to test it for various combinations of chipsets (such as Intel or Macintosh), graphics cards (such as GeForce), motherboards etc.
Browser: Evaluation of the performance of the system/website/application on a certain type of browser. For example, a website is tested for compatibility with browsers like Internet Explorer, Firefox etc. (Browser compatibility testing is usually also looked at as user experience testing, as it relates to the user’s experience of the application/website while using it on different browsers.)
Network: Evaluation of the performance of system/application/website on network with varying parameters such as bandwidth, variance in capacity and operating speed of underlying hardware etc., which is set up to replicate the actual operating environment.
Peripherals: Evaluation of the performance of system/application in connection with various systems/peripheral devices connected directly or via network. For example: printers, fax machines, telephone lines etc.
Compatibility between versions: Evaluation of the performance of system/application in connection with its own predecessor/successor versions (backward and forward compatibility). For example: Windows 98 was developed with backward compatibility for Windows 95 etc.
Software: Evaluation of the performance of the system/application in connection with other software. For example, compatibility with network operating tools, web servers, messaging tools etc.
Operating System: Evaluation of the performance of system/application in connection with the underlying operating system on which it will be used.
Databases: Many applications/systems operate on databases. Database compatibility testing is used to evaluate an application/system’s performance in connection to the database it will interact with.
How helpful is it?
Compatibility testing can help developers understand the criteria that their system/application needs to meet in order to be accepted by intended users who are already using certain OSes, networks, software and hardware. It also helps users find out which system will fit better into the existing setup they are using.
The most important use of compatibility testing is, as already mentioned above, ensuring the system's performance in the computing environment in which it is supposed to operate. This helps in figuring out the changes/modifications/additions required to make the system/application compatible with that environment.
Software Testing - Brief Introduction To Exploratory Testing
Exploratory software testing, even though disliked by many, has found its place in the software testing world. Exploratory testing can help uncover bugs that stand more chance of being ignored by other testing strategies.
What is Exploratory Testing?
Bach’s Definition: ‘Any testing to the extent that the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.’
This can simply be put as: a type of testing where we explore the software and write and execute the test scripts simultaneously.
Exploratory testing is a type of testing in which the tester does not have specifically planned test cases; instead, he/she tests with the intent of exploring the software's features and tries to break it in order to find unknown bugs.
A tester who does exploratory testing does it with the aim of understanding the software more and more and appreciating its features. During this process, he/she also tries to think of all possible scenarios in which the software may fail and a bug can be revealed.
Why do we need exploratory testing? At times, exploratory testing helps reveal many unknown and undetected bugs, which are very hard to find through normal testing.
As exploratory testing covers almost all the normal types of testing, it helps improve productivity in terms of covering both the scenarios included in scripted testing and those which are not scripted.
Exploratory testing is a learn-and-work type of testing activity in which a tester can at least learn more about and understand the software, even if he/she was not able to reveal any potential bug.
Exploratory testing, even though disliked by many, helps testers learn new methods and test strategies, think out of the box, and become more and more creative.
Who Does Exploratory Testing?
Any software tester knowingly or unknowingly does it!
While testing, if a tester comes across a bug, as a general practice the tester registers that bug with the programmer. Along with registering the bug, the tester also tries to make sure that he/she has understood the scenario and functionality properly and can reproduce the bug condition. Once the programmer fixes the bug, the tester runs a test case replicating the same scenario in which the bug had previously occurred. If the tester finds that the bug is fixed, he/she then tries to find out whether the fix can handle the same type of scenario with different inputs.
For example, let's consider that a tester finds a bug related to an input text field on a form, where the field is supposed to accept any number other than those from 1 to 100, which it fails to do: it accepts the number 100. The tester logs this bug with the programmer and waits for the fix. Once the programmer fixes the bug, he/she sends it across to the tester to get it tested. The tester will now try to test the bug with the same input value (100, as he/she had found that this condition causes the application to fail) in the field. If the application rejects the number (100) entered by the tester, he/she can safely close the defect.
Now, along with the above test input value which had revealed the bug, the tester tries to check whether there is any other value in this range (1 to 100) that can cause the application to fail. He/she may try to enter values from 1 to 100, or perhaps some characters, or a combination of characters and numbers in any order. All these test cases are thought up by the tester as variations of the type of value he/she had entered previously, and they represent only one test scenario. This is called exploratory testing, because the tester tried to explore and find out the possibility of revealing a bug in every possible way.
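The scenario above can be sketched in code. The validator below is a hypothetical stand-in for the fixed field (reject the numbers 1 to 100, accept everything else, reject non-numeric input); the probes mirror the boundary and variation inputs an exploratory tester would try next.

```python
def accepts(value):
    """Hypothetical field validator: True if the input should be accepted."""
    try:
        number = int(value)
    except (TypeError, ValueError):
        return False            # non-numeric input is rejected outright
    return not (1 <= number <= 100)

# The originally reported bug value, re-checked after the fix:
assert not accepts(100)

# Boundary and variation probes an exploratory tester might try next:
assert not accepts(1)       # lower boundary of the forbidden range
assert accepts(0)           # just below the range
assert accepts(101)         # just above the range
assert not accepts("50")    # same forbidden value entered as a string
assert not accepts("abc")   # plain characters
```

Each probe is a small improvised test case derived from the first one; in exploratory testing these variations are designed and executed on the spot rather than scripted in advance.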
What qualities do I need to possess to be able to perform exploratory testing?
As mentioned above, any software tester can perform exploratory testing. The only limit to the extent to which you can perform exploratory testing is your imagination and creativity: the more ways you can think of to explore and understand the software, the more test cases you will be able to write and execute simultaneously.
Advantages of Exploratory Testing Exploratory testing can uncover bugs which are normally ignored (or hard to find) by other testing strategies.
It helps testers learn new strategies and expand the horizon of their imagination, which helps them understand and execute more test cases and finally improves their productivity.
Exploratory testing helps the tester confirm that he/she understands the application and its functionality properly and has no confusion about the working of even the smallest part of it, hence covering the most important part: requirement understanding.
As we write and execute the test cases simultaneously in exploratory testing, it helps in collecting result-oriented test scripts and shedding the load of unnecessary test cases which do not yield any result.
Exploratory testing covers almost all types of testing, hence the tester can be sure of covering various scenarios once exploratory testing is performed at the highest level (i.e. if the exploratory testing performed can ensure that all the possible scenarios and test cases are covered).
Waterfall Model in Testing
The waterfall model in testing is one of the most widely used and popular process models for developing and designing software programs to meet specific customer needs. The linear sequential properties of the model make it a universally accepted software development process model.
Every piece of software that is released needs to undergo a rigorous testing procedure before it can be released for general use. The waterfall model in testing is one such procedure that has gained immense popularity over the years, owing to its simple understandability and elementary design. The development of software is a long and arduous process, and without proper testing and checking, the software cannot be sold anywhere. The frameworks that organize software development are known as 'software development process models', and these define the many stages of developing a piece of software, right from the initial idea conception to the final usage.
The waterfall model in testing makes use of a design concept that is also known as the Software Development Life Cycle (SDLC) Model, or the Linear Sequential Model. Thus, the waterfall model in software engineering defines the various stages that a software developer must undertake in order to ensure that the software meets customer requirements and works glitch-free.
Waterfall Model Life-cycle
A lot of research and development goes into the various growth stages of any particular piece of software. The idea behind generating a standardized model of testing (like the waterfall model) is to ensure that a software engineer follows the correct sequence of process development and does not get too far ahead too soon. Each line of the program needs to be checked and double-checked, and each stage of the waterfall model is required to follow a standard protocol. The various waterfall model phases are as follows.
The waterfall model diagram shown above illustrates the various stages of this process.
Requirement Gathering
The first and most obvious step in software development is the gathering of all the requirements of the customer. The primary purpose of the final program is to serve the user, so all of his needs and requirements need to be known in detail before the development process actually begins. The purpose of the model, along with a basic specifications and requirements chart, is established after careful consultation with the user, and this is incorporated into the development process. The waterfall model in testing thus begins with the gathering of all pertinent and necessary data from the customer.
Requirements Analysis
Next, these requirements are studied and analyzed closely, and the developer decides which platform, which computer language, and what kind of databases are necessary for the development process. A feasibility study is then carried out to ensure that all resources are available and that the actual programming of the software is possible. A projected blueprint of sorts of the software is created.
Designing and Coding
This is where the real work begins, and the algorithms and flowcharts of the software are devised. Based on the data collected and the feasibility study carried out, the actual coding of the program commences. Without the information gathered in the previous two stages, the design of the program would be impossible. This is the most important stage of the model; the use of the waterfall model in testing would be impossible without something to actually test. It goes without saying that the final design has to meet all the necessary requirements of the customer.
Testing
Now comes the litmus test of the code developed. This stage marks the actual transition of the program from a mere hypothesis to real, usable software. Without testing the functionality of the code, all the possible bugs cannot be detected. Moreover, use of the waterfall model in testing also ensures that all the requirements of the customer are satisfactorily met and that there are no loose ends anywhere in the code developed. If any flaws or bugs are detected, the software is reverted to the designing stage and all the deficiencies are fixed.
The design is divided into smaller parts known as units, and unit testing needs to be carried out for each of these divisions individually. Once the units are declared flaw free, they are integrated into the final system, and this system is then tested to ensure proper integration and compatibility between the various units. The waterfall model in testing can only be applied by dividing the coded program into manageable parts. Thus, the importance of the testing phase in the waterfall model is universally known and undoubted.
Final Acceptance
Once the design has been tried and tested by the testing team, the customers are given a demo version of the final program. Now they must use the program and indicate whether they are satisfied with the product or not. If they accept that the software is satisfactory and as per their demands and requirements, the process is complete. On the other hand, if they are dissatisfied with certain aspects of the software, or feel that an integral component is missing, the design team proceeds to solve the problem. The benefit of dividing the work into these various stages is that everyone knows what they are doing and is specifically trained to carry out their responsibility.
Waterfall model in testing ensures that a high degree of professionalism is met within the development process, and that all the parties involved in this development process are specialists in their respective fields.
Advantages of Waterfall Model in Testing
The primary advantage is that it is a linear model that follows a proper sequence and order. This is a crucial factor in determining the model's effectiveness and suitability. Also, since the process follows a linear sequence and documentation is produced at every stage, it is easy to track down mistakes, deficiencies and any other problems that may arise. The cost of resources at each stage is minimized due to the linear sequencing as well.
Disadvantages of Waterfall Model in Testing
As is the case with all other models, if the customer is ambiguous about his needs, the design process can go horribly wrong. This is compounded by the fact that if a mistake is made in a certain stage and is not detected or tracked, all the subsequent steps will go wrong; therefore the need for testing is very intense. Customers often complain that if they could get a sample of the software in the early stages, they could find out whether it is suitable or not. Since they do not receive the program until it is almost completed, it becomes a little more complicated for them to offer feedback. Thus, a relationship of complete trust with the client is essential.
Software Verification & Validation Model - An Introduction
An introduction to the ‘Verification & Validation Model’ used to improve the software project development life cycle.
A good software product is built when every step is taken with the full consideration that ‘the right product is developed in the right manner’. ‘Software Verification & Validation’ is one such model, which helps system designers and test engineers confirm that the right product is built the right way throughout the development process, improving the quality of the software product.
The ‘Verification & Validation Model’ ensures that certain rules are followed during the development of a software product, and also that the product that is developed fulfills the required specifications. This reduces the risk associated with a software project to a certain level by helping detect and correct the errors and mistakes that are unknowingly made during the development process.
What is Verification?
The standard definition of Verification is: "Are we building the product RIGHT?" That is, Verification is a process that ensures the software product is developed the right way. The software should conform to its predefined specifications; as product development goes through its different stages, an analysis is done to ensure that all required specifications are met.
The methods and techniques used in Verification and Validation should be designed carefully, and their planning starts right at the beginning of the development process. The Verification part of the ‘Verification and Validation Model’ comes before Validation and incorporates software inspections, reviews, audits, walkthroughs, buddy checks etc. in each phase of verification (every phase of Verification is a phase of the Testing Life Cycle).
During Verification, the work product (the ready part of the software being developed and its various documents) is reviewed/examined personally by one or more persons in order to find and point out defects in it. This process helps prevent potential bugs that could cause the project to fail.
Few terms involved in Verification:
Inspection: Inspection involves a team of about 3-6 people, led by a leader, which formally reviews the documents and work product during various phases of the product development life cycle. The work product and related documents are presented to the inspection team, whose members carry different interpretations of the presentation. The bugs detected during the inspection are communicated to the next level so they can be taken care of.
Walkthroughs: A walkthrough can be considered the same as an inspection but without formal preparation (of any presentation or documentation). During the walkthrough meeting, the presenter/author introduces the material to all the participants in order to make them familiar with it. Although walkthroughs can help in finding potential bugs, they are mainly used for knowledge sharing and communication.
Buddy Checks:
This is the simplest type of review activity used to find bugs in a work product during verification. In a buddy check, one person goes through the documents prepared by another person in order to find out whether that person has made any mistakes, i.e. to find bugs the author could not find previously.
The activities involved in the Verification process are: Requirement Specification verification, Functional design verification, internal/system design verification and code verification (these phases can also be subdivided further). Each activity ensures that the product is developed the right way and that every requirement, specification, design, piece of code etc. is verified.
What is Validation?
Validation is the process of finding out whether the product being built is right, i.e. whatever software product is being developed, it should do what the user expects it to do. The software product should functionally do what it is supposed to do and satisfy all the functional requirements set by the user. Validation is done during or at the end of the development process in order to determine whether the product satisfies the specified requirements.
The Validation and Verification processes go hand in hand, but the Validation process visibly starts after the Verification process ends (after coding of the product ends). Each Verification activity (such as Requirement Specification Verification, Functional Design Verification etc.) has its corresponding Validation activity (such as Functional Validation/Testing, Code Validation/Testing, System/Integration Validation etc.).
All types of testing methods are basically carried out during the Validation process. Test plans, test suites and test cases are developed and used during the various phases of the Validation process. The phases involved in the Validation process are: Code Validation/Testing, Integration Validation/Integration Testing, Functional Validation/Functional Testing, and System/User Acceptance Testing/Validation.
Terms used in Validation process:
Code Validation/Testing:
Developers as well as testers do the code validation. Unit Code Validation or Unit Testing is a type of testing which the developers conduct in order to find any bugs in the code unit/module they have developed. Code testing other than Unit Testing can be done by either testers or developers.
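As a sketch of what unit-level code validation looks like in practice, the following hypothetical example tests a small discount-calculation function with Python's built-in unittest module. The function, its rules and the test names are invented for illustration, not taken from any real project:

```python
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount.

    Raises ValueError for invalid inputs -- exactly the kind of
    defect a unit test is meant to catch early.
    """
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or discount percentage")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percentage_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Saved as a module, such a test class would be run with `python -m unittest`; each failing assertion points the developer directly at a defective unit.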
Integration Validation/Testing:
Integration testing is carried out in order to find out whether different (two or more) units/modules coordinate properly. This test helps in finding out whether there is any defect in the interface between different modules.
Functional Validation/Testing:
This type of testing is carried out in order to find out whether the system meets the functional requirements. In this type of testing, the system is validated for its functional behavior. Functional testing does not deal with the internal coding of the project; instead, it checks whether the system behaves as per the expectations.
User Acceptance Testing or System Validation:
In this type of testing, the developed product is handed over to the users/paid testers in order to test it in a real-time scenario. The product is validated to find out whether it works according to the system specifications and satisfies all the user requirements. As the users/paid testers use the software, as-yet-undiscovered bugs may come up, which are communicated to the developers to be fixed. This helps improve the final product.
Spiral Model - A New Approach Towards Software Development
The Waterfall model is the simplest and most widely accepted/followed software development model, but like any other system, the Waterfall model has its own pros and cons. The Spiral Model for software development was designed in order to overcome the disadvantages of the Waterfall Model.
In the last article we discussed the "Waterfall Model", one of the oldest and simplest models designed and followed during the software development process. But the "Waterfall Model" has its own disadvantages: there is no fair division of phases in the life cycle, and not all the errors/problems related to a phase are resolved during that phase. Instead, the problems related to one phase are carried over into the next phase and have to be resolved there, which takes up much of the next phase's time. The risk factor is the most important part, and it affects the success rate of software developed by following "The Waterfall Model".
In order to overcome the cons of "The Waterfall Model", it was necessary to develop a new software development model that could help ensure the success of a software project. One such model was developed which incorporated the common methodologies followed in "The Waterfall Model" but also eliminated almost every possible/known risk factor from it. This model is referred to as "The Spiral Model" or "Boehm’s Model".
There are four phases in the "Spiral Model": Planning, Risk Analysis, Engineering and Customer Evaluation. These four phases are followed iteratively, one after the other, in order to eliminate the problems faced in "The Waterfall Model". Iterating the phases helps in understanding the problems associated with a phase, dealing with those problems when the same phase is repeated the next time, and planning and developing the strategies to be followed while iterating through the phases. The phases in the "Spiral Model" are:
Plan: In this phase, the objectives, alternatives and constraints of the project are determined and documented. The objectives and other specifications are fixed in order to decide which strategies/approaches to follow during the project life cycle.
Risk Analysis: This phase is the most important part of the "Spiral Model". In this phase, all possible (and available) alternatives which can help in developing a cost-effective project are analyzed, and strategies for using them are decided. This phase was added specifically in order to identify and resolve all possible risks in the project development. If the risks indicate any kind of uncertainty in the requirements, prototyping may be used to proceed with the available data and find a possible solution for dealing with the potential changes in the requirements.
Engineering: In this phase, the actual development of the project is carried out. The output of this phase is passed through all the phases iteratively in order to obtain improvements in the same.
Customer Evaluation: In this phase, the developed product is passed on to the customer in order to receive the customer’s comments and suggestions, which help in identifying and resolving potential problems/errors in the software. This phase is very similar to the TESTING phase.
The process progresses in a spiral, indicating the iterative path followed; progressively more complete software is built as we iterate through all four phases. The first iteration in this model is considered the most important, as almost all possible risk factors, constraints and requirements are identified in it, and in the next iterations all known strategies are used to bring up a complete software system. The radial dimension indicates the evolution of the product towards a complete system.
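The iterative flow described above can be sketched as a simple loop. The phase names follow the article; the risk-counting and acceptance criteria below are invented placeholders, not part of Boehm's model:

```python
# Illustrative sketch of the Spiral Model's iterative flow.
# Each function is a stand-in for a whole project phase.

def plan(iteration):
    return f"objectives and constraints for iteration {iteration}"

def analyze_risks(iteration):
    # A real project would evaluate alternatives and possibly
    # prototype; here we pretend risk shrinks with each pass.
    return max(0, 3 - iteration)  # remaining risk items

def engineer(iteration):
    return f"increment {iteration} built"

def customer_evaluation(iteration, remaining_risks):
    # Customer feedback decides whether another spiral is needed.
    return remaining_risks == 0

def run_spiral(max_iterations=5):
    for iteration in range(1, max_iterations + 1):
        plan(iteration)
        risks = analyze_risks(iteration)
        engineer(iteration)
        if customer_evaluation(iteration, risks):
            return iteration  # product accepted on this pass
    return max_iterations
```

The point of the sketch is the shape of the process: the same four phases repeat, and each pass carries forward what the previous risk analysis and customer evaluation learned.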
However, as every system has its pros and cons, "The Spiral Model" has them too. Since this model was developed to overcome the disadvantages of the "Waterfall Model", following the "Spiral Model" requires people highly skilled in planning, risk analysis and mitigation, development, customer relations etc. This, along with the fact that the process needs to be iterated more than once, demands more time and makes it a somewhat expensive undertaking.
Rational Unified Process (RUP) Methodology
The Rational Unified Process (RUP) is a software process product designed as an object-oriented and web-enabled program development methodology by Rational Software Corporation, a division of IBM since 2003. This article provides a brief overview of the Rational Unified Process (RUP) methodology.
The Rational Unified Process (RUP) methodology is a software engineering tool which combines development artifacts such as manuals, documents, code, models, etc. with the procedural aspects of development such as techniques, mechanics, defined stages and practices within a unified framework.
What is RUP?
Rational Unified Process (RUP) methodology is fast becoming a popular software development methodology for mapping business processes and practices. Development is phased into four stages. RUP methodology is highly flexible in its developmental path, as any stage can be revisited at any time. The first stage, inception, centers on assessing the needs, requirements, viability and feasibility of the program or project. The second stage, elaboration, measures the appropriateness of the system's architecture based on the project needs. The third stage is the construction phase, wherein the actual software system is made by developing components and features. This phase also includes the first release of the developed software. The final stage is that of transition, and it marks the end of the development cycle if all objectives are met. This phase deals with the training of the end users, beta testing and the final implementation of the system.
Understanding RUP: Six Best Industry Practices of RUP
RUP is designed to incorporate the six best software industry practices for software development, while stressing object-oriented design strongly. These are six ideas which, when followed while designing any software project, will reduce errors and faults and ensure optimal productivity. The practices are listed below:
Develop Iteratively
Loops are created to add extra information or to facilitate processes that are added later in the development stage.
Requirements
Gathering requirements is essential to the success of any project. The end users' needs have to be built into the system completely.
Components
Large projects, when split into components, are easier to test and can be more methodically integrated into a larger system. Components allow code reuse through the use of object-oriented programming.
Design Models Visually
Many projects use Unified Modeling Language (UML) to perform object-oriented analysis and designs, which consist of diagrams to visually represent all major components.
Testing
Testing for quality and defects is an integral part of software development. There are also a number of testing patterns that should be developed to gauge the readiness of the project for its release.
Synchronized Changes
All components created by separate teams, either from different locations or on different platforms need to be synchronized and verified constantly.
The Rational Unified Process (RUP) methodology's developmental approach has proved very resourceful and successful for a number of reasons. The entire development process takes changing requirements into account and integrates them. Risks and defects can not only be discovered but also addressed, reduced or eliminated in the middle of the integration process. As defects are detected along the way, errors and performance bottlenecks can be rectified over the several iterations (loops). RUP provides a prototype at the completion of each iteration, which makes it easier for the developers to synchronize and implement changes.
Rational Unified Process (RUP) methodology is designed to work as online help that provides content, guidelines, process templates and examples for all stages of program development. To be a certified solution designer, authorized to use this methodology, one needs to score a minimum of 62% in the IBM RUP certification examination.
Rational Unified Process (RUP) is a comprehensive software engineering process. It features a disciplined approach towards industry-tested practices for designing software and systems within a development organization. Continue reading if you want to know what the Rational Unified Process (RUP) is.
The concept of the Rational Unified Process (RUP) came from the Rational Software Corporation, a division of IBM (International Business Machines Corporation). It keeps a check on effective project management and high-quality production of software. The basic methodology followed in RUP is based on comprehensive web-enabled program development and an object-oriented approach. The 'Rational Unified Process' adopts the 'Unified Modeling Language' and provides best-practice guidelines, templates and illustrations of all aspects of program development. Here is a simple breakdown of all the aspects related to this concept, to give you a brief understanding of what the Rational Unified Process (RUP) is.
There are primarily four phases or stages of development in RUP, each of which is concluded with a release. Here is a quick review of all four stages or cycles.
Inception Phase
In the inception phase, the goal is to develop the parent idea into a product vision by defining its scope and the business case. The business case includes business context, factors influencing success, risk assessment and financial forecast. This is to get an understanding of the business drivers and to justify the launch of the project. This phase is to identify the work flows required by the project.
Elaboration Phase
Here the architectural foundation, project plan and high-risk factors of the project are determined after analyzing the problem domain. Establishing these objectives requires an in-and-out knowledge of the system. In other words, the performance requirements, scope and functionality of the system influence the architectural concept of the project. Architectural and planning decisions are governed by the most critical use cases, so a perfect understanding of the use cases and an articulated vision are what the elaboration phase aims to achieve. This is an important phase, since after it the project is carried to a level where any changes might cause a disastrous outcome for the entire operation.
Construction Phase
As the name suggests, this phase involves construction of the software system or project. Here, the remaining components and application features are developed and then integrated into the product, which is moved from an architectural baseline to a completed system. In short, the source code and the application design are created for the software's transition to the user community. The construction phase produces the first external release of the software, wherein adequate quality with optimization of resources is achieved rapidly.
Transition Phase
The transition phase marks the transition of the project from development to production. This stage is to ensure that the user requirements have been satisfied and met by the product. This begins with testing the product before its release as a beta version. The beta version is enhanced by bug fixing, site preparation, manual completion, defect identification and improving performance and usability. Other objectives are also taken up. They include:
Training users and maintainers for successful operation of the system
Purchasing hardware
Converting data from old to new systems
Arranging activities for a successful launch of the product
Holding lessons-learned sessions for improving the future process and tool environment.
Rational Unified Process mentions six best practices, which have to be kept in mind when designing any software. These practices help prevent flaws in the project development and create more scope for efficient productivity. These six practices are as follows.
1. An iterative (executing the same set of instructions a given number of times or until a specified result is obtained) approach towards software development.
2. Managing user requirements.
3. Using and testing individual components before they are integrated into a larger system.
4. Using the 'Unified Modeling Language' tool to get a visual model of the components, users and their interactions relating to the project.
5. Constantly testing software quality, which is considered one of the best practices in any software development.
6. Monitoring, tracking and controlling changes made to the system, which is essential for successful iterative development and for a team to work together as a single unit.
The concept of the Rational Unified Process has endless explanation and description; each and every important and essential consideration in software development has been defined to its root. RUP results in reduced IT costs, improved IT business, higher quality, higher service levels, sharper adaptability and, most importantly, a higher ROI (return on investment), among many other benefits. The above is just a brief theoretical explanation of what RUP is; a clearer and more elaborate idea can be gained once the process is put into practical use.
The process of quality assurance helps in testing the products and services as per the desired standards and the needs of customers. The quality assurance certifications for the software industry, organic food, and many other products are discussed in the following article.
In short, the activity or process that proves the suitability of a product for the intended purpose could be described as quality assurance. The quality assurance process takes care of the quality of the products and ensures that customer requirements pertaining to the products are met. The certifications used to assess different products have different parameters which should be understood thoroughly. Total quality management is vital for the survival and profitability of business nowadays.
Certifications like ISO, CMMi, P-CMM, etc. are some of the most sought-after quality certifications in the IT-ITES sector. Quality assurance procedures are also implemented in software testing.
International Organization for Standardization (ISO): ISO is an international standards body whose ISO 9000 family is widely used for quality assurance. The ISO 9000 system makes use of different documents or procedures for quality assurance, namely ISO 9001, ISO 9002 and ISO 9003, with the ISO 9000 and ISO 9004 documents containing supporting guidelines. ISO 9001 covers design, production, installation and maintenance/servicing, while ISO 9002 is used for production and installation only. Final inspection is covered by the ISO 9003 model.
Capability Maturity Model Integration (CMMi): The CMMi acts as a guiding force in the improvement of the processes of an organization. The management of development, acquisition and maintenance of the services and products of a company is also improved with the help of CMMi. A set of proven practices is placed in a structure to assess the process area capability and organizational maturity of a company. Priorities for improvement are established, and CMMi helps ensure that these priorities are implemented properly.
People Capability Maturity Model (P-CMM): The P-CMM model is similar to the SW-CMM (Software Capability Maturity Model) in its approach. The objective of P-CMM is to improve the software development capability of an organization by attracting, developing, motivating, organizing and retaining the manpower or required talent. The management and development of a company's workforce are guided by the P-CMM model, which makes use of the best current practices in organizational and human resource development to achieve its objectives.
eServices Capability Model (eSCM): The eSCM model serves the needs of the BPO/ITES industries. This model is used to assist customers in measuring a service provider's capability. The measurement is needed for establishing and managing outsourcing relationships that improve continually.
BS 7799: This is a security standard which originated in the mid-nineties and by the year 2000 had evolved into a model known as BS EN ISO 17799. It is difficult to comply with the requirements/standards of this model, since it covers security issues comprehensively and contains a significant number of control requirements.
QAI Certification Program
Quality Assurance International (QAI) is an agency which awards organic certification to producers, private labelers, processors, retailers, distributors and other 'links' involved in the production of organic food.
Food and Drug Administration Certification
The Food and Drug Administration (FDA) of the US awards quality assurance certification for food products which comply with performance and safety standards. The FDA certifies different types of products such as dietary supplements, drugs & vaccines, medical devices, animal drugs & food, cosmetic products, etc.
Canadian Standards Association (CSA) International
The various products certified under the CSA are building products, heating & cooling equipment, concrete products, home equipment, health care equipment, gas appliances, etc. Rigorous tests are conducted in order to award the quality assurance certificates.
Quality control and quality assurance certifications help develop the trust of customers in a particular product. The quality assurance certificates awarded by various agencies also act as a motivating force for industries to maintain the required standards. This short account of the various certifying agencies should help the concerned people in those industries.
What are test cases in software testing? How are they designed, and why are they so important to the entire testing scenario? Read on to know more.
What is a Test Case?
A test case is a set of conditions, variables and inputs developed to achieve a particular goal or objective on a certain application, in order to judge its capabilities or features.
It might take more than one test case to determine the true functionality of the application being tested. Every requirement or objective to be achieved needs at least one test case. Some software development methodologies, like the Rational Unified Process (RUP), recommend creating at least two test cases for each requirement or objective: one for testing from a positive perspective and the other from a negative perspective.
Test Case Structure
A formal written test case comprises three parts.
1. Information
Information consists of general information about the test case. It incorporates the identifier, test case creator, test case version, name of the test case, purpose or brief description, and test case dependencies.
2. Activity
Activity consists of the actual test case activities. It contains information about the test case environment, activities to be done at test case initialization, activities to be done after the test case is performed, step-by-step actions to be taken while testing, and the input data to be supplied for testing.
3. Results
Results are the outcomes of a performed test case. Results data consist of information about the expected results and the actual results.
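The three-part structure above could be represented as a simple record. This is a minimal sketch; the field names are adapted from the article and do not follow any real test-management tool:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # 1. Information: general data identifying the test case.
    identifier: str
    name: str
    creator: str
    version: str
    purpose: str
    dependencies: list = field(default_factory=list)

    # 2. Activity: environment, setup/teardown and steps.
    environment: str = ""
    setup_steps: list = field(default_factory=list)
    test_steps: list = field(default_factory=list)
    teardown_steps: list = field(default_factory=list)
    input_data: dict = field(default_factory=dict)

    # 3. Results: expected vs. actual outcome.
    expected_result: str = ""
    actual_result: str = ""

    def passed(self):
        # A test case passes when the actual result matches
        # the expected result recorded for it.
        return self.actual_result == self.expected_result

tc = TestCase(
    identifier="TC-001",
    name="Login with valid credentials",
    creator="tester1",
    version="1.0",
    purpose="Verify a registered user can log in",
    expected_result="dashboard shown",
    actual_result="dashboard shown",
)
```

Keeping the three parts as explicit fields makes it trivial to report which test cases passed and which need investigation.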
Designing Test Cases
Test cases should be designed and written by someone who understands the function or technology being tested. A test case should include the following information -
Purpose of the test
Software requirements and hardware requirements (if any)
Specific setup or configuration requirements
Description on how to perform the test(s)
Expected results or success criteria for the test
Designing test cases can consume a lot of time in a testing schedule, but they are worth the effort because they can really reduce, or even avoid, unnecessary retesting and debugging. Organizations can take the test case approach in their own context and according to their own perspectives: some follow a general stepwise approach, while others opt for a more detailed and complex one. It is very important to decide between the two extremes and judge what would work best for you. Designing proper test cases is vital for your software testing plans, as many bugs, ambiguities, inconsistencies and slip-ups can be caught in time, and it also saves time otherwise spent on continuous debugging and re-testing.
Software Testing - Contents of a Bug
A complete list of the contents of a bug/error/defect needed at the time of raising a bug during software testing. These fields help in identifying a bug uniquely.
When a tester finds a defect, he/she needs to report a bug and fill in certain fields, which help in uniquely identifying the bug being reported. The contents of a bug are given below:
Project: Name of the project under which the testing is being carried out.
Subject: A short description of the bug which helps in identifying it. This generally starts with the project identifier number/string and should be clear enough to help the reader anticipate the problem/defect for which the bug has been reported.
Description: Detailed description of the bug. This generally includes the steps that are involved in the test case and the actual results. At the end of the summary, the step at which the test case fails is described along with the actual result obtained and expected result.
Summary: This field contains some keyword information about the bug, which can help in minimizing the number of records to be searched.
Detected By: Name of the tester who detected/reported the bug.
Assigned To: Name of the developer who is supposed to fix the bug. Generally this field contains the name of developer group leader, who then delegates the task to member of his team, and changes the name accordingly.
Test Lead: Name of leader of testing team, under whom the tester reports the bug.
Detected in Version: This field contains the version information of the software application in which the bug was detected.
Closed in Version: This field contains the version information of the software application in which the bug was fixed.
Date Detected: Date at which the bug was detected and reported.
Expected Date of Closure: Date at which the bug is expected to be closed. This depends on the severity of the bug.
Actual Date of Closure: As the name suggests, actual date of closure of the bug i.e. date at which the bug was fixed and retested successfully.
Priority: The priority of fixing the bug. This specifically depends upon the functionality it is hindering. Generally Low, Medium, High and Urgent are the priority levels used.
Severity: This is typically a numerical field displaying the severity of the bug. It can range from 1 to 5, where 1 is the highest severity and 5 the lowest.
Status: This field displays the current status of the bug. A status of ‘New’ is automatically assigned to a bug when it is first reported by the tester; the status then changes to Assigned, Open, Retest, Pending Retest, Pending Reject, Rejected, Closed, Postponed, Deferred etc. as the bug-fixing process progresses.
Bug ID: This is a unique ID i.e. number created for the bug at the time of reporting, which identifies the bug uniquely.
Attachment: Sometimes it is necessary to attach screenshots of the tested functionality; these help the tester explain the testing that was done and help the developers re-create a similar testing condition.
Test Case Failed: This field contains the test case that failed for the bug.
Any of the above fields can be made mandatory, in which case the tester has to enter valid data at the time of reporting a bug. Making a field mandatory or optional depends on the company's requirements and can be changed at any point in a software testing project.
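As an illustration, a minimal bug record carrying a few of the fields listed above, plus a check for mandatory fields, could look like this. The choice of which fields are mandatory is ours for the example; real bug trackers configure this differently:

```python
from dataclasses import dataclass, asdict

# Hypothetical company policy: these fields must be filled in
# before a bug report is accepted.
MANDATORY_FIELDS = ("project", "subject", "detected_by", "severity")

@dataclass
class BugReport:
    bug_id: int
    project: str = ""
    subject: str = ""
    description: str = ""
    detected_by: str = ""
    assigned_to: str = ""
    priority: str = "Medium"   # e.g. Low / Medium / High / Urgent
    severity: int = 0          # 1 (highest) .. 5 (lowest)
    status: str = "New"        # initial status when first reported

def missing_mandatory(bug):
    """Return the names of mandatory fields left empty or zero."""
    data = asdict(bug)
    return [f for f in MANDATORY_FIELDS if not data[f]]

bug = BugReport(bug_id=101, project="Billing",
                subject="BILL-101: total rounds incorrectly",
                detected_by="tester1", severity=2)
```

A tracker would refuse to save a report for which `missing_mandatory` returns a non-empty list, which is exactly the behavior the paragraph above describes.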
Software Testing - Bug Life Cycles
Various life cycles that a bug passes through during a software testing process.
What is a Bug Life Cycle?
The duration or time span between the first time a bug is found (‘New’) and closed successfully (status: ‘Closed’), rejected, postponed or deferred is called the ‘Bug/Error Life Cycle’.
(Right from the first time any bug is detected till the point when the bug is fixed and closed, it is assigned various statuses: New, Open, Postpone, Pending Retest, Retest, Pending Reject, Reject, Deferred, and Closed. For more information about the statuses used during a bug life cycle, you can refer to the article ‘Software Testing – Bug & Statuses Used During A Bug Life Cycle’.)
There are seven different life cycles that a bug can pass through:
< I > Cycle I:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies if the bug is valid or not.
3) The Test Lead finds that the bug is not valid and the bug is ‘Rejected’.
< II > Cycle II:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies if the bug is valid or not.
3) The bug is verified and reported to the development team with the status ‘New’.
4) The development leader and team verify if it is a valid bug. The bug is invalid and is marked with a status of ‘Pending Reject’ before being passed back to the testing team.
5) After getting a satisfactory reply from the development side, the test leader marks the bug as ‘Rejected’.
< III > Cycle III:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The bug is verified and reported to the development team with status ‘New’.
4) The development leader and team verify that it is a valid bug. The development leader assigns a developer to it, marking the status as ‘Assigned’.
5) The developer solves the problem, marks the bug as ‘Fixed’ and passes it back to the development leader.
6) The development leader changes the status of the bug to ‘Pending Retest’ and passes it on to the testing team for retest.
7) The Test Lead changes the status of the bug to ‘Retest’ and passes it to a tester for retest.
8) The tester retests the bug and it is working fine, so the tester marks it as ‘Closed’.
< IV > Cycle IV:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The bug is verified and reported to the development team with status ‘New’.
4) The development leader and team verify that it is a valid bug. The development leader assigns a developer to it, marking the status as ‘Assigned’.
5) The developer solves the problem, marks the bug as ‘Fixed’ and passes it back to the development leader.
6) The development leader changes the status of the bug to ‘Pending Retest’ and passes it on to the testing team for retest.
7) The Test Lead changes the status of the bug to ‘Retest’ and passes it to a tester for retest.
8) The tester retests the bug and the same problem persists, so after confirmation from the Test Lead the tester reopens the bug with status ‘Reopen’, and the bug is passed back to the development team for fixing.
< V > Cycle V:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The bug is verified and reported to the development team with status ‘New’.
4) The developer tries to verify the bug but fails to replicate the scenario that existed at the time of testing, and asks the testing team for help.
5) The tester also fails to re-generate the scenario in which the bug was found, and the developer rejects the bug, marking it ‘Rejected’.
< VI > Cycle VI:
1) After confirmation that the data or certain functionality is unavailable, the fix and retest of the bug are postponed indefinitely and the bug is marked ‘Postponed’.
< VII > Cycle VII:
1) If the bug is of low importance and can, or needs to, be postponed, it is given the status ‘Deferred’.
This way, any bug that is found ends up with a status of Closed, Rejected, Deferred or Postponed.
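The cycles above can be summarized as a small state machine over the bug statuses. The following sketch is illustrative only: the status names come from this article, but the transition table itself is an example, not a standard.

```python
# Bug statuses from the cycles above, as an allowed-transition table.
# This table is an illustrative assumption, not a fixed standard.
TRANSITIONS = {
    "New": {"Assigned", "Pending Reject", "Rejected", "Postponed", "Deferred"},
    "Pending Reject": {"Rejected"},
    "Assigned": {"Fixed"},
    "Fixed": {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest": {"Closed", "Reopen"},
    "Reopen": {"Assigned"},
}

def can_move(current, new):
    """Return True if a bug may move from `current` to `new` status."""
    return new in TRANSITIONS.get(current, set())

# Cycle III ends with a successful retest and the bug closed:
assert can_move("Retest", "Closed")
# Cycle IV reopens the bug instead:
assert can_move("Retest", "Reopen")
# A closed bug has no further transitions in this sketch:
assert not can_move("Closed", "Assigned")
```

A bug-tracking tool would enforce a table like this so that, for example, a bug cannot jump from ‘New’ straight to ‘Closed’ without a retest.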
Software Testing - How To Log A Bug (Defect)
A brief introduction to how a bug/defect/error is reported during software testing.
As we have already discussed the importance of software testing in any software development project (to summarize: software testing helps improve the quality of software and deliver a cost-effective solution that meets customer requirements), it becomes necessary to log a defect in a proper way, track it, and keep a log of defects for future reference.
As a tester tests an application, if he/she finds any defect, the life cycle of the defect starts. It becomes very important to communicate the defect to the developers in order to get it fixed, to keep track of its current status, and to find out whether a similar defect was ever found in earlier rounds of testing. Previously, manually created documents were used for this purpose and circulated to everyone associated with the software project (developers and testers); nowadays many bug-reporting tools are available that help in tracking and managing bugs effectively.
How to report a bug?
It’s a good practice to take screen shots of execution of every step during software testing. If any test case fails during execution, it needs to be failed in the bug-reporting tool and a bug has to be reported/logged for the same. The tester can choose to first report a bug and then fail the test case in the bug-reporting tool or fail a test case and report a bug. In any case, the Bug ID that is generated for the reported bug should be attached to the test case that is failed.
At the time of reporting a bug, all the mandatory fields from the contents of the bug (such as Project, Summary, Description, Status, Detected By, Assigned To, Date Detected, Test Lead, Detected in Version, Closed in Version, Expected Date of Closure, Actual Date of Closure, Severity, Priority and Bug ID) are filled in, and a detailed description of the bug is given along with the expected and actual results. The screen-shots taken at the time of executing the test case are attached to the bug for reference by the developer.
After reporting a bug, the bug-reporting tool generates a unique Bug ID, which is then associated with the failed test case.
After the bug is reported, it is assigned a status of ‘New’, which goes on changing as the bug fixing process progresses.
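A bug record of the kind described above can be sketched as a simple data structure. The field names follow this article's list; the class itself is a hypothetical illustration and does not correspond to any particular bug-reporting tool.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical bug record; field names follow the article's list of
# bug-report contents, trimmed to a few mandatory ones for brevity.
@dataclass
class BugReport:
    project: str
    summary: str
    description: str
    detected_by: str
    severity: str
    priority: str
    status: str = "New"          # every freshly reported bug starts as 'New'
    bug_id: Optional[int] = None  # assigned later by the bug-reporting tool

bug = BugReport(
    project="Demo",
    summary="Login button unresponsive",
    description="Clicking 'Login' does nothing on the login screen.",
    detected_by="tester1",
    severity="High",
    priority="High",
)
assert bug.status == "New"   # the tool's default status on creation
assert bug.bug_id is None    # not yet assigned by the tool
```

Which fields are mandatory is a per-company decision, as the article notes; in a real tool that rule would be configuration, not code.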
If more than one tester is testing the software application, it is possible that some other tester has already reported a bug for the same defect. In such a situation, it becomes very important for the tester to find out whether any bug has been reported for a similar defect. If yes, the test case has to be blocked on the previously raised bug (in this case, the test case has to be re-executed once the bug is fixed). If no such bug was reported previously, the tester can report a new bug and fail the test case against the newly raised bug.
If no bug-reporting tool is used, the test case is written in tabular form in a file with four columns: Test Step No, Test Step Description, Expected Result and Actual Result. The expected and actual results are written for each step, and the test case is failed at the step where it fails.
This file containing the test case, along with the screen shots taken, is sent to the developers for reference. As the tracking process is not automated, it becomes important to keep updated information about the bug from the time it is raised until it is closed.
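The four-column layout described above can be sketched as a small CSV file. The test steps and results below are invented examples for illustration only.

```python
import csv
import io

# The four-column manual test case layout described above, with
# made-up example steps (the second step's actual result differs).
steps = [
    ("1", "Open the login page",
     "Login form is shown", "Login form is shown"),
    ("2", "Enter valid credentials and click Login",
     "User is taken to the home page", "Nothing happens"),
]

buf = io.StringIO()  # stands in for the file sent to developers
writer = csv.writer(buf)
writer.writerow(["Test Step No", "Test Step Description",
                 "Expected Result", "Actual Result"])
writer.writerows(steps)

# The test case is failed at the first step whose actual result
# differs from the expected result:
failed_at = next((no for no, _, exp, act in steps if exp != act), None)
assert failed_at == "2"
```

In practice the file would be written to disk and circulated with the screen shots; the comparison logic here just illustrates where the test case is marked as failed.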
Software Testing Interview Questions
If you are looking for a job in software testing industry, it is imperative that along with a sound knowledge of the corresponding field, you must also be equipped with the answers for the most likely questions you'll be facing during an interview. We have compiled here a list of some common software testing interview questions. Have a look...
The software testing industry presents a plethora of career opportunities for candidates interested in pursuing a career in the software industry. If you are the kind of person who does not enjoy software development, yet is very keen on making a career in the software field, then software testing could be the right option for you. The field offers several job positions in testing, Quality Assurance (QA), Quality Control, etc. However, you need to have your basics in place to improve your chances of getting a job in this particular industry.
Preparing for the Interview
Before applying for any IT job, it is imperative that you have a sound understanding of the field you hope to venture into. Besides being technically sound, you should also keep yourself abreast of the latest tools and trends in the software testing industry. Remember, software testing is a volatile field; hence, the things you learned in your curriculum may have become obsolete by the time you are ready for a job. There are several types of software testing and software testing methodologies that you must be thorough with before going for an interview. Typically, your set of interview questions would depend upon the particular area of software testing you are interested in. Hence, we have divided the questions into five common categories.
Software Testing Interview Questions on Product Testing
- What will be the test cases for product testing?
- Give an example of a test plan template.
- What are the advantages of working as a tester for a product-based company as opposed to a service-based company?
- Do you know how product-based testing differs from project-based testing? Can you give a suitable example?
- Do you know what exactly is meant by a Test Plan? Name its contents. Can you give a sample Test Plan for a Login Screen?
- How do you differentiate between testing a product and testing a web-based application?
- What is the difference between Web-based testing and Client-server testing?
- How do you perform SOAP testing manually?
- Explain the significance of the Waterfall model in developing a product.
Software Testing Interview Questions on Quality Assurance
- How do you ensure the quality of the product?
- What do you do when there isn't enough time for thorough testing?
- What are the normal practices of QA specialists with respect to a software product?
- Can you tell the difference between high-level design and low-level design?
- Can you tell us how Quality Assurance differs from Quality Control?
- You must have heard the term Risk. Can you explain the term in a few words? What are the major components of risk?
- When do you say your project testing is completed? Name the factors.
- What do you mean by a walkthrough and an inspection?
- What is the procedure for testing the search buttons of a web application, both manually and using QTP 8.2?
- Explain Release Acceptance Testing. Explain Forced Error Testing. Explain Data Integrity Testing. Explain System Integration Testing.
- How does compatibility testing differ when testing in Internet Explorer versus testing in Firefox?
Software Testing Interview Questions on Testing Scenarios
- How do you know that all the scenarios for testing are covered?
- Can you explain a testing scenario? Also explain scenario-based testing, giving an example to support your answer.
- Consider a Yahoo application. What test cases can you write?
- Differentiate between a test scenario and a test case.
- Is it necessary to create a new software requirement document and test planning report if it is a 'Migrating Project'?
- Explain the difference between smoke testing and sanity testing.
- What are all the scenarios to be considered while preparing test reports?
- What is an 'end to end' scenario?
- Other than the requirement traceability matrix, what other factors do we need to check in order to exit a testing process?
- What is the procedure for finding out the length of an edit box through WinRunner?
Software Testing Interview Questions on Automated Testing
- What automated testing tools are you familiar with?
- Describe some problems that you encountered while working with an automated testing tool.
- What is the procedure for planning test automation?
- What is your opinion on the question: can test automation improve test effectiveness?
- Can you explain data-driven automation?
- Name the main attributes of test automation.
- Do you think automation can replace manual testing?
- How is a tool for test automation chosen? How do you evaluate the tool for test automation?
- What are the main benefits of test automation, according to you?
- Where can test automation go wrong?
- Can you describe testing activities? What testing activities do you need to automate?
- Describe common issues of test automation.
- What types of scripting techniques for test automation are you aware of?
- Name the principles of good testing scripts for automation.
- What tools can you use to support testing during the software development life cycle?
- Can you tell us whether the activities of test case design can be automated?
- What are the drawbacks of automated software testing?
- What skills are needed to be a good software test automator?
Software Testing Interview Questions on Bug Tracking
- Can you have a defect with high severity and low priority, and vice versa, i.e. high priority and low severity? Justify your answer.
- Can you explain the difference between a Bug and a Defect?
- Explain the phases of the bug life cycle.
- What are the different types of bugs we normally see in projects? Also include their severity.
- What is the difference between a Bug Resolution Meeting and a Bug Review Committee? Who all participate in each?
- Can you name some recent major computer system failures caused by software bugs?
- What do you mean by 'reproducing a bug'? What do you do if the bug is not reproducible?
- How can you tell if a bug is reproducible or not?
- On what basis do we give priority and severity to a bug? Provide an example of high priority and low severity, and of high severity and low priority.
- Explain the Defect Life Cycle in manual testing.
- How do you give a BUG Title & BUG Description for ODD Division?
- Have you ever heard of a build interval period?
Software testing is a vast field and there is really no dearth of software testing interview questions. You can explore the Internet for more software testing interview questions and of course, the solutions. Hope this article helps you to get the job of your dreams. Good Luck!
Software Testing is a process of executing software in a controlled manner. When the end product is given to the client, it should work correctly according to the specifications and requirements of the software. Defect in software is the variance between the actual and expected results. There are different types of software testing, which when conducted help to eliminate defects from the program.
Testing is a process of gathering information by making observations and comparing them to expectations. – Dale Emery
In our day-to-day life, when we go out shopping for any product such as vegetables, clothes or pens, we check it before purchasing, for our satisfaction and to get the maximum benefit. For example, when we intend to buy a pen, we test it before actually purchasing it: does it write, does it break if it falls, does it work in extreme climatic conditions? So, whether it is software, hardware or any other product, testing turns out to be mandatory.
What is Software Testing? Software Testing is the process of verifying and validating that a program performs correctly, with no bugs. It is the process of analyzing or operating software for the purpose of finding bugs. It also helps identify the defects/flaws/errors that may appear in the application code, which need to be fixed. Testing does not only mean fixing a bug in the code, but also checking whether the program behaves according to the given specifications and testing strategies. There are various software testing strategies, such as the white box testing strategy, black box testing strategy, grey box testing strategy, etc.
Need of Software Testing Types
The types of software testing depend upon the different types of defects being sought. For example:
Functional testing is done to detect functional defects in a system. Performance Testing is performed to detect defects that appear when the system does not perform according to the specifications.
Usability Testing to detect usability defects in the system.
Security Testing is done to detect bugs/defects in the security of the system.
The list goes on as we move on towards different layers of testing.
Types of Software Testing
Various software testing methodologies guide you through the consecutive software testing types. For those who are new to this subject, here is some information on how beginners can go about software testing. To determine the true functionality of the application being tested, test cases are designed; they provide the guidelines for going through the process of testing the software. Software testing includes two basic types: Manual Scripted Testing and Automated Testing.
Manual Scripted Testing: This is considered one of the oldest types of software testing methods, in which test cases are designed and reviewed by the team before being executed.
Automated Testing: This software testing type applies automation to various parts of the testing process, such as test case management, test case execution, defect management and reporting of bugs/defects. The bug life cycle helps the tester decide how to log a bug and also guides the developer in deciding the priority of the bug depending on its severity. Software bug testing, or logging a bug, covers the contents of a bug report that is to be fixed. This can be done with the help of various bug tracking tools such as Bugzilla and defect tracking management tools such as Test Director.
Other Software Testing Types
The software testing life cycle is the process that explains the flow of the tests to be carried out at each step of testing the product. The V-Model, i.e. the Verification and Validation Model, is a model used to improve a software project; it contains the software development life cycle on one side and the software testing life cycle on the other. Checklists for software testers set a baseline that guides them in carrying out their day-to-day activities.
Black Box Testing
It is the process of giving input to the system and checking the output, without considering how the system generates the output. It is also called Behavioral Testing.
Functional Testing: In this type of testing, the software is tested for the functional requirements. This checks whether the application is behaving according to the specification.
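A functional check treats the program as a black box: the tester knows only the specification, and compares inputs against expected outputs. The function under test below is a hypothetical example invented for illustration.

```python
# Hypothetical function under test; in black box testing the tester
# never inspects this implementation, only its specification:
# "a year is a leap year if divisible by 4, except century years
# not divisible by 400".
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The functional test exercises inputs and checks outputs
# against the specification only.
cases = {2000: True, 1900: False, 2024: True, 2023: False}
for year, expected in cases.items():
    assert leap_year(year) == expected
```

The same cases could be run against any other implementation of the specification without changing the test, which is the point of black box testing.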
Performance Testing: This type of testing checks whether the system performs properly according to the user's requirements. Performance testing includes Load and Stress Testing applied to the system.
1. Load Testing: In this type of performance testing, the load on the system is raised up to and beyond the expected levels in order to check how the system performs when higher loads are applied.
2. Stress Testing: In this type of performance testing, the system is tested beyond its normal expectations or operational capacity, to find the point at which it breaks down.
Usability Testing: This type of testing is also called as 'Testing for User Friendliness'. This testing checks the ease of use of an application. Read more on introduction to usability testing.
Regression Testing: Regression testing is one of the most important types of testing; it checks that a small change in one component of the application does not affect the unchanged components. Testing is done by re-executing previously run test cases against the new version of the application.
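A minimal illustration of this idea: a suite of input/expected-output pairs recorded against an earlier version is re-run after every change. The `discount` function and its suite values are invented examples.

```python
# Version 2 of a hypothetical discount function; the 50-99 tier is
# the newly added "change" relative to version 1.
def discount(total):
    if total >= 100:
        return total * 0.8
    if total >= 50:
        return total * 0.9   # new tier added in version 2
    return total

# Regression suite recorded against version 1: these behaviors
# must be unchanged by the new tier.
regression_suite = [
    (100, 80.0),   # large orders still get 20% off
    (20, 20),      # small orders still pay full price
    (150, 120.0),  # another version-1 behavior
]

assert all(discount(amount) == expected
           for amount, expected in regression_suite)
```

If the new tier had accidentally altered, say, the 100-and-above behavior, re-running the old suite would catch it immediately.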
Smoke Testing: Smoke testing is used to check the testability of the application. It is also called 'Build Verification Testing' or 'Link Testing'. It checks whether the application is ready for further major testing, without dealing with the finer details.
Sanity Testing: Sanity testing quickly checks whether the main behavior of the system is rational enough to proceed with further testing. This type of software testing is also called Narrow Regression Testing.
Parallel Testing: Parallel testing is done by comparing results from two different systems like old vs new or manual vs automated.
Recovery Testing: Recovery testing is very necessary to check how fast the system is able to recover against any hardware failure, catastrophic problems or any type of system crash.
Installation Testing: This type of software testing identifies the ways in which installation procedure leads to incorrect results.
Compatibility Testing: Compatibility testing determines whether an application performs as expected under supported configurations, with various combinations of hardware and software packages. Read more on compatibility testing.
Configuration Testing: This testing is done to test for compatibility issues. It determines minimal and optimal configuration of hardware and software, and determines the effect of adding or modifying resources such as memory, disk drives and CPU.
Compliance Testing: This type of testing checks whether the system was developed in accordance with standards, procedures and guidelines.
Error-Handling Testing: This software testing type determines the ability of the system to properly process erroneous transactions.
Manual-Support Testing: This type of software testing is an interface between people and application system.
Inter-Systems Testing: This type of software testing method is an interface between two or more application systems.
Exploratory Testing: Exploratory Testing is a type of software testing, which is similar to ad-hoc testing, and is performed to explore the software features. Read more on exploratory testing.
Volume Testing: This testing is done, when huge amount of data is processed through the application.
Scenario Testing: This type of software testing provides a more realistic and meaningful combination of functions, rather than artificial combinations that are obtained through domain or combinatorial test design.
User Interface Testing: This type of testing is performed to check, how user-friendly the application is. The user should be able to use the application, without any assistance by the system personnel.
System Testing: System testing is the testing conducted on a complete, integrated system, to evaluate the system's compliance with the specified requirements. This type of software testing validates that the system meets its functional and non-functional requirements and is also intended to test beyond the bounds defined in the software/hardware requirement specifications.
User Acceptance Testing: Acceptance testing is performed to verify that the product is acceptable to the customer and fulfills the customer's specified requirements. This testing includes Alpha and Beta testing.
1. Alpha Testing: Alpha testing is performed at the developer's site by the customer in a closed environment. This testing is done after system testing.
2. Beta Testing: This type of software testing is done at the customer's site by the customer in an open environment. The presence of the developer while performing these tests is not mandatory. This is considered the last step in the software development life cycle, as the product is almost ready.
White Box Testing
It is the process of giving input to the system and checking how the system processes the input to generate the output. The tester must have knowledge of the source code.
Unit Testing: This type of testing is done at the developer's site to check whether a particular piece/unit of code works correctly. Unit testing deals with testing each unit on its own.
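A minimal unit test can be sketched with Python's standard `unittest` module. The `add` function is a hypothetical unit under test, invented for illustration.

```python
import io
import unittest

def add(a, b):
    """Unit under test (hypothetical example function)."""
    return a + b

class AddTests(unittest.TestCase):
    """Each test exercises the unit in isolation."""

    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-1, 1), 0)

# Run the suite programmatically, discarding the textual report.
suite = unittest.TestLoader().loadTestsFromTestCase(AddTests)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
assert result.wasSuccessful()
```

In a real project each module would get its own test class, run automatically by the build so broken units are caught at the developer's site before integration.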
Static and Dynamic Analysis: In static analysis, the code is reviewed without executing it, in order to find any possible defects. In dynamic analysis, the code is executed and its output analyzed.
Statement Coverage: This type of testing assures that the code is executed in such a way that every statement of the application is executed at least once.
Decision Coverage: This type of testing ensures that every decision (branch) in the application is executed at least once with each possible outcome, i.e. both true and false.
Condition Coverage: In this type of software testing, each individual condition within a decision is evaluated as both true and false at least once.
Path Coverage: Each and every path within the code is executed at least once to get a full path coverage, which is one of the important parts of the white box testing.
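The differences between these coverage criteria can be seen on two tiny example functions (both invented for illustration): a lone `if` without an `else`, and a decision with two conditions.

```python
def abs_val(n):
    if n < 0:
        n = -n
    return n

# Statement coverage: n = -1 alone executes every statement,
# but the `if` decision only ever evaluates to True.
assert abs_val(-1) == 1
# Decision coverage additionally needs the False outcome:
assert abs_val(2) == 2

def grant_access(logged_in, admin):
    # One decision with two conditions. Condition coverage needs
    # each of `logged_in` and `admin` to be both True and False
    # at least once across the test inputs.
    return logged_in and admin

assert grant_access(True, True) is True    # decision True
assert grant_access(True, False) is False  # admin False
assert grant_access(False, True) is False  # logged_in False
```

For `abs_val`, path coverage coincides with decision coverage (two paths); in functions with several decisions, the number of paths multiplies, which is why full path coverage is the most demanding criterion.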
Integration Testing: Integration testing is performed when various modules are integrated with each other to form a sub-system or a system. This mostly focuses on the design and construction of the software architecture. Integration testing is further classified into Bottom-Up Integration and Top-Down Integration testing.
1. Bottom-Up Integration Testing: In this type of integration testing, the lowest-level components are tested first, and 'Drivers' are used to simulate the higher-level components that call them, moving up the hierarchy step by step.
2. Top-Down Integration Testing: This is the opposite of the bottom-up approach: the top-level modules are tested first, and their lower-level branches are tested step by step using 'Stubs' until the lowest modules are reached.
Security Testing: Testing that confirms how well a system protects itself against unauthorized internal or external access, or willful damage to code, is security testing. It assures that the program is accessed by authorized personnel only.
Mutation Testing: In this type of software testing, small deliberate changes (mutants) are introduced into the application's code, and the existing tests are re-run to check whether they detect the change. A mutant that makes a test fail is 'killed'; mutants that survive reveal gaps in the test suite.
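The idea can be sketched by hand: take an original function, create a mutant with one small operator change, and check that the existing tests catch it. Both functions and the test values are invented examples.

```python
def is_adult(age):
    """Original function: adults are 18 and over."""
    return age >= 18

def is_adult_mutant(age):
    """Mutant: `>=` deliberately changed to `>`."""
    return age > 18

# The existing test suite: (input, expected) pairs.
# The boundary case age == 18 is what distinguishes the mutant.
tests = [(18, True), (17, False), (30, True)]

def suite_passes(fn):
    """Run the suite against an implementation."""
    return all(fn(age) == expected for age, expected in tests)

assert suite_passes(is_adult)             # original passes the suite
assert not suite_passes(is_adult_mutant)  # the mutant is "killed"
```

If the suite had lacked the `(18, True)` boundary case, the mutant would survive, signalling that the tests do not exercise the boundary; mutation tools automate generating and running such mutants.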
Software testing methodologies and strategies help you get through this software testing process. These methods, using the software testing types mentioned above, help you check whether the software satisfies the customer's requirements. Software testing is indeed a vast subject, and one can make a successful career in this field. You could go through some software testing interview questions and tutorials to prepare yourself.
Software Testing - Brief Introduction To Security Testing
Security testing is an important process in order to ensure that the systems/applications that your organization is using meet security policies and are free from any type of loopholes that can cause your organization a big loss.
Security Testing of any developed system (or a system under development) is all about finding out all the potential loopholes and weaknesses of the system, which might result in the loss/theft of highly sensitive information or destruction of the system by an intruder/outsider. Security testing helps find all the possible vulnerabilities of the system and helps developers fix those problems.
Need of Security Testing
Security testing helps in finding out loopholes that can cause loss of important information and allow an intruder to enter the systems. It also helps in improving the current system and in ensuring that the system will work for a longer time (or work without hassles for the estimated time).
Security testing does not only cover the resistance of the systems your organization uses; it also ensures that people in your organization understand and obey security policies, thus adding to organization-wide security.
If it is involved right from the first phase of the system development life cycle, security testing can help eliminate flaws in the design and implementation of the system, and in turn help the organization block potential security loopholes at an earlier stage. This is beneficial to the organization in almost all aspects (financially, from a security standpoint, and in terms of effort).
Who needs Security Testing? Nowadays, almost all organizations across the world are equipped with hundreds of computers, connected to each other through intranets and various types of LANs within the organization and to the outer world through the Internet, along with data storage and handling devices. The information stored on these devices and the applications that run on the computers are highly important to the organization from the business, security and survival points of view.
Any organization, small or big, needs to secure the information it possesses and the applications it uses, in order to keep its customers' information safe and prevent any possible loss of business.
Security testing ensures that the systems and applications used by the organizations are secure and not vulnerable to any type of attack.
What are the different types of Security Testing? Following are the main types of security testing:
Security Auditing: Security auditing includes direct inspection of the developed application and of the operating system and any system on which it is being developed. It also involves a code walk-through.
Security Scanning: It is all about scanning and verification of the system and applications. During security scanning, auditors inspect and try to find out the weaknesses in the OS, applications and network(s).
Vulnerability Scanning: Vulnerability scanning involves scanning of the application for all known vulnerabilities. This scanning is generally done through various vulnerability scanning software.
Risk Assessment: Risk assessment is a method of analyzing and deciding the risk that depends upon the type of loss and the possibility/probability of loss occurrence. Risk assessment is carried out in the form of various interviews, discussions and analysis of the same. It helps in finding out and preparing possible backup-plan for any type of potential risk, hence contributing towards the security conformance.
Posture Assessment & Security Testing: This is a combination of Security Scanning, Risk Assessment and Ethical Hacking in order to reach a conclusive point and help your organization know its stand in context with Security.
Penetration Testing: In this type of testing, a tester tries to forcibly access and enter the application under test. A tester may try to enter the application/system with the help of some other application, or through combinations of loopholes that the application has unknowingly left open. Penetration testing is highly important, as it is the most effective way to practically find potential loopholes in the application.
Ethical Hacking: It's a forced intrusion by an external element into the system and applications that are under security testing. Ethical hacking involves a number of penetration tests over the wide network of the system under test.
Manual Testing Interview Questions
The following article takes us through some of the most common manual testing interview questions. Read to know more.
Manual testing is one of the oldest and most effective ways to carry out software testing. Whenever new software is built, it needs to be tested for its effectiveness, and it is for this purpose that manual testing is required. Manual testing is a type of software testing that is an important part of the IT job sector; it does not use any automation methods and is therefore tedious and laborious.
Manual testing requires a tester with certain qualities, because the job demands it: he needs to be observant, creative, innovative, speculative, open-minded, resourceful, patient, skillful, and to possess other qualities that will help him with his job. In the following article, we shall not concentrate on what a tester is like, but on some of the manual testing interview questions. So if you have a doubt in this regard, read on to know what some interview questions on manual testing are.
Manual Testing Interview Questions for Freshers
The following are some of the interview questions for manual testing. This will give you a fair idea of what these questions are like.
- What is accessibility testing?
- What is Ad Hoc Testing?
- What is the difference between test scenarios and test strategy?
- What is the difference between properties and methods in QTP?
Why do these manual testing interview questions help? They help you to prepare for what lies ahead. The career opportunities that an IT job provides are greater than what many other fields provide, and if you're from this field, you'll know what I'm talking about, right?
Software Development Life Cycle Models
I was asked to put together this high-level and traditional software life cycle information as a favor for a friend of a friend, so I thought I might as well share it with everybody.
The General Model
Software life cycle models describe phases of the software cycle and the order in which those phases are executed. There are tons of models, and many companies adopt their own, but all have very similar patterns. The general, basic model is shown below:
General Life Cycle Model
Each phase produces deliverables required by the next phase in the life cycle. Requirements are translated into design. Code is produced during implementation that is driven by the design. Testing verifies the deliverable of the implementation phase against requirements.
Requirements
Business requirements are gathered in this phase. This phase is the main focus of the project managers and stakeholders. Meetings with managers, stakeholders and users are held in order to determine the requirements: Who is going to use the system? How will they use the system? What data should be input into the system? What data should be output by the system? These are the general questions that get answered during a requirements gathering phase. This produces a big list of functionality that the system should provide: the functions the system should perform, the business logic that processes data, the data stored and used by the system, and how the user interface should work. The overall result describes the system as a whole and what it does, not how it is actually going to do it.
Design
The software system design is produced from the results of the requirements phase. Architects have the ball in their court during this phase, and this is the phase in which their focus lies. This is where the details of how the system will work are produced. Architecture (including hardware and software), communication, and software design (UML is produced here) are all part of the deliverables of the design phase.
Implementation
Code is produced from the deliverables of the design phase during implementation, and this is the longest phase of the software development life cycle. For a developer, this is the main focus of the life cycle because this is where the code is produced. Implementation may overlap with both the design and testing phases. Many tools exist (CASE tools) to actually automate the production of code using information gathered and produced during the design phase.
Testing
During testing, the implementation is tested against the requirements to make sure that the product is actually solving the needs addressed and gathered during the requirements phase. Unit tests and system/acceptance tests are done during this phase. Unit tests act on a specific component of the system, while system tests act on the system as a whole.
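The unit-versus-system distinction above can be made concrete with a small example. The sketch below is illustrative only: the `apply_discount` component and `checkout` entry point are hypothetical, standing in for any single component and any end-to-end path through a system.

```python
import unittest


def apply_discount(price, percent):
    """A single component: compute a discounted price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def checkout(cart, discount_percent=0):
    """An end-to-end path: total a cart of item prices, then apply a discount."""
    total = sum(cart.values())
    return apply_discount(total, discount_percent)


class UnitTests(unittest.TestCase):
    # Unit test: exercises one component (apply_discount) in isolation.
    def test_discount_on_single_value(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


class SystemTests(unittest.TestCase):
    # System/acceptance test: exercises the whole path against a requirement,
    # e.g. "a 10% promotion reduces the order total by 10%".
    def test_checkout_with_promotion(self):
        cart = {"book": 20.0, "pen": 5.0}
        self.assertEqual(checkout(cart, discount_percent=10), 22.5)


# Run both suites explicitly (avoids unittest.main() exiting the process).
for case in (UnitTests, SystemTests):
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(case)
    unittest.TextTestRunner(verbosity=0).run(suite)
```

Note how the unit tests never build a cart: they pin down one component's behavior, while the system test checks the requirement through the public entry point.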
So, in a nutshell, that is a very basic overview of the general software development life cycle model. Now let's delve into some of the traditional and widely used variations.
Waterfall Model
This is the most common and classic of life cycle models, also referred to as a linear-sequential life cycle model. It is very simple to understand and use. In a waterfall model, each phase must be completed in its entirety before the next phase can begin. At the end of each phase, a review takes place to determine if the project is on the right path and whether or not to continue or discard the project. Unlike what I mentioned in the general model, phases do not overlap in a waterfall model.
Waterfall Life Cycle Model
Advantages
Simple and easy to use.
Easy to manage due to the rigidity of the model – each phase has specific deliverables and a review process.
Phases are processed and completed one at a time.
Works well for smaller projects where requirements are very well understood.
Disadvantages
Adjusting scope during the life cycle can kill a project.
No working software is produced until late during the life cycle.
High amounts of risk and uncertainty.
Poor model for complex and object-oriented projects.
Poor model for long and ongoing projects.
Poor model where requirements are at a moderate to high risk of changing.
V-Shaped Model
Just like the waterfall model, the V-shaped life cycle is a sequential path of execution of processes. Each phase must be completed before the next phase begins. Testing is emphasized in this model more so than in the waterfall model, though. The testing procedures are developed early in the life cycle, before any coding is done, during each of the phases preceding implementation.
Requirements begin the life cycle model just like the waterfall model. Before development is started, a system test plan is created. The test plan focuses on meeting the functionality specified in the requirements gathering.
The high-level design phase focuses on system architecture and design. An integration test plan is created in this phase as well, in order to test the ability of the pieces of the software system to work together.
The low-level design phase is where the actual software components are designed, and unit tests are created in this phase as well.
The implementation phase is, again, where all coding takes place. Once coding is complete, the path of execution continues up the right side of the V where the test plans developed earlier are now put to use.
V-Shaped Life Cycle Model
Advantages
Simple and easy to use.
Each phase has specific deliverables.
Higher chance of success over the waterfall model due to the development of test plans early on during the life cycle.
Works well for small projects where requirements are easily understood.
Disadvantages
Very rigid, like the waterfall model.
Little flexibility – adjusting scope is difficult and expensive.
Software is developed during the implementation phase, so no early prototypes of the software are produced.
Model doesn’t provide a clear path for problems found during testing phases.
Incremental Model
The incremental model is an intuitive approach to the waterfall model. Multiple development cycles take place here, making the life cycle a “multi-waterfall” cycle. Cycles are divided up into smaller, more easily managed iterations. Each iteration passes through the requirements, design, implementation and testing phases.
A working version of software is produced during the first iteration, so you have working software early on during the software life cycle. Subsequent iterations build on the initial software produced during the first iteration.
Incremental Life Cycle Model
Advantages
Generates working software quickly and early during the software life cycle.
More flexible – less costly to change scope and requirements.
Easier to test and debug during a smaller iteration.
Easier to manage risk because risky pieces are identified and handled during its iteration.
Each iteration is an easily managed milestone.
Disadvantages
Each phase of an iteration is rigid, and the phases do not overlap each other.
Problems may arise pertaining to system architecture because not all requirements are gathered up front for the entire software life cycle.
Spiral Model
The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and Evaluation. A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.
Requirements are gathered during the planning phase. In the risk analysis phase, a process is undertaken to identify risk and alternate solutions. A prototype is produced at the end of the risk analysis phase.
Software is produced in the engineering phase, along with testing at the end of the phase. The evaluation phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral.
In the spiral model, the angular component represents progress, and the radius of the spiral represents cost.
Spiral Life Cycle Model
Advantages
High amount of risk analysis.
Good for large and mission-critical projects.
Software is produced early in the software life cycle.
Disadvantages
Can be a costly model to use.
Risk analysis requires highly specific expertise.
Project’s success is highly dependent on the risk analysis phase.