A SmartBear White Paper
Every day, Agile development teams are challenged to deliver high-quality software as quickly as possible. Yet testing can slow down the go-to-market process. This white paper suggests time-saving techniques that make the work of Agile testing easier and more productive. Comprised of precise and targeted solutions to common Agile testing challenges, Smart Agile Testing offers tips and advice to ensure adequate test coverage and traceability, avoid build-induced code breakage, identify and resolve defects early in the development process, improve API code quality, and ensure that new releases don’t cause performance bottlenecks.
Five Challenges for Agile Testing Teams
Solutions to Improve Agile Testing Results
Contents
What Are the Most Common Challenges Facing Agile Testing Teams?
Challenge 1: Inadequate Test Coverage
Challenge 2: Accidentally Broken Code Due to Frequent Builds
Challenge 3: Detecting Defects Early, When They’re Easier and Cheaper to Fix
Challenge 4: Inadequate Testing for Your Published API
Challenge 5: Ensure That New Releases Don’t Create Performance Bottlenecks
About SmartBear Software

Challenge 1: Inadequate Test Coverage
Inadequate test coverage can cause big problems. It’s often the result of writing too few tests for each user story and of lacking visibility into code that was changed unexpectedly. As we all know, developers sometimes change code beyond the scope of the features being released. They do it for many reasons: to fix defects, to refactor, or simply because they are bothered by the way the code works and want to improve it. Often these code changes are not tested, particularly when you are only writing tests for the planned new features in a release.
To eliminate this problem, it’s important to have visibility into all the code being checked in. By seeing the code check-ins, you can easily spot any missing test coverage and protect your team from unpleasant surprises once the code goes into production.
How Can You Ensure Great Test Coverage of New Features?
Before you can have adequate test coverage, you first must have a clear understanding of the features being delivered in the release. For each feature, you must understand how the feature is supposed to work, its constraints and validations, and its ancillary functions (such as logging and auditing). Agile developers build features based on a series of user stories (sometimes grouped by themes and epics). Creating your test scenarios at the user story level gives you the best chance of achieving optimal test coverage.
Once your QA and development teams agree on the features to be delivered, you can begin creating tests for each feature. Coding and test development should be done in parallel to ensure the team is ready to test each feature as soon as it’s published to QA.
Be sure to design a sufficient number of tests to ensure comprehensive results (a short sketch of positive and negative tests follows this list):
• Positive Tests: Ensure that the feature works as designed, is fully functional, is cosmetically correct, and has user-friendly error messages.
• Negative Tests: Users often (okay, usually) start using software without first reading the manual. As a result, they may make mistakes or try things you never intended. For example, they may enter invalid dates, key in characters such as dollar signs or commas into numeric fields, or enter too many characters (e.g., enter 100 characters into a field designed for no more than 50). Users also may attempt to save records without completing all mandatory fields, delete records that have established relationships with other records (such as master/detail scenarios), or enter duplicate records. When you design tests, it’s important to understand the constraints and validations for each requirement and create enough negative tests to ensure that each constraint and validation is fully vetted. The goal is to make the code dummy proof.
• Performance Tests: As we will see later in this paper, it’s a good idea to test new features under duress. There are ways to automate this, but you should also conduct some manual tests that perform timings with large datasets to ensure that performance doesn’t suffer too much when entering a large amount of data or when there are many concurrent users.
• Ancillary Tests: Most well-designed systems write errors to log files, record changes to records via audits, and enforce referential integrity so that when a master record is deleted, its related detail records are deleted as well. Many systems regularly run purge routines; be certain that as you add new features they are covered by ancillary tests. Finally, most systems have security features that limit access to applications so that specific people only have rights to specific functions. Ancillary tests ensure that log files and audits are written, referential integrity is preserved, security is enforced, and purge routines cleanly remove all related data as needed.
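To make the positive/negative distinction concrete, here is a minimal sketch in Python using pytest. The validate_quantity function and its limits are hypothetical stand-ins for whatever constraints your user stories define.

import pytest

# Hypothetical validator standing in for a real field constraint:
# quantity must be a positive integer no greater than 9999.
def validate_quantity(raw: str) -> int:
    value = int(raw)  # raises ValueError for "$100", "12,5", etc.
    if not 1 <= value <= 9999:
        raise ValueError("quantity must be between 1 and 9999")
    return value

# Positive test: valid input is accepted as designed.
def test_valid_quantity_is_accepted():
    assert validate_quantity("42") == 42

# Negative tests: invalid input is rejected, not silently accepted.
@pytest.mark.parametrize("bad_input", ["$100", "12,5", "", "-3", "10000"])
def test_invalid_quantity_is_rejected(bad_input):
    with pytest.raises(ValueError):
        validate_quantity(bad_input)

Each constraint from the user story becomes one or more parametrized negative cases, which keeps the list of "dumb things users might do" cheap to extend.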
Applying a sufficient number of tests to each feature to fully cover all the scenarios above is called traceability. Securing traceability is as simple as making a list of features, along with the number and breadth of tests that cover the positive, negative, performance, and ancillary scenarios for each. Listing these by feature ensures that no feature is insufficiently tested or not tested at all.
To learn more about traceability, view this video.
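As an illustration, here is a minimal sketch (in Python) of such a traceability list; the user stories and counts are invented for the example.

# Hypothetical traceability list: tests counted per user story and test type.
coverage = {
    "US-101 Add order":    {"positive": 4, "negative": 6, "performance": 1, "ancillary": 2},
    "US-102 Cancel order": {"positive": 2, "negative": 0, "performance": 0, "ancillary": 1},
}

# Flag any story missing a whole category of tests.
for story, counts in coverage.items():
    gaps = [kind for kind, n in counts.items() if n == 0]
    if gaps:
        print(f"{story}: missing {', '.join(gaps)} tests")

Even a spreadsheet with the same shape works; the point is that gaps per feature jump out at a glance.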
How Can You Detect Changes to Code Made Outside the Scope of New Features?
It’s common for developers to make changes to code that go beyond the scope of the features being released. However, if your testing team is unaware of the changes made, you may end up with an unexpected defect because you couldn’t test them.
So, what’s the solution? One approach is to branch the code using a source control management (SCM) system to ensure that any code outside the target release remains untouched. For source code changes, you need visibility into each module changed and the ability to link each module with a feature. By doing this, you can quickly identify changes made to features that aren’t covered by your testing.
Consider assigning a testing team member to inspect all code changes in your source control system daily and ferret out changes made to untested areas. This can be cumbersome; it requires diligence and time. A good alternative is to have your source control system send daily code changes to a central place so your team can review them. Putting these changes in a central location helps you implement rules that associate specific modules with specific features. That way, you’ll see when a change is made outside of a feature being developed in the current release, and you can check off the reviewed changes to be certain each check-in is covered.
One way to achieve this is to set up a trigger in your SCM that alerts you whenever a file is changed (a small sketch follows below). Alternatively, you can use a feature that inspects your source control system and sends all check-ins to a central repository for review. If you haven’t already built something like this, consider SmartBear’s QAComplete; it has a feature built specifically for this important task. Through its OpsHub connector, QAComplete can send all source code changes into the Agile Tasks area of the software, which you can then use to build rules that link source modules with features and to flag check-ins as reviewed.
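For teams rolling their own, here is a minimal sketch of the daily-review idea, assuming Git is the SCM; the feature-to-module map and module prefixes are hypothetical.

import subprocess

# Hypothetical map of in-scope features to the modules they are allowed to touch.
FEATURE_MODULES = {
    "US-101 Add order": ["orders/", "billing/"],
    "US-102 Cancel order": ["orders/"],
}

# Files changed in the last day, pulled from git history.
out = subprocess.run(
    ["git", "log", "--since=1.day", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
)
changed = {line for line in out.stdout.splitlines() if line.strip()}

# Flag any change that no in-scope feature accounts for.
allowed = [prefix for mods in FEATURE_MODULES.values() for prefix in mods]
for path in sorted(changed):
    if not any(path.startswith(prefix) for prefix in allowed):
        print(f"Out-of-scope change, needs test coverage review: {path}")

Run from a nightly job, the flagged paths become the reviewer’s short list instead of a full diff crawl.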
What Are the Most Important Test Coverage Metrics?
Metrics for adequate test coverage focus on traceability, test run progress, defect discovery, and defect fix rate.
• Traceability Coverage: Count the number of tests you have for each requirement (user story). Organize the counts by test type (positive, negative, performance, and ancillary). Reviewing by user story shows whether you have sufficient test coverage, so you can be confident of the results.
• Blocked Tests: Use this metric to identify requirements that cannot be fully tested because of defects or unresolved issues.
• Test Runs by Requirement (User Story): Count the number of tests you have run for each requirement, as well as how many have passed, how many have failed, and how many are still waiting to be run. This metric indicates how close you are to test completion for each requirement.
• Test Runs by Configuration: If you’re testing on different operating systems and browsers, it’s important to know how many tests you have run against each supported browser and OS. These counts indicate how much coverage you have.
• Daily Test Run Trending: Test run trending helps you visualize, day by day, how many tests have passed, failed, and are waiting to be run. This shows whether you can complete all testing before the test cycle ends. If it shows you’re falling behind, run your highest priority tests first.
• Defects by Requirement (User Story): Understanding the number of defects discovered per requirement lets you focus on the features with the most bugs. If specific features tend to be the most buggy, run their tests more often to ensure full coverage of those areas.
• Daily Defect Trending: Defect trending helps you visualize, day by day, how many defects are found and resolved. It also shows whether you can resolve all high-priority defects before the testing cycle is complete. If you know you are lagging, focus the team on the most severe, highest priority defects first.
• Defect Duration: This shows how quickly defects are being fixed. Separating them by priority ensures that the team addresses the most crucial items first (a small sketch of this metric follows the list). A long duration on high-priority items also signals slow or misaligned development resources, which your team and the development team should resolve jointly.
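As one concrete example, here is a minimal sketch of the Defect Duration metric, computed per priority from an invented defect list.

from datetime import date
from collections import defaultdict

# Invented defect records: (priority, opened, closed).
defects = [
    ("high", date(2012, 3, 1), date(2012, 3, 2)),
    ("high", date(2012, 3, 1), date(2012, 3, 9)),
    ("low",  date(2012, 3, 2), date(2012, 3, 3)),
]

# Average days from open to fix, grouped by priority.
durations = defaultdict(list)
for priority, opened, closed in defects:
    durations[priority].append((closed - opened).days)

for priority, days in durations.items():
    print(f"{priority}: avg {sum(days) / len(days):.1f} days to fix")

A rising high-priority average is the early warning sign the bullet above describes.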
How Can You Ensure Your Test Coverage Team Is Working Optimally?
As a best practice, we recommend that each day your testing team:
• Participates in Standup Meetings: Discuss impediments to test progress.
• Reviews Daily Metrics: When you spot issues such as high-priority defects becoming stale, work with the development leader to bring attention to them. If your tests are not trending to completion by the sprint end date, mitigate your risk by focusing on the highest priority tests. When you discover code changes that are not covered by tests, immediately appoint someone to create them.
Challenge 2: Accidentally Broken Code Due to Frequent Builds
Performing daily builds introduces the risk of breaking existing code. If you rely solely on manual test runs, it’s not practical to fully regress your existing code each day. A better approach is to use an automated testing tool that records and runs tests automatically. This is a great way to test more stable features to ensure that new code has not broken them.
Most agile teams perform continuous integration, which simply means that they check in source code frequently (typically several times a day). Upon code check-in, an automated process creates a software build. An automated testing tool can then perform regression testing whenever a new build is produced. There are many tools on the market for continuous integration, including SmartBear’s Automated Build Studio, CruiseControl, and Hudson. It’s a best practice to have the build system automatically launch automated tests to verify the stability and integrity of the build (a small sketch follows).
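As a minimal sketch of that practice, the script below (Python, with hypothetical build and test commands) runs the regression suite right after a build and fails loudly if either step breaks; your CI tool would invoke something like this on every check-in.

import subprocess
import sys

def run(step, cmd):
    print(f"== {step} ==")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(f"{step} failed; build is not stable")

# Hypothetical commands; substitute your real build and test suite.
run("build", ["make", "build"])
run("regression tests", ["pytest", "tests/regression"])
print("Build is stable: all regression tests passed")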
How Can You Get Started with Automated Testing?
The best way to get started is to proceed with baby steps. Don’t try to create automated tests for every feature.
Focus on the tests that provide the biggest bang for your buck. Here are some proven methods:
• Assign a Dedicated Resource: Few manual testers can do double duty and create both manual and automated regression tests. Automated testing requires a specialist with both programming and analytical skills. Optimize your efforts by dedicating a person to work solely on automation.
• Start Small: Create positive automated tests that are simple. For example, imagine you are creating an automated test to ensure that the order processing software can add a new order. Start by creating the test so that it adds a new order with all valid data (positive test). You’ll drive yourself crazy if you try to create a set of automated tests to perform every negative scenario that you can imagine. Don’t sweat it. You can always add more tests later. Focus on proving that your customers can add a valid order and that new code doesn’t break that feature.
• Conduct High-Use Tests: Create tests that cover the most frequently used software features. For example, in an order processing system, users create, modify, and cancel orders every day; be sure you have tests for that. However, if orders are exported rarely, don’t waste time automating the export process until you complete all the high-use tests.
• Automate Time-Intensive Tests/Test Activities: Next, focus on tests that require a long setup time. For example, you may have tests that require you to set up the environment (e.g., create a virtual machine instance, install a database, enter data into the database, and run a test). Automating the setup process saves substantial time during a release cycle (see the sketch after this list). You may also find that a single test takes four hours to run by hand. Imagine the amount of time you will recoup by automating that test so you can run it by clicking a button!
• Prioritize Complex Calculation Tests: Focus on tests that are hard to validate. For example, maybe your mortgage software has complex calculations that are very difficult to verify because the formulas are error-prone if done manually. By automating this test, you eliminate the manual calculations. This speeds up testing, ensures the calculation is repeatable, reduces the chance of human error, and raises confidence in the test results.
• Use Source Control: Store the automated tests you create in a source control system. This safeguards against losing your work to hard drive crashes and prevents overwriting of completed tests, since you can check tests in and out and retain prior versions.
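As one sketch of automating a time-intensive setup step, the pytest fixture below builds and seeds a throwaway SQLite database so every test starts from a known state; the schema and data are invented for the example.

import sqlite3
import pytest

@pytest.fixture
def order_db(tmp_path):
    # Automate the environment setup: create and seed a fresh database.
    db = sqlite3.connect(str(tmp_path / "orders.db"))
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
    db.execute("INSERT INTO orders (customer, total) VALUES ('ACME', 100.0)")
    db.commit()
    yield db
    db.close()

def test_order_lookup(order_db):
    row = order_db.execute("SELECT total FROM orders WHERE customer = 'ACME'").fetchone()
    assert row[0] == 100.0

The same fixture pattern scales to heavier setup (virtual machines, full database installs); the hours saved multiply with every run.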
Once you create a base set of automated tests, schedule them to run on each build. Each day, identify the tests that failed and confirm whether they flag a legitimate issue or whether the failure is due to an unexpected change to the code. When a defect is identified, you should be very pleased that your adoption of test automation is paying dividends.
Remember, start small and build your automated test arsenal over time. You’ll be very pleased by how much of your regression testing has been automated, which frees you and your team to perform deeper functional testing of new features.
Reliable automated testing requires a proven tool. As you assess options, remember that SmartBear’s TestComplete is easy to learn and offers the added benefit of integrating with QAComplete, so you can schedule your automated tests to run unattended and view the run results in a browser.
To learn more about automated testing, see this video.
What is Test-Driven Development?
Agile practitioners sometimes use test-driven development (TDD) to improve unit testing. Using this approach, the agile developer writes code by using automated tests as the driver to code completion.
Imagine a developer is designing an order entry screen. She might start by creating a prototype of the screen without connecting any logic, and then create an automated test of the steps for adding an order. The automated test would validate field values, ensure that constraints were enforced properly, and so on. The test would be run before any logic was written into the order entry screen. The developer would then write the code for the order entry screen and run the automated test to see if it passes. She would only consider the screen to be “done” when the automated test runs to completion without errors.
To illustrate further, let’s say you’re writing an object that, when called with a specific input, produces a specific output. With a TDD approach, you write the test first, then write code, run the test, and repeat that cycle iteratively until the code produces the expected output for each input.
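Here is a minimal sketch of that loop in Python: the test is written first against a hypothetical shipping_cost function and fails until the implementation beneath it satisfies the expected input/output pairs.

# Step 1 (written first): the test encodes the expected input -> output behavior.
def test_shipping_cost():
    assert shipping_cost(weight_kg=1) == 5.0    # flat rate up to 2 kg
    assert shipping_cost(weight_kg=10) == 13.0  # 5.0 + 1.0 per kg above 2

# Step 2: the simplest implementation that makes the test pass.
def shipping_cost(weight_kg: float) -> float:
    if weight_kg <= 2:
        return 5.0
    return 5.0 + (weight_kg - 2) * 1.0

# Rerun the test after each change; "done" means it passes without errors.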
Which Metrics Are Most Important for Successful Automated Testing?
As you grapple with this challenge, focus on metrics that analyze automated test coverage, automated test run progress, defect discovery, and defect fix rate, including:
• Feature Coverage: Count the number of automated tests for each feature. You’ll know when you have enough tests to be confident that you are fully covered from a regression perspective.
• Requirement/Feature Blocked: Use this metric to identify which requirements are blocking automation. For example, third-party controls may require custom coding that current team members lack the expertise to write.
• Daily Test Run Trending: This shows you, day by day, the number of automated tests that are run, passed, and failed. Inspect each failed test and post defects for issues you find.
• Daily Test Runs by Host: When running automated tests on different host machines (e.g., machines with different operating system or browser combinations), analyzing your runs by host alerts you to specific OS or browser combinations that introduce new defects.
What Can You Do to Ensure Your Automated Test Team Is Working Optimally?
We recommend that testing teams perform these tasks every day:
• Review Automated Run Metrics: When overnight automated test runs flag defects, do an immediate manual retest to rule out false positives. Then log real defects for resolution.
• Use Source Control: Review changes you’ve made to your automated tests and check them into your source control system for protection.
• Continue to Build on Your Automated Tests: Work on adding more automated tests to your arsenal following the guidelines described previously.
Challenge 3: Detecting Defects Early, When They’re Easier and Cheaper to Fix
You know that defects found late in the development cycle require more time and money to fix, and defects not found until production are an even bigger problem. A primary goal of development and testing teams is to identify defects as early as possible, reducing the time and cost of rework. There are two ways to accomplish this: implement peer reviews, and use static analysis tools that scan code to identify defects as early as possible. Is there value in using more than one approach? Capers Jones, the noted software quality expert, explains the need for multiple techniques in a recent white paper available for download on SmartBear’s website.
What Should You Review?
As you’re developing requirements and breaking them into user stories, also conduct team reviews. You need to ensure every story:
• Is clear
• Supports the requirement
• Identifies the constraints and validations the programmers and testers need to know
“A synergistic combination of formal inspections, static analysis, and formal testing can achieve combined defect removal efficiency levels of 99%. Better, this synergistic combination will lower development costs and schedules and reduce technical debt by more than 80% compared to testing alone.” 1
- Capers Jones, white paper from smartbear.com
1 Capers Jones, Combining Inspections, Static Analysis, and Testing to Achieve Defect Removal Efficiency Above 95%, January 2012. Complimentary download from www.smartbear.com
Which Metrics Are Most Important for Early Defect Detection?
• Defects Discovered by Peer Review: Reports the number of defects discovered by peer reviews. You may categorize them by type of review (user story review, manual test review, etc.).
What Can You Do Each Day to Ensure Your Testing Team Is Working Optimally?
Each day, your testing team should:
• Perform Peer Reviews of user stories, manual tests, automated tests, and source code. Log defects found during the review so you can analyze the results of using this strategy.
• Automatically Run Static Analysis to detect coding issues. Review the identified issues, configure the tolerance of your static analysis to ignore false positives, and log any true defects (a small sketch follows this list).
• Review Defect Metrics related to peer reviews to determine how well this strategy helps you reduce defects early in the coding phase.
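As a minimal sketch of that static analysis step, the script below runs Pylint (one example of a static analyzer; the checks to suppress are invented) over a source tree so it can be wired into a daily build.

import subprocess
import sys

# Hypothetical suppressions, tuned over time to silence known false positives.
SUPPRESS = "missing-module-docstring,invalid-name"

result = subprocess.run(["pylint", f"--disable={SUPPRESS}", "src/"])
# A non-zero exit means Pylint reported issues; review them and log true defects.
sys.exit(result.returncode)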
Challenge 4: Inadequate Testing for Your Published API
Many testers focus on testing the user interface and miss the opportunity to perform API testing. If your software has a published API, your testing team needs a solid strategy for testing it.
API testing often is omitted because of the misperception that it takes programming skills to call the properties and methods of your API. While programming skill can be helpful for both automated and API testers, it’s not essential if you have tools that allow you to perform testing without programming.
How Do You Get Started with API Testing?
Similar to automated testing, the best way to get started with API testing is to take baby steps. Don’t try to create tests for every API function. Focus on the tests that provide the biggest bang for your buck. Here are some guidelines to help you focus:
• Dedicated Resource: Don’t have your manual testers develop API tests. Have your automation engineer double as an API tester; the skill set is similar.
• High-Use Functions: Create tests that cover the most frequently called API functions. The best way to determine the most called functions is to log the calls for each API function.
• Usability Tests: When developing API tests, be sure to create negative tests that force the API function to spit out an error. Because APIs are a black box to the end user, they often are difficult to debug. Therefore, if a function is called improperly, it’s important that the API returns a friendly and actionable message that explains what went wrong and how to fix it.
• Security Tests: Build tests that attempt to call functions without the proper security rights. Create tests that exercise the security logic. It can be easy for developers to enforce security constraints in the user interface but forget to enforce them in the API.
• Stopwatch-Level Performance Tests: Time methods (entry and exit points) to analyze which methods take longer to process than anticipated (both of these ideas are sketched after this list).
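Here is a minimal sketch of a negative API test plus a stopwatch-level timing check, written in Python with the requests library; the endpoint, payload, error message, and one-second budget are all hypothetical.

import time
import requests

BASE_URL = "http://localhost:8080/api"  # hypothetical API under test

# Negative test: an improper call should return a friendly, actionable error.
def test_missing_field_returns_helpful_error():
    response = requests.post(f"{BASE_URL}/orders", json={})  # no required fields
    assert response.status_code == 400
    assert "customer is required" in response.json()["message"]

# Stopwatch-level performance test: time the call's entry and exit points.
def test_order_lookup_is_fast_enough():
    start = time.perf_counter()
    response = requests.get(f"{BASE_URL}/orders/1")
    elapsed = time.perf_counter() - start
    assert response.status_code == 200
    assert elapsed < 1.0, f"lookup took {elapsed:.2f}s, expected under 1s"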
Once you create a base set of API tests, schedule them to run automatically on each build. Every day, identify any tests that failed and confirm whether they flag legitimate issues or just an expected change you weren’t aware of. If a test identifies a real issue, be happy that your efforts are paying off.
API testing can be done by writing code to exercise each function, but if you want to save time and effort, use a tool. Remember, our mission is to get the most out of testing efforts with the least amount of work.
When considering tools, take a look at SmartBear’s soapUI Pro. It’s easy to learn and has scheduling capabilities, so your API tests can run unattended and you can view the results easily.
Which Metrics Are Most Important for API Testing?
Focus on API function coverage, API test run progress, defect discovery, and defect fix rate. Here are some metrics to consider:
• Function Coverage: Identifies which functions your API tests cover. Focus on the functions that are called most often. This metric enables you to determine if your testing completely covers your high-use functions.
• Blocked Tests: Identify API tests that are blocked by defects or external issues (for example, compatibility with the latest version of .NET).
• Coverage within Function: Most API functions contain several properties and methods. This metric identifies which properties and methods your tests cover to ensure that all functions are fully tested (or at least the ones used most often).
• Daily API Test Run Trending: This shows, day by day, how many API tests are run, passed, and failed.
What Can You Do Each Day to Ensure Your API Testing Team Is Working Optimally?
Testing teams should perform these tasks every day:
• Review API Run Metrics: Review your key metrics. If the overnight API tests found defects, retest them manually to rule out a false positive. Log all real defects for resolution.
• Continue to Build on Your API Tests: Work on adding more API tests to your arsenal using the guidelines described above.
Challenge 5: Ensure That New Releases Don’t Create Performance Bottlenecks
In a perfect world, adding new features in your current release would not cause any performance issues. But we all know that as software matures with the addition of new features, the possibility of performance issues increases substantially. Don’t wait until your customers complain before you begin testing performance. That’s a formula for very unhappy customers.
System slowdowns can be introduced in multiple places: your user interface, batch processes, and API. Create processes that ensure performance is monitored in all of them and issues are mitigated. Last but by no means least, you should also implement automatic production monitoring that checks your systems for speed, which provides valuable statistics that enable you to improve performance.
How Do You Get Started with Application Load Testing?
Your user interface is the most visible place for performance issues to crop up. Users are very aware when they are waiting “too long” for a new record to be added (a small load-test sketch follows).
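As a minimal sketch of ramping up load, the script below uses Python’s thread pool to fire concurrent requests at a hypothetical page and report timings; real load-testing tools add ramp profiles, think times, and reporting on top of this basic idea.

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/orders/new"  # hypothetical page under test

def timed_request(_):
    start = time.perf_counter()
    with urlopen(URL) as response:
        response.read()
    return time.perf_counter() - start

# Ramp up: measure response times at increasing levels of concurrency.
for users in (1, 10, 50):
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(timed_request, range(users)))
    print(f"{users:3d} users: avg {sum(timings) / len(timings):.2f}s, "
          f"worst {max(timings):.2f}s")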
Fortunately, you can address these concerns by implementing production-monitoring tools. With them you can:
• Assess website performance
• Receive an automatic e-mail or other notification if your website crashes
• Detect API, e-mail, and FTP issues
• Compare your website’s performance to your competitors’ sites
When searching for the right performance-monitoring tool, consider SmartBear’s AlertSite products, the best on the market.
What Are Some Metrics to Watch?
Performance monitoring metrics need to focus on performance statistics and peer code review status. Here are some to consider:
Load Test Metrics
• Basic Quality: Shows the effect of ramping up the number of users and what happens under the additional load.
• Load Time: Identifies how long your pages take to load.
• Throughput: Identifies the average response time for key actions taken.
• Server Side Metrics: Isolates the time your server takes to respond to requests.
Production Monitoring Metrics
• Response Time Summary: Shows the response time your clients are receiving from your website. Also separates the DNS, redirect, first byte, and content download times so that you can better understand where time is being spent.
• Waterfall: Shows the total response time with detailed information by asset (images, pages, etc.).
• Click Errors: Shows the errors your clients see when they click specific links on your web page, making it easier to identify when a user goes down a broken path.