
Transcript
  • Test Coverage
    Peter Farrell-Vinay, Alphabite Ltd.
    Test Management Summit 2007 (c) Alphabite Ltd.


  • Why?
    When we test we are trying to answer a big question: can we release this (system, feature, code)?
    This question can be broken down into many sub-questions:
    - Have we tested all the code?
    - Have we tested all the features?
    - Have we found all the bugs we expected to find?
    - Have we fixed all the bugs we need to fix?
    Definition: the exercise of some system, feature, component, or code such as to exhibit a bug.


  • Feature coverage
    Question: what is a feature?
    A1: anything stakeholders say it is
    A2: something users can use to do a job
    A3: some collection of screens and screen icons which users use to fulfil some unique task


  • Feature definitions
    Feature definitions vary from the ludicrous (verbal assertions), through user stories, to the highly structured (UML). Somewhere in between is text. Any definition is good if it lets you:
    - Measure: how many there are, how big each is, how important each is to the user
    - Test them in any combination such as to exhibit bugs
    - Identify inconsistencies
    - Know the limits of coverage


  • Problems (you knew we'd get here sometime)
    The existence and extent of the feature depends on:
    - who is defining it (management, users, sales staff, business analysts, tech. writers. Er, testers?)
    - how they are defining it (text, UML, memoranda, backs of envelopes, user stories)
    - how it is expected to be used, and by whom
    Lehman: the existence of the tool changes the nature of the job.


  • Types of coverage
    - Structure
    - Feature
    - GUI icon
    - Instrumentation
    - Scenario
    - Transition
    - Web script, web page, application, and component


  • Test coverage by structure
    You need to be sure the developers have exercised some minimum part of the code using several approaches. Code coverage has been the subject of much academic interest, and there are lots of measures:
    - Lines
    - Decision points
    - Definition-use paths
    - Linear code sequence and jumps (LCSAJ)
    Great hammers for great nails.
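
A minimal sketch of why these measures differ (the function is invented for illustration): one test can execute every line of `classify` while still leaving a decision outcome and a sub-condition untested.

```python
# Hypothetical function, used only to contrast coverage measures.
def classify(age, member):
    discount = 0
    if age >= 65 or member:   # one decision, two conditions
        discount = 10
    return discount

# This single test gives 100% line/statement coverage ...
assert classify(70, False) == 10

# ... but decision coverage also needs the false outcome,
# and condition coverage needs each sub-condition driven both ways.
assert classify(30, False) == 0    # decision evaluates false
assert classify(30, True) == 10    # 'member' alone makes it true
```

Definition-use paths and LCSAJ coverage go further still, tracking how values flow from where they are assigned to where they are used.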


  • Test coverage by feature
    Do we have a test (set) for every feature, plus installation, deinstallation, start-up, and shut-down?
    Can we decompose every feature into testable sub-bits? Has every one got a test? Does every test include at least one negative case?
    Objection: "No, we don't have any spec worthy of the name, nor any time to write it."
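
As a sketch of what "testable sub-bits" might look like in pytest: the `login` feature and its rules are invented for illustration, and each sub-bit gets at least one negative case.

```python
import pytest

VALID_USERS = {"alice": "s3cret"}      # toy stand-in for the feature under test

def login(user, password):
    if user not in VALID_USERS:
        raise KeyError("unknown user")
    return VALID_USERS[user] == password

def test_login_accepts_valid_credentials():     # sub-bit: successful sign-in
    assert login("alice", "s3cret") is True

def test_login_rejects_wrong_password():        # negative case: bad password
    assert login("alice", "guess") is False

def test_login_unknown_user_is_an_error():      # negative case: unknown user
    with pytest.raises(KeyError):
        login("bob", "anything")
```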


  • Test coverage by GUI icon
    The user interface has a number of screens, buttons, pull-downs, tabs, menus, etc. Do we have them all listed, with tests which exercise every one?
    Objections:
    Q: D'you know how many such icons there are in the application?
    A: If it's that big, you need a list to be sure you've hit them all.
    Q: Just pulling down icons, hitting buttons, and entering text in fields doesn't exercise the system.
    A: True, but if any of those don't work you need to know asap.
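
The "list" can be as unglamorous as a set difference; a sketch with invented element names:

```python
# Inventory of screen elements vs. the elements the tests actually exercise.
# All identifiers here are hypothetical.
GUI_ELEMENTS = {"file_menu", "save_button", "print_button",
                "search_field", "settings_tab", "help_icon"}

EXERCISED_BY_TESTS = {"file_menu", "save_button", "search_field"}

untested = sorted(GUI_ELEMENTS - EXERCISED_BY_TESTS)
print(f"GUI element coverage: {1 - len(untested) / len(GUI_ELEMENTS):.0%}")
print("Not yet exercised:", ", ".join(untested))
```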


  • Test coverage by scenario
    Users have goals they want to achieve. They achieve them using a number of (parts of) features. This sets up subtle feature interactions which no other coverage approach will mimic.
    Problems:
    P: The system is new and the users don't know how they will use it.
    S: Use a model office or a simulator.
    P: The release adds new features - have we got to test all the old ones as well?
    S: What is the risk if they interact in unexpected ways and you haven't tested all these ways? (aka yes)
    P: User management doesn't want us to talk to users.
    S: Have management define a manager as a model user and put him or her in the model office.
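
One way to picture a scenario test, with a tiny invented fake standing in for the real system: several features are chained toward a single user goal, and the final assertion checks an interaction (the order appearing in the history) that isolated feature tests would not.

```python
class ShopFake:
    """Invented stand-in so the example is self-contained."""
    def __init__(self):
        self.catalogue = {"printer paper": "SKU-1"}
        self.basket, self.orders = [], []

    def search(self, text):
        return [sku for name, sku in self.catalogue.items() if text in name]

    def add_to_basket(self, sku, quantity):
        self.basket.append((sku, quantity))

    def checkout(self):
        order_id = f"ORD-{len(self.orders) + 1}"
        self.orders.append(order_id)
        self.basket.clear()
        return order_id

def test_reorder_scenario():
    shop = ShopFake()
    results = shop.search("printer paper")       # feature: search
    assert results
    shop.add_to_basket(results[0], quantity=2)   # feature: basket
    order_id = shop.checkout()                   # feature: checkout
    assert order_id in shop.orders               # interaction: order history
```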


  • Test coverage by transition
    Web and conventional applications have paths a user may take through the system to achieve a goal. Identify the paths in the form of a state transition diagram (typically from URL to URL in the case of a web test) such that a minimum number of paths can be identified and traversed.
    Objections:
    O: Far too many paths to do this.
    R: What's the risk of something going wrong? Model your paths at whatever level you find useful to test against.
    O: The whole point of a web app is to be agile.
    R: It doesn't matter how agile your development is if users click out because the app is unusable, because you haven't tested all the paths. If the app is well made there won't be many paths.
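
A sketch of transition coverage over an invented set of URL-to-URL moves: model them as edges and greedily walk paths until every transition has been traversed (greedy, so not necessarily the minimum number of paths).

```python
TRANSITIONS = {                     # (from_page, to_page) pairs - invented
    ("home", "login"), ("login", "account"), ("account", "home"),
    ("home", "search"), ("search", "product"), ("product", "basket"),
    ("basket", "checkout"), ("checkout", "home"),
}

def paths_covering_all_transitions(transitions, start="home"):
    remaining, paths = set(transitions), []
    while remaining:
        # Begin at the start page if it still has unvisited exits,
        # otherwise at the source of any remaining transition.
        if any(src == start for src, _ in remaining):
            node = start
        else:
            node = next(iter(remaining))[0]
        path = [node]
        while True:
            exits = [t for t in remaining if t[0] == node]
            if not exits:
                break
            src, dst = exits[0]
            remaining.remove((src, dst))
            path.append(dst)
            node = dst
        paths.append(path)
    return paths

for path in paths_covering_all_transitions(TRANSITIONS):
    print(" -> ".join(path))
```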


  • User paths through a simple application


  • Test coverage by web script, web page, application, and component
    Web sites are built from bits. Testing each bit returns us to an older, component-based test model. Having identified the risk level of the web site, decide the level of coverage of each component.
    It doesn't matter how stable the web site is if the user experience is (to use a technical term) crap.
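
A sketch of deciding coverage levels per component from risk; the component names, risk bands, targets, and measured figures are all invented.

```python
TARGET_BY_RISK = {"high": 0.90, "medium": 0.75, "low": 0.50}

COMPONENTS = [
    # (component, risk band, branch coverage measured by its component tests)
    ("payment_script", "high",   0.82),
    ("search_widget",  "medium", 0.78),
    ("footer_links",   "low",    0.40),
]

for name, risk, measured in COMPONENTS:
    target = TARGET_BY_RISK[risk]
    verdict = "OK" if measured >= target else "below target"
    print(f"{name:15} risk={risk:<6} {measured:.0%} vs target {target:.0%}: {verdict}")
```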


  • Web application structure and scripts


  • More questions
    Question: where does a feature start and end?
    Question: how d'you measure the size of a feature? (See the IFPUG counting practices manual.)
    Question: the number of states the system can be in is astronomic - how do we decide what sets of variables to use?
    Question: are there coverage questions in user interface testing?
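
Back-of-envelope arithmetic for the "astronomic number of states" question, with invented variables and value counts: exhaustive combinations multiply, while the number of value pairs (the target of pairwise selection) stays manageable.

```python
from itertools import combinations
from math import prod

# Hypothetical input variables and how many values each can take.
values_per_variable = {"browser": 5, "locale": 12, "account_type": 4,
                       "payment_method": 6, "os": 3}

counts = list(values_per_variable.values())
exhaustive = prod(counts)                                     # 5*12*4*6*3 = 4,320
value_pairs = sum(a * b for a, b in combinations(counts, 2))  # 335 pairs to cover

print(f"Exhaustive combinations: {exhaustive:,}")
print(f"Distinct value pairs:    {value_pairs:,}")
# A pairwise-selected suite covers all 335 pairs in far fewer than 4,320 tests.
```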


  • Conclusions
    Coverage must:
    - answer a question conclusively
    - have a baseline
    - be user- and business-relevant at some level
    Coverage is very definition-dependent:
    - Invent a new coverage type and every manager will want to know why you haven't used it.
    - If the features are ill-defined, the coverage will be.
    Coverage remains a major question.


    Speaker notes: This is a strategic view of testing. There are lots of other questions, and they can be broken down into smaller questions. Ultimately the smallest questions need to be answered with a set of tests.

    How we estimate the number of bugs to be found is off-topic but doable. We can discuss this in another session.

    Fixing the need-to-fix bugs is a management issue and also off-topic.

    For management read salespeople, users, or trainers.

    We can use function points for sizing, but there's no simple definition against which you can say: "That's sufficient to test against", "Here is the limit of the feature", or "Here's what distinguishes feature A from feature B".

    So requirements specs are central to testing - what else is new?

    Aphorism: the number of people determined to have their say in feature definition is inversely proportional to the number prepared to specify it with sufficient accuracy to be used by all the stakeholders. The means used to define the feature will materially affect the ability to both design and test it; if it's just text then there'll be inconsistencies, missing parts, and contradictions. If the feature has been modelled, then some part of the feature will prove impossible to model and will need to be stated in text. Models can't tell the whole truth. Text can lie. Features have perspective: the closer you get to them, the bigger and more complex they are. Features interact in ways you haven't thought of (but users had).

    Why aren't these measures enough?

    Three reasons:
    - Unit tests are just that: they test units, not systems.
    - Even with all the code unit-tested, bugs will remain, typically in the user interface, which unit testing cannot exercise sufficiently, if at all.
    - Subtle feature interactions can be observed only when the system is deployed in as realistic an environment as possible.
    The world changes. With most developers taking unit testing seriously, test managers can at last concentrate on system and integration testing.

    Beware of anyone claiming code coverage when all they are doing is running NCover when building: they may have filtered out unexercised lines and will at best have exercised all the statements in the unit. Decision, branch, and DU-path coverage will probably not have been achieved. Unit test principles are highly relevant to system testing, particularly web testing.
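
To make the same point with a Python tool rather than NCover, a sketch with coverage.py: ask explicitly for branch coverage, since statement coverage alone can report 100% while a decision outcome is never taken. The small function is invented for illustration.

```python
import coverage   # pip install coverage

def absolute(x):
    if x < 0:
        x = -x
    return x

cov = coverage.Coverage(branch=True)  # branch=True also tracks decision outcomes
cov.start()
absolute(-3)        # executes every statement in absolute() ...
cov.stop()
cov.report(show_missing=True)   # ... yet the report flags the branch never taken
```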

    Answer: write it yourself as you discover the interface. Alternatively, look for another job - the company is going nowhere.

    If you don't have all the screen elements listed, how can you be sure you've tested them all? Everyone laughs at anoraks with lists and tickboxes, until they find out that the anoraks remember vital things that way.

    See the "Test coverage handout.pdf" handout for a discussion of how to assure test coverage by GUI icon.

    New systems can be experimented with, using model offices or simulators. Use user action logs (if necessary) to validate your proposed scenarios, and user profiles to identify scenario sets.

    See the "Seeing the windows" handout.

    This is just scenarios in a different guise.

    See the example on the next slide.

    The purely system-test coverage of a web site should be very thorough, since the site should be very simple. All the various components of the site will need to be component-tested first, and each of these component tests will in effect be a system test.

    Nowhere in this discussion do we touch on user interface testing since (I believe) this is not a coverage issue per se.

    Suggestions:
    - A feature is a (sub-)goal of some user activity.
    - Distinguish between common features (which help users to use other features) and the others.
    - A feature involves some set of screen icons which are used for that feature and no other.
    - Consider what things the feature generates: is this set of things very big, and can it be reduced or classified in some way? Once you have a set of things the feature produces, you can identify the variables you need to create each, and then start with that mix, adding extreme and impossible variables to taste.
    - If the risk is high and the number of states that great, consider modelling the system with a theorem-provable tool such as Prover.

    If the features are ill-defined and the risk is high enough, write the spec yourself.