Cem Kaner, J.D., Ph.D.
Professor of Software Engineering
Florida Institute of Technology
and
James Bach
Principal, Satisfice Inc.
Copyright (c) Cem Kaner & James Bach, 2000-2004. This work is licensed under the Creative Commons Attribution-ShareAlike License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/2.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.
These notes are partially based on research that was supported by NSF Grant EIA-0113539 ITR/SY+PE: "Improving the Education of Software Testers." Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Information objectives
• Find defects
• Maximize bug count
• Block premature product releases
• Help managers make ship / no-ship decisions
• Minimize technical support costs
• Assess conformance to specification
• Conform to regulations
• Minimize safety-related lawsuit risk
• Find safe scenarios for use of the product
• Assess quality
• Verify correctness of the product
• Assure quality
– Learn about the product
– Learn about the market
– Learn about the ways the product could fail
– Learn about the weaknesses of the product
– Learn about how to test the product
– Test the product
– Report the problems
– Advocate for repairs
– Develop new tests based on what you have learned so far
Exploratory Testing
• Every competent tester does some exploratory testing.
For example -- bug regression: Report a bug, the programmer claims she fixed the bug, so you test the fix.
– Start by reproducing the steps you used in the bug report to expose the failure.
– Then vary your testing to search for side effects.
• These variations are not predesigned. This is an example of chartered exploratory testing.
(In a chartered exploratory testing session, you start the session by selecting a "charter" -- a focus or objective that will guide your work. The session doesn't have to stick to the charter--you follow up interesting leads as you come to them--but the charter helps you set your strategy, and whenever you ask, "what am I going to do next?" the place you look is your charter.)
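The regression-plus-variations pattern described above can be sketched in code. Everything here is hypothetical: `parse_quantity` stands in for whatever function the bug report concerned (the reported bug being a crash on input "0"), and the variations are the kind an explorer might improvise, not predesign, after confirming the fix.

```python
# Hypothetical unit under test: an earlier bug report said parse_quantity("0")
# crashed; the programmer claims it is fixed.
def parse_quantity(text):
    value = int(text.strip())
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

# Step 1: reproduce the exact steps from the bug report.
assert parse_quantity("0") == 0

# Step 2: vary the test to search for side effects of the fix --
# neighboring values, whitespace, and invalid input.
assert parse_quantity(" 1 ") == 1
assert parse_quantity("007") == 7
for bad in ["-1", "abc", ""]:
    try:
        parse_quantity(bad)
        raised = False
    except ValueError:
        raised = True
    assert raised, bad
```

The point is the shape of the session, not the assertions: the scripted part ends after step 1, and everything after it is improvised around what the fix might have disturbed.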
The Exploratory Approach is Heuristic
• A heuristic is a fallible method for:
– attempting to solve a problem, or
– attempting to reach a decision.
• Both meanings are common in computing.
• Heuristics can come into play consciously or unconsciously.
– A heuristic may be used as a conscious tool in order to address a problem for which no deterministic solution is available, or to address the problem or reach a decision more quickly or cheaply than the available deterministic solution.
– Unconscious heuristics are often called biases. They change the probability of a perception or decision or behavior in response to a stimulus or situation.
• It can be impossible for an observer to tell whether a heuristic is operating consciously or unconsciously. (Example: racial profiling.)
• Whether conscious or unconscious, heuristics are fallible. They do not guarantee a correct solution, they may contradict other heuristics, and their acceptance or application depends on the immediate context, not on general rules.
Heuristics
• Definition of the engineering method: "the strategy for causing the best change in a poorly understood or uncertain situation within the available resources." (p. 5)
• "A heuristic is anything that provides a plausible aid or direction in the solution of a problem but is in the final analysis unjustified, incapable of justification, and fallible. It is used to guide, to discover and to reveal. This statement is a first approximation, or as the engineer would say, a first cut, at defining a heuristic.
• "Although difficult to define, a heuristic has four signatures that make it easy to recognize:
– A heuristic does not guarantee a solution;
– It may contradict other heuristics;
– It reduces the search time in solving a problem; and
– Its acceptance depends on the immediate context instead of on an absolute standard." (pp. 16-17)
Billy Vaughn Koen, Definition of the Engineering Method, ASEE, 1985.
Key challenges of exploratory testing
• Learning (How do we get to know the program?)
• Visibility (How to see below the surface?)
• Control (How to set internal data values?)
• Risk / selection (Which are the best tests to run?)
• Execution (What’s the most efficient way to run the tests?)
• Logistics (What environment is needed to support test execution?)
• The oracle problem (How do we tell if a test result is correct?)
• Reporting (How can we replicate a failure and report it effectively?)
• Documentation (What test documentation do we need?)
• Measurement (What metrics are appropriate?)
• Stopping (How to decide when to stop testing?)
• Training and Supervision (How to help testers become effective, and how to tell whether they are?)
Active Reading of Reference Materials
• Active readers commonly
– approach what they are reading with specific questions
– summarize and reorganize what they read
– describe what they’ve read to others
– use a variety of other tactics to integrate what they are reading with what they already know or want to learn
• You can gather information from explicit specifications and implicit ones (see the discussion of reference documents in our section on specification-based testing).
• There are plenty of discussions / tutorials on how to do active reading on the net (just search Google for “active reading”).
• The classic reference on active reading is Adler's "How to Read a Book".
• We talk about using Bach's Test Strategy Model to structure active reading of specifications in the section on specification-based testing.
Active Reading
• Exploratory testers, on approaching a product, often look for the following types of information:
– Who are the stakeholders
– What the designers and programmers intend
– What customers and users expect
– Interoperability requirements and other third-party technical expectations
– Ways the product can be used
– What benefits it provides, including unintended benefits that it can be used to provide
– What functions the product has
– What the variables are (what can change, and how)
– How functions or variables are related to each other or influence each other
– How to navigate the product (where things are and how to get to them)
– How products like this have failed in the past, or how this product is likely to fail
– What oracles are available or can be readily constructed
– How the product has evolved over time
– What the testers' clients (e.g. the development team) want from you
– What the testing staff are good at; how they have excelled in the past
– What tools or data are available that might help you test
Active reading
• As with the information we collect while running early tests, we use these documents to generate test ideas and testing notes (artifacts).
• Our notes might include lists or descriptions of functions, variables, potential error conditions (and anything else we can list), notes on how things can interact with or constrain each other, possible user scenarios, traceability matrices, or anything else that might guide testing.
• So how is this different from traditional test planning?
– We create and execute tests as we read, rather than creating notes and trying the tests later. The tests help us understand the documents (understand-by-example) along with exposing bugs in the product.
– The intent of the research is development of personal insight, not development of artifacts. The artifacts support the individual explorer, but may or may not be of any use to the next reader (if they are shared at all).
Ambiguity analysis
• Ambiguities, confusion, and errors in the company-written documents, and in other source documents the company has relied on, point to opportunities for software error (and thus for tests). Whenever there is ambiguity, the author, the reader, the programmer and the tester might all have different opinions about the meaning, and thus about the intended or correct behavior of the product.
• There are many sources of ambiguity in software design and development.
– In wording or interpretation of specifications or standards
– In expected response of the program to invalid or unusual input
– In behavior of undocumented features
– In conduct and standards of regulators / auditors
– In customers’ interpretation of their needs and the needs of the users they represent
– In definitions of compatibility among 3rd party products
Richard Bender teaches this well. If you can’t take his course, you can find notes based on his work in Rodney Wilson’s Software RX: Secrets of Engineering Quality Software.
An interesting workbook: Cecile Spector, Saying One Thing, Meaning Another.
Attacks
• An attack is a stereotyped class of tests, optimized around a specific type of error.
• We studied Whittaker & Jorgensen's attacks, and a few additional ones, in the section on Risk-Based Testing. Elisabeth Hendrickson, in her course on Creative Software Testing, provides another superb set of attacks and testing heuristics.
• Of course, attacks are very useful for finding bugs. This is why we accept them as attacks.
• In addition, attacks are tester-education tools. Seeing which common mistakes a group of programmers do or don't make gives the tester insight into the level of experience, care and skill with which the program was developed.
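To make the idea concrete, here is a sketch of one stereotyped attack: probing a documented length limit from both sides. The target function `set_username` and its 20-character limit are invented stand-ins; the attack pattern itself (boundary values around a stated limit, plus an extreme value) is the reusable part.

```python
# A stereotyped "attack" optimized around one error type: length/limit
# handling. The target is a hypothetical stand-in function.
def set_username(name):
    if len(name) > 20:
        raise ValueError("name too long")
    return name

# The attack: probe the boundary of the documented limit from both sides,
# plus the empty string and an absurdly long value.
attack_values = ["", "a", "a" * 19, "a" * 20, "a" * 21, "a" * 1000]
results = []
for value in attack_values:
    try:
        set_username(value)
        results.append((len(value), "accepted"))
    except ValueError:
        results.append((len(value), "rejected"))

# Expect acceptance up to and including the limit, rejection past it.
assert results == [(0, "accepted"), (1, "accepted"), (19, "accepted"),
                   (20, "accepted"), (21, "rejected"), (1000, "rejected")]
```

The same loop shape carries over to other attack classes (invalid characters, extreme numerics, and so on); only the value list changes.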
Failure mode lists / Risk catalogs / Bug taxonomies
• We studied risk catalogs in the section on Risk-Based Testing. Bach's Test Strategy Model is a vehicle for generating and organizing the catalog.
• In our experience, this is a powerful model for organizing exploratory testing. The more you use it, the more readily you can apply it to new programs and new contexts. The model helps you bring details to mind about
– what you're testing (and how that might fail)
– what quality attributes are important in this project (and so, what types of problems to look for most closely, in anything you test)
– what aspects of the project might facilitate or constrain the testing you can do.
• Thinking along all three of these lines together yields a lot of interesting test ideas that have the potential to spark interest and can be implemented within the project's resources.
• Exploratory testing is sometimes described as a search for previously-seen errors under new circumstances. This is only part of the story. If we add previously-imagined or just-imagined errors to the list, we get more of the story, but only one dimension (the product elements and how they fail). The test strategy model puts these potential failures into context (what failures are most important, and what our project setting makes easy or inexpensive to do), and from that context, we can prioritize the potential failures and select among the test techniques that we could use to hunt them.
Project Risk Lists
• The project risk lists point to challenges or high-stakes issues that will affect the development of part of the product.
– At LAWST 7, Microsoft's former Director of Software Testing told us about a study they made of testers who sustained an unusually high rate of bug finding. The common factor was that the testers gossiped about the project with many of the programmers and other development staff. They learned about areas that the programmers were having trouble with, about late changes to the code, design thrashes, and so on. Given that knowledge, they targeted their tests toward the risks.
• When you're doing this kind of testing, old scripts don't help much. These aren't the kinds of risks (see the list in our discussion of Risk-Based Testing) that you typically plan for at the start of the project.
• Given the insight, you make guesses about how the specific challenge that you've become aware of might have caused weaknesses in the product's design or implementation. As you try different types of tests, and review failure patterns in the database associated with this section of the code, this author (or whatever the risk factor is), you pull together new ideas for focusing testing.
• This is much like follow-up testing when you find a few bugs in an area. Once you see a few bugs, you spend more time, look more carefully, and either find more problems or build confidence in a conclusion that there are few or none left to find.
Scenario testing
• We can use any technique in the course of exploratory testing. As we get past the introductory tests, we need strategies for gaining a deeper knowledge of the program and applying that to tests. Scenario testing is a classic method for this.
• The ideal scenario has several characteristics:
– The test is based on a story about how the program is used, including information about the motivations of the people involved.
– The story is motivating. A stakeholder with influence would push to fix a program that failed this test.
– The story is credible. It not only could happen in the real world; stakeholders would believe that something like it probably will happen.
– The story involves a complex use of the program or a complex environment or a complex set of data.
– The test results are easy to evaluate. This is valuable for all tests, but is especially important for scenarios because they are complex.
Why use scenario tests?
• Learn the product
• Connect testing to documented requirements
• Expose failures to deliver desired benefits
• Explore expert use of the program
• Make a bug report more motivating
• Bring requirements-related issues to the surface, which might involve reopening old requirements discussions (with new data) or surfacing not-yet-identified requirements.
Scenarios
• Designing scenario tests is much like doing a requirements analysis, but is not requirements analysis. They rely on similar information but use it differently.
– The requirements analyst tries to foster agreement about the system to be built. The tester exploits disagreements to predict problems with the system.
– The tester doesn’t have to reach conclusions or make recommendations about how the product should work. Her task is to expose credible concerns to the stakeholders.
– The tester doesn’t have to make the product design tradeoffs. She exposes the consequences of those tradeoffs, especially unanticipated or more serious consequences than expected.
– The tester doesn’t have to respect prior agreements. (Caution: testers who belabor the wrong issues lose credibility.)
– The scenario tester’s work need not be exhaustive, just useful.
Sixteen ways to create good scenarios
1. Write life histories for objects in the system. How was the object created, what happens to it, how is it used or modified, what does it interact with, when is it destroyed or discarded?
2. List possible users, analyze their interests and objectives.
3. Consider disfavored users: how do they want to abuse your system?
4. List system events. How does the system handle them?
5. List special events. What accommodations does the system make for these?
6. List benefits and create end-to-end tasks to check them.
7. Look at the specific transactions that people try to complete, such as opening a bank account or sending a message. What are all the steps, data items, outputs, displays, etc.?
8. What forms do the users work with? Work with them (read, write, modify, etc.)
9. Interview users about famous challenges and failures of the old system.
10. Work alongside users to see how they work and what they do.
11. Read about what systems like this are supposed to do. Play with competing systems.
12. Study complaints about the predecessor to this system or its competitors.
13. Create a mock business. Treat it as real and process its data.
14. Try converting real-life data from a competing or predecessor application.
15. Look at the output that competing applications can create. How would you create these reports / objects / whatever in your application?
16. Look for sequences: People (or the system) typically do task X in an order. What are the most common orders (sequences) of subtasks in achieving X?
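Item 1 above, a life history for an object, can be sketched as an executable scenario. The `Account` class here is an invented stand-in for the object under test; the scenario's shape is what matters: create the object, use it, modify it, destroy it, and then try the credible mistake of using it after destruction.

```python
# Sketch of a life-history scenario for one object (a bank-account stand-in).
class Account:
    def __init__(self, owner):
        self.owner = owner
        self.balance = 0
        self.closed = False

    def deposit(self, amount):
        if self.closed:
            raise RuntimeError("account is closed")
        self.balance += amount

    def close(self):
        if self.balance != 0:
            raise RuntimeError("balance must be zero to close")
        self.closed = True

# The scenario follows the object's life: creation, use, destruction,
# and (the interesting part) attempted use *after* destruction.
acct = Account("pat")
acct.deposit(100)
acct.deposit(-100)   # withdrawal modeled as a negative deposit
acct.close()
try:
    acct.deposit(1)  # a credible user mistake the story makes us try
    failed = False
except RuntimeError:
    failed = True
assert failed and acct.closed
```

The test ideas fall out of the story: every transition in the object's life, plus every operation attempted in a state where it should be refused.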
Models
• If you can develop a model of the product:
– you can test the model as you develop it
– you can draw implications from the model
• For the explorer, the modeling effort is successful if it leads to interesting tests
– in a reasonable time frame, or
– that would be too hard to think of in other ways
• Examples of models:
– architecture diagram
– state-based
– dataflow
• Work from a high level design (map) of the system
– pay primary attention to interfaces between components or groups of components. We’re looking for cracks that things might have slipped through
– what can we do to screw things up as we trace the flow of data or the progress of a task through the system?
• You can build the map in an architectural walkthrough
– Invite several programmers and testers to a meeting. Present the programmers with use cases and have them draw a diagram showing the main components and the communication among them. For a while, the diagram will change significantly with each example. After a few hours, it will stabilize.
– Take a picture of the diagram, blow it up, laminate it, and you can use dry erase markers to sketch your current focus.
– Planning of testing from this diagram is often done jointly by several testers who understand different parts of the system.
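A state-based model, one of the model types listed above, can be small enough to test directly. This sketch models a hypothetical login dialog as a transition table; the states and events are invented for illustration. Walking the model's paths generates tests, and any (state, event) pair the table never mentions is a gap worth asking the designers about.

```python
# A tiny state model of a hypothetical login dialog.
transitions = {
    ("logged_out", "login_ok"):    "logged_in",
    ("logged_out", "login_bad"):   "locked_warned",
    ("locked_warned", "login_ok"): "logged_in",
    ("logged_in", "logout"):       "logged_out",
}

def walk(start, events):
    """Follow a sequence of events through the model, failing on gaps."""
    state, path = start, [start]
    for event in events:
        if (state, event) not in transitions:
            raise KeyError(f"undefined transition: {state} + {event}")
        state = transitions[(state, event)]
        path.append(state)
    return path

# A path the model supports becomes a test to run against the product...
assert walk("logged_out", ["login_bad", "login_ok", "logout"]) == \
    ["logged_out", "locked_warned", "logged_in", "logged_out"]

# ...and a hole in the model becomes a question for the designers:
# what *should* happen on a second bad login?
try:
    walk("locked_warned", ["login_bad"])
    hole = False
except KeyError:
    hole = True
assert hole
```

This is the explorer's use of a model: cheap to build, tested as it is built, and valuable mainly for the questions it raises.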
Key challenges of exploratory testing
• Software testing poses several core challenges to the skilled practitioner / manager. The same challenges apply to exploratory testers:
– Learning (How do we get to know the program?)
– Visibility (How to see below the surface?)
• See the discussion of testability
– Control (How to set internal data values?)
• See the discussion of testability
– Risk / selection (Which are the best tests to run?)
• See the section on risk-based testing
– Execution (What’s the most efficient way to run the tests?)
• See also the discussions of automation and of paired exploratory testing
– Logistics (What environment is needed to support test execution?)
– The oracle problem (How do we tell if a test result is correct?)
– Reporting (How can we replicate a failure and report it effectively?)
– Documentation (What test documentation do we need?)
– Measurement (What metrics are appropriate?)
– Stopping (How to decide when to stop testing?)
– Training and Supervision (How to help testers become effective, and how to tell whether they are?)
Some Heuristics of Exploratory Execution
• You know all the techniques you need to do good exploratory testing.
• The challenge is to develop your thinking about testing, so that you apply useful techniques at appropriate times.
– Work with a charter.
– Plunge in and quit.
– Let yourself fork.
– Use lateral thinking, but manage it.
– Work in sessions. Use time boxes.
– Generate ideas by asking questions.
– Ask questions about causes.
– Use the test design model as a structure to help you generate questions.
– If you have no questions, treat that as a thinking trigger.
– Keep these straight: observations versus inferences.
– Pay attention to coverage.
Work with a Charter
• Pick a task or a question or a theme that you think you can cover in a day or less. This is your charter. You might pick this from a more detailed project outline or a function list or any other list of project elements or product risks. Or you might generate the chartering idea in the moment.
• Add some detail to the charter.
– Don't work down to the level of individual tests (record test ideas if you have good ones in the moment that you want to remember, but generating individual ideas isn't your goal at this point).
– A charter for a session might include what to test, what tools to use, what testing tactics to use, what risks are involved, what bugs to look for, what documents to examine, what outputs are desired, etc.
• The charter will be the primary guide for one or a few exploratory testing sessions (see below).
• This is a tool for overcoming fear of complexity.
• Often, a testing problem isn’t as hard as it looks.
• Sometimes it is as hard as it looks; you need to quit for a while to consider how to tackle the problem.
• It may take several plunge & quit cycles to do the job.
Whenever you are called upon to test something very complex or frightening, plunge in!
After a little while, if you are very confused or find yourself stuck, quit!
but within sessions, step back a few times to take stock of your status within your charter, and across sessions, review your set of charters against your project mission.
Let yourself be distracted…
Follow up new ideas within sessions and in setting charters for new sessions...
Work in Sessions. Use Time Boxes.
• A typical testing session is about 60 to 90 minutes. This is about as long as you can sustain full concentration.
– At the end of the session, take a break. Make a few notes on what you've learned and what issues you think are still open. Report your bugs. Go to meetings, talk to people, etc.
• A charter defines the goal of your work for one or a few sessions. At the end of each session, ask whether you've learned enough to have satisfied your charter. Even if you don't do all the tests that you imagined when defining your charter, you might decide to stop because you feel that:
– you're creating/executing strong tests and they're not exposing any problems, or
– other tasks have higher priority
• Allocate time for any work that you're going to do, and rethink your task (make a new time commitment) rather than simply overrunning your time allocation.
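A session record can be as simple as a charter, a time box, and end-of-session notes. This sketch shows one possible shape; the field names and the 90-minute default are illustrative choices, not a standard, and the sample charter text is invented.

```python
# Minimal session record for session-based exploratory testing.
from dataclasses import dataclass, field

@dataclass
class Session:
    charter: str
    minutes_budgeted: int = 90   # the time box
    minutes_spent: int = 0
    bugs: list = field(default_factory=list)
    notes: list = field(default_factory=list)

    def over_budget(self):
        # The heuristic above: notice the overrun and rethink the task,
        # rather than silently blowing through the box.
        return self.minutes_spent > self.minutes_budgeted

s = Session(charter="Probe import/export round-trips for data loss")
s.minutes_spent = 75
s.bugs.append("export drops trailing whitespace in text fields")
s.notes.append("open issue: behavior with non-ASCII filenames untested")
assert not s.over_budget()
```

The end-of-session ritual maps onto the fields: bugs get reported, notes capture what is still open, and the budget comparison is the prompt to stop or recommit.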
Develop test ideas and tactics by asking questions
• Product
– What is this product?
– What can I control and observe?
– What should I test?
• Tests
– What would constitute a diversified and practical test strategy?
– How can I improve my understanding of how well or poorly this product works?
– If there were an important problem here, how would I uncover it?
– What document to load? Which button to push? What number to enter?
– How powerful is this test?
– What have I learned from this test that helps me perform powerful new tests?
– What just happened? How do I examine that more closely?
• Problems
– What quality criteria matter?
– What kinds of problems might I find in this product?
– Is what I see, here, a problem? If so, why?
– How important is this problem? Why should it be fixed?
Ask Questions About Causes
• Context: What about this situation makes this risk exist at all?
• Probability: Given that there is a risk, what influences the probability that a problem will occur?
– Threat: What event or entity could precipitate the problem? (e.g. bad data)
– Vulnerability: What is it about the product that allows the threat to cause a problem? (e.g. a fault or feature in the code)
– Safeguards: What measures can we take (have we taken) to eliminate the vulnerability or threats? (e.g. input filters)
• Problem: What form would the problem / failure take?
– Failing components: What part(s) of the product would fail?
– Detection: What would be required to detect the problem if it occurs?
• Impact: So, what if the problem occurs?
– Damage: What is the loss? What are the downstream effects?
– Repair: Can the damage be fixed? Who would fix it?
Test Design Model
• We've used this model to organize failure modes and to analyze specifications.
• We can also use it to guide test design overall.
– If you're out of ideas about what to test,
• Pick the focus of your tests by sampling your Product's Elements and considering which ones (or which interesting combinations) haven't yet been sufficiently tested, OR
• Pick a theme for a set of tests by reviewing the Quality Attributes and considering which ones are important for the project but haven't yet been thoroughly tested. (Then pick the product elements that are interesting to test for this attribute, such as testing the product Preferences dialog (an element) for accessibility (a quality attribute).)
– If you're not sure what to test for,
• Review a failure mode catalog for examples of ways your product might fail.
– When considering how to test,
• Look carefully at your Project Factors, for suggestions of factors that might make one approach easier or less expensive or in some other way more appropriate than another.
Keep Straight: Observation vs. Inference
• Observation and inference are easily confused.
• We don't have direct access to our sensory data.
– Even the simplest perceptions are selective interpretations of the physiological events.
• We miss things that occur right in front of us. See http://www.apa.org/monitor/apr01/blindness.html on inattentional blindness. It is easy to miss bugs that occur in front of your eyes.
• Most of what we sense is lost (fades out of short term memory) within a few seconds.
• Some things you think you see in one instance may be confused with memories of other things you saw at other times.
• It’s easy to think you “saw” a thing when in fact you merely inferred that you must have seen it.
• Accept that we’re all fallible, but that we can learn to be better observers by learning from mistakes.
• Pay special attention to incidents where someone notices something you could have noticed, but did not.
• Don’t strongly commit to a belief about any important evidence you’ve seen only once.
• Whenever you describe what you experienced, notice when you’re describing what you saw and heard, and when you are instead jumping to a conclusion about “what was really going on.”
• Where feasible, look at things in more than one way, and collect more than one kind of information about what happened (such as repeated testing, paired testing, loggers and log files, or video cameras).
• Testers with less expertise…– Think about coverage mostly in terms of what they can see.– Cover the product indiscriminately.– Avoid questions about the completeness of their testing.– Can’t reason about how much testing is enough.
• Better testers are more likely to…– Think about coverage in many dimensions.– Maximize diversity of tests while focusing on areas of risk.– Invite questions about the completeness of their testing.– Lead discussions on what testing is needed.
Where Do These Heuristics Come From?• Notice problems.• Wonder why.• Notice a pattern or dynamic.• Put it into words.• Try using it; try talking about it.• See if it sticks.
Some of the Types of Heuristics
• Trigger Heuristics: Ideas associated with an event or condition that help you recognize when it may be time to take an action or think a particular way. Like an alarm clock for your mind. Example: No Questions
• Subtitle Heuristics: Help you reframe an idea so you can see alternatives and bring out assumptions during a conversation.
• Guideword Heuristics: Words or labels that help you access the full spectrum of your knowledge and experience as you analyze something.
• Heuristic Model: A representation of an idea, object or system that helps you explore, understand, or control it.
• Heuristic Procedure or Rule: A plan of action that may help solve a class of problems.
Key challenges of exploratory testing
• Software testing poses several core challenges to the skilled practitioner / manager. The same challenges apply to exploratory testers:
– Learning (How do we get to know the program?)
– Visibility (How to see below the surface?)
– Control (How to set internal data values?)
– Risk / selection (Which are the best tests to run?)
– Execution (What’s the most efficient way to run the tests?)
– Logistics (What environment is needed to support test execution?)
• We don't cover logistics in this course
– The oracle problem (How do we tell if a test result is correct?)
• See the course sections on oracles, high volume automation and architectures of test automation
– Reporting (How can we replicate a failure and report it effectively?)
• See the sections on bug reporting
– Documentation (What test documentation do we need?)
– Measurement (What metrics are appropriate?)
– Stopping (How to decide when to stop testing?)
– Training and Supervision (How to help testers become effective, and how to tell whether they are?)
Test Documentation & Exploratory Testing
• Exploratory testing is a "lightweight process" -- in general, less documentation is better.
• "Less" doesn't necessarily mean "none," but it does mean "no more than you're sure you need." Think of the documentation that you create during exploratory testing as tester's notes.
• Taking extensive notes about what you're doing, while you're testing:
– takes a lot of time
– focuses you on procedural-level thinking (the details of what you've done) rather than tactical or strategic thinking
– focuses you on what you did rather than on what you should do next
– doesn't necessarily provide much benefit. After all, exploratory testers aren't trying to set up regression tests--they create new tests as they learn more, rather than repeating old tests.
• But you might intentionally explore the product for the purpose of creating notes, such as function lists, lists of variables, and so on.
• Use your judgment. Create what will be useful to guide your testing, or to help you supervise someone else or report your status to someone else, but look for ways to minimize interference with your workflow. Don't create documents for archival purposes unless there are clear, affirmative requirements for them.
Test Documentation & Exploratory Testing
• But don't we need to take extensive notes while testing in order to know what we've been doing, so we can reproduce bugs when we find them?
• Sometimes, when you find a bug, you won't be able to reproduce it.
– In my (Kaner's) experience, most of these times, it is because the tester was not paying attention to what turns out to be the critical variable. (See our discussion of irreproducible bugs in the discussion of bug reporting.)
– More rarely, the tester forgets what she was doing and can't recreate the steps. This is more of a problem with inexperienced testers, who punch keys without awareness of the reasoning behind the test they're running at the moment.
• Is the better solution to spend hours writing procedural details so that, on the rare occasions when they are needed, they are available?
• Or is the better solution to spend more time thinking about why you're doing the test that you're doing, before and while you do it? (This makes it more likely that you'll run much the same test if you try to repeat it.)
– Automatic logging tools can help you solve part of this problem.
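The kind of automatic logging meant here can be very lightweight. This sketch records every action driven through a test harness so that, when a surprise appears, the log is the tester's memory of the session. The decorated "product actions" are hypothetical stand-ins; a real harness would log through its driver layer (a GUI or API driver).

```python
# Sketch of an automatic action log for exploratory sessions.
action_log = []

def logged(fn):
    """Record each call (name and arguments) before running it."""
    def wrapper(*args):
        action_log.append((fn.__name__, args))
        return fn(*args)
    return wrapper

@logged
def open_file(name):       # hypothetical product actions
    return f"opened {name}"

@logged
def apply_filter(kind):
    return f"filtered {kind}"

# An improvised sequence of exploratory actions:
open_file("report.csv")
apply_filter("dates")
apply_filter("totals")

# After a surprise, replaying the log recovers the steps taken.
assert action_log == [("open_file", ("report.csv",)),
                      ("apply_filter", ("dates",)),
                      ("apply_filter", ("totals",))]
```

Because the log is produced as a side effect, it costs the tester no attention during the session, which is exactly the tradeoff the slide argues for.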
What if You Need Extensive Documentation?
• Concerns
– We have a long-life product and many versions, and we want a good corporate memory of key tests and techniques. Corporate memory is at risk because of the lack of documentation.
– The regulators would excommunicate us. The lawyers would massacre us. The auditors would reject us.
– We have specific tests that should be rerun regularly.
• Suggestions
– In the face of concerns like these, use a balanced approach, not purely exploratory.
– You don't have to remember every test. You only have to remember the tests that your company will want to repeat, or produce for later inspection.
Key challenges of exploratory testing• Software testing poses several core challenges to the skilled practitioner /
manager. The same challenges apply to exploratory testers:– Learning (How do we get to know the program?)– Visibility (How to see below the surface?)– Control (How to set internal data values?)– Risk / selection (Which are the best tests to run?)– Execution (What’s the most efficient way to run the tests?)– Logistics (What environment is needed to support test execution?)– The oracle problem (How do we tell if a test result is correct?)– Reporting (How can we replicate a failure and report it effectively?)– Documentation (What test documentation do we need?)– Measurement (What metrics are appropriate?)
• See the sections on test-related measurement
– Stopping (How to decide when to stop testing?)
– Training and Supervision (How to help testers become effective, and how to tell whether they are?)
– ET works well for expert testers, but we don’t have any.
• Replies
– Detailed test procedures do not solve that problem, they merely obscure it, like perfume on a rotten egg.
– Our goal as test managers should be to develop skilled testers so that this problem disappears, over time.
– Since ET requires test design skill in some measure, ET management must constrain the testing problem to fit the level and type of test design skill possessed by the tester.
– We constrain the testing problem by personally supervising the testers, and making use of concise documentation, NOT by using detailed test scripts. Humans make poor robots.
Training and Supervision
• In exploratory testing, every tester is a test planner and test designer.
• To train the explorer:
– Start with a small task, testing a small area of the product or testing in a narrow way or for a particular kind of bug. Let him do the work (for an hour or a few hours). Talk with him about what he did, have him show you his tests, comment on them, suggest alternatives, and then try again.
– As the tester's skill improves, you can assign longer and more complicated tasks. It's like building up a credit rating. The tester earns more over time. The tester who doesn't earn more should be reassigned to some other type of work.
– As you pick small tasks, pick some that give good practice in specific techniques. The more techniques the tester understands and can do, the more effective he'll be as an explorer.
• Concerns
– How do I tell the difference between bluffing and exploratory testing?
– If I send a scout and he comes back without finding anything, how do I know he didn’t just go to sleep behind some tree?
• Replies
– You never know for sure -- just as you don’t know if a tester truly followed a test procedure.
– It’s about reputation and relationships. Managing testers is like managing executives, not factory workers.
– Give novice testers short leashes; better testers long leashes. An expert tester may not need a leash at all.
– Work closely with your testers, and these problems go away.
• Conjecture and Refutation: reasoning without certainty.
• Abductive Inference: finding the best explanation among alternatives.
• Lateral Thinking: the art of being distractible.
• Forward-backward thinking: connecting your observations to your imagination.
• Heuristics: applying helpful problem-solving short cuts.
• De-biasing: managing unhelpful short cuts.
• Pairing: two testers, one computer.
• Study other fields. Example: Information Theory.