Automated Object-Oriented Software Testing using Genetic Algorithms and Static-Analysis

Lucas Serpa Silva
Software Engineering, Swiss Federal Institute of Technology

A thesis submitted for the degree of MSc Computer Science

Supervised by Yi Wei and Prof. Bertrand Meyer

March 2010
C.1 List of all test groups part 1
C.2 List of all test groups part 2
C.3 List of all test groups part 3
D.1 List of all classes tested part 1 of 3
D.2 List of all classes tested part 2 of 3
D.3 List of all classes tested part 3 of 3
1 Introduction
1.1 Motivation
In the past 50 years the growing influence of software in all areas of industry has led to an
ever-increasing demand for complex and reliable software. According to a study(3) conducted
by the National Institute of Standards & Technology, approximately 80% of the development
cost is spent on identifying and correcting defects. The same study found that software bugs
cost the United States economy around $59.5 billion a year, with one third of this value
attributed to poor software testing infrastructure. In an effort to improve the existing
testing infrastructure, a number of tools have been developed to automate test execution,
such as JUnit(1) and GoboTest(4). However, the automation of test data generation is still
an open research topic. Recently, a number of methods such as metaheuristic search, random
test generation and static analysis have been used to completely automate the testing process,
but the application of these tools to real software is still limited. Random test case generation
is used by a number of tools (Jartege(34), AutoTest(33), DART(32)) that automate
the generation of test cases, but several studies have found genetic algorithms (evolutionary
testing) to be more efficient than random testing for structural testing(9; 13; 16; 18; 26).
1.2 Past research
The study of genetic algorithms as a technique for automating the process of test case
generation is often referred to as evolutionary testing in the literature. Since the early 1990s, a
number of studies have been done on evolutionary testing. The complexity and applicability
of these studies vary. To assess the relevance of past research to this project, a number of
studies have been classified according to the complexity of the test cases being generated
and the optimization parameter used by the genetic algorithm. The complexity of the generated
test cases matters because generating test cases for structured programs that only take simple
input, such as numbers, is easier than generating test cases for object-oriented programs,
which is one of the goals of this project.
Reference            Year  Language type                    Optimization parameter
(5)  Xanthakis, S.   1992  Procedural (C)                   Branch coverage
(6)  Shultz, A.      1993  Procedural (Vehicle simulator)   Functional
(7)  Hunt, J.        1995  Procedural (POP11[X])            Functional (Seeded errors)
(8)  Roper, M.       1995  Procedural (C)                   Branch coverage
(9)  Watkins, A.     1995  Procedural (TRITYP simulator)    Path coverage
(10) Alander, J.     1996  Procedural (Strings)             Time
(18) Harman, M.      1996  Procedural (Integers)            Branch coverage
(14) Jones, B.       1998  Procedural (Integers)            Branch coverage
(11) Tracey, N.      1998  Complex (ADA)                    Functional (specification)
(12) Borgelt, K.     1998  Procedural (TRITYP simulator)    Path coverage
(13) Pargas, R.      1999  Procedural (TRITYP simulator)    Branch coverage
(15) Lin, J-C.       2001  Procedural (TRITYP simulator)    Path coverage
(16) Michael, C.     2001  Procedural (GADGET)              Branch coverage
(17) Wegener, J.     2001  Procedural                       Branch coverage
(19) Díaz, E.        2003  Procedural                       Branch coverage
(20) Berndt, D.      2003  Procedural (TRITYP simulator)    Functional
(9)  Watkins, A.     2004  Procedural                       Functional (Seeded error)
(24) Tonella, P.     2004  Object-oriented (Java)           Branch coverage
(21) Berndt, D. J.   2005  Procedural (Robot simulator)     Functional (Seeded error)
(22) Alba, E.        2005  Procedural (C)                   Condition coverage
(23) McMinn, P.      2005  Procedural (C)                   Branch coverage
(27) Wappler, S.     2005  Object-oriented (Java)           Branch, condition coverage
(28) Wappler, S.     2006  Object-oriented (Java)           Exceptions / Branch coverage
(26) Harman, M.      2007  Procedural                       Branch coverage
(25) Mairhofer, S.   2008  Object-oriented (Ruby)           Branch coverage
Table 1.1: Previous work.
As shown in Table 1.1, only a few projects generate test cases for object-oriented programs,
and to the best of our knowledge there has been only one project(11) that targets such
programs and uses the number of faults found as
the optimization parameter for the genetic algorithm. In that study, test cases were generated
for ADA programs, but a formal specification had to be provided manually in a SPARK-Ada
proof context. Thus, the testing process was not completely automated.
Table 1.1 also shows that branch coverage was the most common optimization parameter
used to drive the evolution of test cases. However, there is little evidence of a correlation
between branch coverage and the number of faults detected. Although code coverage is a
useful test suite measurement, the number of faults a test suite unveils is a more important
measurement. Past research has shown that evolutionary testing is a good approach to
automate the generation of test cases for structured programs (9; 13; 16; 18; 26). To make
this approach attractive to industry, however, the system must be able to automatically
generate test cases for object-oriented programs and to use the number of faults found as the
main optimization parameter. To the best of our knowledge, there is currently no existing
project that fulfills these requirements.
One of the problems when evolving test cases for object-oriented
programs is the initialization of an object into a specific state. The object may recursively
require other objects as parameters, and the types must match. Tonella(24) solved this
problem by defining a grammar for the chromosome and defining the mutation and crossover
operations based on it. Another problem when generating test cases for object-oriented
programs is the lack of a software specification to check whether a test has passed. Wappler(28)
used software exceptions as an indication of a fault, and Alander(10) used the execution time.
1.3 Project goals
The base hypothesis for this work is that an automated testing strategy which can adapt
the test case generation to the classes under test will perform better than strategies which
cannot. This hypothesis is based on three assumptions:
1. Each class has a different testing priority. That is, each class has a set of methods that
should be tested more often than the others.
2. Each class has a different object type distribution. For a class that works mainly with
strings, for example, the testing strategy ought to generate more strings than objects
of other types.
3. Each class has an optimal set of primitive values, and this set is not necessarily the
same for other classes.
In this project a representation of a testing strategy is encoded in a chromosome, and a
genetic algorithm is then used to evolve a strategy that is adapted to the classes under test.
The fitness of a strategy is determined by the number of unique faults it finds, the number of
untested features and the number of unique states it reaches. This project innovates by using
a combination of functional and structural metrics as the optimization parameter for the
genetic algorithm and by applying static analysis to improve the performance of the genetic
algorithm. The project is based on the AutoTest(2) tool and on the Design by Contract
methodology implemented by the Eiffel programming language(29).
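To make this concrete, the sketch below shows one way such a strategy chromosome and its fitness could be written down. It is an illustrative Python sketch only: the project itself targets Eiffel classes through AutoTest, and all names, fields, weights and operators below are assumptions rather than the actual implementation.

import random
from dataclasses import dataclass, field

@dataclass
class StrategyChromosome:
    """Hypothetical encoding of a testing strategy; every field is illustrative."""
    method_weights: dict      # relative testing priority of each method under test
    type_distribution: dict   # probability of generating each input type (e.g. STRING, INTEGER)
    primitive_pool: list = field(default_factory=list)   # candidate primitive values

def fitness(unique_faults, untested_features, unique_states,
            w_faults=10.0, w_untested=1.0, w_states=0.1):
    """Combine the functional metric (unique faults) with the structural metrics
    (untested features, unique states); the weights are placeholders."""
    return w_faults * unique_faults - w_untested * untested_features + w_states * unique_states

def mutate(c, rate=0.1):
    """One possible mutation operator: randomly perturb some method priorities."""
    weights = {m: max(0.0, w + random.gauss(0.0, 1.0)) if random.random() < rate else w
               for m, w in c.method_weights.items()}
    return StrategyChromosome(weights, dict(c.type_distribution), list(c.primitive_pool))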
2 Background
2.1 Testing
Testing is one of the most widely used software quality assessment methods. There are two
important steps when testing object-oriented software. First, the software has to be initialized
with a set of values. These values set a number of variables that are relevant for the test case
and define a single state out of the possible set of states the software can be in. The values
can be either primitive, such as an integer, or complex, such as an object. Second, with the
software initialized, the methods under test can be called. If a method takes one or more
objects as parameters, these objects also have to be initialized. To determine whether the
test case passed or failed, a software specification has to be used. The software specification
defines what constitutes valid input and what output the software should produce. Since the
number of possible states a program may have is exponential, it is impossible to test all of
them. In practice, interesting states are normally identified by the developers according to a
software specification, the program structure or their own experience. There are many types
of testing; however, they can all be classified as either black box or white box testing.
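The two steps (initialization, then calling the method under test and checking the outcome against the specification) can be pictured with a minimal sketch. The class, values and oracle below are invented purely for illustration; the project itself tests Eiffel classes, not this Python example.

class BankAccount:
    """A toy stand-in for an arbitrary class under test."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        self.balance -= amount

def test_withdraw():
    account = BankAccount(balance=100)   # step 1: initialize the object into a chosen state
    account.withdraw(30)                 # step 2: call the method under test
    assert account.balance == 70         # oracle: compare the observed state with the specification

test_withdraw()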
2.1.1 Black box
Black box testing, also called functional testing(30), considers the unit under test as a
black box: data is fed in and the output is verified against a software specification.
Functional testing has the advantage that it is decoupled from the source code because,
given the software specification, test data can be generated even before the function has
been implemented. Functional testing is also closely related to the user requirements, since
it tests a function of the program. Its main disadvantage is that it requires a software
specification, and it may not explore the unit under test thoroughly, since it is unaware of
the code structure.
2.1.2 White box
White box testing, also called structural testing, takes the internal structure of the code
into account. By analyzing the code structure, test data can be generated to exercise
specific areas of the code. Structural testing may also be used to measure how much of
the code has been covered according to some structural criterion. By analyzing the program
flow and the path of an execution, code coverage can be computed with respect to criteria
such as statement coverage, which counts the number of unique statements executed.
2.1.3 Automated testing
To automate the testing process, both the generation of test data and the execution of
test cases have to be automated. There are already a number of tools, such as JUnit(1)
and GoboTest(4), that automate test case execution, but the main problem lies in the
automation of test data generation. Apart from evolutionary testing, a number of tools
automate the testing process using static analysis, software models or random testing.
Table 2.1 lists the method used by each tool. Random testing is the most widely adopted
method; tools such as AutoTest(33), DART(32) and Jartege(34) implement a random
algorithm, but many algorithms perform better than random search on optimization problems.
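As a rough illustration of what a purely random strategy does (this is not the actual algorithm of AutoTest, DART or Jartege), the sketch below draws random integer inputs, calls the method under test and records every input whose outcome violates a user-supplied oracle or raises an unexpected exception.

import random

def run_random_tests(method, oracle, n=1000, int_range=(-100, 100)):
    """Call `method` with n random integer inputs and return the failing inputs.
    `oracle(x, result)` returns True when the observed result is acceptable."""
    failures = []
    for _ in range(n):
        x = random.randint(*int_range)
        try:
            if not oracle(x, method(x)):
                failures.append(x)
        except Exception:
            failures.append(x)   # an unexpected exception also counts as a failure
    return failures

# Example: an intentionally buggy absolute-value function and its oracle.
print(run_random_tests(lambda x: x if x > 0 else -x - 1, lambda x, r: r == abs(x), n=20))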
2.1.4 Test evaluation
The main goal of software testing is to uncover as many faults as possible and to show the
robustness of the software. But to be able to rely on a test suite for quality assurance, the
test suite needs to be good. Besides the number of faults found, one method used to evaluate
a test suite is code coverage. Code coverage measures how much of the source code was
executed by the test suite. Intuitively, a good test suite ought to have good code coverage.
Name                  Method
(46) Agitator         Dynamic Analysis
(33) AutoTest         Random testing
(32) DART             Random testing
(47) DSD-Crasher      Static / Dynamic Analysis
(48) Eclat            Models
(44) FindBugs         Static Analysis
(34) Jartege          Random testing (JML)
(45) Java PathFinder  Model Checker
(50) JTest            Static Analysis
(49) Symstra          State Space Model
Table 2.1: Testing tools.
However, this depends on how the code coverage is measured; some of the main code coverage
criteria are described below.
• Statement coverage measures the number of unique statements executed. The main
advantage of statement coverage is that it can be measured directly from object code.
But it is too simplistic and usually not a good test evaluation criterion(51).
• Branch coverage measures the unique evaluations of boolean expressions in conditional
statements. It is simple to compute and stronger than statement coverage: full branch
coverage implies full statement coverage. However, it is insensitive to complex boolean
expressions and it does not take the sequence of statements into account.
• Condition coverage measures the unique evaluation of each atomic boolean expression
independently of the others, providing a more sensitive analysis than branch coverage.
Full condition coverage does not imply full branch coverage, since the atomic conditions
can each take both values without the compound expression ever taking both outcomes
(see the sketch after this list).
• Path coverage measures the unique paths a program execution can take. Achieving good
path coverage requires very thorough testing; it is very expensive and in many situations
infeasible, since the number of paths grows exponentially with the number of branches
and some paths are impossible to execute. For instance, in Figure 2.1 it is not possible
for an execution to have both S2 and S6 on its path.
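The difference between branch and condition coverage can be seen on a single compound decision; the function below is an invented example used only for illustration.

def grant_discount(age, is_member):
    # One decision over two atomic conditions.
    if age >= 65 or is_member:
        return True
    return False

# Branch coverage only needs the whole decision to evaluate to both outcomes:
grant_discount(70, False)   # decision true  -> "then" branch
grant_discount(30, False)   # decision false -> "else" branch

# Condition coverage additionally needs each atomic condition to be observed as
# both true and false; the call below makes is_member true while age >= 65 is
# false, which the two calls above never did.
grant_discount(30, True)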
Figure 2.1: Program flow - Example of program flow analysis
2.1.4.1 State Coverage
Another interesting approach is state coverage, where the state is defined as the values of the
numerical and boolean attributes of a class at each execution point. At the end of the
execution, the state coverage is the sum, over all execution points, of the number of unique
values observed at each point. This approach was used in this project because it provides a
more fine-grained measurement of how much of the software was exercised, and it may be
easier to implement than other coverage criteria.
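A minimal sketch of how such a measure could be computed is given below; the instrumentation points, attribute filtering and names are assumptions made for illustration and do not reproduce the project's implementation.

from collections import defaultdict

observed = defaultdict(set)   # execution point -> set of distinct attribute snapshots

def record_state(execution_point, obj):
    """Record the numerical and boolean attributes of obj at this execution point."""
    snapshot = tuple(sorted(
        (name, value) for name, value in vars(obj).items()
        if isinstance(value, (bool, int, float))
    ))
    observed[execution_point].add(snapshot)

def state_coverage():
    """Sum, over all execution points, of the number of unique states seen there."""
    return sum(len(states) for states in observed.values())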
2.2 Eiffel & Design by Contract
The lack of a software specification is one of the main problems when automating the testing
process. Without a specification it is impossible to be sure that a feature (in this report,
feature and method are used interchangeably to refer to a procedure or a function) has failed.
Even when the test case leads the program to crash or throw an exception, it is not clear
whether the software has a fault, since the program may not be defined for the given input.
Normally, developers write a header comment for each method describing its behaviour.
Although there are guidelines on how to write these headers, they are not formal enough to
allow the derivation of the method’s specification.
This problem has been dealt with by the Eiffel programming language(29), which, besides
other methodologies, implements the Design by Contract(31) concept. The idea behind
Design by Contract is that each method call is a contract between the caller (client) and the
method (supplier). This contract is specified in terms of what the client must provide and
what the supplier guarantees in return. This contract is normally written in the form of pre-
and postcondition boolean expressions for each method. In the example illustrated in Figure
2.2, the precondition is composed of four boolean expressions and the postcondition of two
boolean expressions. These expressions are evaluated sequentially upon method invocation
and termination. The system will throw an exception as soon as one expression is evaluated
to be false. Therefore, the method caller must ensure the precondition is true before calling
the method and the method must ensure that the postcondition is true before returning. For
example, the borrow book method shown in Figure 2.2 takes the id of a borrower and the id
of the book being borrowed.
Figure 2.2: Example of Design by Contract
The method caller must ensure that the book id is a valid id, the book with that id
has at least one copy available, the borrower id is a valid id and the borrower can borrow
books. If these conditions are fulfilled, the method guarantees that it will add the book to the
borrower's list of borrowed books and decrease the number of copies available by one. Apart
from the pre- and postconditions, every class has an invariant that has to remain true after
the execution of the constructor, and loops may have variants and invariants. With Design
by Contract, a method has a fault if it:
1. violates another method’s precondition.
2. does not fulfil its own postcondition.
3. violates the class invariant.
4. violates loop variant or invariant.
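Since Figure 2.2 is not reproduced here, the sketch below gives a rough analogue of the borrow_book contract described above. Eiffel states the contract declaratively with require and ensure clauses; this Python version merely emulates them with assertions, and the attribute names and the borrowing limit are invented for illustration.

class Library:
    def __init__(self):
        self.copies_available = {}   # book id -> number of copies on the shelf
        self.borrowed = {}           # borrower id -> list of borrowed book ids

    def borrow_book(self, borrower_id, book_id):
        # Precondition: what the caller (client) must guarantee.
        assert book_id in self.copies_available, "invalid book id"
        assert self.copies_available[book_id] > 0, "no copy available"
        assert borrower_id in self.borrowed, "invalid borrower id"
        assert len(self.borrowed[borrower_id]) < 5, "borrower may not borrow more books"

        old_copies = self.copies_available[book_id]
        self.borrowed[borrower_id].append(book_id)
        self.copies_available[book_id] -= 1

        # Postcondition: what the method (supplier) guarantees in return.
        assert book_id in self.borrowed[borrower_id]
        assert self.copies_available[book_id] == old_copies - 1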
For the automation of test case generation, Design by Contract can be used to determine if
the generated test data is defined for a given method by checking it against the precondition.
It can also be used to check if a method has failed or not by comparing the result against the
postcondition. In the next section we discuss how this idea is implemented in the AutoTest
tool (2).
2.3 AutoTest
AutoTest exploits the Design by Contract methodology implemented in Eiffel to automatically
generate random test data for Eiffel classes. AutoTest works with a given timeout and a set
of classes to be tested. AutoTest starts by loading the classes to be tested and creating a
table containing all (including the inherited) methods of those classes. AutoTest can then
apply an original random strategy (OR) or a precondition satisfaction strategy (PS). These
48 MULTI ARRAY LIST, MULTAR LIST CURSOR
49 PART SORTED SET
50 PART SORTED TWO WAY LIST, TWO WAY LIST CURSOR
51 SORTED TWO WAY LIST, TWO WAY LIST CURSOR
52 SUBSET STRATEGY TREE, BINARY SEARCH TREE SET
53 TWO WAY CIRCULAR, CIRCULAR CURSOR
54 TWO WAY CURSOR TREE, TWO WAY CURSOR TREE CURSOR
55 TWO WAY LIST, TWO WAY LIST CURSOR
56 TWO WAY SORTED SET, TWO WAY LIST CURSOR
57 TWO WAY TREE, TWO WAY TREE CURSOR, BINARY TREE
Table C.3: List of all test groups part 3
Appendix D
List of classes tested
#  Tested Classes
1  ACTIVE LIST
2  ARRAY
3  ARRAYED CIRCULAR
4  ARRAYED LIST
5  ARRAYED LIST CURSOR
6  ARRAYED QUEUE
7  ARRAYED SET
8  ARRAYED TREE
9  BINARY SEARCH TREE SET
10 BINARY TREE
11 BOUNDED QUEUE
12 CIRCULAR CURSOR
13 COMPACT CURSOR TREE
14 COMPACT TREE CURSOR
15 CURSOR
16 DS ARRAYED LIST
17 DS ARRAYED LIST CURSOR
18 DS AVL TREE
19 DS BILINKED LIST
20 DS BILINKED LIST CURSOR
21 DS BINARY SEARCH TREE
22 DS BINARY SEARCH TREE CURSOR
23 DS BINARY SEARCH TREE SET
Table D.1: List of all classes tested part 1 of 3
24 DS BINARY SEARCH TREE SET CURSOR
25 DS HASH SET
26 DS HASH SET CURSOR
27 DS LEFT LEANING RED BLACK TREE
28 DS LINKED LIST
29 DS LINKED LIST CURSOR
30 DS LINKED QUEUE
31 DS LINKED STACK
32 DS MULTIARRAYED HASH SET
33 DS MULTIARRAYED HASH SET CURSOR
34 DS MULTIARRAYED HASH TABLE
35 DS MULTIARRAYED HASH TABLE CURSOR
36 DS RED BLACK TREE
37 FIXED DFA
38 FIXED TREE
39 HIGH BUILDER
40 KL EQUALITY TESTER
41 KL STRING
42 LEX BUILDER
43 LEXICAL
44 LINKED AUTOMATON
45 LINKED CIRCULAR
46 LINKED CURSOR TREE
47 LINKED CURSOR TREE CURSOR
48 LINKED DFA
49 LINKED LIST
50 LINKED LIST CURSOR
51 LINKED PRIORITY QUEUE
52 LINKED SET
53 LINKED TREE
54 LINKED TREE CURSOR
55 LX ACTION FACTORY
56 LX DESCRIPTION
57 LX DFA
58 LX DFA REGULAR EXPRESSION
59 LX DFA STATE
60 LX EQUIVALENCE CLASSES
61 LX FULL DFA
62 LX LEX SCANNER
Table D.2: List of all classes tested part 2 of 3
63 LX NFA
64 LX NFA STATE
65 LX PROTO
66 LX PROTO QUEUE
67 LX REGEXP PARSER
68 LX REGEXP SCANNER
69 LX RULE
70 LX START CONDITIONS
71 LX SYMBOL CLASS
72 LX TEMPLATE LIST
73 LX TRANSITION TABLE
74 MULTAR LIST CURSOR
75 MULTI ARRAY LIST
76 PART SORTED SET
77 PART SORTED TWO WAY LIST
78 PDFA
79 SORTED TWO WAY LIST
80 STATE
81 STATE OF DFA
82 SUBSET STRATEGY TREE
83 TWO WAY CIRCULAR
84 TWO WAY CURSOR TREE
85 TWO WAY CURSOR TREE CURSOR
86 TWO WAY LIST
87 TWO WAY LIST CURSOR
88 TWO WAY SORTED SET
89 TWO WAY TREE
90 TWO WAY TREE CURSOR
91 UT ERROR HANDLER
92 YY BUFFER
Table D.3: List of all classes tested part 3 of 3
Bibliography
[1] Y. Cheon and G. T. Leavens. A simple and practical approach to unit testing: The JML and JUnit way. Technical Report 01-12, Department of Computer Science, Iowa State University, Nov. 2001.

[2] I. Ciupa, A. Leitner, M. Oriol, and B. Meyer. Experimental assessment of random testing for object-oriented software. In Proceedings of the International Symposium on Software Testing and Analysis 2007 (ISSTA'07), pages 84-94, 2007.

[3] NIST (National Institute of Standards and Technology): The Economic Impacts of Inadequate Infrastructure for Software Testing, Report 7007.011, available at www.nist.gov/director/prog-ofc/report02-3.pdf

[4] Eric Bezault et al.: Gobo library and tools, at www.gobosoft.com.

[5] Xanthakis, S., Ellis, C., Skourlas, C., Le Gall, A., Katsikas, S., Application of Genetic Algorithms to Software Testing, Proceedings of the 5th International Conference on Software Engineering, pages 625-636, France, December 1992.

[6] Shultz, A., Grefenstette, J., De Jong, K., Test & Evaluation by Genetic Algorithms, Navy Center for Applied Research in Artificial Intelligence, IEEE, 1993.

[7] Hunt, J., Testing Control Software using a Genetic Algorithm, Working Paper, University of Wales, UK, 1995.

[8] Roper, M., Maclean, I., Brooks, A., Miller, J., Wood, M., Genetic Algorithms and the Automatic Generation of Test Data, Working Paper, Department of Computer Science, University of Strathclyde, UK, 1991.

[9] Watkins, A., The Automatic Generation of Software Test Data using Genetic Algorithms, Proceedings of the Fourth Software Quality Conference, 2: 300-309, Dundee, Scotland, July 1995.

[10] Alander, J., Mantere, T. and Turunen, P., Genetic Algorithm Based Software Testing, in G. Smith, N. Steele and R. Albrecht, editors, Artificial Neural Nets and Genetic Algorithms, Springer-Verlag, Wien, Austria, pages 325-328, 1998.

[11] Tracey, N., Clark, J., Mander, K., Automated Program Flaw Finding Using Simulated Annealing, ISSTA-98, Clearwater Beach, Florida, USA, 1998.

[12] Borgelt, K., Software Test Data Generation From A Genetic Algorithm, Industrial Applications Of Genetic Algorithms, CRC Press, 1998.

[13] Pargas, R., Harrold, M., Peck, R., Test Data Generation Using Genetic Algorithms, Software Testing, Verification And Reliability, 9: 263-282, 1999.

[14] Jones, B., Sthamer, H. and Eyres, D. Automatic structural testing using genetic algorithms. Software Engineering Journal, 11(5): 299-306, 1996.
[15] Lin, J-C. and Yeh, P-U. Automatic Test Data Generation for Path Testing using GAs, Information Sciences, 131: 47-64, 2001.

[16] Michael, C., McGraw, G., Schatz, M., Generating Software Test Data by Evolution, IEEE Transactions On Software Engineering, 27(12), December 2001.

[17] Wegener, J., Baresel, A., Sthamer, H., Evolutionary Test Environment for Automatic Structural Testing, Information & Software Technology, 2001.

[18] Harman, M. The automatic generation of software test data using genetic algorithms. Ph.D. thesis, University of Glamorgan, Pontypridd, Wales, Great Britain, 1996.

[19] Díaz, E., Tuya, J., and Blanco, R. Automated Software Testing Using a Metaheuristic Technique Based on Tabu Search. In 18th IEEE International Conference on Automated Software Engineering, pp. 310-313, 2003.

[20] Berndt, D., Fisher, J., Johnson, L., Pinglikar, J., and Watkins, A. (2003). Breeding Software Test Cases with Genetic Algorithms. In 36th Annual Hawaii International Conference on System Sciences (HICSS 2003).

[21] D. J. Berndt and A. Watkins. High Volume Software Testing using Genetic Algorithms. In Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS'05), Track 9, p. 318b, 2005.

[22] Alba, E., and Chicano, J. F. Software Testing with Evolutionary Strategies. In Proceedings of the Rapid Integration of Software Engineering Techniques (RISE 2005), Heraklion, Greece, 2005.

[23] McMinn, P., and Holcombe, M. Evolutionary testing of state-based programs. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'05), pages 1013-1020, Washington DC, USA, June 2005.

[24] Tonella, P. Evolutionary Testing of Classes. In Proceedings of the 2004 ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA'04), ACM Press, New York, NY, 2004, pages 119-128.

[25] Stefan Mairhofer. Search-based software testing and complex test data generation in a dynamic programming language. Master thesis, 2008.

[26] Harman, M., and McMinn, P. A theoretical & empirical analysis of evolutionary testing and hill climbing for structural test data generation. In ISSTA'07: Proceedings of the 2007 International Symposium on Software Testing and Analysis (New York, NY, USA, 2007), ACM, pp. 73-83.

[27] Wappler, S., and Lammermann, F. Using evolutionary algorithms for the unit testing of object-oriented software. In GECCO'05: Proceedings of the 2005 Conference on Genetic and Evolutionary Computation (New York, NY, USA, 2005), ACM, pp. 1053-1060.

[28] Wappler, S., and Wegener, J. Evolutionary unit testing of object-oriented software using a hybrid evolutionary algorithm. In CEC'06: Proceedings of the 2006 IEEE Congress on Evolutionary Computation (2006), IEEE, pp. 851-858.

[32] Godefroid, P., Klarlund, N., and Sen, K. DART: directed automated random testing. In PLDI'05: Proceedings of the 2005 ACM SIGPLAN Conference on Programming Language Design and Implementation (New York, NY, USA, 2005), ACM Press, pp. 213-223.

[33] Meyer, B., Ciupa, I., Leitner, A., and Liu, L. L. Automatic testing of object-oriented software. In Proceedings of SOFSEM 2007 (Current Trends in Theory and Practice of Computer Science) (2007), J. van Leeuwen, Ed., Lecture Notes in Computer Science, Springer-Verlag.

[34] Oriat, C. Jartege: a tool for random generation of unit tests for Java classes. Tech. Rep. RR-1069-I, Centre National de la Recherche Scientifique, Institut National Polytechnique de Grenoble, Université Joseph Fourier Grenoble I, June 2004.

[35] De Jong, K. A. (1975). An analysis of the behavior of a class of genetic adaptive systems (Doctoral dissertation, University of Michigan). Dissertation Abstracts International, 36(10), 5140B. (University Microfilms No. 76-9381)

[36] Holland, J. H. (1975). Adaptation in natural and artificial systems. Ann Arbor: University of Michigan Press.

[37] Goldberg, D. E. (1989). Genetic algorithms in search, optimization, and machine learning. Reading, MA: Addison-Wesley.

[38] M. Wall. GAlib: A C++ Library of Genetic Algorithm Components. MIT, http://lancet.mit.edu/ga/, 1996.

[39] I. Ciupa, A. Pretschner, A. Leitner, M. Oriol, and B. Meyer. On the predictability of random tests for object-oriented software. In Proceedings of the First International Conference on Software Testing, Verification and Validation (ICST'08), April 2008.

[40] Y. Wei, S. Gebhardt, M. Oriol, and B. Meyer. Satisfying Test Preconditions through Guided Object Selection. Third International Conference on Software Testing, Verification and Validation (ICST'10).

[41] L. S. Silva. Evolutionary Object-Oriented Testing. MSc thesis, University of Amsterdam, Amsterdam, The Netherlands, 2009.

[42] L. S. Silva. Evolutionary Testing of Object-Oriented Software. To appear in: Proceedings of the ACM Symposium on Applied Computing 2010, ACM SIGAPP.

[44] Hovemeyer, D., and Pugh, W. Finding bugs is easy. SIGPLAN Not. 39, 12 (2004), 92-106.

[45] Visser, W., Pasareanu, C. S., and Khurshid, S. Test input generation with Java PathFinder. In ISSTA'04: Proceedings of the 2004 ACM SIGSOFT International Symposium on Software Testing and Analysis (New York, NY, USA, 2004), ACM Press, pp. 97-107.

[46] Boshernitsan, M., Doong, R., and Savoia, A. From Daikon to Agitator: lessons and challenges in building a commercial tool for developer testing. In ISSTA'06: Proceedings of the 2006 International Symposium on Software Testing and Analysis (New York, NY, USA, 2006), ACM Press, pp. 169-180.

[47] Csallner, C., and Smaragdakis, Y. DSD-Crasher: A hybrid analysis tool for bug finding. In International Symposium on Software Testing and Analysis (ISSTA) (July 2006), pp. 245-254.

[48] Pacheco, C., and Ernst, M. D. Eclat: Automatic generation and classification of test inputs. In ECOOP 2005 Object-Oriented Programming, 19th European Conference (Glasgow, Scotland, July 25-29, 2005).

[49] Xie, T., Marinov, D., Schulte, W., and Notkin, D. Symstra: A framework for generating object-oriented unit tests using symbolic execution. In Proceedings of the 11th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 05) (April 2005), pp. 365-381.