
Swarm Intell (2014) 8:139–157 · DOI 10.1007/s11721-014-0094-2

Interactive ant colony optimization (iACO) for early lifecycle software design

Christopher L. Simons · Jim Smith · Paul White

Received: 16 April 2014 / Accepted: 30 May 2014 / Published online: 19 June 2014
© Springer Science+Business Media New York 2014

Abstract Finding good designs in the early stages of the software development lifecycle is a demanding multi-objective problem that is crucial to success. Previously, both interactive and non-interactive techniques based on evolutionary algorithms (EAs) have been successfully applied to assist the designer. However, recently ant colony optimization was shown to outperform EAs at optimising quantitative measures of software designs with a limited computational budget. In this paper, we propose a novel interactive ACO (iACO) approach, in which the search is steered jointly by an adaptive model that combines subjective and objective measures. Results show that iACO is speedy, responsive and effective in enabling interactive, dynamic multi-objective search. Indeed, study participants rate the iACO search experience as compelling. Moreover, inspection of the learned model facilitates understanding of factors affecting users’ judgements, such as the interplay between a design’s elegance and the interdependencies between its components.

Keywords Ant colony optimization · Software design · Interactive search

1 Introduction

Early in the software engineering lifecycle, designers wrestle with numerous trade-off judgments as they identify a class model that forms the basis for, and so significantly affects the success of, subsequent downstream development. Viewing this as a search problem, Simons and Parmee (2009) and Bowman et al. (2010) applied evolutionary algorithms (EAs) to

C. L. Simons (B) · J. Smith · P. White
Department of Computer Science and Creative Technologies, University of the West of England, Bristol BS16 1QY, UK
e-mail: [email protected]

J. Smith
e-mail: [email protected]

P. White
e-mail: [email protected]


early lifecycle software design (ELSD), optimising quantitative measures of designs relating to aspects such as coupling and cohesion. Subsequent benchmark comparisons of different meta-heuristics for ELSD reported ant colony optimization (ACO) discovering high quality designs faster than equivalent EAs (Simons and Smith 2012, 2013). Focussing on designer preferences, and judgements of ‘elegance’, the authors have also investigated the use of interactive EAs for ELSD. To facilitate evolution, and reduce the burden of user interaction, surrogate fitness models were used to replace most user interaction with a weighted sum of machine-calculated measures relating to design structural integrity and symmetry. The weightings were then adapted in response to the periodic user feedback (Simons et al. 2010; Simons and Parmee 2012). These studies confirmed that the precise balance of factors affecting subjective judgments varies between design tasks, which suggests that the surrogate fitness models cannot be pre-determined. In fact, as we have shown elsewhere (Pauplin et al. 2010), interactive heuristic search is inherently dynamic, since designers’ perception of solution quality changes in response to their experience of the system, and what it may achieve. We hypothesise that ACO’s pheromone decay mechanism provides an automatic method for dealing with this temporal aspect of interactive search by discounting previous judgements.

This paper proposes and evaluates the use of interactive ACO (iACO) to address the complex dynamic challenges of ELSD. Section 2 presents a brief survey of the relevant issues and approaches, first in Search-Based Software Engineering (and ELSD in particular), then in interactive meta-heuristic optimisation. Section 3 provides details of the different components of our iACO framework. Thereafter, Sect. 4 describes the methodology used, Sect. 5 the results obtained, and Sect. 6 the threats to the validity of our findings. Finally, Sect. 7 concludes by assessing the effectiveness of iACO in supporting ELSD.

2 Background

2.1 Early lifecycle software design and search

The term ‘Search-Based Software Engineering’ (SBSE) (Harman and Jones 2001) describes an approach that treats many aspects of software development as optimization problems amenable to automated search. Beginning with the evolution of software test sequences (Xanthakis et al. 1992; Smith and Fogarty 1996), applications of SBSE can now be found across the spectrum of the software development lifecycle, including requirements analysis and scheduling (Ren et al. 2011); design tools and techniques (Simons et al. 2010; Bowman et al. 2010); testing (McMinn 2004); automated bug fixing (Weimer et al. 2010); and maintenance (O’Keeffe and Ó Cinnéide 2008). A comprehensive repository of publications in SBSE is maintained by Zhang (2014). Different tasks within SBSE can pose very different challenges, and identifying appropriate meta-heuristics for different SBSE domains is recognised as an unsolved problem (Harman 2011). This paper focuses on the early stages of the development lifecycle, wherein designers identify and evaluate the concepts and information relevant to the problem domain without which the proposed software system cannot function. This involves making trade-offs between competing criteria, and has historically been intensely people-centric (Cockburn 2002; Martin 2003; Maiden 2011). The quality of the initial design thus depends greatly on the competence of the individuals involved, and Brooks (1987, p. 11) asserts: “I believe the hard part of building software to be the specification, design and testing of this software construct, not the labour of representing it and the testing of the fidelity of the representation”.


In the object-oriented paradigm, concepts and information identified from the problem domain are expressed as logical ‘objects’ and ‘classes’. In the often used Unified Modelling Language (Object Management Group 2013), classes are placeholders or groupings of attributes (i.e. data that need to be stored, computed and accessed) and methods (i.e. units of execution by which objects communicate with other objects, programs and users). Thus, ELSD can be formulated as a search through a space of candidate designs, each representing a grouping of attributes and methods into classes. Many of the design desiderata, such as ‘cohesion’ and ‘coupling’, can be quantified by metrics (see Sect. 3.3), creating search problems that, although complex and highly constrained, can be tackled via meta-heuristics (Bowman et al. 2010; Simons and Parmee 2009; Simons and Smith 2012, 2013). However, these papers, and others using interactive tools (Simons et al. 2010; Simons and Parmee 2012), all show that it is difficult to quantify the trade-off between desiderata. This is partly because many of the factors are in opposition, and partly because of hard-to-quantify factors such as ‘house style’ and the desire to reuse software and designs. All of these factors point to the need for an effective interactive search mechanism rather than one using pre-defined preferences.

2.2 Interactive meta-heuristic optimization

Interactive EAs, with users rating the fitness of candidate solutions, have been successfully applied to support the customisation of artefacts in many domains (Takagi 2001; Jaszkiewicz and Branke 2008). As well as providing a means to optimise problems that are ill-defined or otherwise hard to quantify, the evolutionary history implicitly captures the users’ multi-objective decision making, avoiding the time-consuming process of explicit knowledge acquisition.

The more general class of interactive meta-heuristic optimisation (IMHO) techniques has also been widely used in the multi-criteria decision making (MCDM) community to gain insight into combinatorial optimization problems. In a comprehensive survey, Miettinen (1998) distinguishes various phases of human involvement. Both a priori methods (pre-specification of weightings or preferences) and a posteriori methods (selection from a range of alternatives produced by the search algorithm) may be distinguished from truly interactive search, where user input occurs during the search, both guiding, and potentially being affected by, the process. From a learning perspective, Belton et al. (2008) emphasise the role that the human–computer interface plays in enabling mutual learning between decision makers and search processes. Synthesising the lessons from MCDM and SBSE, Deb (2012) highlights the need for a dynamic search process in which objectives, constraints and search parameters may change over time to suit the interaction of the individual.

2.3 Reducing the cognitive burden of interactive search

A major problem for IMHO is that fatigue and reduced engagement cause inconsistency in human decisions, in a way that varies non-linearly over time. There have been a number of studies addressing how to minimise the fatigue, both physical and psychological, that can result from prolonged interaction times and the possible stress of the evaluation process. Making each interaction simpler, by discretising continuous fitness values into a few levels, has been shown to facilitate decision making without significantly compromising convergence. Typically, a low odd number of levels (5–7) is chosen to give a symmetrical spread similar to the Likert scales used in opinion polling (Ohsaki et al. 1998), and information is organised in several dimensions and successively into a sequence of ‘chunks’, as suggested by Miller (1956).


Users’ engagement can be promoted by providing them with a sense of continued and substantive improvements, apparently in response to their input. In an interactive EA task designed to let them compare user evaluations with a ‘ground truth’, Caleb-Solly and Smith (2007) showed that an elitist (μ+λ) strategy maintained user consistency for longer than other population management strategies. Qualitatively, users appeared to show more frustration with non-elitist strategies, as the system seemed to forget what it had been told.1

A complementary approach is the use of surrogate models for most fitness evaluations, periodically updated via user evaluations of selected individuals. Simons and Parmee (2012) employed linear regression to combine quantitative design metrics for interactive ELSD. Other successful approaches for interactive tasks include clustering individuals (Lee and Cho 1998; Boudjeloud and Poulet 2005) or using multiple fuzzy state-value functions to approximate the trajectory of human scoring (Kubota et al. 2006). Avigad et al. (2005) propose a multi-objective EA in which a model-based fitness of sub-concept solutions (using a sorting and ranking procedure) is combined with human evaluation. Similar approaches are reported by Brintrup et al. (2008).

2.4 Choice of meta-heuristics for interactive search

In order to make a preliminary evaluation of the applicability of different meta-heuristics for interactive ELSD, we have previously conducted benchmark comparisons of EAs and Simple ACO (Dorigo and Stützle 2001, 2004) using a range of metrics relating to structural integrity and design symmetries as surrogates for design elegance (Simons and Smith 2012, 2013). The results are summarised as follows. Given a large computational budget (in terms of search iterations), an EA with an integer-based representation discovers higher quality solutions. An EA is also more robust for very large scale design problems with a high number of classes. However, as we have argued above, the nature of ELSD is such that, even assuming the use of surrogate metrics of design elegance, interactive approaches are needed to adapt those models, which limits the computational budget available. Under these constraints, a very different picture emerges: ACO finds higher quality solutions, and in fewer search iterations.

From an algorithmic perspective, there are several features that make ACO possibly better suited to interactive search than an EA:

• the pheromone decay process naturally reduces the influence of previous human judgments, whereas EAs require additional diversity-creation mechanisms in dynamic environments (Jin and Branke 2005) such as interactive evolution;

• preservation of the system’s ‘memory’ in the form of a pheromone matrix makes it straightforward to incorporate user manipulation of results, thus promoting user engagement and maximising the value of each interaction. For example, ‘freezing’ partial solutions can be achieved by directly manipulating the matrix values, while leaving the mechanisms for generating new solutions untouched. In contrast, achieving the same effect in an EA would require either some method of manipulating recombination and mutation operators on-the-fly or a mechanism for dynamically creating complex constraints.

Given these considerations, examples of interactive ACOs are perhaps surprisingly scarce. Xing et al. (2007) report the use of interactive fuzzy ACO for job shop problems, while Ugur and Aydin (2009) describe an interactive ACO for the TSP. Albakour et al. (2011) report the use of ACO to simulate and interact with query logs to learn about user behaviour in a collection of documents.

1 The use of anthropocentric language is deliberate here, and mimics users’ vocal responses during interaction sessions.


3 Proposed approach

We begin this section by specifying the problem representation and the ACO used as the underlying search engine. Next, we describe a number of relevant quantitative metrics and the adaptive surrogate model. Lastly, we describe the interactive features of iACO and how these are integrated with the ACO.

3.1 Solution encoding

Applications of ACO typically use a representation where candidate solutions (tours) are a permutation of a fixed set of values. In contrast, ELSD requires assigning a class label to each element of a design: essentially graph partitioning. To achieve this in a format amenable to ACO search, and to leave scope for future refinements such as inheritance, each candidate solution is represented as a permutation of a set comprising design elements (attributes and methods) and ‘end-of-class’ markers. If a problem has a attributes and m methods to be grouped into c classes, the set is of size a + m + c − 1. A solution path over the a attributes and m methods, labelled 1 to a and a + 1 to a + m respectively, is constructed and then divided into c segments by adding c − 1 cut markers, labelled a + m + 1 to a + m + c − 1; each segment represents a class in the candidate solution. In the solution, we ignore the ordering within a segment. For example, given a set of elements comprising 4 attributes, 4 methods and 3 classes, a design with classes {1 2 5}, {3 6 7} and {4 8} is represented by the solution {1-2-5-9-3-6-7-10-4-8}.
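The decoding from solution path to class model can be illustrated with a short sketch (Python is our choice here, not the authors’; the function name `decode_path` is hypothetical):

```python
def decode_path(path, a, m):
    """Split a solution path into classes at the end-of-class markers.

    Attributes are labelled 1..a, methods a+1..a+m, and the c-1 cut
    markers a+m+1..a+m+c-1; ordering within a segment is ignored.
    """
    classes, current = [], []
    for element in path:
        if element > a + m:          # end-of-class marker: close the segment
            classes.append(set(current))
            current = []
        else:
            current.append(element)
    classes.append(set(current))     # last segment has no trailing marker
    return classes

# The paper's example: 4 attributes (1-4), 4 methods (5-8), 3 classes
print(decode_path([1, 2, 5, 9, 3, 6, 7, 10, 4, 8], a=4, m=4))
# -> [{1, 2, 5}, {3, 6, 7}, {4, 8}]
```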

Following standard practice, we impose the constraint that each class holds at least one attribute and one method. In this way, candidate solutions appear more comprehensible and meaningful during interactive search. To allow meaningful comparison of the results with those obtained via the manually performed design or previous EA-based approaches, we also add the constraint that candidate solutions for a problem have the same number of classes as the manually produced design; however, this is not a necessary part of the approach.

3.2 Interactive ACO search engine

In this section, we provide a brief description of the ACO algorithm we use in this paper, which is inspired by MAX–MIN Ant System (Stützle and Hoos 2000); we do so using as an example the problem of finding a minimum-cost path through a set of l nodes.

The ACO algorithm maintains an l × l ‘pheromone matrix’ M that defines the probability distribution function for sampling new solutions. In each iteration, each ant is placed at a random starting node and constructs a path as follows:

• At each node i, 1 ≤ i ≤ l, the ant creates a list S of all the as-yet unvisited nodes and the pheromone values associated with the relevant links.

• The ant then selects a node j from S with the probability distribution

$$
P_{move}(ij) =
\begin{cases}
M_{ij}^{\alpha} H_{ij}^{\beta} \,/\, \sum_{r \in S} M_{ir}^{\alpha} H_{ir}^{\beta} & \text{if } j \in S \\
0 & \text{otherwise}
\end{cases}
\qquad (1)
$$

where α is a parameter controlling pheromone attractiveness, β is a parameter controlling the importance of the heuristic information, and H is a heuristic matrix. We have reported elsewhere (Simons and Smith 2013) that the benefits of adding heuristic information for non-interactive ELSD are rather complexly related to other algorithmic modifications, so here we set H_ij = 1 for all i, j.
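One construction step under Eq. (1) amounts to roulette-wheel sampling over the unvisited nodes. The sketch below is our own reading, not the authors’ code; the helper name `select_next` and the dense-matrix layout are assumptions:

```python
import random

def select_next(i, S, M, alpha=1.5, beta=0.0, H=None):
    """Roulette-wheel choice of the next node j from the unvisited set S,
    with probability proportional to M[i][j]**alpha * H[i][j]**beta
    (Eq. 1). With beta = 0 (or H_ij = 1) the heuristic term vanishes,
    matching the iACO configuration described above."""
    weights = [(M[i][j] ** alpha) * ((H[i][j] if H is not None else 1.0) ** beta)
               for j in S]
    r = random.uniform(0.0, sum(weights))
    cumulative = 0.0
    for j, w in zip(S, weights):
        cumulative += w
        if cumulative >= r:
            return j
    return S[-1]   # numerical safety net for floating-point round-off
```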


• After each ant has constructed a full solution path, its cost is measured, which for this problem will lie in the interval (0,1] (see next section). Evaporation is first applied to the pheromone matrix M:

$$
M_{ij}^{t'} = (1 - \rho) \cdot M_{ij}^{t} \qquad (2)
$$

where ρ ∈ (0, 1] is the pheromone decay coefficient. Then, if best denotes the set of edges comprising the least-cost path, with value f* for generation t, and e_ij ∈ best is taken to mean that edge e_ij is traversed in that path, the pheromone matrix M is updated at the end of each iteration according to

$$
M_{ij}^{t+1} = M_{ij}^{t'} + (1 - f^{*})^{\mu} \quad \text{if } e_{ij} \in best \qquad (3)
$$

where the parameter μ controls the pheromone update.

Key factors that distinguish MAX–MIN Ant System variants, as well as our variant, are that M is initialized to its maximum value M_max, is only updated with the information from the best ant per iteration (Eq. 3), and that its values are truncated to lie within a pre-specified range [M_min, M_max] to avoid over-saturation.
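The evaporate-deposit-truncate cycle can be sketched as follows. This is a minimal illustration, assuming our reading of the deposit term (1 − f*)^μ in Eq. (3); the function name is ours:

```python
def update_pheromones(M, best_edges, f_best, rho=0.035, mu=3.0,
                      m_min=0.5, m_max=3.5):
    """Evaporate every entry (Eq. 2), deposit on the iteration-best
    path's edges (Eq. 3), then truncate to [m_min, m_max] MAX-MIN style.
    Parameter defaults follow Table 2."""
    n = len(M)
    for i in range(n):
        for j in range(n):
            M[i][j] *= (1.0 - rho)                       # Eq. (2)
    deposit = (1.0 - f_best) ** mu                       # Eq. (3): better paths deposit more
    for i, j in best_edges:
        M[i][j] += deposit
    for i in range(n):
        for j in range(n):
            M[i][j] = min(max(M[i][j], m_min), m_max)    # MAX-MIN truncation
```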

3.3 Metrics of design quality

Three measures are calculated for each candidate design, all of which are to be minimised, and all of which lie in the interval (0,1].

The first is inspired by the coupling between objects (CBO) measure proposed by Harrison et al. (1988). The design problem is specified by a number of use cases, from which solution attributes and methods are derived. The CBO cost is defined as the number of times that a method from one class makes reference to (uses) the value of an attribute from another class, expressed as a proportion of the total number of uses. Drawing on the documentation of a software problem instance, and the numbering scheme outlined above, an l × l matrix U is constructed such that

$$
U_{ij} =
\begin{cases}
1, & \text{if } i \le a,\ a < j \le a + m,\ \text{and method } j \text{ uses attribute } i \\
0, & \text{otherwise}
\end{cases}
\qquad (4)
$$

Given U and the assignment of elements to classes, the CBO cost is given by

$$
f_{CBO} = \frac{\sum_{i} \sum_{j,\, class(j) \ne class(i)} U_{ij}}{\sum_{i} \sum_{j} U_{ij}} \qquad (5)
$$
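A sparse equivalent of Eq. (5), keeping only the nonzero entries of U as (method, attribute) pairs, can be sketched as follows (the function name is hypothetical):

```python
def cbo_cost(uses, classes):
    """Eq. (5): the fraction of method->attribute uses that cross class
    boundaries. `uses` lists the (method, attribute) pairs where U_ij = 1;
    `classes` is a list of element sets, as in the encoding of Sect. 3.1."""
    def class_of(element):
        return next(k for k, c in enumerate(classes) if element in c)
    external = sum(1 for meth, attr in uses
                   if class_of(meth) != class_of(attr))
    return external / len(uses)

# Design {1,2,5}{3,6,7}{4,8}: the uses (5,3) and (7,4) cross classes
print(cbo_cost([(5, 1), (5, 3), (6, 3), (7, 4)],
               [{1, 2, 5}, {3, 6, 7}, {4, 8}]))   # -> 0.5
```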

The second cost measure, numbers among classes (NAC), reflects design symmetry and has been shown to correlate well with designers’ recorded feelings of design ‘elegance’. Intended to penalise unevenly sized classes, NAC is defined using the standard deviation σ_m of the numbers of methods and the standard deviation σ_a of the numbers of attributes among the classes of the design (Simons and Parmee 2012). The values of σ_m and σ_a are truncated to the range [0,6] and the cost is given by

$$
f_{NAC} = \frac{1}{6} \left( \frac{\sigma_m}{2} + \frac{\sigma_a}{2} \right). \qquad (6)
$$

The third measure, also reflecting designer notions of ‘elegance’, is the attribute-to-method ratio (ATMR). ATMR is intended to penalise designs where the ratio of attributes to methods varies greatly between classes, and is defined as the standard deviation σ_{a/m} of that ratio among the classes of the design. Values are then truncated to the range [0,6] and the equivalent cost is given by

$$
f_{ATMR} = \frac{1}{6} \cdot \sigma_{a/m} \qquad (7)
$$
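The two elegance metrics can be computed together. The sketch below follows our reconstruction of Eqs. (6) and (7); the population standard deviation and the helper name are our assumptions:

```python
import statistics

def elegance_costs(classes, a):
    """NAC (Eq. 6) and ATMR (Eq. 7) as we read them: standard deviations
    of per-class attribute and method counts, and of the per-class
    attribute-to-method ratio, truncated to [0, 6] and scaled into [0, 1].
    Elements <= a are attributes; larger labels are methods."""
    attrs = [sum(1 for e in c if e <= a) for c in classes]
    meths = [sum(1 for e in c if e > a) for c in classes]
    sigma_a = min(statistics.pstdev(attrs), 6.0)
    sigma_m = min(statistics.pstdev(meths), 6.0)
    f_nac = (sigma_m / 2.0 + sigma_a / 2.0) / 6.0            # Eq. (6)
    ratios = [na / nm for na, nm in zip(attrs, meths)]       # >= 1 method per class
    f_atmr = min(statistics.pstdev(ratios), 6.0) / 6.0       # Eq. (7)
    return f_nac, f_atmr

# A perfectly balanced two-class design scores zero on both costs
print(elegance_costs([{1, 3}, {2, 4}], a=2))   # -> (0.0, 0.0)
```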

3.4 The adaptive surrogate model

To reduce the number of user interactions required, we use a surrogate model that combines the quantitative measures via multiple linear regression:

$$
f_{estimated} = a_0 + w_{CBO} \cdot f_{CBO} + w_{NAC} \cdot f_{NAC} + w_{ATMR} \cdot f_{ATMR} \qquad (8)
$$

where the weights a_0, w_CBO, w_NAC and w_ATMR are initialized to 0, 0.34, 0.33 and 0.33, respectively. Having been presented with a visualisation of a candidate design, the user is invited to provide an overall evaluation on a scale of 1 (poor) to 100 (ideal). The model parameters are then updated to minimise the least-squares error between predicted and actual scores over all points evaluated by the user. The use of continuous, rather than discretised, user values aids the linear regression.
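The refit of Eq. (8) is an ordinary least-squares problem. The paper does not name a solver, so NumPy’s `lstsq` stands in here; both function names are hypothetical:

```python
import numpy as np

def fit_surrogate(metric_rows, user_scores):
    """Refit Eq. (8) by least squares over all user-evaluated designs.
    metric_rows: n x 3 rows of (f_CBO, f_NAC, f_ATMR) values;
    returns [a0, w_CBO, w_NAC, w_ATMR]."""
    X = np.column_stack([np.ones(len(metric_rows)), np.asarray(metric_rows)])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(user_scores, dtype=float),
                                 rcond=None)
    return coeffs

def predict(coeffs, f_cbo, f_nac, f_atmr):
    """f_estimated for one design under the current model (Eq. 8)."""
    a0, w_cbo, w_nac, w_atmr = coeffs
    return a0 + w_cbo * f_cbo + w_nac * f_nac + w_atmr * f_atmr
```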

Previously, we have reported that in some regions of the search space the mapping between quantitative metrics and users’ judgement is highly non-linear; designs with high coupling are scored lowly, regardless of ‘elegance’ (Simons and Parmee 2012), whereas the relationship is more piecewise linear for solutions with lower coupling. To improve the likely quality of the surrogate model and alleviate the problem of wasting users’ effort, we focus attention on low-coupling solutions. We achieve this via an adaptive scheme whereby, after each interaction, we calculate the interval before the next evaluation according to

$$
Interaction\_Interval = f_{CBO}^{2} \cdot i_c \qquad (9)
$$

where, based on previous findings, the constant i_c is set to 40 (Simons and Parmee 2012).
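A worked instance of Eq. (9) shows the effect of squaring the coupling cost; rounding to a whole iteration count is our assumption, as is the function name:

```python
def interaction_interval(f_cbo, ic=40):
    """Eq. (9): iterations until the next user evaluation. Squaring the
    coupling cost means low-coupling colonies are shown to the user far
    more often. (Rounding to at least one iteration is our choice.)"""
    return max(1, round(f_cbo ** 2 * ic))

print(interaction_interval(0.2), interaction_interval(0.9))   # -> 2 32
```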

3.5 Presentation of candidate solutions

Candidate solutions are presented as class models based on UML. Each class is visualised as a rectangle with three compartments. Arrows between classes reveal ‘external uses’, pointing from method to attribute; arrow thickness is proportional to the number of external uses.

The top compartment in each rectangle shows the ‘cohesion’, a measure of integrity. We calculate this as the proportion of the class elements that use, or are used by, other class elements. We have previously shown that colour can play an important role in design visualisation, reducing the need for reading text (Simons et al. 2010; Simons and Parmee 2012). Here, we compare two different visual metaphors, colouring classes with high, intermediate or low cohesion in green/amber/red (‘traffic light’) or red/amber/blue (‘water tap’), respectively. Figure 1 shows an example presentation; see Simons (2014) for more.

3.6 Mechanisms supporting user interaction

As well as scoring presented solutions, designers have the opportunity to provide ‘hints’ to the iACO search engine, which have a more immediate and direct effect. One option is for the designer to right-click on a class in the GUI and select ‘freeze’. This allows the preservation of an individual class considered interesting and useful, so that it is unchanged by on-going search. Algorithmically, this is simply achieved by manipulating the pheromone table within the ACO to ‘lock in’ the sub-path corresponding to that class. Notably, this would be much harder to achieve with an EA, as it would involve complex manipulation of both the crossover and mutation operators. Conceptually, the designer is mentally ‘anchoring’, that is, fixing his thinking on some bias or partial ‘chunk’ of the solution (Buchanan and Daellenbach 1997). It is also possible for the designer to ‘unfreeze’ class(es) at any interaction. This ‘freezing’ mechanism also provides an effective way to address larger scale designs: smaller ‘chunks’ of the solution can be controlled before moving on to further design chunks.

Fig. 1 Presentation of a candidate design for the cinema booking system (CBS) design problem
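One plausible realisation of the ‘freeze’ manipulation (the paper does not give the exact matrix operation, so this sketch is an assumption, as is the function name) is to saturate the pheromone along the frozen sub-path and starve every competing transition out of those nodes:

```python
def freeze_class(M, class_subpath, m_min=0.5, m_max=3.5):
    """'Lock in' a frozen class: maximise pheromone on its internal
    sub-path and minimise all alternative transitions from those nodes,
    so ants almost always reconstruct the same grouping while the rest
    of the solution remains free to change. (Illustrative only; one
    plausible reading of the mechanism described in the text.)"""
    n = len(M)
    for i, j in zip(class_subpath, class_subpath[1:]):
        for r in range(n):
            M[i][r] = m_min      # starve alternative moves out of node i
        M[i][j] = m_max          # maximise the frozen transition i -> j
```

Unfreezing could then simply restore the affected rows to a neutral value, leaving evaporation and deposit to re-adapt them.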

Another support mechanism is the ability to place interesting and useful candidate designs into an archive as iACO search progresses. This enables the recall and comparison of interesting designs.

A flow chart of the iACO algorithm is shown in Fig. 2.

4 Experimental methodology

In this section, we describe the problem instances used in our experiments, the algorithm parameters used, and our methodology for the empirical investigation.

4.1 Software design problems

Fig. 2 Flow chart of the proposed dynamic multi-objective iACO search. Sequential activities are shown in rectangles with solid lines; optional activities are shown with dashed lines. Nodes: initialize weights; construct solutions (Eq. (1)); calculate f_CBO, f_NAC, f_ATMR; update pheromones (Eqs. (2), (3)); if an interactive evaluation is due, select a design solution path, present the design visualization and let the designer evaluate it, with options to freeze class(es), unfreeze class(es) and archive the design, then calculate the regression coefficients and adjust the weights; increment the iteration counter and repeat until the designer terminates.

When undertaking experimental comparisons of meta-heuristics, it is generally preferable to use either randomised problem-instance generators or a suite of well-known problem benchmarks to facilitate comparison with previous and future results (see e.g. Eiben and Smith 2003, pp. 252–258). Unfortunately, we are not aware of the existence of any recognised benchmark software design problems, either in the SBSE research literature or from industrial practice. For ELSD, a randomised problem generator would create issues of semantics and understanding for the designer. Therefore, we have selected three real-world software design problems of differing scale which have been used previously. Specifications for all three are available from Simons (2014).

The first is a generalised abstraction of a cinema booking system (CBS), which addresses, for example, making an advance booking for a showing of a film at a cinema, and payment for tickets. The second problem is an extension to a student administration system, created to record outcomes relating to its graduate development program (GDP). The extension was designed, implemented and deployed at the authors’ university. The third problem is based on an industrial case study: select cruises (SC), an automated system for a company selling nautical adventure holidays that handles quotation requests, cruise reservations, payment and confirmation via paper letter mailing. The manually performed designs for CBS and GDP have 5 classes each, and that for SC has 16. Table 1 shows the number of classes, attributes, methods and uses for each design problem, together with the values of the different metrics for the manual design.


Table 1 Software design problems and metric values for the corresponding manual designs

Problem   Classes   Attributes   Methods   Uses   f_CBO   f_NAC   f_ATMR
CBS       5         16           15        39     0.154   0.821   0.199
GDP       5         43           12        121    0.297   2.592   2.617
SC        16        52           30        126    0.452   1.520   1.848

Table 2 Algorithm parameter values

Parameter   Description                                     Value
N           Number of ants in colony                        100
α           Attractiveness parameter of pheromone trails    1.5
μ           Update parameter of pheromone trails            3.0
ρ           Pheromone decay coefficient                     0.035
M_max       Maximum matrix pheromone value                  3.5
M_min       Minimum matrix pheromone value                  0.5

4.2 Algorithm parameters

The values in Table 2 for the parameters N, α, μ and ρ are derived from the performances reported in Simons and Smith (2013). The upper and lower limits, M_max and M_min, are based on the values recommended in Stützle and Hoos (2000), and were confirmed by preliminary experiments using a fixed weighting of the three cost functions.

4.3 Empirical methodology

Eleven software development professionals with experience of ELSD were invited to participate in trials. The total relevant experience of the participants amounts to 228 years in both academia and industrial practice. Participants 4 and 9 are authors of this paper. To start each session, the iACO approach is explained and use of the tool is illustrated using a dummy design problem. Each of the three problems is described, and then a schedule of up to five interactive design episodes starts. Formulated to minimise interaction fatigue, the schedule begins with two sessions using the CBS problem, continues with two using GDP, and finishes with one using SC. Within this schedule of design problems, the effects of the colour metaphor and of the ‘freeze’ and ‘archive’ capabilities are varied across participant episodes to create the evidence needed to permit valid statistical comparison. Each episode then proceeds until either the participant decides to halt or the maximum time of one hour for the participant session is reached. Details of the schedules, the ethics process and the participants’ backgrounds are available at Simons (2014).

After each ACO iteration, a record is stored containing enough detail to fully identify the specific episode, along with the current weights and the lowest values of f_CBO, f_NAC and f_ATMR achieved by the colony. After each designer interaction, all details, such as the value of the evaluation, the updated weights, which classes were frozen or unfrozen, and whether the design was archived, are recorded. At the end of each session, participants are invited to complete a questionnaire on their overall experience, with prompts for any satisfying aspects, any aspects that generated user fatigue, and any suggestions for enhancements.


5 Results

This section begins by reporting results for quantitative measurements of designer engagement and solution cost values obtained, both during and at the end of interactive episodes. Next, to show the overall effectiveness of iACO, results relating to computational learning and the human experience are presented. Lastly, a broad comparison of iACO performance with the previous results achieved with interactive EAs is made. All experimental data are available at Simons (2014).

5.1 Number of interactions

Table 3 shows the number of interactions during design episodes for each participant and problem, with summary statistics. Episodes missed due to time constraints are shown as '–'. Numbers for CBS and GDP are higher than for SC because, according to the experimental schedule, most participants undertook two design episodes for these design problems. The numbers of interactions for each design problem episode have been examined. Analysis using the non-parametric Wilcoxon test shows that the differences between CBS and SC (p = .027) and between GDP and SC (p = .028) are statistically significant, but the sample differences between CBS and GDP do not achieve statistical significance. The statistically significant differences may have arisen because the number of classes in candidate design solutions for CBS and GDP is 5 in both cases, compared with 16 for SC. It is possible that the higher cognitive load required for the SC problem results in premature termination, leading to the above differences. Another possible cause is that SC was always the last problem instance to be investigated, when participants were becoming more familiar with the tool.
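As an illustration, the CBS-versus-SC comparison can be reproduced from the Table 3 data with SciPy. This is a sketch only, assuming the test is applied to the six participants who completed episodes on both problems (participants 1, 3, 4, 5, 6 and 10); the paper does not state which software was used, and SciPy may choose an exact or normal-approximation p value depending on version and ties, so the figure may differ slightly from the reported p = .027.

```python
# Wilcoxon signed-rank test on paired interaction counts from Table 3,
# for the six participants with episodes on both CBS and SC.
from scipy.stats import wilcoxon

cbs = [98, 47, 35, 44, 36, 30]  # interactions on CBS
sc = [12, 13, 8, 17, 10, 12]    # interactions on SC (same participants)

stat, p = wilcoxon(cbs, sc)
print(stat, p)  # statistic is 0: every participant interacted more on CBS
```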

Table 3 Number of interactions for each participant

Participant      CBS      GDP      SC     Total
1                 98      149      12       259
2                 36       30       –        66
3                 47       29      13        89
4                 35       13       8        56
5                 44      107      17       168
6                 36       18      10        64
7                 45        –       –        45
8                 17        6       –        23
9                 27       27       –        54
10                30       32      12        74
11                64        –       –        64
Total            479      411      72       962
Mean          43.545   45.666   12.00
SD            21.786   48.610    3.033

5.2 Example designer evaluations

Figure 3 depicts a typical example of the evaluations obtained during an episode for participant 10 for the GDP problem. This example is characterised by a non-monotonic upward trajectory displaying negative serial correlation (i.e. a large number of turning points and short runs against an increasing trend). This pattern of responses may arise because a single non-dominated solution path is chosen at random from the population for presentation to the designer. This variety of candidate solutions appears both to help maintain user engagement and to provide a range of values for the learning of metric weights. The behaviour after interaction 18 suggests that the system is finding higher-quality solutions, and after some experimentation the user has decided that these represent some upper limit—or at least an acceptable solution.

Fig. 3 Designer evaluations during an iACO design episode for participant 10 for the GDP problem (designer evaluation, 0–100, plotted against interaction number)

5.3 Example metric values

Figure 4 shows a typical example of the metric cost values observed during an episode—in this case for participant 2 with the mid-scale design problem GDP. As can be seen, despite the periodic changes in weights following user evaluations, the underlying ACO-based search efficiently locates solutions that minimise the three cost measures. Cost values for fCBO fall below that of the manual design at iteration 125. Cost values for fNAC are lower than the manual design value from the start of the search and reach their minimum at iteration 40. fATMR cost values are also lower than the manual design value at the start, reach their minimum at iteration 15 and remain there until iteration 100, before rising in later iterations.

Fig. 4 Progression of fCBO, fNAC and fATMR in an iACO episode for participant 2 for the GDP problem (fitness values plotted against iteration number). For comparison, the cost values for the manual design are 0.297 (fCBO), 2.592 (fNAC) and 2.617 (fATMR)

5.4 Variation in cost values at end of episodes

Table 4 shows summary statistics for the best values obtained for the three cost metrics at the last interaction of episodes. In Table 4, 'N' indicates the number of participant episodes. The 'Best' row shows the single best value achieved in all episodes for each design problem, while the 'Mean' row shows the mean of all best values at the end of episodes, with standard deviation in parentheses. Metric values for the manually produced designs are shown in the 'Manual' rows for comparison, and an asterisk (*) indicates that a metric value achieved (either single best or mean best) using iACO is better than that of the manually produced design. The single-sample t-test has been used to compare the sample means against the values for the manually produced solution. For brevity, p values are only shown where differences are significant at the alpha = 0.05 level.

Table 4 Best, mean and manual values for fCBO, fNAC and fATMR at end of episodes for the CBS, GDP and SC design problems (* marks values better than the manually produced design)

               CBS (N = 22)      GDP (N = 17)      SC (N = 6)
fCBO
  Best          0.175             0.234*            0.562
  Mean          0.265 (0.045)     0.298 (0.062)     0.602 (0.029)
  Manual        0.154             0.297             0.452
  t-test        p < .001                            p < .001
fNAC
  Best          0.200*            0.490*            1.038*
  Mean          1.599 (1.291)     1.902 (1.966)*    1.292 (0.169)*
  Manual        0.821             2.592             1.520
  t-test                                            p = .022
fATMR
  Best          0.036*            0.249*            0.406*
  Mean          0.045 (0.040)*    0.679 (0.333)*    0.602 (0.110)*
  Manual        0.199             2.617             1.848
  t-test        p < .001          p < .001          p < .001

Analysing each metric measure in turn, we see that

(i) for fCBO, mean values for CBS and SC are a little worse than the values for the manually produced design, and these differences are statistically significant;

(ii) for fNAC, the best value achieved is better than the manual design value for all design problems, and the mean values are also better for GDP and SC, the difference being statistically significant for the SC problem;


(iii) for the fATMR metric, all best and mean values are better for all design problems, and the differences are statistically significant.

Overall, the results show that participants using iACO choose to create more elegant designs (with lower fNAC and fATMR values) than were achieved without the tool, at the expense of slightly increased coupling (fCBO). This is consistent with the hypothesis that elegance plays an integral role in software design.
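The single-sample t-tests behind Table 4 can be reconstructed from the summary statistics alone. The sketch below uses the fATMR figures for CBS (mean 0.045, SD 0.040, N = 22, manual value 0.199) and the standard one-sample t statistic; it is an illustration of the calculation, not the authors' actual analysis script.

```python
# One-sample t-test reconstructed from summary statistics (no raw data needed):
# t = (sample mean - reference value) / (SD / sqrt(N)).
import math
from scipy.stats import t as t_dist

mean, sd, n = 0.045, 0.040, 22   # mean best f_ATMR at end of CBS episodes (Table 4)
manual = 0.199                   # f_ATMR of the manually produced CBS design

t_stat = (mean - manual) / (sd / math.sqrt(n))
p = 2 * t_dist.sf(abs(t_stat), df=n - 1)
print(round(t_stat, 2), p)  # large negative t; p well below .001, as reported
```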

5.5 Effect of designer hints

To examine the effect of freezing and colour scheme, we conducted a 2 × 2 mixed analysis of variance with freezing (on, off) as a two-level between-subjects variable and colour scheme ('traffic lights', 'water tap') as a two-level repeated-measures factor, with outcomes fCBO, fATMR and fNAC at the last designer interaction. Due to sample size limitations, we restrict the analysis to the CBS design problem (N = 22). Results show that although there is significant variation in the final values of fCBO and fNAC, only three values are observed for fATMR, i.e. 0.036 (seen 20 times), 0.224 and 0.044 (once each), which explains the low standard deviation reported in Table 4. This suggests that fATMR is a less sensitive measure in the multi-objective evaluation performed by participants in this investigation; possible causes and consequences are discussed in the following sections.

For both fCBO and fNAC, the analysis reveals no statistically significant differences between results obtained with freezing on and freezing off, or between the colour schemes used. It was, however, observed that while some participants made heavy use of the freeze capability, others did not, despite being aware of its presence. Results of the participant questionnaire are reported in Sect. 5.7.

5.6 Learning of metric weights

Mean final values of the weights wCBO, wNAC and wATMR learned by the iACO environment are shown in Table 5. This reveals the overall balance obtained between the learned weights and the impact of scale. Firstly, wCBO emerges as the highest learned weight for all three problems. Secondly, wNAC is similarly small across all scales of design problem. Thirdly, wCBO increases while wATMR decreases with scale, confirming that the users' balance of judgements is problem dependent. We speculate that as the cognitive load of the design problem increases, the iACO environment learns that participants place less emphasis on design elegance and rely more on the quantitative measure of coupling between objects (CBO)—which has a strong visual manifestation as a dense network of black arrows in the presentation of a highly coupled design.

Table 5 Mean weight values (standard deviation) for CBO, NAC and ATMR at end of episodes

Design problem     wCBO            wNAC            wATMR
CBS (N = 22)       0.588 (0.208)   0.097 (0.058)   0.314 (0.233)
GDP (N = 17)       0.742 (0.251)   0.075 (0.062)   0.182 (0.227)
SC (N = 6)         0.817 (0.073)   0.096 (0.073)   0.086 (0.063)
Total (N = 45)     0.677 (0.229)   0.088 (0.061)   0.233 (0.233)


5.7 Human experience

Ten of the eleven participants responded to the questionnaire, which invited them to comment generally and, in particular, on how compelling and effective they found their interactive iACO experience (see Simons (2014) for transcripts). On a scale of 1 ('Not at all compelling') to 5 ('Very compelling'), five participants rated the interactive design experience at 5, and five at 4. We applied 95 % confidence intervals for proportions (using the Clopper–Pearson intervals) and found this to be a statistically significant positive rating (p = .002).

Asked to rate how effective they found the tool in achieving useful and relevant software designs, three participants rated the effectiveness at 5, four at 4 and three at 3. Although seven ratings are positive and three are neutral, 95 % confidence intervals for proportions did not show statistical significance at this sample size. We conjecture that this is consistent with the participants' perception of the findings in the previous section. It seems possible that although the iACO environment achieves design solutions of better fitness, the lack of sensitivity of the ATMR metric might be implicitly perceived as constraining the effectiveness of interactive search.
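For the rating analysis, the reported p = .002 is consistent with an exact two-sided binomial test of 10 positive ratings out of 10 against a null proportion of 0.5; the null proportion is an assumption on our part, since the paper states only that Clopper–Pearson intervals were used. A sketch with SciPy:

```python
# Exact binomial tests on the questionnaire ratings, treating a rating of
# 4 or 5 as a 'positive' response. binomtest also exposes the
# Clopper-Pearson ('exact') confidence interval for the proportion.
from scipy.stats import binomtest

compelling = binomtest(10, n=10, p=0.5)   # 10 of 10 positive ratings
print(compelling.pvalue)                  # 2/1024 ~ 0.00195, i.e. p = .002
print(compelling.proportion_ci(confidence_level=0.95, method='exact'))

effective = binomtest(7, n=10, p=0.5)     # 7 of 10 positive ratings
print(effective.pvalue)                   # ~0.34, not significant, as reported
```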

When asked to comment on their preferred colour scheme, 7 out of 10 participants stated a preference for 'traffic lights', and 3 for the 'water tap' metaphor. This finding shows the importance of allowing users some choice when creating interactive experiences. Note, however, that as seen above, the choice of metaphor is not reflected in statistically different performance.

Many of the participants' 'free text' comments about the iACO experience were positive, e.g. 'the tool looks good and works well' and 'the tool did seem to help quickly arrive at an optimal class design'. Other participants commented on the effectiveness of the design visualisation, e.g. 'the visibility of the cohesion and coupling' and the use of a colour scheme that 'speeded up the decision process'. When asked for suggestions for improving the iACO experience, participants suggested even more interactivity, such as a visual indication of a frozen class; the ability to backtrack along the history of the episode and restart the search from particular design variants; and the capability to 'drag and drop' elements between classes to give hints or suggestions to the iACO environment.

5.8 Comparison with previous interactive evolutionary algorithm

Before comparing the results obtained with iACO with those reported by Simons and Parmee (2012) using an interactive EA (IEA), it should be borne in mind that there are some methodological differences between the two experiments. For example, in the IEA experiments, five fitness measures are used, and the linear regression technique used to create the surrogate model is less sophisticated than the least-squares approach in iACO. In the IEA experiments, designer evaluation is performed using a 'one star' to 'five star' rating rather than the 0 to 100 rating in this paper. There are also differences in the participant cohorts (7 for IEA versus 11 for iACO). All these factors limit the validity of any comparison of IEA against iACO.

Nevertheless, in both cases, the users interacted with a black-box tool and were asked to carry on each session until they were satisfied with the results. They were given no guidance on how, if at all, the scale of their rankings would affect the algorithm. Internally, both approaches used the users' input primarily to re-calibrate the surrogate fitness models, which were then used to update the pheromone matrix (iACO) or to influence selection (IEA) governing the creation of future candidate designs. We can summarise the comparative results as follows.

Designers choose to interact with iACO for longer than with the IEA: the number of interactions per episode is higher for iACO. Insufficient data exist for a valid comparison on the SC problem, but the Mann–Whitney independent samples test shows that the number of interactions is significantly greater with iACO for both the CBS (p < .002) and the GDP (p = .039) problems.

Examining the cost measures for the designs at the end of episodes reveals a more mixed picture. The same analytical technique shows that fCBO for GDP is significantly lower for iACO compared to IEA (p = .026). Meanwhile, for CBS, fNAC is significantly worse for iACO (p = .045), while fATMR is significantly better (p ≤ .001). No other comparisons are statistically significant. These findings are consistent with both meta-heuristics being effective and efficient, but taken with our analysis (see e.g. Fig. 3) they suggest that iACO encourages a greater exploration of a range of high-quality solutions with different characteristics, and hence a greater understanding of the problem.

6 Threats to validity

The principal threat to the validity of the conclusions drawn lies in the relatively small scale of the investigation, specifically that we have restricted ourselves to studying three design problem instances. This has been done partly to enable direct comparison with the previous results, but primarily because of the lack of recognised benchmark examples in the field, as discussed earlier. Nevertheless, the three real-life examples chosen do demonstrate a range of scale and complexity, which gives us some confidence in the generality of our results.

With respect to internal validity, the iACO design experience is highly dependent on the design context, and so every attempt has been made to make that context consistent for all participants. To counteract the Hawthorne effect of perceived social treatment (Salkind 2010, p. 561), the experimenter stayed out of the participant's field of vision, and participants were told that the halting of interactive design episodes was entirely at their discretion and that there was no expectation about the particular designs created. The learning effect threatens validity in the sense that participant capability improves during the episodes through learning by repetition. To counter this, the experimental setup includes an initial period of familiarisation with a dummy design problem, so that knowledge of how to use the iACO environment is instilled before proceeding with the three design problems. The threat posed by fatigue is mitigated by ensuring that design episodes are halted after one hour. This is important in the light of the low interaction scores and the cognitive load of evaluating designs for the SC problem.

With respect to external validity, the outcomes of the investigations depend on the number and experience of the participants being representative of some segment of the software design community. While the 9 participants (other than the authors) are acquainted with the authors, they were not aware of this research prior to the study, and had little or no prior experience of interactive search. While a greater number of participants would have lent greater robustness to the statistical analysis of the study, the experience of all trial participants suggests a level of credibility for their evaluations of the candidate designs. However, the balance towards elegance can only be taken to be typical of more experienced designers.

7 Conclusions

Based on the quantitative results and participant feedback, we conclude that ACO is effective as an engine for interactive search in early lifecycle software design. Indeed, with speedy discovery of useful candidate designs, study participants rate the experience as compelling, and this is reflected in continued exploration once good solutions are found. While the influence of colour scheme and designer 'hints' such as freezing has proved statistically inconclusive, the sample size is relatively small, and great variation in participant behaviour during interaction is evident. Nevertheless, study participants have provided positive ratings and comments for both 'hint' capabilities.

While methodological concerns make it necessary to treat the results with a certain amount of caution, a comparison with similar results obtained using an IEA shows that although the effectiveness of the two meta-heuristics is broadly comparable, the number of designer interactions per episode is significantly higher with iACO than with the IEA. This suggests greater participant engagement with iACO search, potentially leading to an increased understanding of software solutions.

Learned weightings in surrogate fitness models indicate that elegance does indeed play a significant role in the evaluation of candidate designs. Detailed analysis shows that the elegance measure of the ratio of attributes to methods (fATMR) is less influential than fNAC in multi-objective search. It would appear that the evenness of distribution of attributes and methods among classes (fNAC) is the more significant measure of elegance, which in turn suggests that this evenness of distribution, when combined with structural integrity, is an implicit but important component of effective early lifecycle software design. We conclude that iACO holds considerable promise for interactive exploration of this, and other, design spaces.

Acknowledgments The authors would very much like to thank the editorial staff and the anonymous reviewers for their professional and constructive comments.

References

Albakour, M.-D., Kruschwitz, U., Nanas, N., Song, D., Fasli, M., & De Roeck, A. (2011). Exploring ant colony optimisation for adaptive interactive search. In Proceedings of Advances in Information Retrieval Theory. Lecture Notes in Computer Science (Vol. 6931, pp. 213–224). Heidelberg: Springer.

Avigad, G., Moshaiov, A., & Brauner, N. (2005). Interactive concept-based search using MOEA: The hierarchical preference case. International Journal of Computational Intelligence, 2(3), 182–191.

Belton, V., Branke, J., Eskelinen, P., Greco, S., Molina, J., Ruiz, F., et al. (2008). Interactive multiobjective optimization from a learning perspective. In J. Branke, K. Deb, K. Miettinen, & R. Słowiński (Eds.), Multiobjective optimization: Interactive and evolutionary approaches (pp. 405–433). Heidelberg: Springer.

Boudjeloud, L., & Poulet, F. (2005). Visual interactive evolutionary algorithm for high dimensional data clustering and outlier detection. In 9th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining (pp. 428–43). Heidelberg: Springer.

Bowman, M., Briand, L. C., & Labiche, Y. (2010). Solving the class responsibility assignment problem in object-oriented analysis with multi-objective genetic algorithms. IEEE Transactions on Software Engineering, 36(6), 817–837.

Brintrup, A., Ramsden, J., Takagi, H., & Tiwari, A. (2008). Ergonomic chair design by fusing qualitative and quantitative criteria using interactive genetic algorithms. IEEE Transactions on Evolutionary Computation, 12(3), 343–354.

Brooks, F. P., Jr. (1987). No silver bullet: Essence and accidents of software engineering. Computer, 20(4), 10–19.

Buchanan, J. T., & Daellenbach, H. G. (1997). The effects of anchoring in interactive MCDM solution methods. Computers and Operations Research, 24(10), 907–918.

Caleb-Solly, P., & Smith, J. E. (2007). Adaptive surface inspection via interactive evolution. Image and Vision Computing, 25(7), 1058–1072.

Cockburn, A. (2002). Agile software development. Boston: Addison-Wesley.

Deb, K. (2012). Advances in evolutionary multi-objective optimization. In Proceedings of the 4th International Symposium on Search-Based Software Engineering. LNCS (Vol. 7515, pp. 1–26). Heidelberg: Springer.

Dorigo, M., & Stützle, T. (2001). An experimental study of the simple ant colony optimization algorithm. In N. Mastorakis (Ed.), Advances in fuzzy systems and evolutionary computation (pp. 253–258). Dallas, TX: World Scientific and Engineering Society Press.

Dorigo, M., & Stützle, T. (2004). Ant colony optimization. Cambridge: MIT Press.

Eiben, A. E., & Smith, J. E. (2003). Introduction to evolutionary computing. Heidelberg: Springer.

Harrison, R., Counsell, S., & Nithi, R. (1998). An investigation into the applicability and validity of object-oriented design metrics. Empirical Software Engineering, 3(3), 255–273.

Harman, M. (2011). Software engineering meets evolutionary computation. Computer, 44(10), 31–39.

Harman, M., & Jones, B. J. (2001). Search-based software engineering. Information and Software Technology, 43(14), 833–839.

Jaszkiewicz, A., & Branke, J. (2008). Interactive multiobjective evolutionary algorithms. In J. Branke (Ed.), Multiobjective optimisation: Interactive and evolutionary approaches. LNCS (pp. 179–193). Heidelberg: Springer.

Jin, Y., & Branke, J. (2005). Evolutionary optimization in uncertain environments—a survey. IEEE Transactions on Evolutionary Computation, 9(3), 303–317.

Kubota, N., Nojima, Y., Kojima, F., & Fukuda, T. (2006). Multiple fuzzy state-value functions for human evaluation through interactive trajectory planning of a partner robot. Soft Computing, 10(10), 891–901.

Lee, J.-Y., & Cho, S.-B. (1998). Interactive genetic algorithm with wavelet coefficients for emotional image retrieval. In 5th International Conference on Soft Computing and Information/Intelligent Systems (Vol. 2, pp. 829–832). Singapore: World Scientific.

Martin, R. C. (2003). Agile software development: Principles, patterns and practices. Upper Saddle River, NJ: Prentice-Hall.

Maiden, N. (2011). Requirements and aesthetics. IEEE Software, 28(3), 20–21.

McMinn, P. (2004). Search-based software test data generation: A survey. Software Testing, Verification and Reliability, 14(2), 105–156.

Miettinen, K. M. (1998). Nonlinear multiobjective optimization. Norwell, MA: Kluwer.

Miller, G. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81–97.

Object Management Group. (2013). Unified modelling language resource page. Retrieved August 28, 2013, from http://www.uml.org/.

Ohsaki, M., Takagi, H., & Ohya, K. (1998). An input method using discrete fitness values for interactive genetic algorithms. Journal of Intelligent and Fuzzy Systems, 6(1), 131–145.

O'Keeffe, M., & Ó Cinnéide, M. (2008). Search-based refactoring for software maintenance. Journal of Systems and Software, 81(4), 502–516.

Pauplin, O., Caleb-Solly, P., & Smith, J. E. (2010). User-centric image segmentation using an interactive parameter adaptation tool. Pattern Recognition, 43(2), 519–529.

Ren, J., Harman, M., & Di Penta, M. (2011). Cooperative co-evolutionary optimisation of software project assignments and job scheduling. In 3rd International Symposium on Search-Based Software Engineering (SSBSE 2011). LNCS (Vol. 6956, pp. 127–141). Heidelberg: Springer.

Salkind, N. J. (2010). Encyclopaedia of research design (Vol. 2). Thousand Oaks: Sage Publications.

Simons, C. L. (2014). Use case specifications and related study information. Retrieved April 14, 2014, from http://www.cems.uwe.ac.uk/clsimons/iACO.

Simons, C. L., & Parmee, I. C. (2009). An empirical investigation of search-based computational support for conceptual software engineering design. In 2009 IEEE International Conference on Systems, Man, and Cybernetics (SMC '09) (pp. 2577–2582). Piscataway: IEEE Press.

Simons, C. L., & Parmee, I. C. (2012). Elegant, object-oriented software design via interactive evolutionary computation. IEEE Transactions on Systems, Man, and Cybernetics: Part C, 42(6), 1797–1805.

Simons, C. L., Parmee, I. C., & Gwynllyw, R. (2010). Interactive, evolutionary search in upstream object-oriented software design. IEEE Transactions on Software Engineering, 36(6), 798–816.

Simons, C. L., & Smith, J. E. (2012). A comparison of evolutionary algorithms and ant colony optimisation for interactive software design. In Fast Abstract Collection of the 4th Symposium on Search-Based Software Engineering (SSBSE 2012) (pp. 37–42). Italy: FBK Press.

Simons, C. L., & Smith, J. E. (2013). A comparison of meta-heuristic search for interactive software design. Soft Computing, 17, 2147–2162.

Smith, J. E., & Fogarty, T. C. (1996). Evolving software test data—GAs learn self-expression. In T. C. Fogarty (Ed.), Evolutionary Computing (pp. 137–146). Heidelberg: Springer.

Stützle, T., & Hoos, H. (2000). MAX–MIN ant system. Future Generation Computer Systems, 16(8), 889–914.

Takagi, H. (2001). Interactive evolutionary computation: Fusion of the capabilities of EC optimization and human evaluation. Proceedings of the IEEE, 89(9), 1275–1298.

Ugur, A., & Aydin, D. (2009). Interactive simulation and analysis software for solving TSP using ant colony optimization algorithms. Advances in Engineering Software, 40(5), 341–348.

Weimer, W., Forrest, S., Le Goues, C., & Nguyen, T. (2010). Automatic program repair with evolutionary computing. Communications of the ACM, 53(5), 109–116.

Xanthakis, S., Ellis, C., Skourlas, C., Le Gall, A., Katsikas, S., & Karapoulios, K. (1992). Application of genetic algorithms to software testing. In 5th IASTED International Conference on Software Engineering and Applications (pp. 625–636). Innsbruck: ACTA Press.

Xing, L.-N., Chen, Y.-W., & Yang, K.-W. (2007). Interactive fuzzy multi-objective ant colony optimisation with linguistically quantified decision functions for flexible job shop scheduling problems. In Frontiers in the Convergence of Bioscience and Information Technologies (FBIT 2007) (pp. 801–806). Piscataway: IEEE Press.

Zhang, Y. (2014). Repository of publications on search-based software engineering. Retrieved April 15, 2014, from http://crestweb.cs.ucl.ac.uk/resources/sbse_repository/.