
Comparative analysis of classical multi-objective evolutionary algorithms and seeding strategies for pairwise testing of Software Product Lines.

Jan 18, 2017


Transcript
Page 1: Comparative analysis of classical multi-objective evolutionary algorithms and seeding strategies for pairwise testing of Software Product Lines.


Comparative Analysis of Classical Multi-Objective Evolutionary Algorithms
and Seeding Strategies for Pairwise Testing of Software Product Lines

Roberto E. Lopez-Herrejon*, Javier Ferrer**, Francisco Chicano**,
Alexander Egyed*, Enrique Alba**

* Johannes Kepler University Linz, Austria
** University of Malaga, Spain

Page 2

Introduction

Software Product Lines (SPLs): families of software products
• Each product has a different combination of features

SPLs have multiple economic and technological advantages
• Increased software reuse, faster time to market, better customization

Challenge: how to test a Software Product Line effectively?
Important factors to consider:
• Typical SPLs have a large number of different software products
• Avoiding repeated tests
• Staying within economic and technical constraints

Page 3

Pairwise Testing of SPLs

Existing work (Wang 2013, Henard 2013)
• Uses a linearization approach where each optimization objective is given a weight and the weighted objectives are then added
• Optimization objectives: coverage and test suite size

Our proposal
• Formalization of the SPL pairwise testing problem for multi-objective algorithms
• Study of 4 classical MOEAs for pairwise testing of SPLs
• Analysis of the impact of three seeding strategies
• Evaluation using a large and diverse corpus

Page 4

Combinatorial Interaction Testing (CIT) for SPLs

Combinatorial Interaction Testing (CIT)
• Select a test suite: a group of products where faults are more likely to occur

Based on feature models
• The de facto standard to model all the products (feature combinations) of a product line

Pairwise testing – combinations of two features
• 4 options: both selected, neither selected, one selected but not the other, and vice versa
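The four options per feature pair can be enumerated mechanically. A minimal Python sketch (illustrative only; the feature names and function name are made up, not from the paper):

```python
from itertools import combinations, product

def pair_options(features):
    """Yield, for every pair of distinct features, the four selection
    combinations that pairwise testing must cover: both selected,
    neither selected, and the two mixed cases."""
    for f1, f2 in combinations(features, 2):
        for v1, v2 in product([True, False], repeat=2):
            yield (f1, v1), (f2, v2)

# For n features there are C(n, 2) pairs and 4 options per pair.
opts = list(pair_options(["Directed", "Undirected", "Weight"]))
print(len(opts))  # 3 pairs x 4 options = 12
```

Note that not all of these combinations are achievable in practice: the feature model's constraints rule some of them out, which is what the notion of a valid pair (next slides) captures.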

Page 5

Feature Model Example: Graph Product Line (GPL)

[Feature diagram of GPL. Features: GPL (root), Driver, Benchmark, GraphType, Directed, Undirected, Weight, Search, DFS, BFS, and the Algorithms group with Num, CC, SCC, Cycle, Prim, Kruskal, Shortest. Legend: Mandatory, Optional, Exclusive-or, Inclusive-or, Root.]

Cross-Tree Constraints (CTC):
• Num requires Search
• SCC requires DFS
• CC requires Undirected
• CC requires Search
• Cycle requires DFS
• Kruskal requires Undirected
• Kruskal requires Weight
• Kruskal excludes Prim
• Prim requires Undirected
• Prim requires Weight
• SCC requires Directed
• Shortest requires Directed
• Shortest requires Weight

Page 6

Valid Feature Sets, Pairs & Examples

A valid feature set is a combination of features that meets all the constraints of the feature model.
A valid pair is a combination of two features that meets all the constraints of the feature model.
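A validity check of this kind can be sketched for the simple requires/excludes constraints shown on the GPL slide. This is only a partial check with made-up names: a real feature model also imposes tree constraints (mandatory features, exclusive-or groups), typically handled by a SAT or CSP solver.

```python
def is_valid_feature_set(selected, requires, excludes):
    """Check a feature selection against cross-tree constraints.
    `requires`: iterable of (a, b) meaning "a requires b".
    `excludes`: iterable of (a, b) meaning "a excludes b"."""
    for a, b in requires:
        if a in selected and b not in selected:
            return False
    for a, b in excludes:
        if a in selected and b in selected:
            return False
    return True

# A few of the GPL constraints from the feature model slide.
requires = [("Kruskal", "Undirected"), ("Kruskal", "Weight")]
excludes = [("Kruskal", "Prim")]

print(is_valid_feature_set({"Kruskal", "Undirected", "Weight"}, requires, excludes))  # True
print(is_valid_feature_set({"Kruskal", "Prim", "Undirected", "Weight"}, requires, excludes))  # False
```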

Page 7

Pairwise Test Suite

A pairwise test suite is a set of valid feature sets that covers all possible valid pairs.

GPL example: 73 feature sets, 418 pairs

[Slide shows a pairwise test suite for GPL]
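Measuring how many pair options a candidate suite covers is the core of the coverage objective. A minimal sketch (names are illustrative; it counts raw pair options and ignores which pairs are invalid under the feature model):

```python
from itertools import combinations

def covered_pairs(suite, features):
    """Collect every feature-pair option covered by a test suite,
    where each product is represented as the set of its selected
    features."""
    covered = set()
    for prod in suite:
        for f1, f2 in combinations(features, 2):
            covered.add(((f1, f1 in prod), (f2, f2 in prod)))
    return covered

features = ["A", "B", "C"]
suite = [{"A", "B"}, {"C"}, {"A", "C"}, set()]
# 10 of the 12 possible pair options are covered by this suite.
print(len(covered_pairs(suite, features)))  # 10
```

Dividing this count by the number of valid pairs (418 for GPL) gives the coverage percentage used as one of the optimization objectives.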

Page 8

MOO from the Software Engineer’s Perspective

[Figure: Pareto front for the GPL example. X-axis: Number of Products; Y-axis: Coverage Percentage]

Page 9

Our work in a nutshell

Uses classical MOO algorithms
• NSGA-II – crowding distance and ranking
• MOCell – cellular GA, based on neighbourhood
• SPEA2 – population and archive
• PAES – evolution strategy

Uses standard MOO comparison metrics
• Hypervolume
• Generational distance

Analyses the impact of seeding
• Three distinct strategies
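For a two-objective front, the hypervolume indicator has a simple closed form. A minimal sketch, assuming both objectives are minimized and a reference point dominated by every front point (the experiments in the paper would use a standard MOO framework implementation, not this toy code):

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a non-dominated 2-objective front (both
    objectives minimized) w.r.t. reference point `ref` = (rx, ry):
    the area dominated by the front and bounded by `ref`."""
    pts = sorted(front)        # ascending in the first objective,
    hv, prev_y = 0.0, ref[1]   # hence descending in the second
    for x, y in pts:
        hv += (ref[0] - x) * (prev_y - y)  # one horizontal slab
        prev_y = y
    return hv

# Toy front, e.g. (number of products, uncovered-pair fraction).
front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(4.0, 5.0)))  # 8.0
```

Since the coverage objective in the paper is maximized, it would first be transformed (e.g. to uncovered percentage) to fit this minimization form.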

Page 10

Analyzing the Impact of Seeding

Seeding: embedding domain knowledge into the individuals of the population

We used 3 seeding strategies for the initial population:

Size-Based Random Seeding
• Compute a pairwise test suite with CASA and use its size to generate the population

Greedy Seeding
• Greedily compute a pairwise test suite and use its elements to generate the population

Single-Objective-Based Seeding
• Create a population based on the output of the single-objective algorithm CASA
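The greedy construction behind Greedy Seeding can be sketched as a set-cover heuristic: repeatedly pick the candidate product covering the most still-uncovered pair options. This is a sketch of the idea only; the paper's exact procedure and data structures may differ, and all names here are made up.

```python
from itertools import combinations

def greedy_suite(candidates, features):
    """Greedily build a pairwise suite from candidate products
    (each a set of selected features)."""
    def pairs_of(prod):
        return {((f1, f1 in prod), (f2, f2 in prod))
                for f1, f2 in combinations(features, 2)}

    # Only pair options achievable by some candidate can be covered.
    uncovered = set().union(*(pairs_of(p) for p in candidates))
    suite = []
    while uncovered:
        best = max(candidates, key=lambda p: len(pairs_of(p) & uncovered))
        gained = pairs_of(best) & uncovered
        if not gained:
            break  # no candidate adds coverage; stop
        suite.append(best)
        uncovered -= gained
    return suite

features = ["A", "B"]
candidates = [set(), {"A"}, {"B"}, {"A", "B"}]
suite = greedy_suite(candidates, features)
print(len(suite))  # 4: each product covers one option of the pair (A, B)
```

The resulting products would then seed the initial population of the MOEA.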

Page 11

Evaluation Overview

Selection of 19 realistic case studies from different application domains
• Feature models and implementations publicly available

Feature model analysis employed standard tools: FAMA, SPLAR, SPLCA

Experimental setting
• Quality indicators employed: Hypervolume (HV) and Generational Distance (GD)
• Total independent runs: 6,840
  • 4 algorithms × 3 seeding strategies × 19 models × 30 runs = 6,840
• Standard statistical analysis: Wilcoxon test and Â12
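The Â12 statistic used here is the standard Vargha–Delaney effect size: the probability that a value drawn from one sample is larger than one from the other, with ties counting half. A direct implementation (the sample values below are made up, standing in for per-run indicator values of two algorithms):

```python
def a12(x, y):
    """Vargha-Delaney A12 effect size: P(value from x > value from y),
    with ties counted as one half. 0.5 means no difference."""
    greater = sum(1 for a in x for b in y if a > b)
    ties = sum(1 for a in x for b in y if a == b)
    return (greater + 0.5 * ties) / (len(x) * len(y))

print(a12([0.66, 0.65, 0.64], [0.64, 0.63, 0.62]))  # ~0.944
```

Values near 0.5, as in most of the algorithm comparisons on the next slide, indicate no practically meaningful difference between the two samples.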

Page 12

Results

Quality Indicators Results

Algorithm   HV      GD      TIME
NSGA-II     0.6583  0.0396   70,523
MOCell      0.6553  0.0293   74,325
SPEA2       0.6533  0.0289   71,349
PAES        0.6390  0.0351  101,246

Seeding      HV      GD      TIME
Size-Based   0.6421  0.0427  138,404
Greedy       0.6556  0.0447   76,783
Single Obj.  0.6568  0.0123   25,800

Â12 Statistic Test Results

Comparison        HV      GD      TIME
NSGA-II – SPEA2   0.5182  0.5172  0.4904
NSGA-II – MOCell  0.5112  0.5202  0.4816
NSGA-II – PAES    0.5626  0.4560  0.2839
SPEA2 – MOCell    0.4932  0.5039  0.4910
SPEA2 – PAES      0.5447  0.4205  0.3019
MOCell – PAES     0.5521  0.4194  0.3027

Comparison                HV      GD      TIME
Size-Based – Greedy       0.4568  0.4795  0.6377
Size-Based – Single Obj.  0.4558  0.8562  0.8619
Greedy – Single Obj.      0.4977  0.7839  0.8227

Page 13

Summary of Results

RQ1. What is the best algorithm among the four studied for multi-objective SPL pairwise testing?
• No clear winner among NSGA-II, MOCell, SPEA2
• PAES performs slightly worse overall

RQ2. How does seeding impact the quality of the solutions obtained by the four algorithms?
• Single-Objective-Based Seeding clearly yields better results than the other two strategies
• The more knowledge used in the initial population, the better

Page 14

Future Work

Extending the feature model corpus
• Larger and more diverse case studies

Analysis of the impact of parameter settings

Integrating other domain knowledge
• Control flow
• Structural metrics of feature models

Page 15

Acknowledgements

Questions?

Spanish Ministry of Economy and Competitiveness, FEDER
Austrian Science Fund