Effective Test Suites for Mixed Discrete-Continuous Stateflow Controllers
Reza Matinnejad, Shiva Nejati, Lionel Briand (SnT Center, University of Luxembourg)
Thomas Bruckmann (Delphi Automotive Systems, Luxembourg)
Transcript
Cyber-Physical Systems (CPSs)
Combination of computations (algorithms) and physical dynamics (differential equations)
2
[Figure: the physical world interacting with computation]
Testing (Typical) Software
3
[Figure: inputs X = 10, Y = 30 are fed to an algorithm; the expected output is Z = 20, so Z = 20 passes and Z = 10 fails]
Testing (CPS) Software
4
[Figure: input signals S1(t) and S2(t) are fed to algorithms + differential equations; the output is a signal S3(t), and pass/fail is decided by inspecting the shape of S3(t)]
Software Testing Challenges (CPS)
• Mixed discrete-continuous behavior (combination of algorithms and continuous dynamics)
• Inputs/outputs are signals (functions over time)
• Simulation is inexpensive but not yet systematically automated
• Partial test oracles
5
Our Goal
Generating effective test suites for software used in Cyber-Physical Systems
6
Simulink/Stateflow
• A data-flow-driven block diagram language
• Widely used to develop Cyber-Physical Systems
• Executable
7
Stateflow
• A Statechart dialect integrated into Simulink
• Captures the state-based behavior of CPS software
• Has mixed discrete-continuous behavior
8
Our Goal
Generating effective test suites for mixed discrete-continuous Stateflow controllers
9
Discrete Behavior
What we typically think of as software models
10
[Figure: a two-state machine with states On and Off; transitions guarded by Speed < 10 and Speed > 10]
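The discrete machine on this slide can be sketched in code. A minimal, hypothetical Python rendering (the class name and the direction of each guard are assumptions; the slide does not say which guard labels which transition):

```python
class OnOffController:
    """Two-state machine from the slide: On/Off, guarded by Speed thresholds."""

    def __init__(self):
        self.state = "Off"

    def step(self, speed):
        # Assumed guard directions: Speed < 10 switches Off -> On,
        # Speed > 10 switches On -> Off.
        if self.state == "Off" and speed < 10:
            self.state = "On"
        elif self.state == "On" and speed > 10:
            self.state = "Off"
        return self.state
```

For example, feeding speeds 5, 12, 8 drives the state through On, Off, On under these assumed guards.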
Discrete-Continuous Behavior
What software models are actually being built using Stateflow
11
[Figure: the same On/Off machine, but each state also produces a continuous output signal CtrlSig plotted over time t]
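The mixed behavior can be sketched by pairing the same discrete guards with a per-state continuous output. A hypothetical Python sketch (the guard directions and the specific CtrlSig dynamics are illustrative assumptions, not taken from the slides):

```python
import math

class MixedController:
    """The discrete state selects which continuous function drives CtrlSig."""

    def __init__(self):
        self.state = "Off"

    def step(self, t, speed):
        # Discrete part: assumed guard directions, as on the earlier slide.
        if self.state == "Off" and speed < 10:
            self.state = "On"
        elif self.state == "On" and speed > 10:
            self.state = "Off"
        # Continuous part: illustrative per-state dynamics over time t.
        if self.state == "On":
            return 1.0 - math.exp(-t)  # CtrlSig ramps up while On
        return math.exp(-t)            # CtrlSig decays while Off
```

The key point the slide makes is visible here: the output is no longer a single value but a signal whose shape depends on both the state history and time.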
Our Goal
Generating effective test suites for mixed discrete-continuous Stateflow controllers
12
Test Suite Effectiveness (1)
• Test suite size should be small because
  • Test oracles cannot be fully automated
  • Output signals need to be inspected by engineers
13
[Figure: two test cases; each feeds input signals S1(t) and S2(t) into model simulation, producing an output signal S3(t)]
Test Suite Effectiveness (2)
• Test suites should have a high fault-revealing power
  • Small deviations in outputs may not be recognized/important
  • Test inputs that drastically impact the output signal shape are likely to have a higher fault-revealing power
14
[Figure: Test Output 1 and Test Output 2, each plotting the faulty model output against the correct model output for CtrlSig over time]
Our Approach
Test Generation Algorithms
15
Test Generation Algorithms
• Input-based Test Generation:
  • Input Diversity Algorithm
• Coverage-based Test Generation:
  • State Coverage Algorithm
  • Transition Coverage Algorithm
• Output-based Test Generation:
  • Output Diversity Algorithm
  • Failure-based Algorithm
16
Input Diversity
• Maximizing distances among input signals
17
[Figure: Input Signal 1 and Input Signal 2 (S1(t) and S2(t)) for Test Case 1 and Test Case 2, chosen to have diverse shapes]
Distance Between Signals
18
[Figure: two signals plotted over time, illustrating the distance between them]
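One concrete way to realize the distance pictured here is to treat each signal as a vector of samples on a shared time grid. The normalized Euclidean metric below is an assumption; the slides only say "distance between signals":

```python
import math

def signal_distance(sig_a, sig_b):
    """Normalized Euclidean distance between two discretized signals.

    Each signal is a list of values sampled at the same fixed time steps;
    the metric choice is a plausible sketch, not necessarily the paper's.
    """
    assert len(sig_a) == len(sig_b), "signals must share the sampling grid"
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b))) / len(sig_a)
```

Dividing by the number of samples keeps distances comparable across signals discretized at different resolutions.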
Test Generation Algorithms
• Input-based Test Generation:
  • Input Diversity Algorithm
• Coverage-based Test Generation:
  • State Coverage Algorithm
  • Transition Coverage Algorithm
• Output-based Test Generation:
  • Output Diversity Algorithm
  • Failure-based Algorithm
19
Structural Coverage
• Maximizing the number of states/transitions covered
20
[Figure: a four-state machine (states 1 to 4) shown twice, illustrating State Coverage and Transition Coverage]
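Measuring what a test run covers is the core of these algorithms. A hypothetical sketch (names and the transition-table representation are assumptions) that records the states and transitions a sequence of inputs exercises:

```python
def run_coverage(step, initial, inputs):
    """Execute `step` over `inputs`; record visited states and transitions."""
    state = initial
    visited_states = {initial}
    visited_transitions = set()
    for x in inputs:
        nxt = step(state, x)
        visited_transitions.add((state, nxt))  # edge taken on this input
        visited_states.add(nxt)
        state = nxt
    return visited_states, visited_transitions

# Example with a hypothetical On/Off step function:
states, transitions = run_coverage(
    lambda s, speed: "On" if speed < 10 else "Off", "Off", [5, 12]
)
```

A coverage-based test generation algorithm would then search for input sequences that grow these sets toward the model's full state and transition sets.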
Test Generation Algorithms
• Input-based Test Generation:
  • Input Diversity Algorithm
• Coverage-based Test Generation:
  • State Coverage Algorithm
  • Transition Coverage Algorithm
• Output-based Test Generation:
  • Output Diversity Algorithm
  • Failure-based Algorithm
21
Output Diversity
• Maximizing distances among output signals
22
[Figure: the output signal S3(t) for Test Case 1 and Test Case 2, chosen to have diverse shapes]
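A hypothetical sketch of this objective at the suite level: simulate a pool of candidate test cases, then greedily keep the candidate whose output is farthest from the outputs already selected. The greedy strategy and the Euclidean metric are assumptions; the slides only state the maximization goal:

```python
import math

def euclidean(a, b):
    # Distance between two discretized output signals on a shared grid.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_diverse(outputs, k):
    """Pick k indices from `outputs` (list of discretized output signals),
    greedily maximizing the minimum pairwise distance to those chosen."""
    chosen = [0]  # seed with the first candidate
    while len(chosen) < k:
        best = max(
            (i for i in range(len(outputs)) if i not in chosen),
            key=lambda i: min(euclidean(outputs[i], outputs[j]) for j in chosen),
        )
        chosen.append(best)
    return chosen
```

With a pool like `[[0, 0], [0, 1], [5, 5]]` and `k = 2`, the sketch keeps the first and third candidates, since their output shapes differ the most.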
Failure-based Test Generation
23
• Maximizing the likelihood of presence of specific failure patterns in output signals
[Figure: two failure patterns in the CtrlSig output over time 0.0 to 2.0: Instability (the signal oscillates rapidly between -1.0 and 1.0) and Discontinuity (the signal jumps abruptly within the range 0.0 to 1.0)]
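The two patterns can be scored so a search can maximize them. The formulas below are plausible sketches of such fitness functions, not necessarily the paper's exact definitions:

```python
def instability(sig):
    """Reward rapid oscillation: total variation of the discretized signal."""
    return sum(abs(b - a) for a, b in zip(sig, sig[1:]))

def discontinuity(sig):
    """Reward an abrupt jump: the largest single-step change."""
    return max(abs(b - a) for a, b in zip(sig, sig[1:]))
```

A failure-based algorithm would then search for test inputs whose output signals score highly on one of these objectives, since such outputs are the ones engineers most easily recognize as faulty.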
We developed our failure-based test generation algorithm using