Page 1:

March 25, 2012

Organizing committee:

Hana Chockler, IBM

Daniel Kroening, Oxford

Natasha Sharygina, USI

Leonardo Mariani, UniMiB

Giovanni Denaro, UniMiB

Page 2:


Program

09:15 – 09:45 Software Upgrade Checking Using Interpolation-based Function Summaries, Ondrej Sery

09:45 – 10:30 Finding Races in Evolving Concurrent Programs Through Check-in Driven Analysis, Alastair Donaldson

coffee

11:00 – 11:45 SymDiff: Leveraging and Extending Program Verification Techniques for Comparing Programs, Shuvendu K. Lahiri

11:45 – 12:30 Regression Verification for Multi-Threaded Programs, Ofer Strichman

lunch

14:00 – 14:45 Empirical Analysis of Evolution of Vulnerabilities, Fabio Massacci

14:45 – 15:30 Testing Evolving Software, Alex Orso

coffee

16:00 – 16:45 Automated Continuous Evolutionary Testing, Peter M. Kruse

Page 3:


Motivation: Challenges of validating evolving software

Large software systems are usually built incrementally:

• Maintenance (fixing errors and flaws, hardware changes, etc.)

• Enhancements (new functionality, improved efficiency, extensions, new regulations, etc.)

Changes are made frequently during the lifetime of most systems and can introduce new software errors or expose old errors.

Upgrades are done gradually, so the old and new versions have to co-exist in the same system.

Changes often require re-certification of the system, especially for mission-critical systems.

"Upgrading a networked system is similar to upgrading software of a car while the car's engine is running, and the car is moving on a highway. Unfortunately, in networked systems we don't have the option of shutting the whole system down while we upgrade and verify a part of it.“

source: ABB

Page 4:


What does it mean to validate a change in a software system?

• Equivalence checking – when the new version should be equivalent to the previous version in terms of functionality
  • Changes in the underlying hardware
  • Optimizations

• No crashes – when several versions need to co-exist in the same system, and we need to ensure that the update will not crash the system
  • When there is no correctness specification, this is often the only thing we can check

• Checking that a specific bug was fixed
  • A counterexample trace can be viewed as a specification of a behavior that needs to be eliminated in the new version

• Validation of the new functionality
  • If a correctness specification for the change exists, we can check whether the new (or changed) behaviors satisfy this specification
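To make the first item concrete, here is a minimal differential harness, a sketch under stated assumptions: the two versions are hypothetical stand-in functions max_old and max_new (not from the slides), and the inputs are left nondeterministic in the style a bounded model checker such as CBMC interprets them.

#include <assert.h>

/* Hypothetical old and new versions of the same routine
   (illustrative stand-ins, not taken from the slides). */
static int max_old(int a, int b) { return a > b ? a : b; }
static int max_new(int a, int b) { return b < a ? a : b; }

/* Differential harness: assert that both versions agree on the same input.
   'a' and 'b' are deliberately uninitialized; a model checker treats them
   as arbitrary values (a concrete test run would need to initialize them). */
int main(void) {
    int a, b;
    assert(max_old(a, b) == max_new(a, b));
    return 0;
}

Running a model checker over this harness (e.g., cbmc harness.c) searches all inputs for a disagreement between the two versions; any counterexample is a functional regression.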

Page 5:


Why is validation of evolving software different from standard software validation?

• Software systems are too large to be formally verified or exhaustively tested at once
• Even if it is feasible to validate the whole system, the process is often too long and expensive and does not fit into the schedule of small, frequent changes
• When validating the whole system, there is a danger of overlooking the change

How can we use the fact that we are validating evolving software?

• If the previous version was validated in some way, we can assume that it is correct and not re-validate the parts that were not changed
• If the results of previous validation exist, we can use them as a basis for the current validation; this is especially useful when there are many versions that differ from each other only slightly
• The previous version can be used as a specification

Page 7:


PINCETTE: exchange of information between static analysis and dynamic analysis techniques

• Using a static slicer as a preprocessing step to the dynamic analysis tools
  • The slicer reduces the size of the program so that only the parts relevant to the change remain
  • The resulting slice is then extended to an executable program
• Specification mining: obtaining candidate assertions from dynamic analysis and using them in static analysis

[Diagram: Static Analysis Component exchanging information with Dynamic Analysis Component]

Page 8:

Slicing procedure

[Diagram: Program → Control Flow Graph (CFG) → Program Dependence Graph (PDG); the PDG appears as numbered statement nodes connected by dependence edges]

©Ajitha Rajan, Oxford

Page 9:

Forward Slicing from Changes

• Compute the nodes corresponding to changed statements in the PDG, and
• Compute a transitive closure over all forward dependencies (control + data) from these nodes

Backward Slicing from Assertions

• Identify the assertions to be rechecked after the changes
• Compute a transitive closure of backward dependencies (control + data) from these assertions

The forward and backward slices are then combined (a sketch of both traversals follows after this slide).

©Ajitha Rajan, Oxford
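A minimal C sketch of the two traversals, using the example PDG from the next slide. The node numbering, the hand-encoded dependence edges, and the intersection-based merge are illustrative assumptions, not the PINCETTE slicer's actual data structures.

#include <stdio.h>

#define N 6  /* nodes: 0:int a,b  1:if(a>=0)  2:b=a  3:b=-a  4:assert(b>=0)  5:return 0 */

/* dep[u][v] != 0 iff there is a dependence edge u -> v (control or data). */
static const int dep[N][N] = {
    /* int a,b  */ {0, 1, 1, 1, 0, 0},   /* data deps on the declarations */
    /* if(a>=0) */ {0, 0, 1, 1, 0, 0},   /* control deps on the branch    */
    /* b = a    */ {0, 0, 0, 0, 1, 0},   /* data dep into the assertion   */
    /* b = -a   */ {0, 0, 0, 0, 1, 0},
    /* assert   */ {0, 0, 0, 0, 0, 0},
    /* return 0 */ {0, 0, 0, 0, 0, 0},
};

/* Transitive closure by depth-first traversal; 'fwd' picks the edge direction. */
static void dfs(int node, int fwd, int mark[N]) {
    if (mark[node]) return;
    mark[node] = 1;
    for (int v = 0; v < N; v++)
        if (fwd ? dep[node][v] : dep[v][node])
            dfs(v, fwd, mark);
}

int main(void) {
    int forward[N] = {0}, backward[N] = {0};

    dfs(3, 1, forward);   /* forward slice from the changed statement b = -a */
    dfs(4, 0, backward);  /* backward slice from the assertion               */

    /* One plausible merge (an assumption): keep statements that are both
       affected by the change and relevant to the assertion. */
    printf("node: fwd bwd merged\n");
    for (int v = 0; v < N; v++)
        printf("%4d:  %d   %d   %d\n", v, forward[v], backward[v],
               forward[v] && backward[v]);
    return 0;
}

On this example the forward slice from b = -a is {b = -a, assert(b >= 0)}, matching the next slide; whether the real tool merges the slices by intersection or union is not stated on these slides.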

Page 10:

Example

Depth-first traversal from the node b = -a;

#include <assert.h>

int main() {
    int a, b;   /* 'a' is left uninitialized: an arbitrary input for the checker */
    if (a >= 0)
        b = a;
    else
        b = -a;
    assert(b >= 0);
    return 0;
}

[Diagram: the backward slice from assert(b >= 0) and the forward slice from b = -a, shown over the example's PDG. The PDG nodes are int a,b; if (a >= 0); b = a; b = -a; assert(b >= 0); return 0, connected by control and data dependence edges; both slices contain b = -a and assert(b >= 0)]

©Ajitha Rajan, Oxford

Page 11:

Slicing procedure

[Diagram: Program → goto-cc → GOTO program → Control Flow Graph (CFG) → Program Dependence Graph (PDG) → Forward Slice and Backward Slice]

©Ajitha Rajan, Oxford

Page 12:

Slicing procedure

[Diagram: as on Page 11, with the Forward Slice and Backward Slice combined into a Merged Slice]

©Ajitha Rajan, Oxford

Page 13:

Slicing procedure

[Diagram: as on Page 12; residual nodes and edges are added to the Merged Slice to yield an executable Program Slice]

©Ajitha Rajan, Oxford

Page 14:

Static Pre-Pruning


[Diagram: the Static Slicer pre-prunes the program and constrains inputs before it is passed to the Dynamic Analyser]

©Ajitha Rajan, Oxford

Page 15:


Dynamically Discovering Assertions to Support Formal Verification

Motivation:
• "Gray-box" components (such as OTS components): poor specifications, partial view of internal details
• Lack of specification complicates validation and debugging
• Lack of a description of the correct behavior complicates integration

Idea: analyze gray-box components with dynamic analysis techniques:
• Monitor system executions by observing interactions at the component interface level and inside components
• Derive models of the expected behavior from the observed events
• Mark model violations as symptoms of faults

©Leonardo Mariani, UniMiB

Page 16:


Dynamically Discovering Assertions with BCT

• Combining dynamic analysis and model-based monitoring
  • Combining classic dynamic analysis techniques (Daikon) with incremental finite-state generation techniques (kBehavior) to produce I/O models and interaction models (a minimal sketch of the invariant-inference idea follows after this slide)
  • FSAs are produced and refined based on subsequent executions
• Extracting information about likely causes of failures by automatically relating the detected anomalies
• Filtering false positives in two steps:
  • Identify and eliminate false positives by comparing failing and successful executions, using heuristics already tried in other contexts
  • Rank the remaining anomalies according to their mutual correlation, and use this information to push the related likely false positives far from the top anomalies

©Leonardo Mariani, UniMiB
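As referenced above, a minimal sketch of Daikon-style likely-invariant inference: observe a variable's values across monitored runs and keep only the invariant templates that survive every observation. The traced variable, the trace data, and the two templates (constancy and range) are illustrative assumptions, not BCT's or Daikon's actual implementation.

#include <stdio.h>

struct range_inv {
    int min, max;    /* tightest range seen so far     */
    int is_const;    /* still a single constant value? */
    long n_obs;      /* number of observations         */
};

/* Update the candidate invariants with one observed value. */
static void observe(struct range_inv *inv, int value) {
    if (inv->n_obs == 0) {
        inv->min = inv->max = value;
        inv->is_const = 1;
    } else {
        if (value != inv->min || value != inv->max) inv->is_const = 0;
        if (value < inv->min) inv->min = value;
        if (value > inv->max) inv->max = value;
    }
    inv->n_obs++;
}

/* Emit the strongest surviving template as a candidate assertion. */
static void report(const struct range_inv *inv, const char *var) {
    if (inv->n_obs == 0) return;
    if (inv->is_const)
        printf("candidate: assert(%s == %d);\n", var, inv->min);
    else
        printf("candidate: assert(%s >= %d && %s <= %d);\n",
               var, inv->min, var, inv->max);
}

int main(void) {
    /* Pretend these are values of 'b' observed at one program point
       over several monitored runs (made-up trace data). */
    int trace[] = {0, 3, 7, 2, 5};
    struct range_inv inv = {0};
    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++)
        observe(&inv, trace[i]);
    report(&inv, "b");   /* prints: candidate: assert(b >= 0 && b <= 7); */
    return 0;
}

Candidates that survive all passing runs become candidate assertions; a later execution that violates one is exactly the kind of anomaly the slide marks as a symptom of a fault.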

Page 17:


"User in the Middle" Strategy

[Diagram: the Dynamic Analyser observes executions of the System Under Test and produces candidate assertions; the user approves them, and Static Analysis turns approved assertions into true (proven) assertions. After an upgrade, Dynamic Analysis and Static Analysis re-establish the true assertions with no user intervention]

©Leonardo Mariani, UniMiB

Page 18:


PINCETTE Project – Validating Changes and Upgrades in Networked Software

[Diagram: project overview. Checking for crashes spans both components. Static Analysis Component (next talk): using function summaries; verifying only the change. Dynamic Analysis Component (next talk): black-box testing; concolic testing. Also shown: front end, methodology book]