Collaborative runtime verification with tracematches
Eric Bodden, Laurie Hendren, Patrick Lam, Ondrej Lhotak, Nomair A. Naeem
McGill University and University of Waterloo
Transcript
Page 1: Collaborative runtime verification with tracematches

Collaborative runtime verification with tracematches

Eric Bodden, Laurie Hendren, Patrick Lam, Ondrej Lhotak, Nomair A. Naeem
McGill University and University of Waterloo

Page 2: Collaborative runtime verification with tracematches

Problem

Ideally, runtime verification code should be included in deployed programs:
- Allows for easier debugging
- Actual usage vs. test case coverage

Current runtime monitoring approaches do not scale well enough.

Here: Tracematches

Page 3: Collaborative runtime verification with tracematches

A common programming problem

Collection c = Collections.synchronizedCollection(myC);

synchronized (c) {
    Iterator i = c.iterator();
    while (i.hasNext())
        foo(i.next());
}
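The hazard above can be made concrete with a small self-contained sketch (the class and helper names are ours, not from the slides): the collection's monitor is held only inside the synchronized block, which is exactly the condition the tracematch on the next slide tests via Thread.holdsLock.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

public class SyncIterDemo {
    // Hypothetical helper mirroring the tracematch's if(!Thread.holdsLock(c))
    // condition: iteration is safe only while the collection's lock is held.
    static boolean safeToIterate(Collection<?> c) {
        return Thread.holdsLock(c);
    }

    public static void main(String[] args) {
        Collection<String> c =
            Collections.synchronizedCollection(new ArrayList<>(List.of("a", "b")));

        System.out.println(safeToIterate(c)); // lock not held here: prints false

        // Client-side locking, as the Collections javadoc requires:
        synchronized (c) {
            System.out.println(safeToIterate(c)); // prints true
            for (Iterator<String> i = c.iterator(); i.hasNext(); ) {
                i.next(); // safe: no other thread can mutate c concurrently
            }
        }
    }
}
```

The wrapper returned by Collections.synchronizedCollection uses itself as the mutex, so synchronized (c) is the lock the iteration must hold.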

Page 4: Collaborative runtime verification with tracematches

Tracematch "ASyncIteration"

tracematch(Object c) {
    sym sync after returning(c):
        call(* Collections.synchr*(..));
    sym asyncIter before:
        call(* Collection+.iterator()) && target(c)
        && if(!Thread.holdsLock(c));

    sync asyncIter {
        System.err.println(
            "Iterations over " + c + " must be synchronized!");
    }
}

Page 5: Collaborative runtime verification with tracematches

[Figure: an instrumented program paired with a runtime monitor; static optimizations (ECOOP 2007) reduce the instrumentation.]

Page 6: Collaborative runtime verification with tracematches

Static Optimizations (ECOOP 2007)

- Quick check: eliminate incomplete tracematches
- Pointer analysis: retain "consistent sets of instrumentation points"

Brings overhead under 10% in most cases.
However, some overheads still exceed 150%!

Goal: 10% overhead in all cases
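The quick check can be sketched roughly as follows (an illustrative model with invented names, ignoring symbols that are optional in the pattern): if some declared symbol has no shadow anywhere in the program, the pattern can never complete, so the tracematch's remaining instrumentation can be removed wholesale.

```java
import java.util.Set;

public class QuickCheck {
    // Illustrative model: a tracematch declares symbols (sync, asyncIter, ...)
    // and the weaver knows which symbols actually matched some program point.
    // If a required symbol has no shadows, no complete match is possible.
    static boolean canEverMatch(Set<String> declaredSymbols,
                                Set<String> symbolsWithShadows) {
        return symbolsWithShadows.containsAll(declaredSymbols);
    }
}
```

For the ASyncIteration tracematch, a program that never calls Collections.synchronizedCollection produces no sync shadows, so the whole tracematch can be eliminated before it adds any runtime cost.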

Page 7: Collaborative runtime verification with tracematches

Collaborative runtime verification: spatial partitioning

[Figure: three instrumented copies of the program, each with its own runtime monitor, sharing the verification work.]

Page 8: Collaborative runtime verification with tracematches

8

Spatial partitioning in detail

First, identify multiple probes: a probe is a set of instrumentation points (shadows) that could potentially lead to a match. Find such sets of shadows using a flow-insensitive points-to analysis.
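The grouping step can be sketched like this (a simplified model with invented names; the real analysis uses points-to sets computed by the compiler): shadows whose bound variables may point to a common abstract object are merged, transitively, into one probe.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;

public class ProbeGrouping {
    // A shadow binds a tracematch variable to a set of abstract objects
    // (its points-to set). Names here are illustrative, not the real API.
    record Shadow(String label, Set<String> pointsTo) {}

    // Two shadows can contribute to the same match only if their points-to
    // sets overlap; merging overlapping shadows transitively yields probes.
    static List<Set<Shadow>> probes(List<Shadow> shadows) {
        List<Set<Shadow>> probes = new ArrayList<>();
        for (Shadow s : shadows) {
            Set<Shadow> merged = new HashSet<>();
            merged.add(s);
            Iterator<Set<Shadow>> it = probes.iterator();
            while (it.hasNext()) {
                Set<Shadow> g = it.next();
                boolean overlaps = g.stream().anyMatch(t ->
                    merged.stream().anyMatch(m ->
                        !Collections.disjoint(t.pointsTo(), m.pointsTo())));
                if (overlaps) { merged.addAll(g); it.remove(); }
            }
            probes.add(merged);
        }
        return probes;
    }
}
```

For example, with sync(c=c1) pointing to {o1}, asyncIter(c=c2) to {o1, o2} and asyncIter(c=c3) to {o3}, the first two shadows share o1 and form one probe, while the third forms its own.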

Page 9: Collaborative runtime verification with tracematches

Identifying probes

[Figure: shadows sync(c=c1), asyncIter(c=c2) and asyncIter(c=c3) with abstract objects o1 and o2; shadows whose points-to sets overlap are grouped into a probe.]

Page 10: Collaborative runtime verification with tracematches

Completeness

[Figure: the three instrumented program copies with their runtime monitors; together, the partitions cover all probes.]

Page 11: Collaborative runtime verification with tracematches

Temporal partitioning

Problem: Hot shadows

[Figure: the three instrumented program copies with their runtime monitors.]

Page 12: Collaborative runtime verification with tracematches

Could switching probes on and off lead to false positives?

No. We can safely enable a probe at any time, thanks to tracematch semantics: unlike e.g. LTL, tracematches always match against a suffix of the execution trace.

We can also disable a probe at any time; we just have to make sure we discard its partial bindings.

sync asyncIter
skip(asyncIter)*
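A minimal sketch of why this is sound (a hypothetical class; in the real system the switching logic is generated into the instrumented program): a match is only reported after the probe itself has observed the complete prefix, so clearing partial bindings on disable can suppress matches but never fabricate one.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicBoolean;

public class SwitchableProbe {
    private final AtomicBoolean enabled = new AtomicBoolean(false);
    // Partial-match state: collections for which a 'sync' event was seen.
    private final Set<Object> partialBindings =
        Collections.synchronizedSet(new HashSet<>());

    public void enable()  { enabled.set(true); }

    public void disable() {
        enabled.set(false);
        partialBindings.clear(); // discard bindings, as required for soundness
    }

    // Shadow for the 'sync' symbol: remember the binding only while enabled.
    public void onSync(Object c) {
        if (enabled.get()) partialBindings.add(c);
    }

    // Shadow for the 'asyncIter' symbol: report a match only if this probe
    // observed the complete 'sync asyncIter' prefix itself.
    public boolean onAsyncIter(Object c) {
        return enabled.get() && partialBindings.contains(c);
    }
}
```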

Page 13: Collaborative runtime verification with tracematches

Code generation for probe switching

[Figure: shadow pairs asyncIter(c=c3)/sync(c=c1), asyncIter(c=c2)/sync(c=c5) and asyncIter(c=c4)/sync(c=c1), labeled with probe indices 0-4.]

Page 14: Collaborative runtime verification with tracematches

Benchmarks

Benchmark | Tracematch   | Probes | Initial additional runtime
antlr     | Reader       | 4      | 20.6%
chart     | FailSafeIter | 742    | 20.6%
lucene    | HasNextElem  | 6      | 11.9%
pmd       | FailSafeIter | 426    | 79.0%
pmd       | HasNext      | 32     | 158.1%

These are the ECOOP '07 benchmarks with the largest overheads. We ran each benchmark/tracematch combination with one probe enabled at a time and measured the relative runtime overhead.

Page 15: Collaborative runtime verification with tracematches

Overheads after spatial partitioning

[Chart: relative overhead per probe (0% to 140%) for pmd/HasNext, pmd/FailSafeIter, chart/FailSafeIter, antlr/Reader and lucene/HasNextElem; x-axis: probes 0-17, sorted by decreasing overhead.]

Page 16: Collaborative runtime verification with tracematches

Future work

- Implement temporal partitioning: requires a probabilistic foundation
- Try this out on a larger scale: needs Java programs with a large user base, willing to cooperate
- Try using JVM support to find hot probes: production JVMs already compute statistics; this would enable more efficient probe switching
- Eliminate super-hot shadows through better static analysis

Page 17: Collaborative runtime verification with tracematches

Conclusion

- Sound collaborative RV is possible using tracematches
- Probes can be constructed using a flow-insensitive points-to analysis
- The approach works for some programs, but very hot shadows can still be bottlenecks
- We found a heuristic to statically identify shadows with potentially high runtime impact
- Further static optimizations are probably more promising

Page 18: Collaborative runtime verification with tracematches

Thank you

Thank you for listening, and thanks to the entire AspectBench Compiler group for their enduring support!

Download our tool, examples and benchmarks at: www.aspectbench.org