
Code-coverage Based Test Vector Generation for SystemC Designs

Alair Dias Junior
Departamento de Eletronica
Universidade Federal de Minas Gerais
Av. Antonio Carlos, 6627
Pampulha - Belo Horizonte - MG
Email: [email protected]

Diogenes Cecilio da Silva Junior
Departamento de Eletronica
Universidade Federal de Minas Gerais
Av. Antonio Carlos, 6627
Pampulha - Belo Horizonte - MG
Email: [email protected]

Abstract— Time-to-Market plays a central role in System-on-a-Chip (SoC) competitiveness, and the quality of the final product is a matter of concern as well. As SoC complexity increases, the validation effort reveals itself as a great barrier, consuming a considerable fraction of the total development time and resources. This work presents a methodology for automatic test vector generation for SystemC designs based on code coverage analysis that is complementary to functional testing. Instead of creating all the vectors necessary to guarantee total coverage of the design, it uses code coverage information to generate test vectors that cover the portions of code not exercised by Black-box testing. Vectors are generated using a numerical optimization method, which does not suffer from restrictions related to symbolic execution such as defining array reference values and loop boundaries. By using the methodology, we expect to guarantee total coverage of the DUV while minimizing the fault-of-omission problem, which is undetectable by Structural testing.

I. INTRODUCTION

As VLSI and System-on-a-Chip (SoC) technologies advance into multi-million-gate designs, productivity becomes one of the bottlenecks of the design flow. This well-known problem, referred to as the productivity gap [1], has been the motivation for the development of new techniques such as component reuse and platform-based design [1]–[5]. System-level description languages (SDLs) play a central role in this new business model, as they make the design cycle more straightforward and the product of each iteration more reliable.

SystemC is an SDL that is becoming a standard for the description of complex SoCs and has been used by major silicon companies such as Intel, STMicroelectronics, Philips and Texas Instruments [6]. It enables simulation of hardware and software and guarantees a fast learning curve, since it is based on the open C++ standard. The following characteristics also make it attractive:

1) The same language can be used from the earlier stages of the design cycle to the RTL level;

2) Modules described at different levels of abstraction, and even software, can be simulated together;

3) Designers can explore the performance of different architectures easily, as higher levels of abstraction simulate faster than low-level HDLs;

4) Verification can take place earlier in the design process.

In the context of complex SoCs, verification is a major issue, consuming about 70% of the total development effort [7]. Although the verification effort has been pointed out as a problem in studies over the last decade [2], [5], the number of first-spin successful chips is decreasing [8]. This may indicate that the first generation of verification tools and methodologies was not very successful in coping with the increasing complexity of SoC designs or, at least, that these tools were not widely adopted by the industry.

SystemC was not defined with formal analysis in mind, and recent works have aimed at allowing more efficient verification with it. In [9], a methodology is proposed based on a combination of static code analysis and SystemC semantics described with abstract state machines (ASM). The authors assume most SoCs are composed of multiple Intellectual Properties (IPs) which have already been verified individually. Hence, the verification of the SoC properties is mostly related to transactions, which can be represented efficiently as state machines. Once compiled, the finite state machines can be used for formal verification by external tools linked to the ASM.

Another approach [10] presents a toolbox for the analysis of SoCs described in SystemC at the transactional level. The proposed tools extract the formal semantics of the SystemC design and build a set of parallel automata. This intermediate representation can be connected to existing formal verification tools via appropriate encodings, allowing formal verification of the design.

Despite the results achieved by these works, formal verification is not the mainstream methodology in the industrial context, and verification will continue to rely on simulation for the foreseeable future [11]. Tools and methodologies should therefore be directed to the automatic generation of the testbench and of the input test vectors necessary to efficiently exercise the Design Under Verification (DUV).

In [12], an interesting tool for automatic testbench generation is proposed. VeriSC generates templates of the testbench's modules according to the DUV's characteristics, leaving it to the verification engineer to create the signal handshaking, the functional coverage metrics and the input value distributions. SystemC Verification Library (SVC) classes are created automatically to generate the inputs for the design and for a Reference Model (RM) written in any high-level executable language. The design's and the RM's output data are then compared to determine whether the DUV behaved correctly.

Several works have also been written about automatic input vector generation, most of them related to Black-box testing applied to pure software development. In such techniques, there is no need for previous knowledge of the DUV's internal structure, and the behavior of the DUV is compared against a requirements specification. In [13], an automatic test generation approach using checkpoint encoding and antirandom testing is proposed, which achieved better results than pure random test generation [14].

A structural approach to test data generation is presented in [15], where a system to generate test data for ANSI Fortran programs is described. The system symbolically executes a program path in order to create a set of constraints on the input variables. From these constraints it is possible to extract the input values necessary to exercise the given path, if the path is feasible. Symbolically executing a program is a computationally expensive task and raises technical difficulties such as dealing with variable arrays, loop conditions and module calls.

Gallagher et al. used another approach [16]. The problem of test data generation is treated as a numerical optimization problem, and the method does not suffer from the technical difficulties associated with symbolic execution systems. Many of Gallagher's contributions are used in this methodology.

This work presents a methodology for automatic test vector generation for SystemC designs based on code coverage analysis that is complementary to functional testing. It uses the information provided by a code coverage tool to find a path in a Coverage Flow Graph (TFG, a term used here to avoid confusion with CFG, which frequently refers to a Control Flow Graph) that leads to the uncovered data-flow units. The constraints along this path are used by an optimization method to extract the test vectors necessary to guarantee total coverage of the design.

Concepts of code coverage are presented in section II. Section III gives an overview of the methodology used for automatic test generation. Section IV describes the instrumentation of the source code and section V describes the numerical method for test generation. An example is given in section VI, and concluding remarks are presented in section VII, where directions for future work are also pointed out.

II. CODE COVERAGE ANALYSIS

Code coverage analysis is a Structural testing technique that compares the behavior of the program under test against the apparent intention of its source code. Code coverage analysis tools automate this task by measuring coverage [17]. Their results are used as an indirect measure of the quality of the test suite, indicating portions of code not exercised by the tests. The completeness of a test is measured in terms of the fraction of data-flow units (DFUs), such as statements and branches, covered by it. Many definitions of types of code coverage can be found in the literature. This work uses the ones described below [14], [17], [18]; a small illustrative sketch follows the list:

• Statement Coverage - The fraction of blocks exercised by the test data;

• Branch Coverage - The fraction of branches evaluated both true and false in a program;

• Relational Coverage - The fraction of boundary limits exercised in each relational operator, catching common mistakes like using a "<" where a "<=" is intended;

• Loop Coverage - The fraction of loops covered by the tests. Complete loop coverage requires that a loop be skipped, executed once, and also executed several times;

• Path Coverage - The fraction of execution paths from the program's entry point to its exit covered by the test.
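As a small illustration (our sketch, not taken from the paper), consider the C++ fragment below. A single test such as saturate(12) executes every statement, giving 100% statement coverage, yet it never evaluates the branch as false and never exercises the boundary value == 10, so branch coverage and relational coverage remain incomplete.

// Illustrative fragment only: one test covers all statements,
// but not all branch outcomes or relational boundaries.
int saturate(int value) {
    if (value > 10)      // branch coverage needs both outcomes;
        value = 10;      // relational coverage also asks for value == 10
    return value;
}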

Some of these coverage types can be infeasible for some programs. For example, the number of paths in a program increases tremendously with each additional loop or branch, making it impossible to achieve 100% path coverage in most designs. Additionally, code coverage cannot help find every possible fault in a given design. The most common fault overlooked by this technique is the fault of omission, when some required feature is not implemented. On the other hand, intuition suggests a relationship between test coverage and reliability. Malaiya et al. [18] confirmed a positive relation between them and proposed a model that can predict reliability from test coverage measures. From this perspective, a tool capable of generating input vectors based on code coverage can improve the quality of the final product.

III. THE METHODOLOGY

Black-box testing techniques are based on the functional requirements and, to some extent, on random testing, not considering the internal structure of the DUV. Such an approach can leave portions of code unexercised, such as rarely executed modules or cases with low controllability. Even using code coverage tools, the task of thoroughly testing a design can be difficult, as such tools only produce source code information which the verification engineers may find difficult to interpret. In cases where automatic code generation or automatic synthesis tools are used, this situation can be even more complex, and even good programmers would need a lot of time to completely understand the results.

The methodology proposed in this work combines Black-box testing techniques with Structural testing in order to improve test performance. The information regarding portions of code not exercised by the first test cycle (Black-box testing) is used to generate vectors capable of exercising the unverified code during a second cycle (Structural testing). As the Black-box testing is expected to cover most of the required features of the DUV, we expect to guarantee total coverage of the design while minimizing the fault-of-omission problem, which is undetectable by Structural testing alone. The methodology is summarized in figure 1.


Fig. 1. Methodology: 1) apply the functional test vectors to the DUV; 2) identify the uncovered data-flow units (DFUs); 3) use the Coverage Flow Graph to find a path to each uncovered DFU; 4) force the repeated execution of the path as a straight line, minimizing the error of the path constraints; 5) use the variable values found to create the code-coverage-based test vectors.

Before the functional testing, the DUV source code must be instrumented. Code instrumentation allows fine control of the program execution and is explained in detail in section IV. The instrumented source code is compiled to obtain two products: 1) the DUV AST and 2) the DUV executable model.

An AST [19] is a condensed form for representing language constructs. In such a representation, operators and keywords are associated with the interior nodes and the leaves represent the operands. The DUV AST provides the program flow information necessary to create a DUV Coverage Flow Graph (TFG).
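As a rough sketch (ours, not the representation produced by any particular compiler front-end), an AST node can be modelled as an operator or keyword with a list of children, where childless nodes are the operand leaves:

#include <string>
#include <vector>

// Minimal AST node: interior nodes hold operators/keywords,
// leaves (nodes without children) hold the operands.
struct AstNode {
    std::string label;                 // e.g. "if-then-else", "==", "controle", "1"
    std::vector<AstNode*> children;    // empty for operand leaves
};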

The TFG represents the program flow through the data-flow units. Its arcs are the conditions necessary to reach a vertex, and the vertices represent the data-flow units covered under that condition. Figure 2 shows an example of Coverage Flow Graph (TFG) construction for branch coverage. Each branch is identified by its line number in the source code.

1: if (controle == 1) {
2:     c = a + b;
3:     if (c > 10)
4:         c = 10;
5: } else c = 0;
6:

Fig. 2. Coverage Flow Graph Construction (AST of the code fragment above and the resulting TFG for the branches at lines 1 and 3, with True/False arcs).
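The following is a minimal data-structure sketch (ours, under the assumptions above) of such a TFG: each vertex groups the DFUs it covers, and each outgoing arc records the branch identification and the outcome required to take it.

#include <vector>

// Sketch of a Coverage Flow Graph (TFG) for branch coverage.
struct TfgArc {
    int  branch_id;    // identification number passed to store()
    bool outcome;      // predicate outcome (true/false) required to take this arc
    int  target;       // index of the destination vertex
};

struct TfgVertex {
    std::vector<int>    dfu_lines;   // source lines (DFUs) covered at this vertex
    std::vector<TfgArc> arcs;        // outgoing conditions
};

using Tfg = std::vector<TfgVertex>;  // vertex 0 is taken as the entry point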

After code instrumentation and compilation, the functional testing vectors are applied to the DUV executable model. Data-flow units covered by the test are recorded, and the remaining DFUs are marked as uncovered. For each uncovered DFU, a set of paths is traced from the beginning of the TFG to the vertex containing that DFU. Some criterion should be used to choose one among all the possible paths, for example the number of vertices it passes through. The set of constraints of the chosen path represents the conditions necessary to execute its branches; these constraints are explained in detail in section V.
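One possible way to favour paths with few vertices (our sketch, reusing the TfgVertex/TfgArc types above; the paper does not prescribe a particular search) is a breadth-first search from the TFG entry to the vertex holding the uncovered DFU, returning the (branch id, required outcome) pairs that form the path constraints:

#include <algorithm>
#include <queue>
#include <utility>
#include <vector>

// Shortest path (in vertices) from the entry vertex to target_vertex.
// Returns the branch outcomes that must hold along the path, or an
// empty list if the vertex is unreachable in the TFG.
std::vector<std::pair<int, bool>> find_path(const Tfg& tfg, int target_vertex)
{
    std::vector<int>    parent(tfg.size(), -1);  // predecessor of each vertex
    std::vector<TfgArc> via(tfg.size());         // arc used to reach each vertex
    std::queue<int> frontier;
    frontier.push(0);
    parent[0] = 0;

    while (!frontier.empty()) {
        int v = frontier.front();
        frontier.pop();
        if (v == target_vertex)
            break;
        for (const TfgArc& arc : tfg[v].arcs) {
            if (parent[arc.target] == -1) {
                parent[arc.target] = v;
                via[arc.target]    = arc;
                frontier.push(arc.target);
            }
        }
    }

    std::vector<std::pair<int, bool>> constraints;
    if (target_vertex != 0 && parent[target_vertex] == -1)
        return constraints;                      // no path found
    for (int v = target_vertex; v != 0; v = parent[v])
        constraints.emplace_back(via[v].branch_id, via[v].outcome);
    std::reverse(constraints.begin(), constraints.end());
    return constraints;
}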

A numerical optimization method is used to generate the input values that lead the program flow through the chosen path. The numerical optimization routine is inserted into the design source code together with the instrumentation during the first step. This approach has advantages over symbolic execution, as the optimization method has access to all variables, making it unnecessary to maintain their values and the path conditions as algebraic expressions. Control conditions are placed in every branch to allow forcing the branch execution. The path is executed repeatedly, trying to minimize the penalty function of the path constraints. A failure to find the correct input vectors for the path can mean that [16]: the path is infeasible; the path may or may not be feasible, but the optimization search has become stuck in a local minimum; or the initial starting point is too far from the solution.

IV. INSTRUMENTATION

In code instrumentation, each predicate in a branch is substituted by a function call, as presented in figure 3. This step is needed to extract coverage information from the first test cycle and to use the numerical method of test data generation [16].

   

Original branch:

if (c == 1)
    c = a + b;
else
    c = a - b;

Instrumented branch:

if (store(eq, c - 1, c == 1, 1))
    c = a + b;
else
    c = a - b;

Fig. 3. Instrumented Source Code

Store is a global function that records information about condition statements. It has been modified from the one presented in [16] and has four parameters:

1) type - represents the relational operators =, ≥, >, ≠ with the enumerated type {eq, ge, gt, ne}, respectively. Conditions involving the other relational operators should be rearranged to use one of these;

2) constraint - represents the constraint of the branch predicate. The predicate should be reformulated to be in the form c == 0, c ≠ 0, c ≥ 0 or c > 0 (see the example after this list);


3) branch predicate - the branch predicate in its original form;

4) branch identification - a number that uniquely identifies the branch.
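As a small example of items 1 and 2 (our sketch; the branch identification number 3 below is hypothetical), a predicate such as ctl < 2 is not in one of the four supported forms, so it is rearranged into 2 - ctl > 0 before instrumentation, just as i < b is rearranged into b - i > 0 in figure 5:

/* original branch */
if (ctl < 2)
    c = 0;

/* instrumented branch: ctl < 2 is rewritten as 2 - ctl > 0,
   so the constraint value is 2 - ctl and the type is gt */
if (store(gt, 2 - ctl, ctl < 2, 3))
    c = 0;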

The store function records the values described above in global variables. These variables are indexed by the branch identification number. The identification number is also used as a stop condition for the numerical method: if a branch marked as the end of the path segment is reached, the iteration ends.

In the methodology proposed in this work, store is also used to control the program flow by selecting the way the branch predicate is evaluated. The branch identification numbers are given in the instrumentation phase rather than during the program execution as done in [16], facilitating the instrumentation of loop conditions. The store function is presented in figure 4.

   

bool store(comp_type type_v, int g_v, bool p_v, int id)
{
    type[id] = type_v;
    g[id]    = g_v;
    p[id]    = p_v;
    if (p_v)
        pTrue[id] = true;    // branch executed with its predicate true
    else
        pFalse[id] = true;   // branch executed with its predicate false
    if (id == stopBranch)
        segmentEnd();
    return (p[id] && !contExec) || (contExec && Branch[id]);
}

Fig. 4. Store function

The variables pTrue and pFalse record whether the branch has been executed in its true and false conditions, respectively. They are used to determine which DFUs were not covered properly during the functional testing.

A signal controls whether the branch is executed normally or forced. If the global variable contExec is false, the branch predicate is tested normally and the program flow depends on the input vectors. However, if contExec is true, the program flow is controlled by the values in the Branch boolean array. Each element of this array indicates whether a branch should be forced to execute. contExec is true while the optimization method is executing, allowing the program to be executed as a straight line through a path determined by the Branch array values.
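A minimal sketch (ours, based on the description above; duv_process is a hypothetical name for the DUV entry point) of how a generator could force one straight-line run of a chosen path before handing control back to the optimizer:

#include <utility>
#include <vector>

// Instrumentation globals as described in the text and in figure 4.
extern bool contExec;            // true while the optimizer drives the branches
extern bool Branch[];            // forced outcome of each branch, indexed by id
extern int  stopBranch;          // branch id that ends the path segment
void duv_process();              // hypothetical entry point of the DUV code

// Forces one execution of a non-empty path given as (branch id, outcome) pairs.
void run_forced(const std::vector<std::pair<int, bool>>& path)
{
    contExec = true;                          // take control of every branch
    for (const auto& step : path)
        Branch[step.first] = step.second;     // fix the outcome of each branch on the path
    stopBranch = path.back().first;           // stop when the last branch is reached
    duv_process();                            // straight-line run; store() records the g values
    contExec = false;                         // back to normal, input-driven execution
}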

V. NUMERICAL METHOD FOR TEST DATA GENERATION

Instead of symbolically executing the DUV, a numerical optimization method is used to determine the necessary input values to exercise a given path.

All decision statements in the DUV source code are substituted by path constraints of the form c_i = 0, c_i ≠ 0, c_i ≥ 0 or c_i > 0, where c_i is a real-valued constraint that measures how close the i-th path condition is to being satisfied. If a statement predicate is not in one of the forms given above, it should be rearranged. For example, a predicate of the form ctl < 2 should be rewritten as 2 − ctl > 0.

Using these path constraints, the predicate can be solved without explicitly knowing its exact form, by picking values for the inputs such that c_i satisfies the given constraint. The constrained optimization problem can be formulated as minimizing the following function:

f(x), \quad x \in \mathbb{R}^n \qquad (1)

Subject to the following set of nonlinear path constraints:

g_i(x) \;\{>, \geq, =, \neq\}\; 0, \quad i = 1, 2, \ldots, m \qquad (2)

where the vector x represents the n inputs of the DUV and f(x) is a function whose result represents the distance from the current vector x to the optimal vector.

Using a penalty function technique, the constrained optimization problem can be reduced to an unconstrained one with the following objective function:

f'(x, w) = \sum_{i=1}^{n} G(g_i(x), w_i, type_i) \qquad (3)

where w is a set of positive weighting factors and the term G(g_i(x), w_i, type_i) represents the penalty imposed upon the i-th path constraint, with constraint value g_i(x) and a relational operator represented by type_i. The penalty function G is always positive and is chosen such that its value is small if the constraint is satisfied. The value of g_i(x) is calculated by forcing the execution of the branches in the i-th path. Path constraints involving logical operators can be reduced using the following rules [16]:

• G(p1 AND p2) = G(g1) + G(g2)
• G(p1 OR p2) = min(G(g1), G(g2))
• G(NOT p) = G′(g)
• G(p1 XOR p2) = min(G(g1) + G′(g2), G′(g1) + G(g2))

where p_i is a predicate with constraint value g_i and G′(g_i) is the penalty function for testing the inverse of the relational operator in p_i.
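A possible shape for the penalty term (our sketch; [16] defines the actual functions used) is a piecewise measure that is zero when the recorded constraint value already satisfies its relational operator and grows with the amount of violation otherwise:

#include <cmath>

enum comp_type { eq, ge, gt, ne };

// Sketch of G(g, w, type): small (zero) when the constraint on the recorded
// value g is satisfied, positive and growing with the violation otherwise.
// The weighting factor w scales the contribution of this constraint.
double G(double g, double w, comp_type type)
{
    double violation = 0.0;
    switch (type) {
        case eq: violation = std::fabs(g);               break;  // want g == 0
        case ge: violation = (g >= 0.0) ? 0.0 : -g;      break;  // want g >= 0
        case gt: violation = (g >  0.0) ? 0.0 : 1.0 - g; break;  // want g > 0
        case ne: violation = (g != 0.0) ? 0.0 : 1.0;     break;  // want g != 0
    }
    return w * violation;
}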

The value of the vector x that minimizes equation (3) can be calculated using any unconstrained multivariate optimization method, such as a quasi-Newton method with BFGS updates. A more complete description of the numerical method for test data generation employed here can be found in [16].
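The sketch below (ours; evaluate_penalty is a hypothetical helper that forces the chosen path with the candidate inputs and returns the value of equation (3)) illustrates the outer loop with a much simpler coordinate search; the method described above would replace it with a quasi-Newton update:

#include <cstddef>
#include <vector>

// Hypothetical: runs the forced path with inputs x and returns f'(x, w),
// i.e. the sum of the penalties recorded by the store() calls along the path.
double evaluate_penalty(const std::vector<double>& x);

// Simplified coordinate-descent outer loop; the paper relies on a
// quasi-Newton method with BFGS updates instead of this naive search.
std::vector<double> minimize(std::vector<double> x, int max_iter = 100)
{
    double best = evaluate_penalty(x);
    for (int iter = 0; iter < max_iter && best > 0.0; ++iter) {
        for (std::size_t i = 0; i < x.size(); ++i) {
            for (double step = 1.0; step >= 1e-3; step *= 0.5) {
                for (double dir : {+1.0, -1.0}) {
                    std::vector<double> trial = x;
                    trial[i] += dir * step;
                    double cost = evaluate_penalty(trial);
                    if (cost < best) {       // keep any improving move
                        best = cost;
                        x = trial;
                    }
                }
            }
        }
    }
    return x;   // candidate test vector; round to the declared input types
}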

VI. A SIMPLE EXAMPLE

The piece of code in figure 5 is a simple arithmetic unit designed to compute an operation over two integers. The possible operations are sum, subtraction, multiplication and division. There are three input variables to the design: a, b and ctl, respectively the first operand, the second operand and the operation selection line. Multiplication and division were designed as loop structures to illustrate the use of loops in the methodology. A specialization was made for multiplication and division by two: if the second operand is two, the operation is treated as a bitwise shift.

Carefully analysing the code, one can find two mistakes:

1) The specialization for the division on line 27 is wrong: division by two is the same as a shift-right operation;

2) Again in the division operation, when the operands have different signs, the result will be wrong. There is a fault of omission in the code, as the signs of the operands should be tested in order to determine the sign of the result.

 1: if (store(eq, ctl - 1, ctl == 1, 1))
 2: {
 3:     c = a + b;
 4: }else{
 5:     if (store(eq, ctl - 2, ctl == 2, 2))
 6:     {
 7:         c = a - b;
 8:     }else{
 9:         if (store(eq, ctl - 3, ctl == 3, 3))
10:         {
11:             if (store(eq, b - 2, b == 2, 4))
12:             {
13:                 c = a << 1;
14:             }else{
15:                 c = 0;
16:                 int i = 0;
17:                 for (i = 0; store(gt, b - i, i < b, 5); ++i)
18:                 {
19:                     c += a;
20:                 }
21:             }
22:         }else{
23:             if (store(eq, ctl - 4, ctl == 4, 6))
24:             {
25:                 if (store(eq, b - 2, b == 2, 7))
26:                 {
27:                     c = a << 1;
28:                 }else{
29:                     if (store(ne, b, b != 0, 8))
30:                     {
31:                         c = 0;
32:                         int a_v = abs(a);
33:                         int b_v = abs(b);
34:                         while (store(gt, a_v, a_v > 0, 9))
35:                         {
36:                             a_v -= b_v;
37:                             ++c;
38:                         }
39:                     }else{
40:                         error = DIVISION_BY_ZERO;
41:                     }
42:                 }
43:             }
44:         }
45:     }
46: }

Fig. 5. Instrumented Source Code Example

For the Black-box testing, the input vectors are determined randomly, trying to cover all possible operations in the DUV. Using these vectors, detecting the fault of omission in the division code is easy, but finding the error in the division by two is not so simple, as the faulty code is exercised only when b = 2 and ctl = 4.

The TFG for this example is represented in figure 6. The numbers in the vertices represent the branch identification numbers passed to the store function. The path not exercised by the Black-box testing vectors listed in table I is highlighted, indicating the path to the uncovered DFU (branch 7 taken as true). If the verification engineers do not know about the specialization for the division by two, line 27 of the code will not be exercised by these vectors and the fault will not be detected.

TABLE I
BLACK-BOX TESTING VECTORS

Input a   Input b   Input ctl
   10         2          1
    7        -5          1
    2       -12          2
    5         4          2
    4         2          3
    3        -6          3
   12         3          4
   10        -2          4
   10         0          4
    1         5          5

   

Fig. 6. Coverage Flow Graph of the Example (vertices labelled with the branch identification numbers 1 to 9, connected by True/False arcs).

The path constraints can be determined by walking the path from its beginning to its end. However, the extraction of these constraints is not necessary in the methodology, as the path is forced to be executed as a straight line by the instrumentation. Each time the store function is called, it records the value of the constraint. The numerical optimization method is executed between runs of the path, using the recorded constraint values to determine the next input vector. The path constraints are listed below, and the results for the first and last iterations are given in table II, which shows, for each iteration, the value of each input together with the partial derivative of the objective function with respect to it.

1) ctl − 1 ≠ 0;
2) ctl − 2 ≠ 0;
3) ctl − 3 ≠ 0;
4) ctl − 4 = 0;
5) b − 2 = 0.

TABLE II
ITERATION RESULTS

Iteration    a      ∂f/∂a        b       ∂f/∂b       ctl    ∂f/∂ctl
    1       10      4.2          14     4.8e+015     -4    -2.9e+010
    2       10      6.3          14     4.1e+015     -4    -2.9e+010
    3       10     -0.52         14     1.6e+015     -4    -2.9e+010
    4       10      0            13     8.8e+014     -4    -2.9e+010
   ...      ...     ...          ...       ...       ...       ...
   53       10     -3.7e-015     2.1       1.3       4.2       2.2
   54       10      1.9e-015     1.9      -1.3       4.1       1.5
   55       10     -1.9e-015     2        -1.2       4         0.47
   56       10     -1.9e-015     2        -1.3       4        -1.4

Analysing the results, one can verify that the input a does not influence the constraints of the path and that the input values needed to exercise the path are b = 2 and ctl = 4, as expected.

VII. CONCLUSION

A working methodology to generate test vectors for SystemC designs was presented. The methodology is complementary to functional testing, as it uses code coverage information to generate test vectors that cover the portions of code not exercised by the Black-box testing. Instead of using symbolic execution, vectors are generated by executing the instrumented code under a numerical optimization method. This approach does not suffer from restrictions related to symbolic execution, such as defining array reference values and loop boundaries, as the code is actually executed together with the optimization.

A simple example was given, demonstrating the use of the methodology. Although the results obtained are encouraging, more studies are needed to determine the limitations of the methodology and the impact of its use on the SoC design cycle.

Future work will concentrate on a tool to automatically instrument the SystemC source code and build the TFG. Methodologies to choose the best path to a vertex, given the TFG and the path constraints, will also be a matter of research.

REFERENCES

[1] H. Chang, L. Cooke, M. Hunt, G. Martin, A. McNelly, and L. Todd, Surviving the SoC Revolution - A Guide to Platform-Based Design. Kluwer Academic Publishers, 1999.

[2] P. J. Bricaud, "IP reuse creation for system-on-a-chip design," in Custom Integrated Circuits, 1999. Proceedings of the IEEE 1999. New York, NY, USA: IEEE Computer Society, 1999, pp. 395–401.

[3] D. D. Gajski, A. C.-H. Wu, V. Chaiyakul, S. Mori, T. Nukiyama, and P. Bricaud, "Embedded tutorial: essential issues for IP reuse," in ASP-DAC '00: Proceedings of the 2000 conference on Asia South Pacific design automation. New York, NY, USA: ACM Press, 2000, pp. 37–42.

[4] R. A. Bergamaschi and J. Cohn, "The A to Z of SoCs," in ICCAD '02: Proceedings of the 2002 IEEE/ACM international conference on Computer-aided design. New York, NY, USA: ACM Press, 2002, pp. 790–798.

[5] T. A. Claasen, "System on a chip: Changing IC design today and in the future," IEEE Micro, vol. 23, no. 3, pp. 20–26, 2003.

[6] M. Moy, F. Maraninchi, and L. Maillet-Contoz, "Pinapa: an extraction tool for SystemC descriptions of systems-on-a-chip," in EMSOFT '05: Proceedings of the 5th ACM international conference on Embedded software. New York, NY, USA: ACM Press, 2005, pp. 317–324.

[7] J. Bergeron, Writing Testbenches: Functional Verification of HDL Models. Kluwer Academic Publishers, 2003.

[8] O. Blaurock, "A SystemC-based modular design and verification framework for C-model reuse in a HW/SW co-design flow," in ICDCSW '04: Proceedings of the 24th International Conference on Distributed Computing Systems Workshops - W7: EC (ICDCSW '04). Washington, DC, USA: IEEE Computer Society, 2004, pp. 838–843.

[9] A. Gawanmeh, A. Habibi, and S. Tahar, "Enabling SystemC verification using abstract state machines," Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada, Tech. Rep., 2004.

[10] M. Moy, F. Maraninchi, and L. Maillet-Contoz, "LusSy: A toolbox for the analysis of systems-on-a-chip at the transactional level," in ACSD '05: Proceedings of the Fifth International Conference on Application of Concurrency to System Design. Washington, DC, USA: IEEE Computer Society, 2005, pp. 26–35.

[11] Y. Wolfsthal and R. M. Gott, "Formal verification: is it real enough?" in DAC '05: Proceedings of the 42nd annual conference on Design automation. New York, NY, USA: ACM Press, 2005, pp. 670–671.

[12] K. R. G. da Silva, E. U. K. Melcher, G. Araujo, and V. A. Pimenta, "An automatic testbench generation tool for a SystemC functional verification methodology," in SBCCI '04: Proceedings of the 17th symposium on Integrated circuits and system design. New York, NY, USA: ACM Press, 2004, pp. 66–70.

[13] Y. K. Malaiya, "Antirandom testing: Getting the most out of black-box testing," in Proceedings of the 1995 International Symposium on Software Reliability Engineering, 1995, pp. 86–95.

[14] H. Yin, Z. Lebne-Dengel, and Y. K. Malaiya, "Automatic test generation using checkpoint encoding and antirandom testing," in ISSRE '97: Proceedings of the Eighth International Symposium on Software Reliability Engineering (ISSRE '97). Washington, DC, USA: IEEE Computer Society, 1997, p. 84.

[15] L. A. Clarke, "A system to generate test data and symbolically execute programs," IEEE Trans. Software Eng., vol. 2, no. 3, pp. 215–222, 1976.

[16] M. J. Gallagher and V. Narasimhan, "ADTEST: A test data generation suite for Ada software systems," IEEE Transactions on Software Engineering, vol. 23, no. 8, pp. 473–484, 1997.

[17] J. Lawrance, S. Clarke, M. Burnett, and G. Rothermel, "How well do professional developers test with code coverage visualizations? An empirical study," in VLHCC '05: Proceedings of the 2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC '05). Washington, DC, USA: IEEE Computer Society, 2005, pp. 53–60.

[18] Y. K. Malaiya, N. Li, R. Karcich, B. S. Skbbe, and J. Bieman, "The relationship between test coverage and reliability," in Proceedings of the 1994 International Symposium on Software Reliability Engineering, 1994, pp. 186–195.

[19] A. V. Aho, R. Sethi, and J. D. Ullman, Compilers: principles, techniques, and tools. Boston, MA, USA: Addison-Wesley Longman Publishing Co., Inc., 1986.
