
Testing code with Actel SDE
Jean Porcherot
Version 1.7

Revision History

Date               Version  Description                              Author
07 July 2008       1.0      First draft version                      Jean Porcherot
26 September 2008  1.1      Added 4.9, 1.2.1.3 and 4.14.6            Jean Porcherot
10 October 2008    1.2      Reviewed by Gary
22 October 2008    1.3      Update with recent mktest changes        Jean Porcherot
18 November 2008   1.4      Include feedback from software seminar   Jean Porcherot
15 January 2009    1.5      Added 'Error Checking Method' section    Jean Porcherot
27 January 2009    1.6      Added CppUnit::Exception                 Jean Porcherot
09 April 2009      1.7      Added cppunit.dll staging step           Jean Porcherot

Contributors: Jean Porcherot, Edward Reusser, Nabil Tewolde

    Location on Livelink:

    http://sv-livelink-02/actel813/livelink.exe?func=ll&objid=2230203&objAction=browse&sort=name


Table of Contents

1. Introduction
2. Quick start: How to create a new test
3. Maintaining your test
4. Tips for writing unit and integration tests
5. Using and taking advantage of your tests
6. Test Driven Development
7. Future enhancements/Roadmap


    1. Introduction

The purpose of this document is to present how to easily create, run, maintain and take advantage of testing within Actel's Software Development Environment (SDE). Any test you write should be usable within Actel's SDE. This document presents how to integrate your test into the vobs and have it executed and validated by the testing framework tools.

This document will guide you in writing, running and maintaining tests using Actel SDE.

We will not explain in detail how tests should be written, nor what technique you should use to test a specific component (white box, black box, etc.).

1.1 About software testing

Software testing is the process of checking software to verify that it satisfies its requirements and to detect errors.

    1.1.1 Software testing goals

    Writing tests is necessary to validate code.

    Goal number 1: Validate a new feature/piece of code you write

You write some new code; a new test will validate that code. This test may or may not be written by the same developer who wrote the new code. The test should cover all use cases: it validates that the new code works when it should and returns errors when it needs to.

Goal number 2: Validate a code change you make

You modify existing code (bug fix, enhancement, refactoring, etc.); the existing test will help you verify that your change does not affect previous functionality. The test may be broken by the change if you modified the behaviour of the tested component. Then, and only then, you need to fix the test so that it tests and validates the new behaviour.
Note that the test may also be broken because your change introduced a bug ;-) Check the root cause of the failure before updating the test itself.

    1.1.2 What sort of code can be tested?

Low-level C code can be tested, as can C++ code and MFC code. In general, any piece of code from any language should be testable.

    1.2 Definitions

    1.2.1 A test

We will separate tests into two categories: unit tests and integration tests.

In most cases, a unit test validates a module from a library. It's a low-level test.
In most cases, an integration test validates a functionality of a program (this may involve many modules and libraries, already tested by unit tests). It's a higher-level test.


    1.2.1.1 Unit Test

A unit test is a program that verifies that the individual units of source code are working properly. A unit is the smallest testable part of an application. In procedural programming the smallest unit may be an individual program, function, procedure, etc., while in object-oriented programming, the smallest unit is a method.

To avoid having too many different tests, we most commonly apply unit testing to a full class or to a module that may include a few classes.

In Actel's SDE, a unit test is a folder including a set of files (header and source files), exporting at least a main entry point. To run the test, the set of files needs to be compiled to produce an executable. This executable can link with the library defining the module to be tested (it could also link only with some .o files, see 4.14.2). Then, the executable can be run and its exit code will be used for test validation. If it returns 0, the test passed; otherwise, it failed. The program will provide, through its output, information explaining why it failed.

    Example:

You have a should module providing two functions: ShouldReturnTrue, supposed to always return true, and ShouldReturnFalse, supposed to always return false. If you want to write a small test program that validates the first function, you can create the test folder manually and it may contain a single file, TestAdd.cpp:

#include "sys.h"
#include "should.h"

int main(int argc, char* argv[])
{
    if ( !ShouldReturnTrue() )
    {
        SysPrintf("ShouldReturnTrue does not work");
        return 1;
    }
    SysPrintf("Test succeeded!");
    return 0;
}

    Example 1

    1.2.1.2 Integration Test

An integration test tests a set of modules that have been combined as a group (library/program).

In Actel's SDE, an integration test is a folder including a script to be executed by an existing program (libero.exe, designer.exe, etc.). This script must be in a format supported by the tested program (Tcl in most cases). It does not necessarily test a single unit; it can test the integration of several units within the application.

    Example:

    The test can include a single file, mytest.tcl containing:

    new_design -family PA

save_design -name foo.adb

    close_design

    Example 2


At Actel, we used to call the level1 tests regression tests, but note that integration tests and regression tests are not the same concept. So we will call Tcl-script-based tests integration tests rather than regression tests in this document.

1.2.1.3 Regression test

A regression test is a test that exercises pre-existing feature behavior and can be used to detect any differences in behavior. This is very useful for detecting unexpected effects (a behavior regression) of a development change.

    Regression testing applies to both unit and integration testing.

    1.2.1.4 WinRunner Test

WinRunner is functional testing software for IT applications. It captures, verifies and replays user interactions automatically, so you can identify defects and determine whether graphical user interfaces work as designed.

WinRunner tests are currently not part of the testing framework and won't be covered in this document.

    1.2.2 Test validation

    When a (unit or integration) test is run, we need an easy way to know if the test passed or failed.

For unit tests, validation is fully based on the exit code. If the test program returns 0, the test succeeded. Any test-specific validation (checking that some files have been generated, testing some outputs, comparing dumped and golden files) can be performed by the program itself, which guarantees that the exit code is the only value to be checked for validation.

For integration tests, we may want the system to do some more checking automatically. It's harder to check for outputs in Tcl than in C++. So an integration test may specify a method that should be used for validation.

For instance, the test below could be validated only if TEST_OK is found in the log file:

    new_design -family PA

    save_design -name foo.adb

    close_design

    puts TEST_OK

    Example 3

    1.2.3 Test suite

A test suite is a collection of tests. For instance, the well-known level1 is an integration test suite.

In most cases a test suite always tests the same program/library.

A unit test suite groups together unit tests for a specific library. Each test validates a module of this library, and then the whole test suite validates the whole library behavior.


An integration test suite groups together integration tests for a specific program. Each test validates a functionality of this program, and then the whole test suite validates the whole program behavior.

This is the general rule, but you may want to do this differently. For instance, the picasso integration test level includes two tests, both related to power, but not testing the same tool:
- smartpower_tcl: tests SmartPower commands using designer.exe
- vcd_flow: tests the VCD flow using libero.exe

Adding your test to a test suite makes it possible to run your test from a test runner and/or from build_top. This makes it easy for you and other developers to run the suite (which will include your specific test) and have any failure reported. If the test is not in a test suite, most likely it will never (or very rarely) be run after you submit it to the vobs. Most of us have already run level 1 tests to verify that we did not break anything when modifying some code related to Designer, but who has ever searched for other Tcl scripts that should be run manually?

A CPPUNIT test suite is not at the same level as the test suites described above. See 2.3.1 for more details.

    1.2.4 Test runner

A test runner is a tool that executes test suite(s) for you and reports which tests failed or passed.

Actel's SDE has two test runners, one for unit tests (run_tests.rb, see 5.1.1) and one for integration tests (top_regs, see 5.1.2). They both support threading and can be invoked directly from build_top with specific recipe flags (see 2.3.8 and 2.4.6).

    1.3 Success story

Many programs/libraries are already using the unit and integration tests as presented below: sgcore, sdcmds, idebase, picasso, ide.

    1.4 What should I read in this 50-page document!?

    1.4.1 You are about to write your first tests?

    Read "1.2 Definitions" and "2 Quick start: How to create a new test"When you wrote your first test, come back and move to the next steps.

    1.4.2 You already wrote a test and want to get some advices

    Read "4 Tips for writing unit and integration tests"And also "3 Maintaining your test"

    1.4.3 A test is broken in your c-set and you want to fix it

    Read "3.2 A test is broken!?". And then refer to 2.3.9 to debug a unit test and 2.4.7 for integrationtests.


    2. Quick start: How to create a new test

Creating a new test should be very easy. The purpose of this section is to guide you through the creation of a new test within Actel's SDE.

We will first tell you how to decide what sort of test you should create. Then we will explain how to create, run and debug a new test.

    2.1 Unit or integration test?

First of all, you need to decide if you will use unit or integration testing to validate your feature/functionality.

    2.1.1 Unit test

- The code you want to test is in a library
- The code does not have many dependencies on other modules/libraries
- You want to validate a single module

    2.1.2 Integration test

- The code you want to test is located in an executable (program)
- The program must support scripting
- You don't want to validate a single module but want to test a global functionality involving several modules
- The code you want to test has strong dependencies on other modules or tools for which you can't easily create mockups
- You are not interested in model content validation (model content is not easily accessible through scripts)

    2.1.3 Examples

    2.1.3.1 Component import in Project Manager

In Project Manager (Libero IDE), a component (SmartGen, IP Core) is imported through a CXF file.

- We have a unit test that validates CXF parsing; this unit test parses a CXF and checks that the model is updated correctly. As it's a unit test, it can directly access the model to validate its content. This one imports CXF files created specially to guarantee good coverage of the code.

- Then, we also have integration tests that focus on the flow and the display, for example to validate that a component can be generated in a new project, displayed in the GUI, and that simulation and synthesis pass successfully.

    2.1.3.2 SmartPower Tcl support testing

We wanted to test SmartPower command support. Command management is in a library (picbase) so we could have a unit test for it. But there was no easy way to create a mockup model (power engine): the easiest way to create such a model is to open an adb. As the commands are all scriptable, we decided to write an integration test (Tcl script) to be executed by Designer rather than a unit test. Then, the only thing we need to do to create a power engine is to open an adb.

    2.2 Where to create tests

Tests you create must be added to the vobs so that they can be run by other people (see 4.5). All afi tests should be added to the /vobs/afi/tst folder; nsrc tests should be located under the /vobs/nsrc/tst folder.

If your test is located in the right place, you can then use tcd (by analogy with acd) to find the test folder: acd idebase goes to /vobs/afi/lib/idebase, tcd idebase goes to /vobs/afi/tst/idebase.

Firstly, if you work from a snapshot view, make sure the test folder is loaded. By default, test folders are not loaded in a snapshot view. Use sv_sync -p /afi/tst/ to load the folder. If you are adding the first test for a library, congratulations! Just create the tst folder!

    2.3 New unit test

    2.3.1 About CPPUNIT

CPPUNIT is a C++ unit testing framework. This toolkit will help you write unit tests. It helps you:
- Organize your test into different sub-tests
- Validate expressions
- Provide explicit information (file/line) on failure

This requires your test program to define some CPPUNIT-derived classes.

CPPUNIT is not required to write tests within Actel's SDE (Example 1 shows a test that works within Actel's SDE without using CPPUNIT). But we strongly recommend using CPPUNIT as it makes tests more structured and easier to read.

Advantages:

- It's easier to add assertions in the code
- No need to print where the assertion is; CPPUNIT does it for you on failure
- If one test fails (one method registered through CPPUNIT_TEST), the other ones are executed anyway, and then the log may report failures in several CPPUNIT_TESTs. One test stops when the first failing assertion is reached, but the other ones are executed anyway.

Here's an interesting presentation of CPPUNIT:
http://www.slideshare.net/iurii.kiyan/cppunit-using-introduction

CPPUNIT has some objects called test suites and unit tests that are different from the ones we have in Actel's SDE.


- One SDE unit test suite lists a set of SDE unit tests (programs to be compiled and executed).
- One SDE unit test, if using CPPUNIT, defines a TestFixture-derived class; this one is instantiated by the main entry point.
- The TestFixture declares a CPPUNIT_TEST_SUITE via a macro.
- The CPPUNIT_TEST_SUITE references a set of functions; each one is a CPPUNIT test. Running the full SDE unit test program will end up calling all those CPPUNIT test functions one by one.
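As an illustration, here is a minimal sketch of what such a fixture and entry point can look like (the class and function names are hypothetical; mktest generates the real skeleton for you, see 2.3.2):

#include <cppunit/extensions/HelperMacros.h>
#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>

// One CppUnit fixture; an SDE unit test typically defines one or more of these.
class MySuite : public CppUnit::TestFixture
{
    CPPUNIT_TEST_SUITE( MySuite );     // declares the CPPUNIT test suite
    CPPUNIT_TEST( MyTestFunction );    // each registered function is one CPPUNIT test
    CPPUNIT_TEST_SUITE_END();

public:
    void MyTestFunction()
    {
        CPPUNIT_ASSERT( 1 + 1 == 2 );  // reports file and line on failure
    }
};

CPPUNIT_TEST_SUITE_REGISTRATION( MySuite );

// Main entry point of the SDE unit test: returns 0 on success, 1 on failure.
int main( int, char** )
{
    CppUnit::TextUi::TestRunner runner;
    runner.addTest( CppUnit::TestFactoryRegistry::getRegistry().makeTest() );
    return runner.run() ? 0 : 1;
}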

    2.3.2 Create a new unit test using Visual Studio 2005

There is a one-click tool that will create a new unit test for you. mktest is a utility that generates skeleton C++ source used for unit testing.

mktest will:
- Add a new folder in the vobs (under /afi/tst/<lib>, where <lib> is the name of the library containing the code you'll test)
- Add some common compilation files to it (linkfile.txt, keyinfo.txt, etc.)
- Add some source and header files
- Define a main entry point

Then, you can start writing your test code (validating expected behaviours). We recommend that you use the CPPUNIT library for this purpose.

    2.3.2.1 Installing mktest

Refer to this document to install mktest on your machine:
http://sv-livelink-02/actel813/livelink.exe?func=ll&objId=2230203&objAction=browse&sort=name&viewType=1

[Diagram: an SDE unit test suite lists SDE unit tests; each SDE unit test defines a CppUnit::TestFixture-derived class whose CPPUNIT_TEST_SUITE macro groups the individual test functions (Test function1, Test function2, ...).]

If ruby is already installed on your machine, getting mktest to work should be very easy.

    2.3.2.2 Setting your view

To use mktest you must be in a snapshot view.
$ setview

If your snapshot does not contain the /afi/tst directory, re-sync using the -p flag.
$ sv_sync -p /afi/tst/

    2.3.2.3 Running mktest

A new menu item is added to the Project menu in Visual Studio 2005: Add CppUnit Test. Clicking it should open the dialog below.

    Figure 1

This dialog allows you to specify new or existing tests. Each test has a single test runner name, which is also the project label. For example, assume the above test runner name is Test1.


The path can usually be defaulted to . for new tests. This defines the test path as vobs/afi/tst/amfc, where amfc is the module. Notice of course that for snapshots, vobs has the snapshot root dynamically substituted.

The groups are selectors. All tests are defined to belong to the group called all or all_tests. However, tests can also be specified as belonging to other groups for use in the test runner script. Groups can be entered as a comma-separated list. If the test already exists, its current groups will be expanded to include the new groups. You cannot delete associated groups from tests at this time except by editing the underlying data files.

The CPPUNIT test suite (MySuite) is a collection of single CPPUNIT unit tests (MyTestFunction is one of them), each of which is a separate function that is called by the test runner. So for example, one could specify a test suite as AmfcTooltip, which defines the suite as containing a list of tooltip tests. Then you enter a comma-separated list of new tests to be added to the suite. If the test suite already exists, the new tests are simply added.

    See 2.3.1 for more details on CPPUNIT test structure.

All files are created in the path directory, the project is created (or replaced if it already exists), and the project is loaded into the solution.

Path and Module Name are pre-populated. The only thing you need to do is enter a Test Runner Name, Test Suite and New Test name. Make sure you check Add new files to ClearCase, otherwise the test will remain local to your machine and won't be added to the vobs.

Once validated, this will create a new test folder: /afi/tst/<lib>/<lib>_<testname>; for the example above, it will be /afi/tst/sgcore/sgcore_mytest. mktest will place the generated files below in the test folder:
main.cpp - entry point for the unit test
src directory - contains the class source files
inc directory - contains the class header files
keyinfo.txt - contains library dependencies
linkfile.txt - contains the module dependencies needed to create the make file by mkmf

There are three functions in the class MySuite located in the src directory:
setUp(): define here all variables common to the set of tests
tearDown(): executed last; free here any resources allocated in setUp
MyTestFunction(): this is where the test is executed

The file /afi/tst/sgcore/sgcore_mytest/src/MySuite.cpp will include the implementation of the MyTestFunction test, part of the MySuite unit test:

void MySuite::MyTestFunction()
{
    CPPUNIT_FAIL( "not implemented" );
}

    Example 4

You now simply need to add your testing code to this function. You can use CPPUNIT macros to validate the functionalities.

Note: You can also specify a test group in Figure 1. Using a test group makes it possible to run a sub-set of tests from a test suite. See 2.3.8.

    2.3.3 Example

If we extend the test from Example 1 to also cover the ShouldReturnFalse function, you would have to write your test file manually:

#include "sys.h"
#include "should.h"

int main(int argc, char* argv[])
{
    if ( !ShouldReturnTrue() )
    {
        SysPrintf("ShouldReturnTrue() does not work");
        return 1;
    }
    if ( ShouldReturnFalse() )
    {
        SysPrintf("ShouldReturnFalse() does not work");
        return 2;
    }
    return 0;
}

    Example 5

Alternatively, you can use the mktest tool and take advantage of the CPPUNIT testing framework (and we strongly recommend that). Then, the only piece of code you need to write is:

    void MySuite::MyTestFunction()

    {

    CPPUNIT_ASSERT( ShouldReturnTrue() == true );

    CPPUNIT_ASSERT( ShouldReturnFalse() == false );

    }

    Example 6

    2.3.4 Creating new tests without Visual Studio

2.3.4.1 Using mktest

mktest uses command line parameters to configure the generated source files; it can be run from a shell, without Visual Studio. For a complete list of options run
$ mktest -h

Here is an example:
$ mktest -scc -ag my_group -at my_unit_test -af add -ac test_adder sgcore

-scc - specifies that new files must be added to the vobs
-ag - adds the test to a test group (my_group here)
-at - specifies the test name
-af - specifies the name of the member function that will be called to run the test
-ac - specifies the name of the class that will contain the test
The last argument (sgcore here) is the name of the module to be tested.

    2.3.4.2 Adding test classes and functions

mktest makes it easy to create a new test. It can also be useful for extending an existing test. Refer to the tool's help to see how to add test functions and classes to an existing unit test.

    2.3.4.3 By hand

Even if we don't recommend it, you can still create your CPPUNIT test by hand, or even write a non-CPPUNIT-based test. A unit test is just a test program with a main entry point returning 0 on success and 1 on failure. We recommend prefixing the test name with the module name; then the generated executable, when you compile, will not collide with another executable from another module.

Put the content of Example 5 in a main.cpp file, add linkfile.txt and keyinfo.txt, and make it compile. You have now created a unit test by hand.

Then, you need to add the test to a test suite (mktest does this for you automatically). A unit test suite is a testinfo.txt file, located in a library folder: /vobs/afi/idebase/testinfo.txt lists all the tests from the idebase test suite.

You need to update testinfo.txt by hand if you want to add your test to a test suite. testinfo.txt is a 3-column file: the first column is the test name, the second is the test location, and the third (optional) is the test group.

For example, for the library "base", /afi/lib/base/testinfo.txt could be:

    systest ${SDE_SRC_ROOTDIR}/afi/tst/base/systest sys

sys_recurse_copy ${SDE_SRC_ROOTDIR}/afi/tst/base/sys_recurse_copy sys
sys_recurse_rmdir ${SDE_SRC_ROOTDIR}/afi/tst/base/sys_recurse_rmdir sys

    sys_setreadonly ${SDE_SRC_ROOTDIR}/afi/tst/base/sys_setreadonly sys

    syscopy_test ${SDE_SRC_ROOTDIR}/afi/tst/base/syscopy_test sys

    defget ${SDE_SRC_ROOTDIR}/afi/tst/base/defget def

    defstringtest ${SDE_SRC_ROOTDIR}/afi/tst/base/defstringtest def

    deftabtest ${SDE_SRC_ROOTDIR}/afi/tst/base/deftabtest def

    deftest ${SDE_SRC_ROOTDIR}/afi/tst/base/deftest def

    filtest ${SDE_SRC_ROOTDIR}/afi/tst/base/filtest

    Example 7

Then, the base test suite has 10 tests (all lines from the file).
The sys test group of the base test suite has 5 tests.

Groups are handled by mktest; see the group field in Figure 1.

    2.3.5 Validating the test

    The test program (main entry point) generated by mktest will return 0 on success and 1 on failure.

Whatever needs to be done to validate your test must be part of the test itself. If your test validation is done by comparing dumped files to golden files, write a C++ function doing the comparison and make your test fail through a CPPUNIT assertion.
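As an illustration, here is a minimal sketch of that idea; the file names (report.txt, report.golden) and the ReadFile helper are hypothetical, only the CPPUNIT macro is real:

#include <fstream>
#include <sstream>
#include <string>

// Read a whole file into a string; returns an empty string if the file is missing.
static std::string ReadFile( const std::string& path )
{
    std::ifstream in( path.c_str() );
    std::ostringstream out;
    out << in.rdbuf();
    return out.str();
}

void MySuite::MyTestFunction()
{
    // ... run the code under test so that it dumps report.txt ...

    const std::string dumped = ReadFile( "report.txt" );
    const std::string golden = ReadFile( "report.golden" );

    // On failure, CPPUNIT reports the message together with file and line information.
    CPPUNIT_ASSERT_MESSAGE( "dumped file does not match golden file",
                            !golden.empty() && dumped == golden );
}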

    2.3.6 Running the test

To run your unit test, you will first need to compile and stage it. A unit test is just a program and can be compiled like any other Actel program (sumatra, flashpro, ide).

Once it's compiled and staged (use 'mk mod' to build and stage), you can directly run the executable and it will output messages to the console.

With the examples above, mk mod should generate sgcore_mytest.exe in your staging area. Running it will output:

    Figure 2

    If the test fails, for instance if you wrote your test function as below:

    void MySuite::MyTestFunction()

    {

    CPPUNIT_ASSERT( ShouldReturnTrue() == true );

    CPPUNIT_ASSERT( ShouldReturnFalse() == true ); // this fails

    }

    Example 8

    Running it will output:

    Figure 3

Note that CPPUNIT tells you in which file and at which line the error occurred. No need to do SysPrintf and returns as you would have done in Example 5.

CPPUNIT requires the cppunit_dll.dll file (cppunitd_dll.dll with M_DEBUG) to be accessible. Those files are in alien/ms/bin. You may need to stage them in case this folder is not in your path when running the unit test.

    2.3.7 Adding the test to a test suite

When you asked Visual Studio to create the new test mytest, it was automatically added to the sgcore test suite. This makes it possible to run your test through a test runner.


A recommendation is that one test suite from a library should only need this library to be compiled and staged. Note: it's not the case for idebase tests ;-) That suite needs @libero_hedwig.txt as some tests call libero.exe internally to run some Tcl script on it (in a way, we have integration tests embedded in unit tests).

    2.3.8 Running a unit test suite with build_top

To have build_top compile, stage, run and validate your test, just use these recipe flags:

DO_UNIT_TEST=true
UNIT_TEST_LEVELS=all@sgcore
UNIT_TEST_THREADS=3

This will run all tests from sgcore, including the new one you recently added. The system will use 3 threads and will run one unit test in each one.

    Figure 4

If your test suite has groups, you can specify a group name to be run: mygroup@sgcore rather than all@sgcore. Instead of running all unit tests from sgcore, only the ones from the group mygroup will be run.

    2.3.9 Debugging the test

When creating your Visual Studio solution, you can add a -i parameter to include all tests. "mksln -i sgcore" will create an sgcore solution with all the sgcore unit tests loaded in it.

    Then, you can use the test program like any other program; you can build it and run it from VisualStudio.


    Figure 5

If mktest is integrated in Visual Studio on the machine, tests can also be loaded later from Visual Studio (no need to use the -i parameter when creating the solution).

    Figure 6

CPPUNIT_ASSERT macros raise exceptions. The program will not break on failure, unless you ask Visual Studio to do so.

Go to Debug > Exceptions. From the dialog, add a new C++ exception: "CppUnit::Exception". Check the "Thrown" column.

    Now, the program will break on failure.

    2.4 New integration test

Scripts for integration tests can be written in any language supported by the program you want to test. At Actel, we mainly support Tcl, so we'll assume in this section that we are writing our integration test in Tcl.

If this script needs to be executed by a new program you are creating, this new program must support script parameters on its command line. Common script parameters for Actel software are:
- script:
- logfile:
- script_args:
- CONSOLE_MODE:show|hide
- SCRIPT_MODE:query|batch|startup

Any program using PrgbaseApp as its CWinApp-based class will support those parameters automatically (this is the case for libero and flashpro). Designer supports them too. In the examples below, we will consider that your test needs to be executed by the Designer software.

    2.4.1 Creating a new integration test

    You simply need to create a new folder with a top-level Tcl file to be executed.

Your Tcl script can execute any Tcl command (see http://www.tcl.tk/man/tcl8.4); those can help with accessing/deleting/modifying/comparing files, calling system functions, and doing other operations. Your Tcl script will also call some Tcl commands from your program to validate its behavior.


    2.4.2 Example

Here is a very simple integration test that could be run by Designer. This script can be located under /afi/tst/designer/mytest:

# Tcl command, remove any remaining adb from a previous run:
file delete -force foo.adb

# Starting test:
new_design -family PA
save_design -name foo.adb
close_design

puts TEST_SUCCEEDED

    Example 9

    2.4.3 Validating the test

Note that, for Actel tools at least, a Tcl script execution will stop on failure. This means that you don't need to write much code to validate command execution. The test from Example 9 will guarantee that new_design, save_design and close_design work. When running the test, if the last line of the output is TEST_SUCCEEDED, you can consider that the test passed.

    2.4.4 Running the test

An integration test can be run either from the GUI or in console mode.

    2.4.4.1 Run the test from the console

To run the test from the console, just specify the test script on the command line:

    $ALSDIR/bin/designer.exe script:mytest.tcl logfile:mylog.txt

Then, check mylog.txt to see whether the test passed or not.

You may also pass arguments to the script (4.15.5 explains why you may want to do this):
$ALSDIR/bin/designer.exe script:mytest.tcl logfile:mylog.txt script_args:"test1"

On PC, when the script is executed, the output goes to the log file you specified but your console is frozen. You don't get the output in real time and have to wait for the test to complete and the log file to be written before you have any information on the test execution status.
You can specify CONSOLE_MODE:show on your command line. Then, a new console will be opened and will display all the output in real time (as the Designer GUI would show it in the log window).

SCRIPT_MODE is used on the command line to specify how the script should be executed:
SCRIPT_MODE:batch (default) - no GUI is opened, the script is executed in batch mode
SCRIPT_MODE:startup - the Designer GUI opens and then executes the script. You can see the output in the log window and you don't need the CONSOLE_MODE parameter.
SCRIPT_MODE:query - the Designer frame is created but not displayed; this is useful if you want to test its content (see 4.15.6)


    2.4.4.2 Run the test from GUI

You can also start the Designer GUI, and then ask it to run the script directly from there (this is equivalent to using SCRIPT_MODE:startup).

    Figure 7

    Script arguments can be specified from this dialog.

    2.4.5 Adding the test to a test suite

An integration test suite is what we commonly call a regression level at Actel. Integration test suites are *.lst files; they are commonly stored in /vobs/test/reg_test/lists.

A lst file is just a list of the folders where integration tests are located.

For example, reglev1_ms.lst is:

    /vobs/test/reg_test/testcases/lev1/AGLP030V5_FullyBonded_miro_commands

    /vobs/test/reg_test/testcases/lev1/A3P030_100_VQFP_USLICC_UJTAG

    /vobs/test/reg_test/testcases/lev1/RTAX1000S_top_edac1

    ...

    Example 10

In each folder, the test runner will be looking for a "readme" file. This file gives information on how the test needs to be executed and validated.

Let's come back to Example 9. We wanted to have a very simple script, executing 4 commands and then displaying "TEST_SUCCEEDED". This script should be executed by Designer. The test is validated by the presence of "TEST_SUCCEEDED" in the log file.

    Here is how to integrate Example 9 in a test suite:

    - We already created the script file /vobs/afi/tst/designer/mytest/mytest.tcl

- In this folder, you will add a 'readme' file explaining how to run and validate the test:

Error Checking Method:exp

    EXECUTABLE=designer

    EXEC_PARAMS=script:myscript.tcl mdbtestlog:mylog.log

    Example 11

- 'Error Checking Method:exp' means that your test is validated by expression checking in the program output (mylog.log). Then, in the same test folder, you will add a myscript.exp file to tell which expression should be searched for. This file will contain a single line:


    TEST_SUCCEEDED

    Example 12

See 4.15.8 for more information on the Error Checking Method flag.
- Then, you can add the test folder to an existing lst file, or create a new one. We will add it to a new test suite; we will create the /vobs/test/reg_test/lists/reglevexample.lst file, containing:

    /vobs/afi/tst/designer/mytest

    Example 13

You have added your test to a suite. It can now be executed and validated by a test runner.

Note that the integration test runner supports many kinds of validation (errLog, heflow, mvn_exp, exp, rio_exp, mult_exp, fus, diff_fus, etc.). Refer to the /vobs/rtools/scripts/top_regs script to see what is supported (is there any documentation on Livelink for this? If someone knows, please advise).

    2.4.6 Running an integration test suite with build_top

To have build_top run and validate your test, just use these recipe flags:

DO_TEST=true
TEST_LEVELS=example
TEST_THREADS=3

This will run all tests from /vobs/test/reg_test/lists/reglevexample.lst. The system will use 3 threads and will run one integration test in each one.

The output directory for integration tests can be specified in the recipe with TEST_DIR. If not specified, it will be $SDE_LOCAL_ROOTDIR/tst (if the SDE variable is set).

    Refer to build_top documentation for more information on those options.

    Figure 8

    2.4.7 Debugging the test

To debug your test, you can simply repeat 2.4.4.2 from Visual Studio.


If you want to run the test directly from the debugger, follow the steps below.
You first need to know how the test is supposed to be launched (simply look at the 'readme' file from the test folder if there is one, see 2.4.5). It will tell you which program is used and which arguments are passed to it. You then need to:

- Create a solution for the program used by the test
- Configure the project to use the right arguments for debugging
- Copy the test folder somewhere
- Run the program from this copy of the test folder

Figure 9

Figure 9 creates a copy of the components test folder and shows how to run it.

    Figure 10

    Then, pressing F5 will start the test in the debugger.

    If the test runs sub-tests, see 4.15.5 to make it easy to debug only one sub-test.

If one Tcl script runs the same action several times and one of the runs is failing, you may have a hard time debugging. If the script is, for instance:

open_design foo.adb
compile
compile
compile
compile
compile
compile

    Example 14

If the fourth call to compile fails, you will need to put a breakpoint in the code executing compile to debug it, and you will have to ignore the first 3 breaks. In this case it can be nice to add "breakpoint" Tcl support in your program. This breakpoint Tcl command does nothing, but it is supported so that you can put a debugger breakpoint in it. Then, you can update the script as below:

open_design foo.adb
compile
compile
compile
breakpoint
compile
compile
compile

    Example 15

Then, you put a Visual Studio breakpoint in the "breakpoint" command execute method (see Figure 11); once it is reached, you can set the Visual Studio breakpoint in the compile function and hit F5 to reach it, as this is the call that will fail.
Note that if your program is based on the Acmd system, there is a built-in "breakpoint" command available. See the Acmd documentation on Livelink:
http://sv-livelink-02/actel813/livelink.exe?func=ll&objId=1886405&objAction=browse&sort=name&viewType=1
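If your program is not based on Acmd but embeds a plain Tcl interpreter, a no-op breakpoint command can also be registered through the standard Tcl C API. This is only a sketch; where to call the registration depends on how your program initializes its interpreter:

#include <tcl.h>

// No-op Tcl command: its only purpose is to give the debugger a place to stop.
static int BreakpointCmd( ClientData, Tcl_Interp*, int, Tcl_Obj* const [] )
{
    return TCL_OK;   // put the Visual Studio breakpoint on this line
}

// Call this once, wherever your program creates its Tcl interpreter.
void RegisterBreakpointCommand( Tcl_Interp* interp )
{
    Tcl_CreateObjCommand( interp, "breakpoint", BreakpointCmd, NULL, NULL );
}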


    Figure 11

    2.5 Debugging a test under Linux

No real tricks here, sorry, but you'll definitely need to use gdb ;-)

You just need to compile the program and execute it from gdb with the correct parameters (from readme or arglist.txt).

Tip: Under Linux, when you run "designer" from your staging area, the "designer" file is actually a script that sets up your environment and then invokes the real binary "designer_bin". In most cases, running designer_bin directly won't work, and gdb cannot be run on "designer" as it requires a binary file. To fix that, edit the "designer" file and change its last line so that it invokes gdb instead of designer_bin:

    Replace: "$exedir/$exename "$@""With "\gdb $exedir/$exename "$@""


    3. Maintaining your test

You implemented a new feature and you wrote a new test to validate this feature. The test passes; that's perfect.

Now, what's next? You must make sure that this test is maintained. Make sure that, two years from now, when you modify some code, you are still able to run the test and validate that what you did does not break the feature you implemented today. This means that you must force yourself to run this test from time to time and possibly update it if needed, so that it continues working in the future.

    3.1 Test must be maintained across all releases

The only reason why a test can start failing and you don't want to fix it is if the feature it tests was abandoned and is not supported by the software anymore. Then, you can even consider removing the test for good from the vobs (or at least de-referencing it from any test suite).

    3.1.1 New functionality

When adding a new functionality to a module/class, you should add the corresponding validation test code to the module's test if there is one (if there's no test for this module, maybe it's a good opportunity to create one!).

The developer who wrote the test originally probably tried to get good coverage of the module; he made sure there's no way to make this module misbehave without breaking the test. If you add new code but don't update the test, you add some untested code to the module and make it more open to new bugs.

    3.1.2 Bug fix

When fixing a bug, there is a high probability of introducing a new problem. By following the process below, you can take advantage of a test to reduce this risk.

You have a module implementing some functionality. You've been assigned to fix a reported bug in that code module, and you're about to fix it. There is a test for this module, so you would really appreciate it if this test could validate the piece of code you are about to change. If the test is working, then you can expect it to detect any new bug your change may introduce (or any existing behaviour you may break).

How do you know if the specific piece of code you are about to modify is actually validated by this test? Maybe the test has poor coverage (see 3.1.1) and does not even go through your code. Here's a simple tip to check this:

- Before doing anything, run the test and verify that it passes
- Comment out the piece of code you are about to modify
- Run the test again

(1) If it passes, this specific piece of code is not tested. You may want to extend the test so that it covers and validates this piece of code before you change anything.

(2) If it fails, the test validates what this specific piece of code is doing; you are lucky. Don't forget to restore the old code you commented out. The fact that the test does validate this code does not guarantee that any bug you may introduce will be detected, but there is at least a chance of it being detected.

    You can now modify the code to fix the bug and run the test again to validate your change.

Use Test Driven Development:
If the test passed (1), you may ask yourself: "why does this test pass when there's actually a bug in the code?" The answer is "because the test does not validate the functionality correctly".
If the test failed (2), you will have to write a test for this piece of code.
In both cases, it may be interesting to do some test driven programming: you can add a new test case to the test program or script that detects the bug you are about to fix. This will make the test fail temporarily. Then you can safely fix the bug; if the test starts passing again, it means you fixed the bug correctly.

    3.2 A test is broken!?

At some point, a test may be broken. There may be several reasons:
- Your environment is not set correctly, so the test is not executed properly
- Someone (other than you) broke the test
- The changes you made in your change-set broke the test

The end result is that a test "seems" to be broken in your c-set. Here are some tips to get out of this situation.

    3.2.1 Did I break it?

Firstly, you must determine whether the changes you made in your c-set are the root cause of the failure. This is by far the most important thing to do before even trying to debug the test.

Was the test broken by another developer in another c-set?

Try to run the test from a c-set other than the one you experienced the failure with.
If it does fail from another c-set, most likely someone else broke the test. It does not mean for sure that your changes do not affect the test. One test may fail due to two different bugs (one introduced by another developer and one introduced by your c-set). As a test stops on failure, the fact that it fails means that not all test cases are fully executed, so maybe a part that would identify a bug in your code is not executed anymore. Ideally, you should wait until the other developer fixes his bug, so that you can see the test passing in your c-set, and then submit. But in most cases you won't do that and you'll submit. Then, the other developer will have to deal with the bug you introduced to make the test pass again.
If it does not fail from another c-set, most likely your changes are causing the failure; then you can and need to debug the test.

A recommendation is to always run tests after you create or update a c-set. Then, you know if the tests are supposed to pass, and, if they start failing, you know whether it's due to changes you made in your c-set.

    3.2.2 What should I do then?

Once you have identified that "a test is broken" by your changes (i.e. "you broke the test"), there may be two reasons:

- You introduced a bug in the code
- The test itself has a bug


Of course the test may have a bug (reason 2). The test is a piece of code (C++ or Tcl), so it may contain bugs; it can possibly report a failure when it should report a successful run. But that's probably the case in less than 5% of the cases, so don't spend too much time debugging the test code itself; focus on the code changes you made and this will most likely be the easiest way to solve the problem.

In some cases, if you did not write the test and if the changes you made in your c-set are easy to undo, it may be easy to comment out all your changes, check that the test passes again, and then uncomment your changes one by one to find out which one makes the test fail. From experience, I can tell you that this will probably take you less than 30 minutes to find which change introduced the bug, against one day to debug the test if you are not familiar with it.

    3.2.3 Conclusion

Our recommendation is to run tests as often as possible. You're about to change something that may affect a test: run the test before and after your change; then it's easy to identify whether a change breaks the test or not. Testing should be integrated in your development process; you need to plan time for it and expect to spend some time writing or fixing tests.

We should never say "I've been working on this c-set for 2 weeks now, I need to submit today and I just noticed the tests are failing. I don't have time to fix them". If testing is part of your development process and schedule, you should not end up in such a situation.


    4. Tips for writing unit and integration tests

This section lists some tips for writing efficient tests that are easy to debug and maintain. It first gives tips that apply to both unit and integration tests and then covers tips more specific to each one (4.14 and 4.15).

    4.1 Negative testing

Negative testing is testing the tool/module with improper inputs (through a unit or integration test). It's a way to validate how a module/program behaves at and beyond its limits. It tests the robustness of the code.

The test done in Example 3 only tests the good flow. It checks that a design can be created, saved and then closed. The program should not crash if we save before we create a design, or if we forget the file name in the save_design command. Those corner cases should be tested too.

Here is an example of negative testing, using a Tcl integration test:

    if { [catch {save_design} ] } {

#Command failed -> that's expected

    puts "save_design failed because no design is opened, that's OK"

    } else {

    puts "Error: save_design succeeded...it shouldn't."

    exit

    }

    new_design -family PA

    if { [catch {save_design} ] } {

#Command failed -> that's expected

    puts "save_design failed due to missing arguments, that's OK"

    } else {puts "Error: save_design succeeded...it shouldn't."

    exit

    }

    save_design -name foo.adb

    close_design

    puts TEST_OK

    Example 16

If executed, this will be the output of the program (red highlights messages coming from the program being tested, blue highlights messages coming from the Tcl script itself):

Error: No design loaded
save_design failed because no design is opened, that's OK
New design created
Error in save_design command: Missing design name
save_design failed due to missing arguments, that's OK
Saved design foo.adb
Closed design
TEST_OK

When debugging a test because it does not end correctly (i.e. if it fails with a non-expected failure), you must read the log from bottom to top in order to find the latest failure, which will most likely be the significant one.

    For instance, if the last call to close_design does not work anymore, the output will be:

Error: No design loaded
save_design failed because no design is opened, that's OK
New design created
Error in save_design command: Missing design name
save_design failed due to missing arguments, that's OK
Saved design foo.adb
Error: Unable to close design          <- This is why the script fails.

If reading from top to bottom, you will see the "No design loaded" error first, but this one is expected and must be ignored.
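The same idea applies to unit tests: feed the module invalid input and assert that it reports the error instead of crashing. Here is a hedged sketch; ParseConfigFile and its error behaviour are hypothetical, only the CPPUNIT macros are real:

void MySuite::NegativeTests()
{
    // Hypothetical API: parsing a file that does not exist must fail cleanly.
    CPPUNIT_ASSERT( ParseConfigFile( "does_not_exist.cfg" ) == false );

    // Hypothetical API: an empty file name must be rejected as well.
    CPPUNIT_ASSERT( ParseConfigFile( "" ) == false );

    // If the code under test reports errors through C++ exceptions instead,
    // CPPUNIT_ASSERT_THROW( ParseConfigFile( "" ), std::invalid_argument )
    // checks that the expected exception type is raised.
}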

    4.2 Test sizes, dependencies and runtime

Try to make tests as short as possible. You must cover as much source code as possible with a test program or script that is as small as possible.
There is no need to cover and validate the same functionality 10 times in 10 different manners. This slows down testing, makes it hard to debug tests and makes it hard to maintain the test when a behavior changes in the tested module.

But it's also acceptable to have very long tests in some cases. There are two kinds of tests:

- Short tests that should run very quickly because you want them to be part of the development process (like the integration tests performed by 24_7 or unit tests you want to run on every change-set you submit)

- Long tests that do more sanity testing and will probably only be run once before each code freeze to check that a functionality is not broken. For instance, Project Manager has an integration test for Precision: for each package of each die of each family, it runs synthesis and verifies that it works fine (to detect problems with die/package mapping between Actel software and the OEM software). This test runs in more than 4 hours, but we only run it when we get a new version of Precision.

    4.3 Test dependencies

Try not to have dependencies between tests. If one test runs several test cases (or sub-tests), make sure that they are all independent and that one test does not reuse data or results from the others. If one test case fails, you want to be able to comment out all the other ones temporarily so that you can easily focus on the failing one during debugging. If the failing one uses results from another one, you won't be able to do this.

    4.4 Follow coding guidelines

When we write code for production, we try to follow guidelines. Those guidelines also apply to the code we write for tests: it must be commented, clear, easy to maintain and understand.

This also applies to scripts; you can use Tcl procedures, variables and includes to split your script code into different files.


    if ( mylibASK_QUESTION_QRY( NULL, MdbNO ) == MdbYES )

    {

    DoSomething();

    }

    Example 18

If executed from the GUI, the user will see a Yes/No popup asking him a question. The default answer is "No", and if the user answers "Yes", then DoSomething() will be executed.

If executed from an integration test (Tcl) or a unit test, where no query box can be displayed, the default answer is picked automatically by the message system and "No" is considered as the answer.

If you want to test the call to DoSomething() from a test, here's how to do it. Update the code to be:

MdbQuery_result default_res = MdbNO;

int bTestingQuery = 0;
DefGetBool( "TESTING_DO_SOMETHING", &bTestingQuery );
if ( bTestingQuery == 1 )
    default_res = MdbYES; // for testing

if ( mylibASK_QUESTION_QRY( NULL, default_res ) == MdbYES )
{
    DoSomething();
}

    Example 19

Then, set the TESTING_DO_SOMETHING def variable to 1 from your unit test (using DefSet) or from your integration test (using a Tcl command your program supports to change a def variable, or by setting it directly on the command line).

    4.10 Write 100 tests or 1 test with 100 sub-tests.

    When you will have to test 100 functionalities of a new module, you will have to take this decision:should you write 100 different testsor should you write a single test with 100 test cases in it(100 sub-tests, but we will have only one unit or integration test).

    It simply depends what's the cost of setting up the model for your tests.- If you dont need to load or create any design for your test, if the test can be performed

    without any big engine initialization, then, you can create a 100 different tests.- If you need a design to be loaded, models to be initialized, then repeating this test set-up

    100 times will have a real runtime cost for your tests. So then, you may prefer to write a

    single test with 100 sub-tests working on the same model you set-up once only.

    We will take an integration test as an example:/vobs/test/reg_test/testcases/levpicasso/smartpower_tcl

    This test was written to test all the power Tcl commands from SmartPower. We support more than 50 different commands. SmartPower needs place and route to be completed, so we must either open or create a design before we can execute a single command; this would probably take several seconds.


    If we created 50 tests, one per command, running them all would spend several minutes just setting up the design. So we decided to create a single test: it sets up the design and then runs 50 sub-tests on it, one per command. We then spend only a few seconds setting up the design (create, compile, run place and route). To keep all sub-tests independent (see 4.3), we restore SmartPower (rather than commit) between two sub-tests.

    Debugging such a big test with 50 sub-tests can be a pain. 4.15.4 and 4.15.5 show how to make it easy to isolate one sub-test and run only that one rather than all of them.

    4.11 Check coverage

    Once you have written the test for a module, running Pure Coverage (or any other coverage tool) with your test (unit or integration) is a good way to see whether your test covers all the code of your module.

    4.12 OEM tools dependency

    Your test may need some OEM tools to be installed (Synplify, ModelSim...). In that case, make sure you add to the test folder a script file that developers can run to set up their environment so that the tools can be found and run (set LM_LICENSE_FILE and possibly PATH environment variables).

    Note that such scripts already exist for all OEM tools integrated in Project Manager (i.e. Libero IDE). They are located here:
    Workstation (Linux) version: /vobs/afi/prg/ide/reg_tests/common/setenv.unix.scr
    Windows version: /vobs/afi/prg/ide/reg_tests/common/setenv.pc.ksh

    Those will update your PATH so that it finds the latest version of each tool available (Precision, Leonardo, Synplify, ModelSim, WFL).

    4.13 Disabling a test temporarily

    When one test is broken and you "don't have time to fix it", you may try to find some time to do it anyway ;-)

    Now, if you really can't fix it because you know your c-set breaks something and you plan to make it work again in a later iteration, you can possibly remove the test temporarily. But then you must understand that the test needs to be restored as soon as possible; this should become a very high priority, because other functionalities validated by the test may be unintentionally broken too (by you or by other people), and then you may have a very hard time enabling the test again later.

    For instance, one test validates three functionalities of the code (from the same module, so they have been put in the same test): FuncA, FuncB and FuncC. You know your c-set breaks FuncA and plan to make it work again later, so you disable the full test. Unfortunately, your c-set also breaks FuncB, but you did not know it (you expected the test to fail anyway... how would you know whether it failed because FuncA is broken, or FuncB, or both?). Moreover, in parallel, another developer breaks FuncC; as you disabled the test, he did not notice he introduced a bug, and he submitted. One week later, you fix FuncA in a c-set and try to enable the test again... we wish you good luck figuring out why it keeps failing and fixing both FuncB and FuncC.

    Conclusion:
    - Having three tests (one per functionality) is recommended.
    - If you have one single test, see if you can disable FuncA testing and keep FuncB and FuncC testing rather than disabling the full test.

    4.14 Unit testing tips

    4.14.1 Testing non-exported code

    If you want to test a class, but this one is only exported through an interface:

    mylib/ifc/MylibClassIf.h is:

    class MylibClass

    {

    public:

    static MylibClass& GetInstance();

    virtual void DoSomething() = 0;

    };

    Example 20

    mylib/inc/MylibClassImpl.h is:

    class mylibClassImpl : public MylibClass

    {

    public:

    mylibClassImpl () { m_bDidSomething = false; }

    virtual void DoSomething() { m_bDidSomething = true; };

    virtual bool SomethingWasDone() { return m_bDidSomething; }

    private:

    bool m_bDidSomething;

    };

    Example 21

    mylib/src/MylibClassImpl.cpp is:

    MylibClass& MylibClass::GetInstance()

    {

    static mylibClassImpl impl;

    return impl;

    }

    Example 22

    From your test, you'd like to call DoSomething() and then verify that something was done (by calling and checking SomethingWasDone()). But SomethingWasDone() is not part of the interface and therefore can't be accessed from your executable.

    One solution to this problem is to extend the interface, but you don't want to do that. You can easily include MylibClassImpl.h, using #include "../inc/MylibClassImpl.h", but using mylibClassImpl won't work anyway, as this class is not exported by mylib (because its name starts with a lower-case letter; if it were named MylibClassImpl, it would be exported).

    Then, the only solution is to make your executable link directly with MylibClassImpl.o. This will make it possible for your test to use the mylibClassImpl class.

    You will add a testlibinfo.txt file in your test folder. This one will contain:
    MODULE = mylib
    OBJECTS = MylibClassImpl

    Then, your executable will link with MylibClassImpl.o and any object from MylibClassImpl.h will become available from your test. You can then write:

    #include "../inc/MylibClassImpl.h"

    void TestRunner::MyTest()

    {

    mylibClassImpl* impl = (mylibClassImpl*) &(MylibClass::GetInstance());

    CPPUNIT_ASSERT( !impl->SomethingWasDone() );

    MylibClass::GetInstance().DoSomething();

    CPPUNIT_ASSERT( impl->SomethingWasDone() );

    }

    Example 23

    To avoid the cast, you can also make it possible to retrieve the implementation from outside:

    mylib/ifc/MylibClassIf.h is:

    class mylibClassImpl;

    class MylibClass

    {

    public:

    static MylibClass& GetInstance();

    static mylibClassImpl& GetInstanceImpl();

    virtual void DoSomething() = 0;

    };

    Example 24

    mylib/src/MylibClassImpl.cpp is:

    mylibClassImpl& MylibClass::GetInstanceImpl()

    {

    static mylibClassImpl impl;

    return impl;

    }

    MylibClass& MylibClass::GetInstance()

    {

    return GetInstanceImpl();

    }

    Example 25

    This makes mylibClassImpl visible from the interface only as a forward declaration, so it does not really expose it, nor does it add any compilation dependency.


    4.14.2 Minimize dependencies

    If the module you are testing does not depend on other modules of your library, you may omit the library itself from linkfile.txt and only list the .o files you need in testinfo.txt (see 4.14.1).

    Then, your test executable does not need the library to be fully compiled and up to date. It will only link with the .o files, and this can save compilation time for your test program.

    4.14.3 Create mock up objects/models/data

    If the module you want to test needs some data, you may need to create mock-up data within your test program. When doing this, you should minimize the amount of code you duplicate from the library that creates the real data when modules are integrated together.

    For instance, when trying to test the power or timing engine from a unit test, you may need to create a gdev instance using afl/adl files. This makes it possible to initialize models without needing to load an adb for real.

    4.14.4 Create mock up listeners/views

    If a model sends events/notifications to other systems, it may be interesting for you to test those. You may consider creating a mock-up event listener that will check that events are propagated correctly and that the listener is in sync with the model.

    This is the code you want to test and validate:


    class IntView

    {

    public:

    virtual void AddObjectDisplay( int iObject ) = 0;

    virtual void RemoveObjectDisplay( int iObject ) = 0;

    };

    class IntContainer

    {

    public:

    void Init()

    {

    for ( int i = 0 ; i < 100 ; i++ )

    {

    m_vObjects.insert( i );

    m_pView->AddObjectDisplay( i);

    }

    }

    void Clear()

    {

    while ( !m_vObjects.empty() )

    {

    m_pView->RemoveObjectDisplay( *m_vObjects.begin() );

    m_vObjects.erase( m_vObjects.begin() );

    }

    }

    void SetView( IntView* pView ) { m_pView = pView; }

    const std::set<int>& GetObjects() { return m_vObjects; }

    private:

    std::set<int> m_vObjects;
    IntView* m_pView;

    };

    Example 26

    Here is what your test program could look like:


    class MyView : public IntView

    {

    public:

    MyView( IntContainer& cont ) : m_container(cont) {}

    virtual void AddObjectDisplay( int iObject )

    {

    CPPUNIT_ASSERT(m_vDisplayed.find(iObject) == m_vDisplayed.end());

    m_vDisplayed.insert( iObject );

    Validate();

    }

    virtual void RemoveObjectDisplay( int iObject )

    {

    CPPUNIT_ASSERT(m_vDisplayed.find(iObject) != m_vDisplayed.end());

    m_vDisplayed.erase( iObject );

    Validate();

    }

    void Validate(){

    CPPUNIT_ASSERT(m_vDisplayed == m_container.GetObjects());

    }

    private:

    IntContainer& m_container;

    std::set<int> m_vDisplayed;

    };

    void TestRunner::MyTest()

    {

    IntContainer container;

    MyView view( container );
    container.SetView( &view );

    container.Init();

    view.Validate();

    container.Clear();

    view.Validate();

    }

    Example 27

    4.14.5 Create static members to test calls

    4.14.4 shows how to set up your own listener to check that events are propagated correctly. If the module you want to test notifies another module and does not let you change the listener itself, you may need the trick below to validate that the notification is done.

    Here's the code you want to test:


    #include "MylibClassIf.h"

    class IntContainer

    {

    public:

    void Init()

    {

    for ( int i = 0 ; i < 100 ; i++ )

    {

    m_vObjects.insert( i );

    m_pView->AddObjectDisplay(i);

    if ( i % 2 == 0 )

    MylibClass::GetInstance().EvenNumberFound( i );

    }

    }

    };

    Example 28

    The Init method now notifies MylibClass when an even number is loaded.

    We want to validate that EvenNumberFound is called when i is an even number. Unfortunately, we can't easily have IntContainer call a mock-up MylibClass instance. Instead, we can add a static member in mylibClassImpl to track accesses to EvenNumberFound. This has a low impact on code complexity, memory usage and runtime, and will allow us to verify the behavior we expect.

    We will then need to modify mylibClassImpl:

    mylib/inc/MylibClassImpl.h is:

    class mylibClassImpl : public MylibClass

    {

    public:

    ...

    static std::set<int>*& GetEvenFoundDebug()

    {

    static std::set<int>* debug = NULL;

    return debug;

    }

    void EvenNumberFound( int i )

    {

    SysPrintf("FoundEvenNumber");

    DoSomething();

    if (GetEvenFoundDebug())

    GetEvenFoundDebug()->insert(i);

    }

    ...

    };

    Example 29

    Then, we can test that mylibClassImpl has been notified as expected:


    void TestRunner::MyTest()

    {

    IntContainer container;

    std::set<int> my_debug;

    MylibClass::GetInstanceImpl().GetEvenFoundDebug() = &my_debug;

    CPPUNIT_ASSERT( my_debug.size() == 0 );

    container.Init();

    CPPUNIT_ASSERT( my_debug.size() == 50 );

    MylibClass::GetInstanceImpl().GetEvenFoundDebug() = NULL;

    }

    Example 30

    4.14.6 Implement a GUI unit test

    A unit test is a program linking to some libraries to be validated. The unit tests we presented above are supposed to run automatically and return 0 or 1 depending on whether they pass or fail.

    If you want to validate that a GUI component works fine (a grid, tooltip, dialog or control base class you implemented), you can also consider having a unit test program that opens a GUI and lets you do the component validation by hand. We have this sort of test for amfc; for instance, /afi/tst/amfc/cpptooltip is a GUI program: if you compile and run it, it will open the GUI below so that you can play with the amfc tooltip class.

    Figure 12

    Such tests must not appear in your unit test suite, as they cannot be automatically run and validated by a test runner.


    4.15 Integration testing tips

    4.15.1 Negative testing, use quit, rather than exit

    As script execution stops on failure, you need to use the catch statement if you want to run a command that is supposed to fail.

    if { [catch {save_design} ] } {

    # Command failed -> that's expected

    puts "save_design failed because no design is opened, that's OK"

    } else {

    puts "Error: save_design succeeded...it shouldn't."

    quit

    }

    new_design -family PA

    if { [catch {save_design} ] } {

    # Command failed -> that's expected

    puts "save_design failed due to missing arguments, that's OK"

    } else {

    puts "Error: save_design succeeded...it shouldn't."

    quit

    }

    save_design -name foo.adb

    close_design

    puts TEST_OK

    Example 31

    On failure, we prefer to do a quit rather than an exit. quit is not a valid Tcl command, so calling it will fail and end up exiting cleanly. exit asks to exit the software instantly, and this may not write the log file in some cases.

    4.15.2 Don't catch all commands

    Many old integration tests catch every command:

    if {[ catch {import_source -format "edif" top_edac1_ax1000s.edn }]} {

    puts "IMPORT FAILED"

    } else {

    puts "IMPORT_EDIF_SUCCESSFULL"

    }

    Example 32

    Because script execution exits on failure, the script below is probably easier to read, will give the same output on success, and will exit on failure.

    import_source -format "edif" top_edac1_ax1000s.edn

    puts "IMPORT_EDIF_SUCCESSFULL"

    Example 33


    4.15.3 Organize your code

    Use Tcl procedures (proc) to factor out common operations and avoid having a big, unorganized test with lots of duplicated statements in it; see the sketch below.
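    For instance, the expected-failure pattern from Example 31 can be wrapped in a procedure and reused. This is only a sketch; the must_fail procedure name is made up for illustration.

    # wrap the "command must fail" pattern so it is not duplicated in the test
    proc must_fail { cmd msg } {
        if { [catch $cmd] } {
            puts "$msg failed as expected"
        } else {
            puts "Error: $msg succeeded...it shouldn't."
            quit
        }
    }

    must_fail {save_design} "save_design without an open design"
    puts TEST_OK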

    4.15.4 Organize your test in sub-tests

    When you want to test several aspects of a functional unit, you can either write one integration test for each, or write a single integration test with sub-tests for each. See 4.9.

    Here's how you can create a test with several sub-tests (example picked up from the Picasso test suite):

    set all_tests "test1 test2 test3"

    set DESIGN "smartpower_tcl.adb"

    new_design -name "foo" -family "ProASIC3" -path {.} -block "off"

    import_source -format "edif" -edif_flavor "GENERIC" {netlist.edn}

    set_device -die "A3P125" -package "132 QFN"

    save_design $DESIGN

    foreach the_test $all_tests {

    puts "Stating test: $the_test"

    source tests/${the_test}.tcl

    puts "${the_test}_SUCCESSFUL"

    close_design

    open_design $DESIGN

    }

    close_design

    puts "ALL_TESTS_SUCCESSFUL"

    Example 34

    Then, your test folder will contain a sub-folder tests, containing test1.tcl, test2.tcl and test3.tcl, each one testing a different functionality.

    4.15.5 Use script arguments

    Arguments can be passed to a Tcl script. They can then be accessed in the script using $argc and $argv.

    Example 34 shows a test that runs three sub-tests (test1, test2 and test3). If you know one sub-test is failing and want to run only that one, you can use script arguments to modify the $all_tests variable and have only a single sub-test run:

    set all_tests "test1 test2 test3"

    if { $argc != 0 } {

    set all_tests $argv

    }

    ...


    Example 35

    Refer to 2.4.4 to see how to pass arguments to your script when running it.

    4.15.6 Test GUI content

    If the main frame of the program you are testing is loaded, then you can test the content of GUI components through a Tcl script. Your main frame will be created (OnCreate will be called), and all its control bars will also be created, but not displayed. Some MFC functions may crash in this situation (PostMessage, SendMessage, SetWindowPos), and you should therefore protect them with a test on GetSafeHwnd() (this returns NULL if the frame is created from a Tcl script).

    This may require some minimal code changes, but it then makes it possible for a script to check that:
    - A view (tree, list) contains what it should contain.
    - Controls are greyed out when they should be.

    For instance, your program can provide a Tcl command to dump a tree control's content. Your script can then ask the tree control to dump its content to a file and compare it to a golden file, as in the sketch below.
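    Here is a minimal sketch of such a check. dump_tree_content is a hypothetical program-specific command, and tree.golden stands for the golden file stored with the test.

    # hypothetical program-specific command dumping the tree control content to a file
    dump_tree_content -file tree.txt

    # compare the dump with the golden file shipped with the test
    set fa [open tree.txt r]
    set actual [read $fa]
    close $fa
    set fb [open tree.golden r]
    set golden [read $fb]
    close $fb
    if { ![string equal $actual $golden] } {
        puts "Error: tree content does not match the golden file"
        quit
    }
    puts TEST_OK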

    To have the main frame loaded, you need to add SCRIPT_MODE:query on the command line when executing the script (see 2.4.4.1). Under Linux, this requires the DISPLAY environment variable to be set. Consequently, such tests cannot be added to the tests executed by 24_7 for c-set processing, as this one does not set DISPLAY.

    You don't need that when running the script directly from the program's GUI (using File > Execute Script).

    4.15.7 Test model content

    Just as you can add Tcl commands to your program to dump GUI control content, you can add commands to dump model content or to return a specific model value (Tcl commands do not only pass or fail; they can also return a double, integer, Boolean or string value).

    Then, you can test your model from the Tcl script.

    Example from smartpower_tcl integration test:

    smartpower_set_cooling -style {custom} -teta {26}

    if { [smartpower_get_tetaja -style custom] != 26 } {

    puts "Invalid Teta JA value"

    quit

    }

    puts TEST_OK

    Example 36


    4.15.8 Error Checking Methods

    4.15.8.1 More on Error Checking Methods

    These can only be specified in the readme file. Each readme file must specify at least one error checking method. The error checking method determines how the results of the test are evaluated to determine success or failure. It is important to keep the concept of executing the test separate from the concept of evaluating the outcome of the test.

    The string is: Error Checking Methods: <method1>:<method2>

    Where <method1>:<method2>, etc., is a colon-separated list of error checking methods. Each method can be specified in one of three ways (listed in order of highest to lowest precedence):
    - the fully qualified path to a script
    - the name of a predefined error checking method (these are methods hard-coded into the regression script; examples are exp, qtf, mvn_exp)
    - the name of an error checking script located in /vobs/test/reg_test/scripts (an example is the new retval script, which has been created for use by the actgen tests)

    Examples:
    Error Checking Methods: retval
    Error Checking Methods: /export/home/err_check.ksh
    Error Checking Methods: exp:qtf

    4.15.8.2 Error Checking Method Parameters

    This is where you specify parameters that are passed to the error checking script. For all non-predefined error checking methods, there is an API that is always used (i.e. these parameters are always passed to the error checking method):

    -e <executable> -l <run_dir> -r <return_value>

    Where <executable> is the full path to the executable that was called, <run_dir> is the full path to the directory where the test was run, and <return_value> is the return value that the application returned.

    Additional parameters that are specified in the readme are appended to the end of the pre-defined API. These parameters can only be specified in the readme file.

    The string is: ERROR_CHECKING_PARAMS=<params>

    Where <params> are the additional parameters to be passed to the error checking methods in addition to the above-mentioned API. Note that this does not apply to pre-defined error checking methods; their parameters are hard-coded into the regression testing script.

    Example:
    Error Checking Method: retval
    ERROR_CHECKING_PARAMS=200

    Results in the following call:
    /vobs/test/reg_test/scripts/retval -e <executable> -l <run_dir> -r <return_value> 200


    5. Using and taking advantage of your tests

    By following the steps above, you can easily create a new unit or integration test. This section provides some more advanced information about how to use your tests.

    5.1 More about test runners

    5.1.1 Unit test runner

    The script run_tests.rb (/vobs/dtools/ruby/run_tests.rb) allows you to run and validate unit test suites.

    This script takes an output directory and a list of unit tests to be performed. For each unit test, it will:
    - Step into the unit test folder
    - Build and stage the test
    - Execute the generated executable
    - Verify the test's exit code

    Then, it will provide a status reporting how many tests passed and which ones failed. The output directory will contain the log of each test executed.

    Call run_tests.rb -h for more information on this tool.

    This test runner is the one used when specifying the build_top recipe flags:
    DO_UNIT_TEST=true
    UNIT_TEST_LEVELS=all@idebase,sys@base,deftest@base

    This will run all the tests below:
    - All tests from /afi/idebase/testinfo.txt
    - All the sys group from /afi/base/testinfo.txt
    - deftest from base (/afi/tst/base/deftest)

    The output directory for unit tests can be specified in the recipe with UNIT_TEST_DIR. If not specified, it will be $SDE_LOCAL_ROOTDIR/utst (if the SDE variable is set).

    Refer to build_top documentation for more information on those options.

    This test runner is fully compatible with sdcmds, idebase and sgcore tests. It could easily support other test suites (base, tfc... any library that has tests). We would just need to write the testinfo.txt files and verify that the tests can be validated by their exit code.

    Pre-requisites:

    - This Ruby script needs the following gem components to be installed: optparse, ostruct, pp, platform. They can be installed using gem install. Refer to the Ruby help.
    - You must tell the script where to build and stage the tests. Use run_tests.rb -h to see where they are picked up from by default. If using the runner from build_top, make sure SDE_LOCAL_ROOTDIR, ACTEL_SW_DIR or ALSDIR are set so that the runner knows where to stage the executables once compiled.


    Recommendations:

    Always use the latest SDE recommendation for your environment; the script should then work with minimal parameters specified:
    run_tests.rb --tests all@idebase

    5.1.2 Integration test runner

    The script top_regs (/vobs/rtools/scripts/top_regs) allows you to run and validate integration test suites. This script takes an output directory and an "lst" file as parameters. For each test of the suite (the lst file specified), it will:
    - Copy the whole integration test folder into the specified working output
    - Run the test based on the information from the 'readme' file
    - Use the 'Error Checking Method' to validate the test

    Then, it will provide a status reporting how many tests passed and which ones failed.

    Call top_regs -h for more information on this tool.

    This test runner is the one used when specifying the build_top recipe flags:
    DO_TEST=true
    TEST_LEVELS=ide

    This will run all tests from /vobs/test/reg_test/lists/reglevide.lst

    The output directory for integration tests can be specified in the recipe with TEST_DIR. If not specified, it will be $SDE_LOCAL_ROOTDIR/tst (if the SDE variable is set).

    Refer to build_top documentation for more information on those options.

    5.2 Customizing test validation/execution

    5.2.1 Unit test

    By default:
    - To run a test, we simply need to call the executable. No arguments are passed.
    - To validate the test, we simply need to check the exit code.

    This can be customized per test. Those customizations are fully supported by the unit test runner.

    However, we do not recommend using those customizations: it may be hard for a person not familiar with the test to understand how to run it (worse, if he does not understand how to determine whether the test passes or fails, he may consider it passed when it is actually broken).

    Basically, any argument that needs to be passed to the test can be hard-coded in the test's main program, and any specific validation can also be done in C++ by the main program or the CppUnit test class, guaranteeing that only the exit code needs to be checked for validation.

    Anyway, here is how to customize this if you really need to:

    5.2.1.1 Passing arguments to the test

    You can ask for some arguments to be passed to the test executable. You simply need to create an arglist.txt file in the test folder.

    The line it contains will be used as a list of arguments to be passed to the executable.

    Note that if your test has an "arglist.txt" file, this one is not handled by Visual Studio: you will need to specify the arguments to be used by the debugger manually. That's one of the reasons why we don't recommend doing this. Developers may miss this file and try to run the test from the debugger without arguments.

    5.2.1.2 Multiple runs of the same test

    You may want one test (i.e. executable) to be executed several times by the test runner, with different arguments. In that case, you simply need to add one line to arglist.txt per execution you want.

    For instance, if your test is family specific, your arglist may contain:

    ACT1

    ACT2PA

    ...

    Example 37

    Then, the program will be run several times by the test runner: with ACT1 as the single argument, then ACT2, then PA, and so on.

    This is not handled by Visual Studio when you try to debug the test and, moreover, you have hard-coded family names that should be accessed through the def system. That's one more reason why we would recommend looping through all families directly in the main program itself.

    5.2.1.3 Customize test validation

    run_tests.rb will create a test runner class for each unit test to be performed. This class is the one that builds, runs and validates the test. By default, it only checks the executable's exit code. There is a mechanism to customize the test runner class used to validate one test suite or one specific test.

    You just need to add a sub_test_runner.rb file under the module test folder (/afi/tst/<module>) to customize the validation for all the tests it contains, or under the test itself to customize the validation of a single test (/afi/tst/<module>/<test>). You will need to be familiar with Ruby; here is the file you will need to add:


    #! /bin/env ruby

    class MyDoRunTest < DoRunTest

    def doValidate( exit_code )

    result = super( exit_code )

    if @cur_options.verbose

    puts "#{@folder}: Parsing #{@logfile} for result"

    end

    if result == 0

    # Do some extra validation from here

    end

    result

    end

    end

    class MyPrototype < TestPrototype

    def DoCreateConcreteObject(folder,options,num,thread)

    MyDoRunTest.new(folder,options,num,thread)

    end

    end

    # add the new test runner to the factory

    TestFactory.register( MyPrototype.new() )

    Example 38

    Once again, this is really not recommended. This extra validation mechanism will be used by the runner, but not by a person running the test by hand or debugging it.

    5.2.2 Integration test

    The 'readme' file in the test folder is the main entry point to customize test execution (change the argument list) and validation (change the 'Error Checking Method').

    5.3 Overnight runs

    You can use Windows' "Scheduled Tasks" to schedule a build and tests to be performed every night. Note that you need administrator privileges to be able to add new scheduled tasks.

    Figure 13

    Define the task so that it runs a ksh script, for instance:


    Figure 14

    Then, make the script enter the view, synchronize it and call build_top with a recipe. This recipe can run either unit or integration tests and can send you an email when done.

    echo "echo \"Starting nightly build\"" >> e:/user/script.ksh

    echo "sv_sync" >> e:/user/script.ksh

    echo "build_top myrecipe" >> e:/user/script.ksh

    setview -e e:/user/script.ksh f:/user/sn/myview_sn

    Example 39


    6. Test Driven Development

    Test-Driven Development (TDD) is a software development technique consisting of short iterations where new test cases covering the desired improvement or new functionality are written first, then the production code necessary to pass the tests is implemented, and finally the software is refactored to accommodate the changes. The availability of tests before actual development ensures rapid feedback after any change. Practitioners emphasize that test-driven development is a method of designing software, not merely a method of testing. [Wikipedia]

    See how to take advantage of TDD when fixing bugs at the end of 3.1.2.

    As a unit test can be loaded in Visual Studio (see 2.3.9), it is possible to do TDD from Visual Studio.

    In this case, you will write the test b