ICS 52: Introduction to Software Engineering
Lecture Notes for Summer Quarter, 2003
Michele Rousseau
Topic 12

Partially based on lecture notes written by Sommerville, Frost, Van Der Hoek, Taylor & Tonne. Duplication of course material for any commercial purpose without the written permission of the lecturers is prohibited.
Topic 12 Summer 2003 2
Verification and Validation
[Diagram: Informal Requirements → Formal Specification → Software Implementation. Verification relates the software implementation to the formal specification; validation relates the system back to the informal requirements.]

Verification: is the implementation consistent with the requirements specification?
Validation: does the system meet the customer's/user's needs?
Topic 12 Summer 2003 3
Software Quality: assessment by V&V
Software process must include verification and validation to measure product qualities:
• correctness, reliability, robustness
• efficiency, usability, understandability
• verifiability, maintainability
• reusability, portability, interoperability
• real-time, safety, security, survivability, accuracy
Products can be improved by improving the process by which they are developed and assessed
Topic 12 Summer 2003 4
Testing Terminology
Failure: Incorrect or unexpected output, based on specifications
• Symptom of a fault

Fault: Invalid execution state
• Symptom of an error
• May or may not produce a failure

Error: Defect or anomaly or "bug" in source code
• May or may not produce a fault
Topic 12 Summer 2003 5
Examples: Faults, Errors, and Failures

Flow graph:
1: input A, B
2: A > 0 ?
3: C := 0
4: C := A*B
5: B > 0 ?
6: X := C*(A+2*A)
7: X := A+B
8: output X

Suppose node 6 should be X := C*(A+2*B)
• Failure-less fault / fault-less error:
- Suppose the inputs are (A=2, B=1): the executed path is (1,2,4,5,7,8), which will not reveal this fault because node 6 is not executed.
- Suppose the inputs are (A=-2, B=-1): the executed path is (1,2,3,5,6,8), which will not reveal the fault because C = 0.

Need to make sure proper test cases are selected
• The definitions of C at nodes 3 and 4 both affect the use of C at node 6.
- Executing path (1,2,4,5,6,8) will reveal the failure, but only if B != 0 (e.g., inputs (A=1, B=-2)).
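The slide's flow graph can be rendered as code to check the three cases. This is an illustrative Python sketch (the function names are mine, not the course's); the buggy version computes C*(A+2*A) at node 6 where the intended program computes C*(A+2*B).

```python
def buggy(a, b):
    # Node numbers refer to the flow graph on this slide.
    if a > 0:                    # node 2
        c = a * b                # node 4
    else:
        c = 0                    # node 3
    if b > 0:                    # node 5
        x = a + b                # node 7
    else:
        x = c * (a + 2 * a)      # node 6 -- error: should be c * (a + 2 * b)
    return x                     # node 8

def intended(a, b):
    # Same graph with node 6 corrected.
    if a > 0:
        c = a * b
    else:
        c = 0
    if b > 0:
        x = a + b
    else:
        x = c * (a + 2 * b)
    return x

# (A=2, B=1): path (1,2,4,5,7,8) -- node 6 never runs, no failure possible
assert buggy(2, 1) == intended(2, 1) == 3
# (A=-2, B=-1): path (1,2,3,5,6,8) -- node 6 runs, but C = 0 masks the error
assert buggy(-2, -1) == intended(-2, -1) == 0
# (A=1, B=-2): path (1,2,4,5,6,8) with B != 0 -- the failure is revealed
assert buggy(1, -2) == -6
assert intended(1, -2) == 6
```

Only the third input pair makes the error visible as a failure, which is exactly the point of the slide: the fault must be executed *and* its effect must propagate to the output.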
Topic 12 Summer 2003 6
Software Testing
Exercising a system [component]
• on some predetermined input data
• capturing the behavior and output data
• comparing with a test oracle
• for the purposes of
  » identifying inconsistencies
  » verifying consistency between actual results and specification
    - to provide confidence in consistency with requirements and measurable qualities
    - to demonstrate subjective qualities
  » validating against user needs

Limitations
• only as good as the test data selected
• subject to capabilities of the test oracle
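The exercise/capture/compare cycle described above can be sketched in a few lines. All names here (the component, the oracle, the sample inputs) are illustrative assumptions, not part of the course material.

```python
def component(x):
    # Unit under test: an implementation whose behavior we want to check.
    return x * x

def oracle(x):
    # Test oracle: an independent statement of the expected output.
    return x ** 2

# Predetermined input data -- a finite sample from an infinite domain.
test_data = [0, 1, -3, 10]

# Capture outputs and compare each against the oracle.
failures = [x for x in test_data if component(x) != oracle(x)]
assert failures == []
```

Note the limitation the slide states: agreement on this sample gives confidence, not proof, and the check is only as good as the chosen data and the oracle itself.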
Topic 12 Summer 2003 7
Definition to execution (Levels of testing)
System Testing
• Defined at Requirements -> Run after Integration Testing

Integration Testing
• Defined at Design -> Run after Module Testing

Module Testing
• Defined at Design -> Run after Unit Testing

Unit Testing
• Defined at Implementation -> Run after implementation of each unit

Regression Testing (testing after change)
• Defined throughout the process -> Run after modifications
Topic 12 Summer 2003 8
Testing-oriented Life Cycle Process

[Diagram: V-shaped life cycle. Development phases run from Problem Definition through Requirements Analysis, Architectural (Preliminary) Design, Algorithmic (Detailed) Design, and Implementation, producing the Requirements Specification, Module Interface Specification, Module Design Specification, and Source Code. Each phase defines the corresponding test plan (System, Integration, Module, and Unit Test Plans), whose tests run at the matching level (System, Integration, Module, and Unit Testing), with behavior verified against an oracle. Maintenance modifies the RS/MIS/MDS/SC; each modification feeds a Regression Test Plan and Regression Testing.]
Topic 12 Summer 2003 9
Fundamental Testing Questions

How to make the most of limited resources?
• Test Criteria: What should we test?
• Test Oracle: Is the test correct?
• Test Adequacy: How much is enough?
• Test Process: Is our testing effective?
Topic 12 Summer 2003 10
Practical Issues
Purpose of testing
• Fault detection
• High assurance of reliability
• Performance/stress/load
• Regression testing of new versions

Test selection is a sampling technique
• choose a finite set from an infinite domain
Topic 12 Summer 2003 11
Test Criteria
Testing must select a subset of test cases that are likely to reveal failures

Test criteria provide the guidelines, rules, and strategy by which test cases are selected
• actual test data
• conditions on test data
• requirements on test data

Equivalence partitioning is the typical approach
• a test of any value in a given class is equivalent to a test of any other value in that class
• if a test case in a class reveals a failure, then any other test case in that class should reveal the failure
• some approaches limit conclusions to some chosen class of errors and/or failures
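Equivalence partitioning can be made concrete with a small sketch. The example below (a pass/fail grade check with valid inputs 0..100) is an assumption of mine, not from the slides; the idea is that one representative per class stands in for every value in that class.

```python
def is_passing(grade):
    # Unit under test: valid grades are 0..100; 60 and above is passing.
    if grade < 0 or grade > 100:
        raise ValueError("grade out of range")
    return grade >= 60

# One representative value per equivalence class. Under the partitioning
# assumption, any other value in the same class would behave the same way.
representatives = {
    "failing": 30,   # valid class [0, 60)
    "passing": 75,   # valid class [60, 100]
}
assert is_passing(representatives["passing"]) is True
assert is_passing(representatives["failing"]) is False

# Invalid classes: below 0 and above 100 should both be rejected.
for bad in (-5, 101):
    try:
        is_passing(bad)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
```

Four test cases cover four classes; testing 30 rather than, say, 42 changes nothing under the equivalence assumption, which is how the technique tames an infinite input domain.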
Topic 12 Summer 2003 12
Test Adequacy
Coverage metrics
• when a sufficient percentage of the program structure has been exercised

Empirical assurance
• when the failures/test curve flattens out

Error seeding
• percentage of seeded faults found is proportional to the percentage of real faults found

Independent testing
• faults found in common are representative of the total population of faults
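The error-seeding bullet implies a simple estimate: if testing finds the same proportion of seeded faults as of real faults, the real-fault total follows from the detection rate. This sketch assumes the standard fault-seeding formula (real_total ≈ real_found × seeded_total / seeded_found), which matches the proportionality stated above.

```python
def estimated_real_faults(seeded_total, seeded_found, real_found):
    # Proportionality assumption: the fraction of seeded faults detected
    # equals the fraction of real faults detected.
    if seeded_found == 0:
        raise ValueError("no seeded faults found; cannot estimate")
    return real_found * seeded_total / seeded_found

# Example: 100 faults seeded, testing finds 20 of them plus 10 real faults.
# The 20% detection rate suggests roughly 50 real faults in total,
# i.e., about 40 still latent.
assert estimated_real_faults(100, 20, 10) == 50.0
```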
Topic 12 Summer 2003 13
Functional and Structural Testing
Functional Testing
• Test cases selected based on specification
• Views program/component as a black box

Structural Testing
• Test cases selected based on structure of code
• Views program/component as a white box (also called glass box testing)
Topic 12 Summer 2003 14
Black Box vs. White Box Testing
[Diagram: in "black box" testing, selected inputs drive the software and the resultant outputs are compared with the desired output, with no visibility into internal behavior. In "white box" testing, selected inputs drive the software design, and both the internal behavior and the resultant outputs are compared with the desired output.]
Topic 12 Summer 2003 15
Structural (White Box) Testing
Use source code to derive test cases
Build a flow graph of the program
State test requirements in terms of graph coverage
Various types of coverage are identified
Choose test cases that guarantee different types of coverage
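The flow-graph-and-coverage idea can be sketched by instrumenting the slide-5 example and recording which nodes a test set exercises. This is an illustrative hand-instrumented sketch, not any particular coverage tool.

```python
def run(a, b, visited):
    # Slide-5 flow graph with node numbers recorded as they execute.
    visited.add(1)                       # 1: input A, B
    visited.add(2)                       # 2: A > 0 ?
    if a > 0:
        visited.add(4); c = a * b        # 4: C := A*B
    else:
        visited.add(3); c = 0            # 3: C := 0
    visited.add(5)                       # 5: B > 0 ?
    if b > 0:
        visited.add(7); x = a + b        # 7: X := A+B
    else:
        visited.add(6); x = c * (a + 2 * b)  # 6: X := C*(A+2*B)
    visited.add(8)                       # 8: output X
    return x

visited = set()
for a, b in [(2, 1), (-2, -1)]:          # a two-case test set
    run(a, b, visited)
assert visited == {1, 2, 3, 4, 5, 6, 8, 7}   # every node exercised
```

Note that these two cases achieve 100% node coverage, yet (as slide 5 showed) they would miss the node-6 failure: coverage metrics measure adequacy of structure exercised, not fault-revealing power.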