
Software Engineering: Software Metrics

James Gain (jgain@cs.uct.ac.za)
http://people.cs.uct.ac.za/~jgain/courses/SoftEng/

Objectives

- Introduce the necessity for software metrics
- Differentiate between process, project and product metrics
- Compare and contrast Lines-Of-Code (LOC) and Function Point (FP) metrics
- Consider how quality is measured in Software Engineering
- Describe statistical process control for managing variation between projects

Measurement & Metrics

Against: collecting metrics is too hard ... it’s too time consuming ... it’s too political ... they can be used against individuals ... it won’t prove anything.

For: in order to characterize, evaluate, predict and improve the process and product, a metric baseline is essential.

“Anything that you need to quantify can be measured in some way that is superior to not measuring it at all.” - Tom Gilb

Terminology

- Measure: A quantitative indication of the extent, amount, dimension, or size of some attribute of a product or process. A single data point.
- Metric: The degree to which a system, component, or process possesses a given attribute. Relates several measures (e.g. average number of errors found per person-hour).
- Indicator: A combination of metrics that provides insight into the software process, project or product.
- Direct Metrics: Immediately measurable attributes (e.g. lines of code, execution speed, defects reported).
- Indirect Metrics: Aspects that are not immediately quantifiable (e.g. functionality, quality, reliability).
- Faults:
  - Errors: faults found by the practitioners during software development.
  - Defects: faults found by the customers after release.

A Good Manager Measures

[Diagram: measurement draws on process metrics (from the process), project metrics, and product metrics (from the product). What do we use as a basis? Size? Function?]

“Not everything that can be counted counts, and not everything that counts can be counted.” - Einstein

Process Metrics

- Focus on quality achieved as a consequence of a repeatable or managed process. Strategic and long term.
- Statistical Software Process Improvement (SSPI). Error categorization and analysis:
  - All errors and defects are categorized by origin
  - The cost to correct each error and defect is recorded
  - The number of errors and defects in each category is computed
  - Data is analyzed to find the categories that result in the highest cost to the organization
  - Plans are developed to modify the process
- Defect Removal Efficiency (DRE). Relationship between errors (E) and defects (D). The ideal is a DRE of 1:

  DRE = E / (E + D)
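As a quick illustration (not part of the original slides), DRE can be computed directly from error and defect counts; the sample counts below are taken from the alpha project row in the table shown later:

    def dre(errors: int, defects: int) -> float:
        """Defect Removal Efficiency: faults caught before release (E)
        over all faults found (E + D, with D found after release)."""
        return errors / (errors + defects)

    # alpha project: 134 errors during development, 29 defects after release
    print(f"DRE = {dre(134, 29):.2f}")  # ~0.82; a DRE of 1 means nothing escaped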

Project Metrics

- Used by a project manager and software team to adapt project work flow and technical activities. Tactical and short term.
- Purpose:
  - Minimize the development schedule by making the necessary adjustments to avoid delays and mitigate problems
  - Assess product quality on an ongoing basis
- Metrics:
  - Effort or time per SE task
  - Errors uncovered per review hour
  - Scheduled vs. actual milestone dates
  - Number of changes and their characteristics
  - Distribution of effort on SE tasks

Product Metrics

- Focus on the quality of deliverables
- Product metrics are combined across several projects to produce process metrics
- Metrics for the product:
  - Measures of the Analysis Model
  - Complexity of the Design Model:
    1. Internal algorithmic complexity
    2. Architectural complexity
    3. Data flow complexity
  - Code metrics

Metrics Guidelines

- Use common sense and organizational sensitivity when interpreting metrics data
- Provide regular feedback to the individuals and teams who have worked to collect measures and metrics
- Don’t use metrics to appraise individuals
- Work with practitioners and teams to set clear goals and the metrics that will be used to achieve them
- Never use metrics to threaten individuals or teams
- Metrics data that indicate a problem area should not be considered “negative”; such data are merely an indicator for process improvement
- Don’t obsess on a single metric to the exclusion of other important metrics

Normalization for Metrics

- How does an organization combine metrics that come from different individuals or projects?
- Metrics depend on the size and complexity of the project
- Normalization: compensate for complexity aspects particular to a product
- Normalization approaches:
  - Size oriented (lines of code approach)
  - Function oriented (function point approach)

Typical Normalized Metrics

- Size-Oriented: errors per KLOC (thousand lines of code), defects per KLOC, R per LOC, pages of documentation per KLOC, errors per person-month, LOC per person-month, R per page of documentation
- Function-Oriented: errors per FP, defects per FP, R per FP, pages of documentation per FP, FP per person-month

Project   LOC     FP    Effort (P/M)   R (000)   Pages of doc.   Errors   Defects   People
alpha     12100   189   24             168       365             134      29        3
beta      27200   388   62             440       1224            321      86        5
gamma     20200   631   43             314       1050            256      64        6
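A minimal sketch (names and structure assumed) of how such normalized metrics are derived from the raw measures, using the alpha row above:

    # Normalizing raw project measures by size (KLOC) and function (FP)
    alpha = {"loc": 12100, "fp": 189, "effort_pm": 24, "errors": 134, "defects": 29}

    kloc = alpha["loc"] / 1000
    print(f"errors per KLOC:      {alpha['errors'] / kloc:.2f}")         # ~11.07
    print(f"defects per KLOC:     {alpha['defects'] / kloc:.2f}")        # ~2.40
    print(f"errors per FP:        {alpha['errors'] / alpha['fp']:.2f}")  # ~0.71
    print(f"LOC per person-month: {alpha['loc'] / alpha['effort_pm']:.0f}")  # ~504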

Why Opt for FP Measures?

- Independent of programming language. Some programming languages are more compact, e.g. C++ vs. Assembler
- Uses readily countable characteristics of the “information domain” of the problem
- Does not “penalize” inventive implementations that require fewer LOC than others
- Makes it easier to accommodate reuse and object-oriented approaches
- The original FP approach is good for typical Information Systems applications (interaction complexity)
- Variants (Extended FP and 3D FP) are more suitable for real-time and scientific software (algorithm and state transition complexity)

Computing Function Points

1. Analyze the information domain of the application and develop counts: establish a count for the input domain and the system interfaces.
2. Weight each count by assessing complexity: assign a level of complexity (simple, average, complex), i.e. a weight, to each count.
3. Assess the influence of global factors that affect the application: grade the significance of external factors F_i, such as reuse, concurrency, OS, ...
4. Compute function points:

   FP = Σ(count × weight) × C

   where the complexity multiplier C = 0.65 + 0.01 × N and the degree of influence N = Σ F_i.
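A minimal sketch of this calculation (the function and table names are assumptions, but the weights match the slide that follows):

    # Simple/average/complex weights for the five information-domain parameters
    WEIGHTS = {
        "inputs":     {"simple": 3, "average": 4,  "complex": 6},
        "outputs":    {"simple": 4, "average": 5,  "complex": 7},
        "inquiries":  {"simple": 3, "average": 4,  "complex": 6},
        "files":      {"simple": 7, "average": 10, "complex": 15},
        "interfaces": {"simple": 5, "average": 7,  "complex": 10},
    }

    def function_points(counts, complexity, f_values):
        """FP = sum(count * weight) * (0.65 + 0.01 * N), with N = sum(F_i)."""
        count_total = sum(counts[p] * WEIGHTS[p][complexity[p]] for p in counts)
        n = sum(f_values)  # degree of influence from the 14 adjustment questions
        return count_total * (0.65 + 0.01 * n)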

Analyzing the Information Domain

measurement parameter        count       weighting factor             count-total
                                         (simple / avg. / complex)
number of user inputs        __     ×    3 / 4 / 6               =    __
number of user outputs       __     ×    4 / 5 / 7               =    __
number of user inquiries     __     ×    3 / 4 / 6               =    __
number of files              __     ×    7 / 10 / 15             =    __
number of ext. interfaces    __     ×    5 / 7 / 10              =    __
count-total                                                           __

function points = count-total × complexity multiplier

Taking Complexity into Account

Complexity Adjustment Values (F_i) are rated on a scale of 0 (not important) to 5 (very important):
1. Does the system require reliable backup and recovery?
2. Are data communications required?
3. Are there distributed processing functions?
4. Is performance critical?
5. Will the system run in an existing, heavily utilized environment?
6. Does the system require on-line data entry?
7. Does on-line entry require input over multiple screens or operations?
8. Are the master files updated on-line?
9. Are the inputs, outputs, files, or inquiries complex?
10. Is the internal processing complex?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design?
13. Are multiple installations in different organizations planned?
14. Is the application designed to facilitate change and ease-of-use?

Exercise: Function Points

Compute the function point value for a project with the following information domain characteristics:
- Number of user inputs: 32
- Number of user outputs: 60
- Number of user inquiries: 24
- Number of files: 8
- Number of external interfaces: 2

Assume that weights are average and the external complexity adjustment values are not important (so C = 0.65).

Answer:

FP = (32 × 4 + 60 × 5 + 24 × 4 + 8 × 10 + 2 × 7) × 0.65 = 618 × 0.65 = 401.7
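Plugging the exercise’s counts into the function_points sketch above reproduces this answer:

    counts = {"inputs": 32, "outputs": 60, "inquiries": 24, "files": 8, "interfaces": 2}
    complexity = {p: "average" for p in counts}
    print(function_points(counts, complexity, f_values=[0] * 14))  # ~401.7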

Example: SafeHome Functionality

[Data flow diagram: the User interacts with the SafeHome System via Password, Panic Button, (De)activate, Zone Inquiry, Sensor Inquiry and Messages; the system stores System Config Data, exchanges Zone Setting, Sensor Status, Test Sensor and (De)activate signals with the Sensors, and sends an Alarm Alert (Password, Sensors, etc.) to the Monitor and Response System.]

Example: SafeHome FP Calc

measurement parameter        count     count-total
number of user inputs        3         9
number of user outputs       2         8
number of user inquiries     2         6
number of files              1         7
number of ext. interfaces    4         22
count-total                            52

FP = count-total × [0.65 + 0.01 × Σ F_i] = 52 × [0.65 + 0.46] = 52 × 1.11 ≈ 58

Exercise: Function Points

Compute the function point total for your project. Hint: the complexity adjustment values should be low (Σ F_i ≤ 10).

Some appropriate complexity factors are (each scores 0-5):
1. Is performance critical?
2. Does the system require on-line data entry?
3. Does on-line entry require input over multiple screens or operations?
4. Are the inputs, outputs, files, or inquiries complex?
5. Is the internal processing complex?
6. Is the code designed to be reusable?
7. Is the application designed to facilitate change and ease-of-use?

OO Metrics: Distinguishing Characteristics

The following characteristics require that special OO metrics be developed:
- Encapsulation: concentrate on classes rather than functions
- Information hiding: an information hiding metric will provide an indication of quality
- Inheritance: a pivotal indication of complexity
- Abstraction: metrics need to measure a class at different levels of abstraction and from different viewpoints

Conclusion: the class is the fundamental unit of measurement.

OO Project Metrics

- Number of Scenario Scripts (Use Cases):
  - The number of use-cases is directly proportional to the number of classes needed to meet requirements
  - A strong indicator of program size
- Number of Key Classes (Class Diagram):
  - A key class focuses directly on the problem domain and is NOT likely to be implemented via reuse
  - Typically 20-40% of all classes are key; the rest support infrastructure (e.g. GUI, communications, databases)
- Number of Subsystems (Package Diagram):
  - Provides insight into resource allocation, scheduling for parallel development, and overall integration effort

OO Analysis and Design Metrics

Related to analysis and design principles.

Complexity:
- Weighted Methods per Class (WMC): assume that n methods with cyclomatic complexities c_1, c_2, ..., c_n are defined for a class C; then WMC = Σ c_i. (A sketch follows below.)
- Depth of the Inheritance Tree (DIT): the maximum length from a leaf to the root of the tree. A large DIT leads to greater design complexity but promotes reuse.
- Number of Children (NOC): the total number of children for each class. A large NOC may dilute abstraction and increase testing.
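A minimal sketch (names assumed) of WMC as defined above:

    def weighted_methods_per_class(method_complexities):
        """WMC = c_1 + c_2 + ... + c_n, the summed cyclomatic
        complexities of a class's n methods."""
        return sum(method_complexities)

    # Hypothetical class whose four methods have complexities 1, 3, 2 and 5
    print(weighted_methods_per_class([1, 3, 2, 5]))  # WMC = 11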

Further OOA&D Metrics

Coupling:
- Coupling between Object Classes (COB): the total number of collaborations listed for each class in its CRC cards. Keep COB low, because high values complicate modification and testing.
- Response For a Class (RFC): the set of methods potentially executed in response to a message received by a class. A high RFC implies test and design complexity.

Cohesion:
- Lack of Cohesion in Methods (LCOM): the number of methods in a class that access one or more of the same attributes. A high LCOM means tightly coupled methods.

OO Testability Metrics

Encapsulation:
- Percent Public and Protected (PAP): the percentage of attributes that are public. Public attributes can be inherited and accessed externally. A high PAP means more side effects.
- Public Access to Data members (PAD): the number of classes that access another class’s attributes. This violates encapsulation.

Inheritance:
- Number of Root Classes (NRC): the count of distinct class hierarchies. These must all be tested separately.
- Fan In (FIN): the number of superclasses associated with a class. FIN > 1 indicates multiple inheritance, which must be avoided.
- Number of Children (NOC) and Depth of Inheritance Tree (DIT): superclasses need to be retested for each subclass. (See the sketch below.)
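As an illustration (the code and class names are assumptions, not from the slides), DIT, NOC and FIN can be read off a Python class hierarchy through its bases and subclasses:

    class Sensor: pass
    class MotionSensor(Sensor): pass
    class WindowSensor(Sensor): pass
    class InfraredMotionSensor(MotionSensor): pass

    def dit(cls):
        """Depth of Inheritance Tree: longest path from cls to the root."""
        return max((dit(base) for base in cls.__bases__), default=-1) + 1

    def noc(cls):
        """Number of Children: immediate subclasses of cls."""
        return len(cls.__subclasses__())

    def fin(cls):
        """Fan In: direct superclasses; > 1 signals multiple inheritance."""
        return len([b for b in cls.__bases__ if b is not object])

    print(dit(InfraredMotionSensor))  # 3 (object -> Sensor -> MotionSensor -> leaf)
    print(noc(Sensor))                # 2 (MotionSensor, WindowSensor)
    print(fin(MotionSensor))          # 1 (single inheritance)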


Quality Metrics

- Measure conformance to explicit requirements, adherence to specified standards, and satisfaction of implicit requirements
- Software quality can be difficult to measure and is often highly subjective

1. Correctness:
   - The degree to which a program operates according to specification
   - Metric = defects per FP

2. Maintainability:
   - The degree to which a program is amenable to change
   - Metric = Mean Time to Change: the average time taken to analyze, design, implement and distribute a change

Quality Metrics: Further Measures

3. Integrity:
   - The degree to which a program is impervious to outside attack
   - Integrity = Σ_i [1 - t_i × (1 - s_i)], summed over all types of security attack i, where t_i = threat (the probability that an attack of type i will occur within a given time) and s_i = security (the probability that an attack of type i will be repelled)

4. Usability:
   - The degree to which a program is easy to use
   - Metric = (1) the skill required to learn the system, (2) the time required to become moderately proficient, (3) the net increase in productivity, (4) an assessment of the user’s attitude to the system
   - Covered in the HCI course
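A hedged sketch of the integrity formula above (the function name and the threat/security probabilities are invented for illustration):

    def integrity(attacks):
        """Sum over attack types i of 1 - t_i * (1 - s_i), where t_i is the
        probability an attack of type i occurs and s_i the probability
        that it is repelled."""
        return sum(1 - t * (1 - s) for t, s in attacks)

    # Hypothetical (threat, security) pairs for two attack types
    print(integrity([(0.25, 0.95), (0.50, 0.25)]))  # 0.9875 + 0.625 = 1.6125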

Quality Metrics: McCall’s Approach

McCall’s Triangle of Quality groups quality factors into three categories:
- Product Operation: Correctness, Reliability, Usability, Integrity, Efficiency
- Product Revision: Maintainability, Flexibility, Testability
- Product Transition: Portability, Reusability, Interoperability

Quality Metrics: Deriving McCall’s Quality Metrics

- Assess a set of quality factors on a scale of 0 (low) to 10 (high)
- Each of McCall’s quality metrics is a weighted sum of different quality factors; the weighting is determined by product requirements
- Example: Correctness = Completeness + Consistency + Traceability, where:
  - Completeness is the degree to which full implementation of required function has been achieved
  - Consistency is the use of uniform design and documentation techniques
  - Traceability is the ability to trace program components back to analysis
- This technique depends on good objective evaluators, because quality factor scores can be subjective (see the sketch below)
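A minimal sketch of the weighted-sum derivation; the factor scores and the equal weights here are invented for illustration:

    def quality_metric(scores, weights):
        """A McCall-style metric: weighted sum of 0-10 quality-factor scores."""
        return sum(weights[f] * scores[f] for f in weights)

    scores = {"completeness": 8, "consistency": 6, "traceability": 7}          # 0 (low) .. 10 (high)
    weights = {"completeness": 1.0, "consistency": 1.0, "traceability": 1.0}   # set by product requirements
    print(quality_metric(scores, weights))  # Correctness = 21.0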

Managing Variation

- How can we determine whether metrics collected over a series of projects improve (or degrade) as a consequence of improvements in the process, rather than noise?
- Statistical Process Control:
  - Analyzes the dispersion (variability) and location (moving average) of the metric
  - Determines whether metrics are (a) stable (the process exhibits only natural or controlled changes) or (b) unstable (the process exhibits out-of-control changes, and metrics cannot be used to predict changes)

Control Chart

Compare sequences of metric values against the mean and standard deviation; e.g. a metric is unstable if eight consecutive values lie on one side of the mean.

[Control chart: errors found per review hour (Er) plotted for projects 1-19, with lines for the mean and for one standard deviation above and below it.]
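A hedged sketch of the one-sided-run rule mentioned above (the function name and the sample series are illustrative):

    from statistics import mean, stdev

    def unstable(values, run_length=8):
        """True if `run_length` consecutive values fall on the same side
        of the mean (a common control-chart instability rule)."""
        m = mean(values)
        run, last_side = 0, 0
        for v in values:
            side = 1 if v > m else -1 if v < m else 0
            run = run + 1 if side != 0 and side == last_side else 1
            last_side = side
            if run >= run_length:
                return True
        return False

    # Hypothetical errors-found-per-review-hour across ten projects
    series = [3.1, 2.8, 3.3, 3.0, 4.8, 4.6, 4.9, 4.7, 5.0, 4.5]
    print(mean(series), stdev(series), unstable(series))  # longest run is 6 -> False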
