Page 1: BT0056

August 2010

Bachelor of Science in Information Technology (BScIT) – Semester 6

BT0056 – Software Testing and Quality Assurance – 2 Credits

(Book ID: B0649)

Assignment Set – 1 (30 Marks)

Answer all questions 5 x 6 = 30

Q1. Explain the origin of the defect distribution in a typical software development life cycle.

Software testing is a critical element of software quality assurance and represents the ultimate process to ensure the correctness of the product. A quality product enhances customer confidence in using it and thereby improves the business economics. In other words, a good quality product means zero defects, which is derived from a better-quality testing process.

The definition of testing is not well understood. People often use an incorrect definition of the word testing, and this is a primary cause of poor program testing. Examples of such definitions are statements like “Testing is the process of demonstrating that errors are not present”, “The purpose of testing is to show that a program performs its intended functions correctly”, and “Testing is the process of establishing confidence that a program does what it is supposed to do”. Testing a product means adding value to it, that is, raising the quality or reliability of the program. Raising the reliability of the product means finding and removing errors. Hence one should not test a product to show that it works; rather, one should start with the assumption that the program contains errors and then test it to find as many errors as possible. Thus a more appropriate definition is: Testing is the process of executing a program with the intent of finding errors.

Purpose of Testing

- To show the software works: known as demonstration-oriented testing
- To show the software doesn’t work: known as destruction-oriented testing
- To minimize the risk of the software not working up to an acceptable level: known as evaluation-oriented testing
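The destruction-oriented view can be illustrated with a small sketch (the `safe_ratio` routine and its inputs are hypothetical, not from the text): rather than confirming that a routine works for one friendly input, the tester deliberately probes inputs likely to break it.

```python
# A hypothetical division routine and destruction-oriented tests for it.
def safe_ratio(a, b):
    """Return a / b, or None when the ratio is undefined."""
    if b == 0:
        return None
    return a / b

# Demonstration-oriented: one friendly input that "shows it works".
assert safe_ratio(10, 2) == 5.0

# Destruction-oriented: inputs chosen with the intent of finding errors.
assert safe_ratio(1, 0) is None        # division by zero
assert safe_ratio(0, 5) == 0.0         # zero numerator
assert safe_ratio(-9, 3) == -3.0       # negative operands
```

The destruction-oriented cases are the ones most likely to expose a latent defect, such as an unguarded division.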


Need for Testing

Defects can exist in software because it is developed by human beings, who can make mistakes during development. However, it is the primary duty of a software vendor to ensure that the software delivered does not have defects and that the customers’ day-to-day operations are not affected. This can be achieved by rigorously testing the software. The most common origins of software bugs are:

- Poor understanding and incomplete requirements
- Unrealistic schedule
- Fast changes in requirements
- Too many assumptions and complacency

Defect Distribution

In a typical project life cycle, testing is a late activity. When the product is tested, the defects found may have many causes: programming errors, defects in design, or defects introduced at any other stage of the life cycle. The overall defect distribution is shown in Fig 1.1.

Q2. Explain the concept of quality.

Quality is defined as “a characteristic or attribute of something”. As an attribute of an item, quality refers to measurable characteristics: things we are able to compare to known standards such as length, color, electrical properties, malleability, and so on. However, software, being largely an intellectual entity, is more challenging to characterize than physical objects.


Quality of design refers to the characteristics that designers specify for an item. The grade of materials, tolerances, and performance specifications all contribute to the quality of design.

Quality of conformance is the degree to which the design specification is followed during manufacturing. The greater the degree of conformance, the higher the level of quality of conformance.

Software Quality Assurance encompasses

- A quality management approach
- Effective software engineering technology
- Formal technical reviews
- A multi-tiered testing strategy
- Control of software documentation and the changes made to it
- A procedure to assure compliance with software development standards
- Measurement and reporting mechanisms

What are quality concepts?

- Quality
- Quality control
- Quality assurance
- Cost of quality


The American Heritage Dictionary defines quality as “a characteristic or attribute of something”. As an attribute of an item, quality refers to measurable characteristics: things we are able to compare to known standards such as length, color, electrical properties, malleability, and so on. However, software, being largely an intellectual entity, is more challenging to characterize than physical objects. Nevertheless, measures of a program’s characteristics do exist. These properties include:

1. Cyclomatic complexity
2. Cohesion
3. Number of function points
4. Lines of code
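Of these measures, cyclomatic complexity is the easiest to compute by hand: for a connected control-flow graph with E edges and N nodes, V(G) = E - N + 2. A minimal sketch (the control-flow graph below is a made-up example of an if/else followed by a loop):

```python
# Cyclomatic complexity V(G) = E - N + 2 for a single connected
# control-flow graph, given as a mapping: node -> list of successors.
def cyclomatic_complexity(cfg):
    nodes = len(cfg)
    edges = sum(len(succs) for succs in cfg.values())
    return edges - nodes + 2

# Hypothetical CFG: an if/else decision followed by a loop decision.
cfg = {
    "entry": ["cond"],
    "cond":  ["then", "else"],   # if/else decision
    "then":  ["loop"],
    "else":  ["loop"],
    "loop":  ["loop", "exit"],   # loop decision: iterate or exit
    "exit":  [],
}
assert cyclomatic_complexity(cfg) == 3   # two decisions + 1
```

A straight-line program with no decisions has complexity 1; each added decision raises it by one, which is why the metric approximates the number of independent paths to test.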

When we examine an item based on its measurable characteristics, two kinds of quality may be encountered:

- Quality of design
- Quality of conformance

QUALITY OF DESIGN

Quality of design refers to the characteristics that designers specify for an item. The grade of materials, tolerances, and performance specifications all contribute to quality of design. As higher-grade materials are used and tighter tolerances and greater levels of performance are specified, the design quality of a product increases, provided the product is manufactured according to its specifications.

QUALITY OF CONFORMANCE

Quality of conformance is the degree to which the design specifications are followed during manufacturing. Again, the greater the degree of conformance, the higher the level of quality of conformance. In software development, quality of design encompasses the requirements, specifications, and design of the system. Quality of conformance is an issue focused primarily on implementation. If the implementation follows the design and the resulting system meets its requirements and performance goals, conformance quality is high.

QUALITY CONTROL (QC)

QC is the series of inspections, reviews, and tests used throughout the development cycle to ensure that each work product meets the requirements placed upon it. QC includes a feedback loop to the process that created the work product. The combination of measurement and feedback allows us to tune the process when the work products created fail to meet their specifications. This approach views QC as part of the manufacturing process. QC activities may be fully automated, manual, or a combination of automated tools and human interaction. An essential concept of QC is that all work products have defined and measurable specifications to which we may compare the outputs of each process; the feedback loop is essential to minimize the defects produced.

QUALITY ASSURANCE (QA)

QA consists of the auditing and reporting functions of management. The goal of quality assurance is to provide management with the data necessary to be informed about product quality, thereby gaining insight and confidence that product quality is meeting its goals. Of course, if the data provided through QA identify problems, it is management’s responsibility to address the problems and apply the necessary resources to resolve quality issues.

Q3. Explain unit test method with the help of your own example.

Unit testing focuses verification efforts on the smallest unit of software design: the module. Using the procedural design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative complexity of the tests and the errors they uncover is limited by the constrained scope established for unit testing. The unit test is normally white-box oriented, and the step can be conducted in parallel for multiple modules.

Unit Test Considerations

The tests that occur as part of unit testing are illustrated schematically in figure 6.5. The module interface is tested to ensure that information properly flows into and out of the program unit under test. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm’s execution. Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing. All independent paths through the control structure are exercised to ensure that all statements in a module have been executed at least once. And finally, all error-handling paths are tested.


Tests of data flow across a module interface are required before any other test is initiated. If data do not enter and exit properly, all other tests are doubtful.

Checklist for interface tests

1. Number of input parameters equals number of arguments.
2. Parameter and argument attributes match.
3. Parameter and argument units systems match.
4. Number of arguments transmitted to called modules equals number of parameters.
5. Attributes of arguments transmitted to called modules equal attributes of parameters.
6. Units system of arguments transmitted to called modules equals units system of parameters.
7. Number, attributes, and order of arguments to built-in functions are correct.
8. Any references to parameters not associated with the current point of entry?
9. Input-only arguments altered?
10. Global variable definitions consistent across modules?
11. Constraints passed as arguments?

When a module performs external I/O, the following additional interface tests must be conducted:

1. File attributes correct?
2. Open/Close statements correct?
3. Format specification matches I/O statements?
4. Buffer size matches record size?
5. Files opened before use?
6. End-of-file conditions handled?
7. I/O errors handled?
8. Any textual errors in output information?

The local data structure of a module is a common source of errors. Test cases should be designed to uncover errors in the following categories:

1. Improper or inconsistent typing
2. Erroneous initialization or default values
3. Incorrect variable names
4. Inconsistent data types
5. Underflow, overflow, and addressing exceptions

In addition to local data structures, the impact of global data on a module should be ascertained during unit testing. Selective testing of execution paths is an essential task during the unit test. Test cases should be designed to uncover errors due to erroneous computations, incorrect comparisons, or improper control flow. Basis path and loop testing are effective techniques for uncovering a broad array of path errors.

Among the more common errors in computation are:

1. Misunderstood or incorrect arithmetic precedence
2. Mixed-mode operations
3. Incorrect initialization
4. Precision inaccuracy
5. Incorrect symbolic representation of an expression

Comparison and control flow are closely coupled to one another. Test cases should uncover errors like:

1. Comparison of different data types
2. Incorrect logical operators or precedence
3. Expectation of equality when precision error makes equality unlikely
4. Incorrect comparison of variables
5. Improper or non-existent loop termination
6. Failure to exit when divergent iteration is encountered
7. Improperly modified loop variables
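The precision-error pitfall is easy to demonstrate: in binary floating point, 0.1 + 0.2 is not exactly 0.3, so a test case comparing the two with `==` exposes the faulty expectation, while a tolerance-based comparison is the robust alternative.

```python
import math

# "Expectation of equality when precision error makes equality unlikely":
# 0.1 + 0.2 does not equal 0.3 exactly in binary floating point.
total = 0.1 + 0.2
assert total != 0.3              # exact comparison fails
assert math.isclose(total, 0.3)  # tolerance-based comparison passes
```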

Good design dictates that error conditions be anticipated and error-handling paths set up to reroute or cleanly terminate processing when an error does occur. Among the potential errors that should be tested when error handling is evaluated are:

1. Error description is unintelligible
2. Error noted does not correspond to error encountered
3. Error condition causes system intervention prior to error handling
4. Exception-condition processing is incorrect
5. Error description does not provide enough information to assist in locating the cause of the error

Boundary testing is the last task of the unit-test step. Software often fails at its boundaries. That is, errors often occur when the nth element of an n-dimensional array is processed, when the ith repetition of a loop with i passes is invoked, or when the maximum or minimum allowable value is encountered. Test cases that exercise data structures, control flow, and data values just below, at, and just above maxima and minima are very likely to uncover errors.
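A boundary-value sketch (the `best_of` routine is a hypothetical module under test): the cases sit just below, at, and just above the legal minimum and maximum of the input range.

```python
# Hypothetical routine under test: return the largest of the first n
# scores, where n must lie in 1..len(scores).
def best_of(scores, n):
    if not 1 <= n <= len(scores):
        raise ValueError("n out of range")
    return max(scores[:n])

scores = [3, 9, 4, 7]

# Boundary-value cases around the minima and maxima of n.
assert best_of(scores, 1) == 3        # at the minimum n
assert best_of(scores, 4) == 9        # at the maximum n (the nth element)
for bad_n in (0, 5):                  # just below / just above the range
    try:
        best_of(scores, bad_n)
        assert False, "expected ValueError"
    except ValueError:
        pass
```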

Unit Test Procedures

Unit testing is normally considered an adjunct to the coding step. After source-level code has been developed, reviewed, and verified for correct syntax, unit test case design begins. A review of design information provides guidance for establishing test cases that are likely to uncover errors in each of the categories discussed above. Each test case should be coupled with a set of expected results.

Because a module is not a standalone program, driver and/or stub software must be developed for each unit test. The unit test environment is illustrated in figure 5.6. In most applications a driver is nothing more than a “main program” that accepts test case data, passes such data to the module under test, and prints the relevant results. Stubs serve to replace modules that are subordinate to the module being tested. A stub, or “dummy subprogram”, uses the subordinate module’s interface, may do minimal data manipulation, prints verification of entry, and returns. Drivers and stubs represent overhead: both are software that must be developed but is not delivered with the final software product. If drivers and stubs are kept simple, the actual overhead is relatively low. Unfortunately, many modules cannot be adequately unit tested with “simple” overhead software. In such cases, complete testing can be postponed until the integration test step (where drivers and stubs are also used). Unit testing is simplified when a module with high cohesion is designed. When a module addresses only one function, the number of test cases is reduced and errors can be more easily predicted and uncovered.
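The driver/stub arrangement can be sketched as follows (the invoice and tax-lookup modules are invented for illustration): the stub honours the subordinate module's interface and prints verification of entry; the driver is a minimal "main program" that feeds test cases to the module under test and checks the results.

```python
# Module under test: computes an invoice total using a subordinate
# tax-lookup module (names here are hypothetical).
def invoice_total(amount, tax_lookup):
    return round(amount * (1 + tax_lookup(amount)), 2)

# Stub: a "dummy subprogram" honouring the subordinate's interface.
def tax_lookup_stub(amount):
    print(f"stub entered with amount={amount}")  # verification of entry
    return 0.10                                  # canned answer

# Driver: a minimal "main program" that feeds test-case data to the
# module under test and checks the results against expectations.
def driver():
    cases = [(100.0, 110.0), (0.0, 0.0)]
    for amount, expected in cases:
        result = invoice_total(amount, tax_lookup_stub)
        assert result == expected, (amount, result)
    print("all unit-test cases passed")

driver()
```

Because the stub returns a canned tax rate, the unit test exercises only `invoice_total`; the real tax-lookup module is tested separately and integrated later.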


Q4. Develop an integration testing strategy for any system that you have implemented already. List the problems encountered during the process.

Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules and build a program structure that has been dictated by the design.

Different Integration Strategies

There is often a tendency to attempt non-incremental integration; that is, to construct the program using a “big bang” approach. All modules are combined in advance and the entire program is tested as a whole. And chaos usually results! A set of errors is encountered, and correction is difficult because isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.

Incremental integration is the antithesis of the big bang approach. The program is constructed and tested in small segments, where errors are easier to isolate and correct, interfaces are more likely to be tested completely, and a systematic test approach may be applied. We discuss some of the incremental methods here:

Top-Down Integration

Top-down integration is an incremental approach to the construction of the program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module.

The integration process is performed in a series of five steps:

1. The main control module is used as a test driver, and stubs are substituted for all modules directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth-first or breadth-first), subordinate stubs are replaced one at a time with actual modules.
3. Tests are conducted as each module is integrated.
4. On completion of each set of tests, another stub is replaced with the real module.
5. Regression testing may be conducted to ensure that new errors have not been introduced.

The process continues from step 2 until the entire program structure is built.

The top-down strategy sounds relatively uncomplicated, but in practice logistical problems arise. The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels. Stubs replace low-level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure.

The tester is left with three choices:

1. Delay many tests until stubs are replaced with actual modules.
2. Develop stubs that perform limited functions that simulate the actual module.
3. Integrate the software from the bottom of the hierarchy upward.


The first approach causes us to lose some control over the correspondence between specific tests and the incorporation of specific modules; this can lead to difficulty in determining the cause of errors and tends to violate the highly constrained nature of the top-down approach. The second approach is workable but can lead to significant overhead, as stubs become increasingly complex. The third approach is discussed in the next section.

Bottom-Up Integration

Modules are integrated from the bottom to the top. In this approach, processing required for modules subordinate to a given level is always available, and the need for stubs is eliminated.

A bottom-up integration strategy may be implemented with the following steps:

1. Low-level modules are combined into clusters that perform a specific software sub-function.
2. A driver is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.

As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels of program structure are integrated top-down, the number of drivers can be reduced substantially and integration of clusters is greatly simplified.
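The bottom-up steps can be sketched with a toy cluster (the module names and data are invented): two low-level modules are combined into a cluster, and a driver coordinates the test-case input and output before the cluster is integrated further up the structure.

```python
# Two hypothetical low-level modules combined into a cluster:
def parse_record(line):
    name, score = line.split(",")
    return name.strip(), int(score)

def summarise(records):
    return {name: score for name, score in records}

# Cluster driver: coordinates test-case input and output for the
# cluster; it is discarded once the cluster is integrated upward.
def cluster_driver():
    lines = ["ada, 90", "bob, 75"]
    result = summarise(parse_record(l) for l in lines)
    assert result == {"ada": 90, "bob": 75}
    return result

cluster_driver()
```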

Regression Testing

Each time a new module is added as part of integration testing, the software changes. New data flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of an integration test strategy, regression testing is the re-execution of a subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects. Regression testing is the activity that helps to ensure that changes do not introduce unintended behavior or additional errors.

How is a regression test conducted?

Regression testing may be conducted manually, by re-executing a subset of all test cases, or by using automated capture-playback tools. Capture-playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison.


The regression test suite contains three different classes of test cases.

1. A representative sample of tests that will exercise all software functions.
2. Additional tests that focus on software functions that are likely to be affected by the change.
3. Tests that focus on software components that have been changed.
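The three classes above suggest a simple selection scheme. A sketch (the test names and tagging convention are invented for illustration): each test is tagged with its class, and after a change only the relevant subset is re-executed.

```python
# Hypothetical regression suite: each test is tagged with one of the
# three classes described above.
suite = [
    {"name": "smoke_all_functions", "cls": "representative"},
    {"name": "report_totals",       "cls": "affected"},
    {"name": "new_tax_rule",        "cls": "changed"},
    {"name": "legacy_export",       "cls": "representative"},
]

def select_regression_tests(suite, wanted_classes):
    """Return only the subset of tests relevant to the change."""
    return [t["name"] for t in suite if t["cls"] in wanted_classes]

# After a change, re-run the tests for affected and changed components
# rather than the entire suite.
subset = select_regression_tests(suite, {"affected", "changed"})
assert subset == ["report_totals", "new_tax_rule"]
```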

Note: It is impractical and inefficient to re-execute every test for every program function once a change has occurred. Selection of an integration strategy depends upon software characteristics and sometimes the project schedule. In general, a combined approach that uses a top-down strategy for the upper levels of the program structure, coupled with a bottom-up strategy for the subordinate levels, may be the best compromise.

Regression tests should focus on critical module functions.

What is a critical module?

A critical module has one or more of the following characteristics:

- Addresses several software requirements
- Has a high level of control
- Is complex or error-prone
- Has a definite performance requirement

Integration Test Documentation

An overall plan for the integration of the software and a description of the specific tests are documented in a test specification. The specification is a deliverable in the software engineering process and becomes part of the software configuration.

Test Specification Outline

I. Scope of testing
II. Test Plan
   1. Test phases and builds
   2. Schedule
   3. Overhead software
   4. Environment and resources
III. Test Procedures
   1. Order of integration
      - Purpose
      - Modules to be tested
   2. Unit test for modules in build
      - Description of test for module n
      - Overhead software description
      - Expected results
   3. Test environment
      - Special tools or techniques
      - Overhead software description
   4. Test case data
   5. Expected results for build
IV. Actual Test Results
V. References
VI. Appendices

The following criteria and corresponding tests are applied for all test phases:

Interface integrity. Internal and external interfaces are tested as each module is incorporated into the structure.

Functional validity. Tests designed to uncover functional errors are conducted.

Information content. Tests designed to uncover errors associated with local or global data structures are conducted.

Performance. Tests designed to verify performance bounds established during software design are conducted.

A schedule for integration, overhead software, and related topics is also discussed as part of the “Test Plan” section. Start and end dates for each phase are established, and availability windows for unit-tested modules are defined. A brief description of overhead software (stubs and drivers) concentrates on characteristics that might require special effort. Finally, the test environments and resources are described.

Q5. Explain the use of the ISO 9126 Standard Quality Model.

ISO/IEC 9126 (Software engineering — Product quality) is an international standard for the evaluation of software quality. The fundamental objective of this standard is to address some of the well-known human biases that can adversely affect the delivery and perception of a software development project. These biases include changing priorities after the start of a project or not having any clear definition of "success". By clarifying and then agreeing on the project priorities, and subsequently converting abstract priorities (e.g., compliance) into measurable values (e.g., output data can be validated against schema X with zero intervention), ISO/IEC 9126 tries to develop a common understanding of the project's objectives and goals.

The standard is divided into four parts:

- quality model
- external metrics
- internal metrics
- quality in use metrics


Quality Model

The quality model established in the first part of the standard, ISO/IEC 9126-1, classifies software quality in a structured set of characteristics and sub-characteristics as follows:

Functionality - A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs.

Suitability

Accuracy

Interoperability

Security

Functionality Compliance

Reliability - A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time.

Maturity

Fault Tolerance

Recoverability

Reliability Compliance

Usability - A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users.

Understandability

Learnability

Operability

Attractiveness

Usability Compliance

Efficiency - A set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions.

Time Behaviour

Resource Utilisation

Efficiency Compliance

Maintainability - A set of attributes that bear on the effort needed to make specified modifications.

Analyzability

Changeability


Stability

Testability

Maintainability Compliance

Portability - A set of attributes that bear on the ability of software to be transferred from one environment to another.

Adaptability

Installability

Co-Existence

Replaceability

Portability Compliance

Each quality sub-characteristic (e.g. adaptability) is further divided into attributes. An attribute is an entity which can be verified or measured in the software product. Attributes are not defined in the standard, as they vary between different software products.

Software product is defined in a broad sense: it encompasses executables, source code, architecture descriptions, and so on. As a result, the notion of user extends to operators as well as to programmers, who are users of components such as software libraries.

The standard provides a framework for organizations to define a quality model for a software product. In doing so, however, it leaves up to each organization the task of specifying precisely its own model. This may be done, for example, by specifying target values for quality metrics that evaluate the degree of presence of the quality attributes.
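An organization-specific model of this kind can be sketched as a table of metric targets checked against measured values (the metric names and thresholds below are invented for illustration, not taken from the standard):

```python
# A sketch of an organization-specific quality model: target values
# for a few metrics (names and thresholds are hypothetical).
targets = {
    "max_cyclomatic_complexity": 10,   # maintainability
    "min_branch_coverage_pct":   80,   # testability
    "max_mean_response_ms":     200,   # efficiency (time behaviour)
}

measured = {
    "max_cyclomatic_complexity": 7,
    "min_branch_coverage_pct":   85,
    "max_mean_response_ms":     250,
}

def evaluate(targets, measured):
    """Return the metrics whose measured value misses its target."""
    failures = []
    for metric, target in targets.items():
        value = measured[metric]
        ok = value >= target if metric.startswith("min_") else value <= target
        if not ok:
            failures.append(metric)
    return failures

assert evaluate(targets, measured) == ["max_mean_response_ms"]
```

Here the response-time target is missed, flagging an efficiency shortfall even though maintainability and testability targets are met.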

Internal Metrics

Internal metrics are those which do not rely on software execution (static measures).

External Metrics

External metrics are applicable to running software.

Quality in Use Metrics

Quality in use metrics are only available when the final product is used in real conditions.

Ideally, the internal quality determines the external quality, and the external quality determines the quality in use.

This standard stems from the model established in 1977 by McCall and his colleagues, who proposed a model to specify software quality. The McCall quality model is organized around three types of Quality Characteristics:

Factors (To specify): They describe the external view of the software, as viewed by the users.


Criteria (To build): They describe the internal view of the software, as seen by the developer.

Metrics (To control): They are defined and used to provide a scale and method for measurement.

ISO/IEC 9126 distinguishes between a defect and a non-conformity, a defect being “the nonfulfilment of intended usage requirements”, whereas a non-conformity is “the nonfulfilment of specified requirements”. A similar distinction is made between validation and verification, known as V&V in the testing trade.


August 2010

Bachelor of Science in Information Technology (BScIT) – Semester 6

BT0056 – Software Testing and Quality Assurance – 2 Credits

(Book ID: B0649)

Assignment Set – 2 (30 Marks)

Answer all questions 5 x 6 = 30

Q1. Write a note on Quality Assurance in software support projects.

It is vital for software developers to recognize that the quality of support for a product is normally as important to customers as the quality of the product itself. Delivering software technical support has quickly grown into big business; today software support is a business in its own right. Software support operations do not exist because they want to: they exist because they fill a vital void in the software industry, helping customers use the computer systems in front of them, a job that is getting more and more difficult. There has been a phenomenal increase in the number of people who use their computers for “mission critical” applications. This puts extra pressure on the software support groups in organizations. During the maintenance phase of a software project, complexity metrics can be used to track and control the complexity level of modified modules.

In this scenario, the software developer must ensure that the customer’s support requirements are identified, and must design and engineer the business and technical infrastructure from which the product will be supported. This applies equally to businesses producing software packages and to in-house information systems departments. Support for software can be complex and may include:

- User documentation
- Packaging and distribution arrangements
- Implementation and customization services and consulting
- Product training
- Help desk assistance
- Error reporting and correction
- Enhancement


For an application installed at a single site, the support requirement may be simply to provide a telephone line and assign a staff member to receive and follow up queries. For a shrink-wrapped product, it may mean providing localization and worldwide distribution facilities and implementing major administrative computer systems to support global help-desk services.

Q2. Explain a few of the quality assurance activities.

SQA comprises a variety of tasks associated with two different constituencies:

1. The software engineers who do the technical work, such as:

- Performing quality assurance by applying technical methods
- Conducting formal technical reviews

Formal technical reviews are conducted to assess the test strategy and the test cases themselves. They can uncover inconsistencies, omissions, and outright errors in the testing approach, which saves time and improves product quality.

- Performing well-planned software testing

A continuous improvement approach should be developed for the testing process: the test strategy should be measured, and the metrics collected during testing should be used as part of a statistical process control approach for software testing.

2. The SQA group, which has responsibility for:

- Quality assurance planning oversight
- Record keeping
- Analysis and reporting

Q3. Explain the contents of the SQA plan.

QA is an essential activity for any business that produces products to be used by others. The SQA group serves as the customer’s in-house representative; that is, the people who perform SQA must look at the software from the customer’s point of view. The SQA group attempts to answer the questions below and hence ensure the quality of the software.

The questions are:

1. Has software development been conducted according to pre-established standards?
2. Have the technical disciplines properly performed their role as part of the SQA activity?

SQA Activities


The SQA plan is interpreted as shown in Fig 2.2. SQA comprises a variety of tasks associated with two different constituencies:

1. The software engineers who do the technical work, such as:

- Performing quality assurance by applying technical methods
- Conducting formal technical reviews
- Performing well-planned software testing

2. The SQA group, which has responsibility for:

- Quality assurance planning oversight
- Record keeping
- Analysis and reporting

QA activities performed by the SE team and the SQA group are governed by the following plan:

- Evaluations to be performed
- Audits and reviews to be performed
- Standards that are applicable to the project
- Procedures for error reporting and tracking
- Documents to be produced by the SQA group
- Amount of feedback provided to the software project team

What are the activities performed by the SQA and SE teams?
o Prepare an SQA plan for the project
o Participate in the development of the project's software description
o Review software-engineering activities to verify compliance with the defined software process
o Audit designated software work products to verify compliance with those defined as part of the software process
o Ensure that deviations in software work and work products are documented and handled according to a documented procedure


o Record any noncompliance and report it to senior management

Ques 4. Explain different methods available in white box testing with examples.

This testing technique takes into account the internal structure of the system or component, so the entire source code of the system must be available. The technique is known as white box testing because the complete internal structure and working of the code are visible to the tester. White box testing helps to derive test cases that ensure:

1. All independent paths are exercised at least once.
2. All logical decisions are exercised for both true and false paths.
3. All loops are executed at their boundaries and within operational bounds.
4. All internal data structures are exercised to ensure validity.
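A minimal sketch of how the first three criteria drive test-case design, assuming an invented function (`classify_total` and its 10% discount rule are illustrative only, not from the course text):

```python
def classify_total(prices, discount_threshold=100.0):
    """Sum a list of prices; apply a 10% discount when the total
    exceeds the threshold."""
    total = 0.0
    for p in prices:                    # loop: test with 0, 1, and many items
        total += p
    if total > discount_threshold:      # decision: needs true AND false cases
        total *= 0.9
    return round(total, 2)

# White-box test cases chosen from the code's structure, not just the spec:
assert classify_total([]) == 0.0              # loop executes zero times
assert classify_total([50.0]) == 50.0         # one iteration, decision false
assert classify_total([60.0, 70.0]) == 117.0  # many iterations, decision true
```

The point of the sketch is that each test case is selected to force a specific path, branch outcome, or loop count, which is exactly what distinguishes white box testing from testing against the specification alone.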

White box testing helps to:
o Traverse complicated loop structures
o Cover common data areas
o Cover control structures and sub-routines
o Evaluate different execution paths
o Test a module and the integration of many modules
o Discover logical errors, if any
o Understand the code

Why is white box testing used to test conformance to requirements?

Logic errors and incorrect assumptions are most likely to be made when coding for "special cases", so these execution paths need to be tested.

Designers may make incorrect assumptions about execution paths and so introduce design errors; white box testing can find these errors.

Typographical errors are random: they are just as likely to appear on an obscure logical path as on a mainstream one.

“Bugs lurk in corners and congregate at boundaries”
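One way to act on this observation is boundary-value testing: place test cases at, just below, and just above each boundary of the input domain. A small sketch, assuming an invented example (the `is_pass` function and its pass mark of 40 are hypothetical):

```python
def is_pass(score, pass_mark=40, max_score=100):
    """Return True when score is a passing mark; reject out-of-range input."""
    if not 0 <= score <= max_score:
        raise ValueError("score out of range")
    return score >= pass_mark

# Boundary-value cases: the "corners" where bugs congregate.
assert is_pass(40) is True     # exactly at the pass boundary
assert is_pass(39) is False    # just below the boundary
assert is_pass(0) is False     # lower edge of the valid domain
assert is_pass(100) is True    # upper edge of the valid domain
```

An off-by-one mistake such as writing `score > pass_mark` would be caught only by the case at exactly 40, which is why tests must sit on the boundary itself rather than comfortably inside the range.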

Ques 5. Explain the use of the SCM process.

In software engineering, software configuration management (SCM) is the task of tracking and controlling changes in the software. Configuration management practices include revision control and the establishment of baselines.

SCM concerns itself with answering the question "Somebody did something; how can one reproduce it?" Often the problem involves not reproducing "it" identically, but with controlled, incremental changes. Answering the question thus becomes a matter of comparing different results and analysing their differences. Traditional configuration management typically focused on the controlled creation of relatively simple products. Now, implementers of SCM face the challenge of dealing with relatively minor increments under their own control, in the context of the complex system being developed.

The goals of SCM are generally:

Configuration identification - Identifying configurations, configuration items and baselines.

Configuration control - Implementing a controlled change process. This is usually achieved by setting up a change control board whose primary function is to approve or reject all change requests that are sent against any baseline.

Configuration status accounting - Recording and reporting all the necessary information on the status of the development process.

Configuration auditing - Ensuring that configurations contain all their intended parts and are sound with respect to their specifying documents, including requirements, architectural specifications and user manuals.

Build management - Managing the process and tools used for builds.

Process management - Ensuring adherence to the organization's development process.

Environment management - Managing the software and hardware that host the system.

Teamwork - Facilitating team interactions related to the process.

Defect tracking - Making sure every defect has traceability back to the source.
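Two of these goals, configuration control and configuration status accounting, can be sketched with a toy change control board. This is an illustrative Python sketch under invented names (`ChangeRequest`, `ChangeControlBoard`, the CR identifiers), not a real SCM tool:

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    """A request for change raised against a named baseline."""
    cr_id: str
    baseline: str
    description: str
    status: str = "open"

@dataclass
class ChangeControlBoard:
    """Approves or rejects every change request sent against a baseline
    (configuration control) and logs each decision (status accounting)."""
    log: list = field(default_factory=list)

    def decide(self, cr: ChangeRequest, approved: bool) -> ChangeRequest:
        cr.status = "approved" if approved else "rejected"
        self.log.append((cr.cr_id, cr.status))  # status-accounting record
        return cr

ccb = ChangeControlBoard()
cr = ChangeRequest("CR-001", "v1.0-baseline", "Fix boundary defect in login")
ccb.decide(cr, approved=True)
print(cr.status)  # approved
```

The key design point mirrored here is that no request changes a baseline directly: every change passes through the board's single decision point, and the decision log is what makes status accounting and later audits possible.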