A Survey on Software Architecture Analysis Methods

Liliana Dobrica and Eila Niemelä, Member, IEEE Computer Society

Abstract—The purpose of the architecture evaluation of a software system is to analyze the architecture to identify potential risks and to verify that the quality requirements have been addressed in the design. This survey shows the current state of research in this domain by presenting and discussing eight of the most representative architecture analysis methods. The selection of the studied methods tries to cover as many particular views of objective reflections as possible to be derived from the general goal. The role of the discussion is to offer guidelines on the use of the most suitable method for an architecture assessment process. We concentrate on discovering similarities and differences between these eight available methods by making classifications, comparisons, and appropriateness studies.

Index Terms—Software architecture, analysis techniques and methods, quality attributes, scenarios.


1 INTRODUCTION

ONE of the major issues in software systems development today is quality. The idea of predicting the quality of a software product from a higher-level design description is not a new one. In 1972, Parnas [44] described the use of modularization and information hiding as a means of high-level system decomposition to improve flexibility and comprehensibility. In 1974, Stevens et al. [52] introduced the notions of module coupling and cohesion to evaluate alternatives for program decomposition. During recent years, the notion of software architecture (SA) has emerged as the appropriate level for dealing with software quality. This is because the scientific and industrial communities have recognized that SA sets the boundaries for the software qualities of the resulting system [7].

Recent efforts towards systematizing the implications of using design patterns and architectural styles contribute, frequently in an informal way, to guaranteeing the quality of a design [16], [19]. It is recognized that it is not possible to measure the quality attributes of the final system based on the SA design [12]; this would imply that the detailed design and implementation represent a strict projection of the architecture. The aim of analyzing the architecture of a software system is to predict the quality of a system before it has been built, and not to establish precise estimates but the principal effects of an architecture [27]. The purpose of the evaluation is to analyze the SA to identify potential risks and to verify that the quality requirements have been addressed in the design [38].

More formal efforts concentrate on ensuring that quality is addressed at the architectural level. Different communities of software metrics, scenario-based, and attribute model-based analysts have developed their own techniques. The software metrics community has used module coupling and cohesion notions to define predictive measures of software quality [15]. Other methods include a more abstract evaluation of how the SA fulfills the domain functionality and other nonfunctional qualities [27]. Instead of presenting metrics for predictive evaluation, they exemplify the argument for performing a more qualitative or quantitative evaluation. Methods based on scenarios could be considered mature enough, since they have been applied and validated over the past several years, but the development of attribute model-based architecture evaluation methods is still ongoing.

Future work is needed to develop systematic ways of bridging the quality requirements of software systems with their architecture. The open problem is how to take better advantage of software architectural concepts to analyze software systems for quality attributes in a systematic and repeatable way. Being a new research domain, most of the structural methods for assessing the quality of SAs have been presented in conference and journal papers. Although refinement and experiments for validating some of the methods are ongoing, they deserve our attention because they contribute to the development of what is still an immature research area. Therefore, we decided to study such methods in order to cover as many particular points of view of objective reflections as possible to be derived from the general goal. SA is considered the first product in an architecture-based development process and, from this point of view, the analysis at this level should reveal requirement conflicts and incomplete design descriptions from a particular stakeholder's perspective. The analysis could be associated with the iterative improvement of the architecture of a greenfield software system or with the reengineering of an existing one. Prediction methods of a single quality attribute are meant to minimize

638 IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, VOL. 28, NO. 7, JULY 2002

. L. Dobrica is with the Faculty of Automatic Control and Computers, University Politehnica of Bucharest, Spl. Independentei 313, Sect. 6, Bucharest, 77206 Romania. E-mail: [email protected].

. E. Niemelä is with the Software Architectures Group, Embedded Software, VTT Electronics, P.O. Box 1100, FIN-90571 Oulu, Finland. E-mail: [email protected].

Manuscript received 3 July 2000; revised 22 Mar. 2001; accepted 9 July 2001. Recommended for acceptance by D. Rosenblum. For information on obtaining reprints of this article, please send e-mail to: [email protected], and reference IEEECS Log Number 112394.

0098-5589/02/$17.00 © 2002 IEEE


risks only from that attribute perspective at a fine level. This might not be sufficient if the quality of a system is represented by a variety of attributes that interact with each other and a balance between them should be established.

The discussed methods include the scenario-based architecture analysis method (SAAM) [26] and its three particular extensions, one founded on complex scenarios (SAAMCS) [35] and two extensions for reusability, ESAAMI [43] and SAAMER [41]; the architecture trade-off analysis method (ATAM) [29]; scenario-based architecture reengineering (SBAR) [8]; architecture-level prediction of software maintenance (ALPSM) [10]; and a software architecture evaluation model (SAEM) [18].

This survey puts all these developments in the same perspective by reviewing the state of software architecture analysis methods. The beginning of this study is dedicated to definitions of the terminology that is frequently used in the context of the methods. Based on these general elements and on others related to methodology characterization, we define a conceptual framework for the presentation and comparison of the analysis methods, looking for 1) their progress towards refinement over time, 2) their main contributions, and 3) the advantages obtained by using them. The discussions surrounding the selected methods focus on 1) discovering differences and similarities and 2) making classifications, comparisons, and appropriateness studies. Finally, we draw conclusions about the real level of the current research as well as the future work in this domain defined by the presented methods.

2 DEFINITIONS OF THE MAIN TERMINOLOGY

2.1 Quality Attributes and the Quality Model

A quality attribute is a nonfunctional characteristic of a component or a system. Software quality is defined in IEEE 1061 [22] as the degree to which software possesses a desired combination of attributes. Another standard, ISO/IEC Draft 9126-1 [23], defines a software quality model. According to this model, there are six categories of characteristics (functionality, reliability, usability, efficiency, maintainability, and portability), which are divided into subcharacteristics. These are defined by means of externally observable features for each software system. In order to ensure its general application, the standard does not specify which these attributes are, nor how they can be related to the subcharacteristics.
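The two-level quality model just described can be sketched as a small data structure. The subcharacteristics listed below are commonly cited examples from the literature, not a normative enumeration, since, as noted, the standard deliberately leaves the concrete attributes and their mapping open:

```python
# Illustrative sketch of the ISO/IEC 9126-1 quality model: the six
# characteristic categories are named by the standard; the example
# subcharacteristics are commonly cited in the literature but are
# assumptions here, not a normative list.
quality_model = {
    "functionality":   ["suitability", "interoperability", "security"],
    "reliability":     ["maturity", "fault tolerance", "recoverability"],
    "usability":       ["understandability", "learnability", "operability"],
    "efficiency":      ["time behaviour", "resource utilisation"],
    "maintainability": ["analysability", "changeability", "testability"],
    "portability":     ["adaptability", "installability", "replaceability"],
}

def category_of(subcharacteristic: str):
    """Trace a subcharacteristic back to its top-level characteristic."""
    for characteristic, subs in quality_model.items():
        if subcharacteristic in subs:
            return characteristic
    return None

print(category_of("changeability"))  # maintainability
```

A concrete quality requirement can then be traced to the category it refines, which is the kind of narrowing of scope the following paragraphs discuss.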

An investigation of the literature has shown that a large number of definitions of quality attributes exist that relate to similar abilities of a software system. For example, maintainability, flexibility, and modifiability are described as follows:

Maintainability is a set of attributes that have a bearing on the effort needed to make specified modifications [23]. Modifications may include corrections, improvements, or adaptations of software to changes in the environment and in the requirements and functional specification.

Modifiability is the ability to make changes quickly and cost-effectively [7]. Modifications to a system can be categorized as extensibility (the ability to acquire new features), deleting unwanted capabilities (to simplify the functionality of an existing application), portability (adapting to new operating environments), or restructuring (rationalizing system services, modularizing, creating reusable components).

Flexibility is the ease with which a system or component can be modified for use in applications or environments other than those for which it was specifically designed [21].

Although different in wording, the definitions are almost identical in their semantics. The limitation of these definitions with respect to the purpose of analyzing SAs is that their scope is too broad. The scope has to be narrowed based on the relevant context.

2.2 Software Architecture Definition and Description

Definition. The software architecture of a system is defined as "the structure or structures of the system, which comprise software components, the externally visible properties of those components, and the relationships among them" [7]. This definition focuses only on the internal aspects of a system, and most of the analysis methods are based on it. Another brief definition, given by Garlan and Perry [45], [49], establishes SA as "the structure of components in a program or system, their interrelationships, and the principles and guides that control the design and evolution in time." This process-centered definition is used by SAEM because it takes into account the presence of principles and guidelines in the architecture description. For an analysis of flexibility, the external environment is just as important as an internal entity of a system [37]. The SA definition should consist of two parts, namely a macroarchitecture, which focuses on the environment of the system, and a microarchitecture, which covers the internal structure of a system.

Description. Research in SA description addresses the different perspectives one could have of the architecture. Each perspective is described as a view. Although there is still no general agreement about which views are the most useful, the reason behind multiple views is always the same: Separating different aspects into separate views helps people manage complexity. Several models have been proposed that introduce a number of views that should be described in the SA [33]. The view models share the fact that they address a static structure, a dynamic aspect, a physical layout, and the development of the system. Bass et al. [7] introduce the concept of architecture structures as being synonymous with views. In general, it is the responsibility of the architect to decide which view to use to describe the SA.

From the point of view of quality analysis at the architectural level, the possible representations could be very relevant to quality prediction and effort estimation (Fig. 1). An evaluation method may need structures which are concerned with the decomposition of the functionality that the products need to support, the realization in a detailed design of the conceptual abstractions from which the system is built, logical concurrency, hardware, files, and directories. The components and relations of each structure are representative. For instance, the logical concurrency structure



contains units that are refined to processes and threads. Its relations include synchronizes-with, is-higher-priority-than, sends-data-to, can't-run-without, etc. Properties relevant to this structure include priority, preemptibility, and execution time.

A taxonomy of formally defined orthogonal properties of SAs (TOPSA) that extends the first SA definition is given in [14]. The TOPSA space has three dimensions: abstraction level (conceptual, realization), dynamism (static, dynamic), and aggregation level; it can facilitate discussions regarding SA during development and evolution. The TOPSA and the architecture representation based on multiple views complement each other (Fig. 1). Different views offer valuable examples for the abstraction, dynamism, and aggregation dimensions. An analysis method can exploit these relationships in the form of a defined set of rules, which states which view in the TOPSA space is the most appropriate for a given quality attribute.
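As a hypothetical illustration of such a rule set, an analysis method might keep a simple lookup from quality attribute to the coordinates in the TOPSA space judged most informative for it. The pairings below are plausible examples of our own, not prescribed by TOPSA:

```python
# Hypothetical rule set: quality attribute -> (abstraction, dynamism)
# coordinates in the TOPSA space to inspect. The pairings are
# illustrative assumptions, not part of TOPSA itself.
preferred_view = {
    "maintainability": ("conceptual", "static"),   # decomposition view
    "performance":     ("realization", "dynamic"), # concurrency view
    "portability":     ("realization", "static"),  # development view
}

def view_for(attribute: str):
    """Return the (abstraction, dynamism) coordinates to inspect, if known."""
    return preferred_view.get(attribute)

print(view_for("performance"))  # ('realization', 'dynamic')
```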

Descriptions of component types and their topology, of patterns of data and control interactions among the components, and of the benefits and drawbacks of using them are also documented in architectural styles [49], [16] and design patterns [19]. Compositions of patterns that might be used in an SA are evaluated in various quality terms in [32].

2.3 Evaluation Techniques at the Architecture Level

Two basic classes of evaluation techniques available at the architecture level, questioning and measuring, are defined in two important research reports [1], [7]. Questioning techniques generate qualitative questions to be asked of an architecture, and they can be applied for any given quality. This class includes scenarios, questionnaires, and checklists. Measuring techniques suggest quantitative measurements to be made on an architecture. They are used to answer specific questions and address specific software qualities and, therefore, are not as broadly applicable as questioning techniques. This class includes metrics, simulations, prototypes, and experiences. Generality, level of detail, phase, and what is evaluated represent a four-dimensional framework for comparing these techniques [7]. Regarding generality, the techniques can be general (questionnaires), domain-specific (checklists, prototypes), or system-specific (scenarios). The level of detail (coarse-grained, medium, or fine) indicates how much information about the architecture is required to perform the evaluation. There are three phases of interest to architecture evaluation: early, middle, and postdeployment. These phases occur after the initial high-level architectural decisions (questionnaires, prototypes), at any point after some elaboration of the architecture design (scenarios, checklists), and after the system has been completely designed, implemented, and deployed.

In terms of quantitative and qualitative aspects, both classes of techniques are needed for evaluating architectures. Various analysis models expressed in formal methods are included in the quantitative techniques. Qualitative techniques illustrate SA evaluations using scenarios. A description of the changes that are needed for each scenario represents a qualitative method of evaluation. From this perspective, scenarios are necessary but not sufficient to predict and control quality attributes, and they have to be supplemented with other evaluation techniques and, particularly, quantitative interpretations. For example, including questions about quality indicators in the scenarios enriches the architecture evaluation. Quantitative interpretations of scenario evaluations could be a ranking of the effects of scenarios (i.e., a five-level scale: ++, +, +/-, -, --) or an absolute statement, which estimates the size of modifications using different metrics, such as lines of code, function points, or object points.
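As a sketch of such a quantitative interpretation (the scenario names, architecture labels, and encoding below are our own illustrations, not taken from the survey), the five-level scale ++, +, +/-, -, -- can be encoded as ordinals so that candidate architectures can be ranked by their total scenario effect:

```python
# Encode the five-level scale as ordinal values so scenario effects
# can be aggregated and compared across candidate architectures.
SCALE = {"++": 2, "+": 1, "+/-": 0, "-": -1, "--": -2}

def rank_architectures(effects):
    """effects: {architecture: {scenario: scale symbol}}.
    Returns the architectures sorted by total scenario effect, best first."""
    totals = {arch: sum(SCALE[sym] for sym in scen.values())
              for arch, scen in effects.items()}
    return sorted(totals, key=totals.get, reverse=True)

# Hypothetical evaluation of two candidate architectures against two scenarios.
effects = {
    "A1": {"add codec": "+", "port to new OS": "--"},
    "A2": {"add codec": "+/-", "port to new OS": "+"},
}
print(rank_architectures(effects))  # ['A2', 'A1']: A2's total (1) beats A1's (-1)
```

An absolute interpretation would instead replace the symbols with an estimated modification size (e.g., lines of code) and aggregate in the same way.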

Most of the considered architecture analysis methods use scenarios. The existing practices with scenarios are systematized in [30]. Scenarios are a good way of synthesizing individual interpretations of a software quality into a common view [34]. This view is more concrete than the general definition of software quality [21], and it also incorporates the specifics of a system to be developed (i.e., it is more context-sensitive [3]).


Fig. 1. Architecture description and the relevance to analysis of quality attributes.


Scenarios are a postulated set of uses or modifications of the system. The modifications could be a change to how one or more components perform an assigned activity, the addition of a component to perform some activity, the addition of a connection between existing components, or a combination of these factors. In creating and organizing scenarios, it is important that all roles relevant to the system are considered, since design decisions may be made to accommodate any of these roles. The different roles represent stakeholders related to a system. Such stakeholders may be the end user, who is responsible for executing the software; the system administrator, who is responsible for managing the data repositories used by the system; the developer, who is responsible for modifying the runtime functions of the system; and the organization responsible for approving new requirements.

2.4 A Framework for Characterization and Comparison of Analysis Methods

A framework for the presentation and comparison of the SA analysis methods is described and motivated by the main concepts stated above and by characterizations of methodologies (Table 1). Methods include a predefined and organized collection of techniques. However, in addition to a set of techniques, a method should include a set of rules that establishes how to conduct an activity which has a precise goal regarding the result to be achieved. The rules state by whom, in what order, and in what way the techniques are used to accomplish the method's objective.

3 OVERVIEW OF ANALYSIS METHODS

3.1 Scenario-Based Architecture Analysis Method (SAAM)

SAAM appeared in 1993, corresponding with the trend towards a better understanding of general architectural concepts, as a foundation for proof that a software system meets more than just functional requirements [26], [27]. Thus, in the early stage of a system's development, the correction of architectural mistakes detected by the analysis is still possible without causing excessively high costs. The method's main activities are presented in an article where different user interface architectures are assessed with respect to modifiability [28].

Specific goals. SAAM's goal is to verify basic architectural assumptions and principles against the documents describing the desired properties of an application. Additionally, the analysis offers a contribution to assessing the risks inherent in the architecture. SAAM guides the inspection of the architecture, focusing on potential trouble spots, such as requirement conflicts or incomplete design specifications from a particular stakeholder's perspective. The capability of SAAM to evaluate the suitability of an architecture with respect to the desired properties of a specific system can also be used to compare different architectures.

The evaluation technique. Scenarios represent the foundation for illuminating the properties of an SA. They illustrate the kinds of activities that the system must support and the kinds of anticipated changes that will be made to the system. During the analysis, it is determined whether a scenario requires modifications to the architecture. Scenarios that require no modifications are called direct, and those that require modifications are called indirect.

The quality attributes. The fundamental characteristic of this method is the concretization of any quality attribute in the form of scenarios. However, modifiability is still considered the quality attribute analyzed by SAAM.

The stakeholders' involvement. SAAM harmonizes the various interests of the stakeholder groups, thus setting up a common understanding of the SA as a basis for later decisions.

SA description. The method is applied to a final version of the SA but prior to the detailed design. The description of the SA should be in a form that is easily understandable by all stakeholders. Functionality, structure, and allocation are the three perspectives defined for describing SAs. Functionality is what the system does. A small and simple lexicon is used for describing structures, providing a common level of understanding and a basis for comparing different architectures. The SA should be presented in a static representation (system


TABLE 1. Framework Elements for Characterization and Comparison of Analysis Methods


computation, data components, data and control connections) and a dynamic representation of how the system behaves over time. The allocation of function to structure identifies how the domain functionality is realized in the software structure. The components can be described either as modules in the sense of Parnas [44] or as cooperating sequential processes.

Method's activities. The main inputs of SAAM are the problem description, the requirements statement, and the architecture description(s). Fig. 2 presents the inputs associated with the activities of SAAM, carried out either for a single architecture or for a comparison of multiple ones.

In the case of a single SA analysis, the activities are scenario development, SA description, individual scenario evaluation, and scenario interaction. In the scenario development, SAAM requires the presence of all stakeholders, who identify possible scenarios as described above. SAAM considers the set of scenarios to be complete when the addition of a new scenario no longer disturbs the design. The second activity, SA description, is recommended to be carried out in parallel with the first activity in an iterative mode. The final version of the SA description, together with the scenarios, serves as the input for the subsequent activities of the method.

SAAM evaluates a scenario by investigating which architectural elements are affected by that scenario. Table 2 is an example of a scenario evaluation for an architecture that contains components called A, B, C, D, and E. For a single architecture analysis, the purpose is to determine which scenarios interact, i.e., which ones affect the same component. The cost of the modifications associated with each indirect scenario is estimated by listing the components and the connectors that are affected and then counting the number of changes. If the analysis is performed with the intention of choosing among several architectural alternatives, the results of the candidates can be compared in a final SAAM activity. To this end, the scenarios and the scenario interactions are weighted in terms of their relative importance. This weighting is then used to determine an overall evaluation of the candidate architectures.

Results interpretation. A high interaction of unrelated scenarios could indicate a poor separation of functionality. The amount of scenario interaction is related to metrics such as structural complexity, coupling, and cohesion and is, therefore, correlated with the number of defects in the final product. SAAM cannot give precise measures or metrics of fitness. The result is a set of small metrics that permits a per-scenario comparison of competing SAs.

Reusability of the existing knowledge base. SAAM does not consider this issue.

Method validation. SAAM is a mature method, validated in various case studies. The case studies include global information systems, air traffic control, WRCS (a revision control system), user interface development environments, Internet information systems, keyword in context (KWIC) systems, and embedded audio systems.


Fig. 2. SAAM inputs and activities.

TABLE 2. Scenario Evaluation


3.2 SAAM Founded on Complex Scenarios (SAAMCS)

SAAMCS considers the complexity of scenarios to be the most important factor for risk assessment [35]. SAAMCS's contributions extending SAAM are directed, on the one hand, to the way of looking for scenarios and, on the other, to where their impact is evaluated.

Specific goal. Risk assessment represents the only goal of SAAMCS.

The included evaluation technique. SAAMCS looks for scenarios that are potentially complex to realize. Based on the initiator of the scenario, the SA description, and version conflicts, a list of classes of scenarios that are complicated to implement is provided.

The quality attributes. Flexibility represents the quality attribute analyzed by SAAMCS.

The stakeholders' involvement. The method appreciates stakeholders' involvement and identifies the important role of the initiator of a scenario. The initiator is the organizational unit that has the most interest in the implementation of that scenario.

SA description. SAAMCS is applied to the final version of the architecture, which is described in sufficient detail. In this method, the idea is advanced that the systems within a domain are not isolated but are instead integrated within an environment. As a result, the description of the SA is divided into a macroarchitecture and a microarchitecture.

Method's activities. Fig. 3 describes the inputs and activities of SAAMCS. In the scenario development, a two-dimensional framework diagram (five categories of complex scenarios, four sources of changes) that may help to discover complicated scenarios is defined. The sources of changes are functional requirements, quality requirements, external components, and the technical environment. The categories of complex scenarios are adaptations to the system with external effects, to the environment with effects on the system, to the macroarchitecture, and to the microarchitecture, and the introduction of version conflicts.

Regarding the scenario impact evaluation, SAAMCS introduces and uses a measurement instrument to express the effect of scenarios. The defined instrument includes factors that influence the complexity of scenarios. Three different factors are identified: the level of impact of the scenario, on a four-point scale (1) no impact, 2) affects one component, 3) affects several components, 4) affects the SA); the number of owners involved in the information system; and the presence of version conflicts, again on a four-point scale (1) no problem with different versions, 2) the presence is undesirable but not prohibitive, 3) creates complications related to configuration management, 4) creates conflicts). The results are expressed in a table similar to Table 3.
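To make the instrument concrete, the three factors can be recorded per scenario as in the following sketch. The Python encoding, the scale labels, and the example scenarios are illustrative assumptions; SAAMCS itself only defines the factors and their levels.

```python
from dataclasses import dataclass

# Illustrative encodings of the two four-level SAAMCS scales.
IMPACT = {"no_impact": 1, "one_component": 2, "several_components": 3, "architecture": 4}
VERSION = {"no_problem": 1, "undesirable": 2, "config_management": 3, "conflicts": 4}

@dataclass
class ScenarioEffect:
    name: str
    impact: int    # 1..4, level of impact of the scenario
    owners: int    # number of owners involved in the information system
    version: int   # 1..4, presence of version conflicts

    def row(self) -> tuple:
        # One row of a result table similar to Table 3.
        return (self.name, self.impact, self.owners, self.version)

# Hypothetical change scenarios for a business information system:
effects = [
    ScenarioEffect("change tax rules", IMPACT["several_components"], 2, VERSION["undesirable"]),
    ScenarioEffect("add report format", IMPACT["one_component"], 1, VERSION["no_problem"]),
]
for e in effects:
    print(e.row())
```

Higher factor values flag scenarios whose realization is complex, and hence risky, for the architecture.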

Reusability of the existing knowledge base. SAAMCS does not consider this issue.

Method validation. SAAMCS has been validated for business information systems.

3.3 Extending SAAM by Integration in the Domain (ESAAMI)

Specific goal. SAAM, applied in an architecture-centric development process, considers only the problem description, requirements statement, and architecture description.

ESAAMI is a combination of analytical and reuse concepts and is achieved by integrating SAAM in a domain-specific and reuse-based development process [43]. The degree of reuse is improved by concentrating on the domain. ESAAMI is similar to SAAM with regard to the evaluation technique, the quality attributes, the stakeholders' involvement, and the SA description. However, an improvement is seen in the reuse of domain knowledge defined by SAs and analysis templates. Fig. 4 describes the main inputs of ESAAMI and the relationships between them. A reusable SA is packaged with a tailored analysis template focused on the distinctive characteristics of the architecture. All these packages represent inputs for the selection process

DOBRICA AND NIEMELÄ: A SURVEY ON SOFTWARE ARCHITECTURE ANALYSIS METHODS 643

Fig. 3. Inputs and activities of SAAMCS.

TABLE 3. Result of Scenario Evaluation in SAAMCS

Fig. 4. ESAAMI inputs [43].


of a reusable architecture. The selected SA is a starting point for the architecture design, being adapted and refined to meet the new system properties.

SA description. A reusable SA to be deployed in the new system is selected in the first step of ESAAMI. It has to be ensured that the SA provides an adequate basis for the system to meet its requirements. Three factors influence the reusability of an architecture. The author identifies a common basis for a variety of systems in a domain, sufficient flexibility to cope with variation among systems, and the documentation of properties to make the SA available for selection.

Reusability of the existing knowledge base. ESAAMI proposes packages of analysis templates, which represent the essential features of the domain. An analysis template is formulated at an abstraction level defined by the commonalities of the systems in the domain and without referring to system-specific architectural elements. Analysis templates collect reusable products that can be deployed in the various steps of the method. These products are protoscenarios, evaluation protocols, proto-evaluations, and architectural hints and weights. Protoscenarios are generic descriptions of reuse situations or interactions with the system; after a selection and refinement process, they are used in the scenario development phase of subsequent architecture analyses. The other products are used in the scenario evaluation phase: evaluation protocols are records of earlier evaluations in different projects; proto-evaluations are example descriptions of how a scenario can be performed using a set of abstract architectural elements; and architectural hints, associated with each scenario, indicate which structures would make the scenario convenient to handle. Weights, established in old projects in the domain, can make the results of the analysis comparable.
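The contents of such a package can be pictured with a simple container. All field and example names below are hypothetical; ESAAMI does not prescribe a data format for analysis templates.

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisTemplate:
    """Hypothetical container for the reusable products that ESAAMI
    packages with a domain architecture; field names are illustrative."""
    proto_scenarios: list = field(default_factory=list)       # generic reuse/interaction descriptions
    evaluation_protocols: list = field(default_factory=list)  # records of earlier evaluations
    proto_evaluations: list = field(default_factory=list)     # example scenario realizations
    architectural_hints: dict = field(default_factory=dict)   # scenario -> helpful structure
    weights: dict = field(default_factory=dict)               # make results comparable across projects

# An invented fragment of a template for a monitoring domain:
template = AnalysisTemplate(
    proto_scenarios=["operator changes an alarm threshold"],
    weights={"operator changes an alarm threshold": 0.8},
)
print(template.proto_scenarios)
```

During a concrete analysis, the protoscenarios would be selected and refined into change scenarios, while the weights carry experience from earlier projects in the domain.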

Method's activities are similar to those of SAAM, but they consider the existence of a reusable knowledge base. The results of the current analysis are part of the newly built system.

Method validation. The method is still in the improvement process.

3.4 Software Architecture Analysis Method for Evolution and Reusability (SAAMER)

Specific goal. SAAM is extended in SAAMER [41] from the point of view of two particular quality attributes: evolution and reusability. SAAMER better suggests how a system could support each of the quality objectives, the risk levels for evolution, or how to reuse the system.

The evaluation technique. Scenarios are the main drivers for evaluating various areas of the SA. They describe an important functionality that the system must support or recognize where the system may need to be changed over time. Scenarios are developed based on the stakeholders' and architectural objectives and considering the fundamental uses of the system. Scenarios and the structural view are effective in identifying components that need to be modified and are useful for preventive and adaptive maintenance activities.

The quality attributes. Evolution and reusability are considered. Evolution integrates new quality objectives (maintainability and modifiability) obtained from domain experts.

Stakeholders' involvement is similar to SAAM. Additionally, two kinds of sources of information, the required changes and domain experts' experiences, are considered.

SA description. SAAMER considers the following architectural views as critical: static, map, dynamic, and resource. The static view integrates and extends SAAM to address the classification and generalization of a system's components and functions and the connections between components. These extensions facilitate the estimation of the cost or effort required for changes to be made. The dynamic view is appropriate for the evaluation of the behavioral aspect, to validate that control and communication are handled in the expected manner. The mapping between components and functions could reveal the cohesion and coupling aspects of a system.

Method's activities. SAAMER provides a framework of activities that are useful for the analysis process. This framework consists of four activities: gathering information about stakeholders [13], the SA, quality, and scenarios; modeling usable artifacts; analysis; and evaluation. The last two activities are similar to SAAM. However, in the scenario development phase of SAAMER, a practical answer is given to the question of when to stop generating scenarios. Two techniques are applied here. First, scenario generation is closely tied to various types of objectives: stakeholder, architecture, and quality. Based on the objectives and domain experts' knowledge, the scenarios are identified and clustered to make sure that each objective is well covered. The second technique, applied to validate the balance of scenarios with respect to the objectives, is Quality Function Deployment (QFD) [13], [17]. From stakeholder and architectural objectives to quality attributes, a cascade of matrices is generated to show the relational strengths. Finally, quality attributes are translated to scenarios to reveal the coverage of each one. An imbalance factor is then calculated for each quality attribute by dividing the coverage by the priority of the quality. If the factor is less than 1, more scenarios should be developed to address the attribute in accordance with the stakeholder, SA, and quality importance.
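The imbalance check at the end of the scenario development phase reduces to a one-line calculation. The function names and the example numbers are assumptions; the method itself gives only the rule (coverage divided by priority, compared against 1):

```python
def imbalance_factor(coverage: float, priority: float) -> float:
    # Coverage of a quality attribute by the generated scenarios,
    # divided by the priority of that quality.
    return coverage / priority

def needs_more_scenarios(coverage: float, priority: float) -> bool:
    # A factor below 1 means the attribute is under-covered
    # relative to its importance.
    return imbalance_factor(coverage, priority) < 1.0

print(needs_more_scenarios(coverage=0.4, priority=0.8))  # True: under-covered
print(needs_more_scenarios(coverage=0.9, priority=0.6))  # False
```

Scenario generation would continue until every quality attribute's factor reaches at least 1.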

Result interpretation. Analysis of scenario interaction is considered a critical step that provides the result of the analysis. A high degree of interaction may indicate that a component is poorly isolated. Still, an SA view may show that this is just the nature of a particular pattern.

Reusability of the existing knowledge base. SAAMER does not consider this issue.

The method has been applied to a telecommunication software system.

3.5 The Architecture Trade-Off Analysis Method (ATAM)

ATAM has grown out of the work on architectural analysis of individual quality attributes: SAAM for analyses of modifiability, performance, availability, and security. ATAM was considered a spiral model of design in 1998 [29] and, in May 1999 [25], a spiral model of analysis and design, which explains its recent evolution and progress.

Specific goals. The objective of ATAM is to provide a principled way of understanding an SA's capability with respect to multiple competing quality attributes [5]. ATAM recognizes the necessity of a trade-off among multiple software quality attributes when the SA of a system is specified and before the system is developed.



The quality attributes. Multiple competing quality attributes are analyzed by ATAM. Initially, modifiability, security, performance, and availability have been considered.

Stakeholders' involvement. ATAM requires all the stakeholders' involvement in the activities related to scenarios and requirements gathering. An SA designer can also be involved.

SA description. The space of architecture is constrained by legacy systems, interoperability, and failures of previous projects. The SA is described on the basis of five foundational structures, which are derived from Kruchten's "4+1 views" [33] (his logical view is divided into function and code structures). These structures, plus the appropriate mappings between them, can describe an architecture completely. Also, ATAM requires several different views: a dynamic view, showing how systems communicate; a system view, showing how software is allocated to hardware; and a source view, showing how components and systems are composed of objects. The SA description is annotated with a set of message sequence charts showing run-time interactions and scenarios. ATAM is applied during SA design or on the final version of the SA by an external team of analysts.

The included evaluation techniques. ATAM can be considered a framework for different evaluation techniques depending on the quality attributes. It integrates the best individual theoretical model of each considered attribute in an efficient and practical way [20], [42], [52].

Another evaluation technique is the scenario. Three types of scenarios probe the system from different architectural views. These are: use cases, which involve typical uses of the system and are exploited for information elicitation; growth scenarios, which cover anticipated changes; and exploratory scenarios, which cover extreme changes that are expected to "stress" a system. Scenarios play a triple role in this method. This technique helps to put vague and unquantified requirements and constraints in concrete terms. Also, scenarios facilitate communication between stakeholders because they force them to agree on their perception of the requirements. Finally, scenarios explore the space defined by an attribute model by helping to put the model parameters that are not part of the SA into concrete terms.

ATAM also considers qualitative analysis heuristics, which are derived from an attribute-based architecture style (ABAS) [32] and are meant to be coarse-grained versions of the kind of analysis that is performed when a precise analytic model of a quality attribute is built. An existing taxonomy of each attribute is another basis for ATAM. The taxonomies help to ensure attribute coverage and offer a rationale for asking elicitation questions. ATAM also uses screening questions, which guide or focus the elicitation on the more "influential" places of the SA. These serve to limit the portion of the architecture under scrutiny. Asking these questions is more practical than building quantitative attribute models at that moment. They capture the essence of the typical problems that are discovered by a more rigorous and formal analysis.

Method's activities. The method is divided into four main areas of activity, or phases [29]. These are the gathering of scenarios and requirements, architectural views and scenario realization, attribute model building and analysis, and trade-offs. Fig. 5 details the activities that include scenarios and Fig. 6 describes the steps associated with each phase and the possible iterations for SA design and analysis improvement.

Attribute experts independently create and analyze their models and then exchange information (clarifying and creating new requirements). The attribute analyses are interdependent because each attribute has implications on the others. The attribute interactions are discovered in two ways: by using sensitivity analysis to find trade-off points and by examining the assumptions. Recognized from a knowledge base, unbounded sensitivity points are informally referred-to properties that have not yet been bound to the architecture. A sensitivity point is a property of one or more components (and/or component relationships) that is critical for achieving a particular quality; in practice, changes to such architecture parameters significantly affect the modeled values. Sensitivity points can be obtained by using the stimuli and architectural parameters branches of the attribute taxonomies [4], [5]. Trade-off points are architectural elements that multiple attributes are sensitive to. A trade-off point is a property that affects more than one attribute and is a sensitivity point for at least one attribute.
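The trade-off-point rule lends itself to a direct sketch. Representing sensitivities as a mapping from architectural properties to the attributes sensitive to them (the property names below are hypothetical examples), trade-off points are the properties that more than one attribute is sensitive to:

```python
def tradeoff_points(sensitivities: dict) -> list:
    # sensitivities maps an architectural property to the set of quality
    # attributes that are sensitive to it. A trade-off point is a
    # property that is a sensitivity point for more than one attribute.
    return [prop for prop, attrs in sensitivities.items() if len(attrs) > 1]

# Invented sensitivity points for illustration:
sens = {
    "encryption key length": {"security", "performance"},
    "server replication degree": {"availability"},
}
print(tradeoff_points(sens))  # ['encryption key length']
```

Here, strengthening encryption improves security but costs performance, so that single property is where the attributes must be traded off.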


Fig. 5. ATAM activities that consider scenarios.
Fig. 6. ATAM phases [29].


During architecture design, ATAM provides iterative improvement. In addition to the requirements typically derived from scenarios that are generated through interviews with the stakeholders, there are assumptions regarding behavior patterns and the execution environment. Because attributes "trade off" against each other, each assumption is subject to inspection, validation, and questioning as a result of ATAM. When all these actions have been completed, the results of the analysis are compared to the requirements. If the predicted system behavior comes adequately close to its requirements, the designers can proceed to a more detailed level of design or to implementation. In the event of the analysis revealing a problem, an action plan for changing the SA, the models, or the requirements is developed. This leads to another iteration of the method. ATAM does not require that all attributes be analyzed in parallel, thus allowing the designer to focus on the primary attributes and then introduce others later on. This leads to cost benefits, since what may be costly analyses for secondary attributes need not be applied to an architecture that was unsuitable for the primary attributes.

Reusability of a domain knowledge base is maintained in ABASs. ABASs help to move from the notion of architectural styles toward the ability to reason based on quality attribute-specific models. The goals of having a collection of ABASs are to make architectural design more routine-like and more predictable, to have a standard set of attribute-based analysis questions, and to tighten the link between design and analysis [32].

The method has been applied to several software systems but is still under research.

3.6 Scenario-Based Architecture Reengineering (SBAR)

The contribution of this method is not only in the architecture design but also in the scenario-based evaluation of the software qualities of a detailed architecture of a system [8].

Specific goal. SBAR estimates the potential of the designed architecture to reach the software quality requirements.

The included evaluation techniques. Four different techniques for assessing quality attributes are identified: scenarios, simulation, mathematical modeling, and experience-based reasoning. For each quality attribute, the suitable technique is selected. Scenarios are recommended for development quality attributes, such as maintainability and reusability, which are exemplified in the paper [8]. The selected scenarios concretize the actual meaning of the attribute (i.e., scenarios that capture typical changes in requirements may specify maintainability). The analysis assesses the performance of the architecture in the context defined by each individual scenario for a quality attribute. Simulation complements the scenario-based approach, being useful for evaluating operational software qualities such as performance or fault tolerance. Mathematical models allow a static evaluation of architectural design models and are an alternative to simulation, since both approaches are primarily suitable for assessing operational software qualities. To evaluate operational software qualities, the existing mathematical models developed by various research communities for high-performance computing [50], reliability [48], and real-time systems [39] could be used. Experience-based reasoning is founded on experience and on logical reasoning based on that experience. This technique differs from the others because it is less explicit, is based more on subjective factors such as intuition and experience, and makes use of the tacit knowledge of the people involved.

Quality attributes. SBAR focuses on multiple software qualities. A number of quality attribute research communities have proposed their own methods for developing real-time [39], high-performance [50], and reusable systems [24]. All these methods focus on a single quality attribute and treat all others as having secondary importance, if any at all. SBAR considers these approaches unsatisfactory because a balance of various quality attributes is needed in the design of any realistic system.

Stakeholders' involvement. SBAR does not require the involvement of many stakeholders. The evaluator is the designer of the SA.

SA description. A particularity of this method is that, for assessing the architecture of an existing system, the system itself can be used. SBAR uses a detailed design of the SA.

Method's activities. The assessment process consists of defining a set of scenarios for each software quality, manually executing the scenarios on the architecture, and interpreting the results (Fig. 7). The method can be performed in a complete or a statistical manner. In the first approach, a set of scenarios is defined that, combined together, covers the concrete instances of the software quality. If all scenarios are executed without problems, the quality attribute of the architecture is optimal. The second approach is to define a set of scenarios that makes a representative sample without covering all possible cases. The ratio between scenarios that the architecture can handle and scenarios not handled well by the architecture provides an indication of how well the architecture fulfills the software quality requirements. Both approaches obviously have disadvantages. A disadvantage of the first approach is that it is generally impossible to define a complete set of scenarios. The definition of a representative set of scenarios is the weak point of the second approach, since it is unclear how one decides that a scenario set is representative. The results from each analysis of the architecture and scenario are summarized into overall results, e.g., the number of accepted scenarios versus the number not accepted. SBAR


Fig. 7. SBAR activities: SA analysis and design [8].


provides guidelines for SA improvements. A structure similar to Table 4 is organized to express the results. The design and analysis combination is performed for a number of iterations until most of the scenarios for each quality attribute are satisfied (+).
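The summarization step of the statistical variant (accepted versus not-accepted scenarios) can be sketched as follows; the function name and the boolean encoding of scenario results are assumptions:

```python
def summarize(results: list) -> tuple:
    # results: one boolean per sampled scenario, True when the
    # architecture handles the scenario well.
    accepted = sum(1 for r in results if r)
    rejected = len(results) - accepted
    return accepted, rejected

# Four hypothetical maintainability scenarios, one not handled well:
accepted, rejected = summarize([True, True, False, True])
print(accepted, rejected)           # 3 1
print(accepted / max(rejected, 1))  # handled-to-not-handled ratio: 3.0
```

A high ratio over a representative sample suggests the architecture fulfills the corresponding quality requirement; a low one points to the components that the failing scenarios touch.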

Reusability of the existing knowledge base. SBAR does not consider this issue.

SBAR has been validated for a measurement software system.

3.7 Architecture Level Prediction of Software Maintenance (ALPSM)

Specific goal. ALPSM analyzes the maintainability of a software system by looking at the impact of scenarios at the SA level [10]. Similar to the software maintenance community [11], it uses the size of changes as a predictor of the effort needed to adapt the system to a scenario.

The included evaluation technique. ALPSM defines a maintenance profile, i.e., a set of change scenarios representing perfective and adaptive maintenance tasks. A scenario describes an action or a sequence of actions that might occur in relation to the system. Hence, a change scenario describes a certain maintenance task.

Stakeholders' involvement. Only the designer is involved in the method's activities.

SA description. ALPSM is applied to the final version of the SA.

Method's activities. The method has a number of inputs: the requirements statement, the description of the architecture, expertise from software engineers, and possibly historical maintenance data (Fig. 8). ALPSM consists of the following six steps:

1. identification of categories of maintenance tasks,
2. synthesis of scenarios,
3. assignment of a weight to each scenario,
4. estimation of the size of all elements,
5. scripting of the scenarios, and
6. calculation of the predicted maintenance effort.

The first step formulates classes of expected changes based on the application or program description; then, for each of the maintenance tasks, a representative set of scenarios is defined. The scenarios are assigned a weight based on their probability of occurring during a particular time interval. To be able to assess the size of changes, the size of all components of the system is determined. One of three techniques can be used for estimating the size of components: using the estimation technique of choice; adapting an object-oriented metric; or, when historical data from similar applications or earlier releases are available, extrapolating existing size data to the new components. The total maintenance effort is predicted by summing up the sizes of the impacts of the scenarios multiplied by their probabilities. The size of the impact of each scenario realization is calculated by determining the components that are affected and to what extent they will be changed.

Reusability of a knowledge base is not considered, but historical data from similar applications or earlier releases are needed. Previous data are extrapolated to new components.

The method has been applied to a haemodialysis system.

3.8 A Software Architecture Evaluation Model (SAEM)

The evaluation process of the quality requirements of the SA is rigorously formalized, especially in relation to metrics, in the model described in [18]. A quality model based on a standard software quality assessment process [23] is chosen, and a conceptual framework that relates quality requirements, metrics, and internal attributes of the SA and the final system is proposed. The elements required for the quality evaluation of a software system, based on the standard specification, are a quality model, a method for evaluation, metrics, and the supporting tools.

Specific goal of the method. SAEM establishes the basis for SA quality evaluation and the prediction of the final system quality.

The evaluation technique. SAEM tries to define quality metrics based on the goal-question-metric (GQM) technique. The goal of the metrics is to discover whether certain attributes meet the values specified in the quality specification for each software characteristic.
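In GQM, a measurement goal is refined into questions and each question into metrics. The goal, questions, and metric names below are invented for illustration; SAEM does not prescribe specific ones:

```python
# A hypothetical GQM breakdown for one SAEM-style quality characteristic.
gqm = {
    "goal": "Evaluate the maintainability of the SA from the developer's view",
    "questions": {
        "How large are the architectural components?": ["component size"],
        "How strongly are components interconnected?": ["coupling", "cohesion"],
    },
}

def metrics_for(goal_tree: dict) -> list:
    # Collect every metric that the questions refine the goal into.
    return [m for metrics in goal_tree["questions"].values() for m in metrics]

print(metrics_for(gqm))  # ['component size', 'coupling', 'cohesion']
```

Each collected metric would then be checked against the value specified for its characteristic in the quality specification.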

The quality attributes. The quality specification is divided into external and internal categories. The external quality expresses the user's view and the internal quality expresses the developer's view. The internal quality attributes are composed of special elements (such as functional elements or data elements) denoting quality characteristics and of intrinsic properties resulting from the development process (such as size, modularity, complexity, coupling, and cohesion). It is necessary to establish the relative importance of the internal attributes and their values; QFD [47] is recommended as a suitable technique for this purpose.


TABLE 4. SBAR Method's Results Organization

Fig. 8. ALPSM inputs and result.


The stakeholders' involvement. Experts' knowledge and a company's accumulated data are used in the mapping of quality requirements to internal attributes.

SA description. The SA is considered from two different viewpoints, that of the developer and that of the user. Therefore, the SA is either a final product or an intermediate one in the software system development process. The architecture development process constrains the internal attributes, so the result of the measurement process can improve the architecture as a form of feedback. The architecture description language (ADL) model should have attached questioning or inspection techniques (such as SA model walkthroughs) to detect the presence or absence of the special elements. The intrinsic properties can only be detected by measuring techniques applied to the SA representation formalized through an ADL.

Method's activities. SAEM gives a quality evaluation model based on data collection, measurement, and analysis of the results. The analysis process is divided into external and internal processes and is adapted to the users' or developers' views. The specified quality requirements are mapped to internal attributes that will be present in the SA, based on the experts' knowledge and the company's accumulated data.

Reusability of the existing knowledge base is not considered. However, the evaluation model assumes the existence of a previous internal quality specification, which defines the expected internal attributes with their values and their evaluation procedure.

SAEM has not been validated on any software system.

4 DISCUSSION

The purpose of this discussion is to offer guidelines related to the selection of the most suitable method for an architecture assessment process. Fig. 9 presents the main issues examined. The opening part focuses on a study that identifies the collective goal and how this goal is divided among the analysis methods. Then, several classifications of the methods are established. The included evaluation techniques, the number of quality attributes, the stakeholders' involvement, and the SA description, or when the method is applied in the architecture-based development process, are the main criteria of classification. To maintain a pertinent discussion, in the exemplifications we consider only the most representative

methods. Common activities, such as scenario development and evaluation, are identified in the scenario-based analysis methods, together with their different approaches to when to stop generating scenarios and how a scenario's impact on the considered architecture is evaluated. The final part discusses the special case of the evolution of ATAM from SAAM and how the existing knowledge is reused by the analysis methods. Finally, we conclude with a summary of the considered SA analysis methods.

4.1 Appropriateness Study: Methods' Specific Goals

Objective views are considered a basis for establishing which analysis method is most suitable for an architecture assessment process. Although each method has its particularities in the definition of its objectives, we can identify in all of them a collective goal, which is the prediction of the quality of a system before it has been built. In each method, this goal is reflected from different angles and perspectives. The reflections are oriented to: guiding the inspection of the SA by focusing on potential trouble spots (SAAM, ESAAMI); risk assessment (SAAMCS); evaluating the potential of the designed SA to reach the software quality requirements (SBAR, SAAMER); predicting one quality attribute of a software system based on its architecture (ALPSM); establishing the basis for SA evaluation and prediction of the final system quality (SAEM); and locating and analyzing trade-offs in an SA, as these are the areas of highest risk in an architecture (ATAM). The collective and particular characteristics of the goals lead to similarities and differences between all these presented methods.

4.2 Classifications of the Methods

Based on the evaluation techniques, we can establish a possible classification of the methods considering the techniques they use. From this point of view, the methods are: purely scenario-based, like SAAM; combining scenario-based and attribute model-based analysis techniques, like ATAM; proposing various evaluation techniques depending on the attribute, like SBAR; and related to metrics, like SAEM. A quality model of attributes for quantitative evaluation is treated during the assessment process in two of the methods, but, from this angle, we identified different approaches. SAEM tries to define metrics based on the GQM technique, while ATAM considers that analysis


Fig. 9. Main elements of the comparison framework.


techniques indigenous to the various quality attribute communities can provide a foundation for performing SA evaluation. It is not necessary to invent attribute-specific techniques and metrics, but rather to integrate existing ones into systematic methods. ATAM provides flexibility in the integration of the best individual theoretical model of each considered attribute.

Based on the considered number of quality attributes, some methods are centered on the evaluation of a single quality attribute. However, for a better understanding of the strengths and weaknesses of a complex real system and its parts, a multiattribute analysis is required. An important feature revealed by studying the analysis methods is the number of quality attributes a method focuses on. We can distinguish methods addressing multiple quality attributes (ATAM, SBAR); for example, ATAM considers the architectural elements where multiple attributes interact. There are also methods addressing a single quality attribute (SAAM) and those using a specific quality model (SAEM).

Based on stakeholders' involvement, although it is recognized that the involvement of all the stakeholders in the evaluation process facilitates communication between them, not all the methods consider their presence mandatory. ALPSM differs from SAAM in that it does not involve all stakeholders and, thus, requires fewer resources and less time. Instead, it provides an instrument for software architects that allows them to repeatedly evaluate the architecture during design. Due to the need for stakeholder commitment, this method could be used in combination with SAAM. In SAAM and ATAM, the architecture is evaluated by the analysts in cooperation with stakeholders prior to the detailed design, while, in SBAR, the architecture is evaluated on a detailed design for reengineering without the stakeholders' involvement, although typical quality questions are posed at the same time.

When is the method applied? This question gets different responses when considering the architecture-based development process. A similar approach, which combines architecture analysis and design into an iterative improvement process, can be identified in ATAM and SBAR. But, while SBAR includes guidelines on how to transform the architecture in order to meet certain quality requirements, ATAM concentrates on identifying sensitivities and trade-off points. However, ATAM could also be applied to the evaluation of the final version of the SA. SAAM, SAAMCS, ESAAMI, and SAAMER are also applied to the final version of the SA. ALPSM is applied during the design to predict adaptive and perfective software maintenance. SAEM is applied to the final version but, here, it should be noted that the evaluation model considers the SA from two different viewpoints, the developer's and the user's. Therefore, the SA is either a final product or an intermediate one in the system development process. The rigorous ambition of SAEM makes it hard to believe that it will be suitable for use in an iterative SA design process.

4.3 Common Activities and Different Approaches in Scenario-Based Methods

The activities of the methods differ in complexity and granularity/aggregation level. Complexity represents the difficulty, measured in time or in the number of other tools or documents needed to perform an activity. All the methods are performed manually and, for the moment,

there is no requirement for any software tools. Documents are necessary in some of the methods and are contained in a reusable knowledge base. Granularity/aggregation level means that an activity may represent a group of subactivities or a phase that is divided into steps. For example, SAAMER defines a framework of four activities, one of which includes all the activities of SAAM. ATAM also consists of four phases, each one with multiple steps.

Scenario-based assessment is particularly appropriate for qualities related to software development. Software qualities such as maintainability, reusability, modifiability, adaptability, and portability can be expressed very naturally through change scenarios. As Poulin [46] concluded when considering reusability, no predominant approach for assessing this quality attribute exists. The use of scenarios for evaluating architectures is recommended as one of the best industrial practices [1]. To this end, we discuss, in the following, different proposals for scenario development and scenario impact evaluation.

Scenario development. A common activity of scenario-based methods is scenario development. We identified different solutions that try to answer the question "when to stop generating scenarios?" during this activity. SAAM considers the set of scenarios complete when the addition of a new scenario no longer disturbs the design. Scenarios are also elicited considering all the stakeholders' opinions. In SBAR, two approaches are discussed. One is to define a complete set, which is generally impossible. The other is to define a representative set, which has the weak point of deciding what counts as representative; this approach relies only on the creativity and subjectivity of the software engineer. SAAMCS considers the relevant scenarios to be those that are possibly complex to realize; it defines a two-dimensional framework (five categories of complex scenarios, four sources of changes) that may help to discover complicated scenarios. SAAMER defines a practical two-step procedure. In the first step, a coverage guarantee is obtained: the scenarios are identified and clustered based on the objectives and domain experts' knowledge, and the coverage is checked against the objectives of stakeholders, architecture, and quality. The second step validates the balance of the scenarios with respect to the objectives based on the QFD technique; the decision to develop more scenarios is made by comparison against an imbalance factor calculated for each quality attribute. ATAM uses a set of standard quality attribute-specific questions to ensure proper coverage of an attribute by the scenarios; the boundary conditions should be covered. A standard set of quality-specific questions makes it possible to elicit the information needed to analyze that quality in a predictable, repeatable fashion. ALPSM defines a representative set of scenarios for each expected maintenance task.
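SAAMER's exact calculation is not reproduced here, so the following is only a sketch of the idea behind a per-attribute imbalance check: compare each attribute's share of the generated scenarios against its stakeholder-assigned weight. The ratio used and all the numbers are invented for illustration.

```python
def imbalance_factors(scenarios, weights):
    """Hypothetical imbalance check: for each quality attribute, the
    ratio of its stakeholder weight to its share of the scenarios.
    scenarios: dict attribute -> number of scenarios covering it
    weights:   dict attribute -> relative importance (sums to 1.0)
    """
    total = sum(scenarios.values())
    return {attr: weights[attr] / (scenarios[attr] / total)
            for attr in scenarios}

factors = imbalance_factors(
    {"modifiability": 6, "performance": 2, "security": 2},
    {"modifiability": 0.5, "performance": 0.3, "security": 0.2},
)

# an attribute with a factor above 1 is under-covered relative to its
# weight, suggesting that more scenarios should be developed for it
under_covered = [a for a, f in factors.items() if f > 1.0]
```

In this made-up example, performance carries 30% of the weight but only 20% of the scenarios, so it is flagged as under-covered.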

Scenario evaluation. There are variances in the evaluation of the scenarios' effects on the considered architecture. SAAM investigates which architectural elements are affected by each scenario. The cost of the modifications associated with each indirect scenario is estimated by listing the components and the connectors that are affected and then counting the number of changes. In ALPSM, the effort needed to implement the scenario is predicted by estimating the size of the components and the extent to which they are affected. This activity may need

DOBRICA AND NIEMELÄ: A SURVEY ON SOFTWARE ARCHITECTURE ANALYSIS METHODS 649


historical maintenance data. SAAMCS defines and uses a measurement instrument to express the effect of scenarios. The instrument indicates the impact of a scenario, whether multiple owners are involved, and whether it leads to version conflicts. In SAAMER, a classification and generalization of the architectural elements facilitates the estimation of the cost or effort required for the changes to be made. The required changes, specified in scenarios, and the domain experts' experiences suggest how each of the objectives, or the risks for the system's evolution or reuse across applications, could be supported. Finally, in SBAR, the evaluation can be performed in a complete or statistical manner. The optimality of a quality attribute can be assessed using the former approach and the fulfillment of a quality attribute using the latter.
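The SAAM-style counting described above can be sketched in a few lines; the scenarios and their mapping to affected architectural elements are invented for illustration.

```python
# Invented mapping: scenario -> set of affected components/connectors.
affected = {
    "add new sensor type": {"driver", "scheduler"},
    "port GUI to web": {"ui", "controller", "logger"},
}

def change_cost(scenario):
    """Estimate relative modification cost as the number of
    architectural elements the scenario touches."""
    return len(affected[scenario])

costs = {s: change_cost(s) for s in affected}

# elements touched by more than one scenario point at scenario
# interaction, a potential problem area in the architecture
interaction = affected["add new sensor type"] & affected["port GUI to web"]
```

Here the two scenarios touch disjoint elements, so no scenario interaction is reported; overlapping sets would flag the shared elements for closer inspection.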

4.4 A Discussion of Methods' Evolution

A special case of qualitative and quantitative progress is observed in ATAM. Considering the use of scenarios, ATAM is based on SAAM. Unlike SAAM, which focuses on the use of scenarios for architectural modifiability evaluation, ATAM focuses on finding trade-off points in the architecture from the perspective of the quality requirements of the product. In addition, ATAM prescribes formal or informal analytic models for assessing the quality attributes of the system, but relies on the existence of such techniques for the quality attributes relevant to each case. For modifiability analysis, ATAM builds an informal model, like SAAM, with inspection and review methods. Scenario interactions are interpreted as sensitivity points.

4.5 The Reusability of an Existing Knowledge Base

Similarities at a coarse-grain level can also be identified between ATAM and ESAAMI. Both methods are based on SAAM. Considering the reusability of existing knowledge, ATAM uses ABASs and ESAAMI proposes packages of analysis templates and reusable architectures. However, when we talk about the systematization of information, there is no possible comparison. ESAAMI makes domain- and architecture-specific experience available in an intuitive form, while ATAM is anchored in a very well-structured knowledge base of quality attribute communities and architectural styles. ABASs provide a set of prepackaged analyses and questions, including known solutions to commonly recurring problems and known difficulties in employing those solutions. ATAM is based on a set of materials that describe many of the evaluation artifacts, like ABASs, quality attribute-specific questions that aid the evaluator in probing an architecture, and questions that aid the analyst in gathering the information needed to build an analytic model of the quality attribute.

4.6 About Choosing Analysis Methods in Practice

The selection of a suitable method depends on how well each comparison element fits the problem context. It is not the purpose of this survey to suggest a ranking of the analysis methods to practitioners, but to give an understanding of how the methods differ.

A summary of this discussion is depicted in Table 5 and Table 5a.

In practice, one of the purposes of using a software architecture analysis method is to decrease the costs caused by corrections and to increase the quality of products. For

that reason, a method 1) should be usable in an early or middle phase of the SA design process (errors are smaller and cheaper), 2) should support all possible quality attributes or as many as possible (the scope of use), and 3) should be easy to apply and to integrate into the design process (it takes less time to apply by a designer). If all these criteria are supported by a method, it can be selected and applied in practice. From Table 5, we can see that, except for SBAR (applied to a reengineered SA design) and ESAAMI (applied to an architecture design selected from a domain database), all the others may fulfill 1). Point 2 is supported by ATAM and SBAR, and point 3 by SBAR, ALPSM, and ATAM (when the analysis is performed by a designer). The result is that ATAM satisfies as many of the proposed criteria as possible.

One selection criterion alone may be insufficient to indicate the most suitable method for a defined purpose. The included evaluation techniques, the ease with which the method's activities are performed, and the existence of a knowledge base may represent other criteria that have to be considered in the selection process. An important element to consider is how well, and in what software domain, a method has been validated in practice.

5 CONCLUSIONS AND FUTURE WORK

This survey has shown the current state of research in this domain by presenting and discussing eight of the most representative architecture analysis methods. This section is organized to reveal the general progress, existing problems, and future work for improvement and refinement.

5.1 Progress Identification and Methods Improvement Techniques

Progress in risk assessment. The purpose of the evaluation is to analyze the architecture to identify potential risks by predicting the quality of the system before it has been built. Reflections of this general goal, regarding the identification of potential risks, can be distinguished in all the studied methods. In this sense, the use of change scenarios and scenario interaction reveals potential problem areas in the architecture. The degree of modification captured when evaluating a system's response to a scenario represents a measured risk. The complexity of scenarios is also an important factor for risk assessment. The required changes and domain experts' experiences represent another way of suggesting how the system could support the risk levels for evolution or reuse. The chances of surfacing decisions at risk are optimized by using exploratory scenarios. The potential risk is also minimized by analyzing attribute interactions. Iterative methods promote analysis at multiple resolutions as a means of minimizing risk at an acceptable cost in time and effort. Areas of high risk are analyzed more profoundly (simulated, modeled, or prototyped) than the rest of the architecture. Each level of analysis helps to determine where to analyze more deeply in the next iteration.

A possible combination of methods. Looking at the existing analysis techniques, combining a coarse-grain, broad technique with a fine-level one would provide an improved result, but the costs in time and effort would also increase. Scenario-based analysis

650 IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, VOL. 28, NO. 7, JULY 2002

Page 14: A survey on software architecture analysis methods - Software

techniques can be combined with a specific analysis technique for quality attributes. For example, a scenario may identify a critical path of execution, which can then be examined in detail using a real-time analysis method like rate monotonic analysis (RMA) [2], [31], or other analysis techniques for scrutinizing the dynamic properties of an application.

Metrics: more precise techniques for evaluating attributes in terms of architecture. Most of the researchers in the domain consider metrics to be a more precise technique for evaluating attributes in terms of architecture [28], [41]. A metrics specification must contain the selected measure for a quality attribute, a measurement scale, and a set of methods for measurement. Two approaches can be identified: to adapt existing metrics [9] or to define new ones [18]. The adaptation of object-oriented metrics that were validated as good predictors of software maintenance [38] is required because the metrics suite uses data that can only be collected from the source code and, at the


TABLE 5
Software Architecture Analysis Methods

TABLE 5a
Table 5 (continued)


architecture level, no prototype or source code exists. Considering the other approach, GQM [6] is a good technique for defining new metrics following a certain reasoning process. The main activities of GQM are: to define a goal in terms of purpose, perspective, and environment; to establish the questions that indicate the attributes related to the goal; and to answer each question. The purpose is related to SA evaluation, indication, and comparison, and to the prediction of end-product quality. The perspective depends on the aims of the assessment and is closely related to the role of the evaluation staff: developer, user, management, or maintainer. There are two suitable environments: the SA representation considered as an intermediate design product or as an end product in itself.
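The GQM activities just listed can be captured in a small data structure. The goal wording, questions, and metrics below are invented examples, not taken from the survey.

```python
# A minimal GQM sketch: one goal, questions refining it, and the
# metrics that answer each question. All entries are illustrative.
gqm = {
    "goal": {
        "purpose": "predict",            # e.g. evaluate, compare, predict
        "object": "software architecture",
        "perspective": "developer",      # or user, management, maintainer
        "environment": "intermediate design product",
    },
    "questions": {
        "How localized is a typical change?": [
            "components touched per change scenario",
        ],
        "How strongly are components coupled?": [
            "inter-component dependency count",
        ],
    },
}

# every question should be answerable by at least one metric
unanswered = [q for q, metrics in gqm["questions"].items() if not metrics]
```

The check at the end mirrors GQM's reasoning process: a question with no associated metric signals an incomplete specification.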

QFD: a technique to be considered. QFD is a technique that should be considered as a future research topic. It has been used in SAAMER and is recommended by SAEM to developers in order to establish the relative importance between attributes and their values. Equally, this technique is seen as useful in the process of formalizing the relationship between internal quality attributes and the quality characteristics/subcharacteristics, which must be studied for specific application domains, development processes, and ADLs.

5.2 Open Problems and Future Work

Scenario and quality attribute naming problems. One problem with scenario-based analysis is that the result and expressiveness of the analysis depend on the selection of the scenarios and their relevance for identifying critical assumptions and weaknesses within the architecture. There is no fixed minimum number of scenarios whose evaluation guarantees that the analysis is meaningful. Accordingly, the definition of a set of complex scenarios and a two-dimensional framework is a solution, but a future study of the completeness of this set and of the relative importance of each of the framework cells is needed. The idea of using an instrument that includes all aspects relevant to the complexity of changes is original and useful, but the measures must be comparable to allow interpretation of the results.

Future studies are needed in order to investigate how domain knowledge and the degree of expertise affect the coverage of the selected scenarios. By the same token, quality attribute prediction methods could be improved by studying their sensitivity to variations of the inputs and how significant the assumed parameters are for the results; for instance, how sensitive ALPSM is to the representative sample of the maintenance scenario profile, or how critical the size estimation is for the results.
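Such a sensitivity study can be prototyped cheaply. The sketch below perturbs each component size estimate in a hypothetical ALPSM-like effort model (expected changed size over a maintenance scenario profile; all numbers are invented) and records the relative effect on the prediction.

```python
# Invented maintenance profile:
# (probability of scenario, component size, fraction of component changed)
profile = [
    (0.5, 2000, 0.10),
    (0.3, 5000, 0.05),
    (0.2, 1000, 0.20),
]

def predicted_effort(scenarios):
    """Expected changed size (e.g. in LOC) over the profile."""
    return sum(p * size * frac for p, size, frac in scenarios)

base = predicted_effort(profile)

# one-at-a-time sensitivity: inflate each size estimate by 20% and
# record the relative change in the predicted effort
sensitivity = [
    (predicted_effort(
        [(p, size * (1.2 if j == i else 1.0), frac)
         for j, (p, size, frac) in enumerate(profile)]) - base) / base
    for i in range(len(profile))
]
```

A size estimate whose perturbation moves the prediction most is the one worth estimating carefully, which is exactly the question raised above for ALPSM.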

An examination of the existing methods reveals a lack of understanding of quality attributes in the software engineering community at the moment. The same interpretation, but with different attribute names, can be identified for flexibility [37], which has the same meaning as modifiability in [26] or maintainability in [10].

Future work for methods improvement and refining. Until now, SAAM has been the only method that has appeared in a book [7]. This is a confirmation of its maturity. SAAM has been used for different quality attributes like modifiability, performance, availability, and

security. It has also been applied in several domains, which is another validation of its completeness. The other methods are still young and are undergoing refinement and improvement. Future work is needed to evaluate the effects of their various usages and to create a repeatable method based on repositories of scenarios, screening, and elicitation questions (ATAM). In this respect, ABASs and qualitative analysis heuristics are being developed. Building a handbook of ABASs requires the collection, documentation, and testing of many examples of problems, quality attribute measures, stimuli, and parameters.

The extension of the reengineering method to more nonfunctional requirements and the application of the method in more industrial case studies are the main future objectives of SBAR. The authors of this method consider it important to obtain a reasonable balance between the different quality requirements in the top-level architectural design. A small taxonomy is defined for performance and modifiability, and eight design guidelines are formulated; each guideline is associated with a quality requirement in the taxonomy. Future work is desirable in order to extend these guidelines to other quality requirements.

A stronger methodical integration into the development process is also required. ESAAMI needs to provide complete support for reuse-based and architecture-driven development approaches. Integrating the technique into reuse-based and architecture-centric development processes should provide a refinement of the method.

REFERENCES

[1] G. Abowd, L. Bass, P. Clements, R. Kazman, L. Northrop, and A. Zaremski, "Recommended Best Industrial Practice for Software Architecture Evaluation," Technical Report CMU/SEI-96-TR-025, 1997.

[2] A. Alonso, M. Garcia-Valls, and J.A. de la Puente, "Assessment of Timing Properties of Family Products," Proc. Second Int'l ESPRIT ARES Workshop, pp. 161-169, Feb. 1998.

[3] M. Barbacci, M. Klein, and C. Weinstock, "Principles for Evaluating the Quality Attributes of a Software Architecture," Technical Report CMU/SEI-96-TR-036, ESC-TR-96-136, 1997.

[4] M. Barbacci, M. Klein, T. Longstaff, and C. Weinstock, "Quality Attributes," Technical Report CMU/SEI-95-TR-021, ESC-TR-95-021, 1995.

[5] M. Barbacci, S. Carriere, P. Feiler, R. Kazman, M. Klein, H. Lipson, T. Longstaff, and C. Weinstock, "Steps in an Architecture Tradeoff Analysis Method: Quality Attribute Models and Analysis," Technical Report CMU/SEI-97-TR-029, ESC-TR-97-029, 1998.

[6] V.R. Basili and H.D. Rombach, "Goal/Question/Metric Paradigm: The TAME Project: Towards Improvement-Oriented Software Environments," IEEE Trans. Software Eng., vol. 14, no. 6, 1988.

[7] L. Bass, P. Clements, and R. Kazman, Software Architecture in Practice. Reading, Mass.: Addison-Wesley, 1998.

[8] P.O. Bengtsson and J. Bosch, "Scenario-Based Architecture Reengineering," Proc. Fifth Int'l Conf. Software Reuse (ICSR 5), 1998.

[9] P.O. Bengtsson, "Towards Maintainability Metrics on Software Architecture: An Adaptation of Object-Oriented Metrics," Proc. First Nordic Workshop Software Architecture (NOSA '98), Aug. 1998.

[10] P.O. Bengtsson and J. Bosch, "Architecture Level Prediction of Software Maintenance," Proc. Third European Conf. Software Maintenance and Reeng., pp. 139-147, Mar. 1999.

[11] R.S. Arnold and S.A. Bohner, Software Change Impact Analysis. Los Alamitos, Calif.: IEEE Computer Society, 1996.

[12] J. Bosch and P. Molin, "Software Architecture Design: Evaluation and Transformation," Proc. IEEE Eng. of Computer Based Systems Symp. (ECBS '99), Dec. 1999.

[13] S. Bot, C.-H. Lung, and M. Farrell, "A Stakeholder-Centric Software Architecture Analysis Approach," Proc. Int'l Software Architecture Workshop (ISAW-2), 1996.



[14] L. Bratthall and P. Runeson, "A Taxonomy of Orthogonal Properties of Software Architecture," Proc. Second Nordic Software Architecture Workshop (NOSA '99), 1999.

[15] L.C. Briand, S. Morasca, and V.R. Basili, "Measuring and Assessing Maintainability at the End of High Level Design," Proc. IEEE Conf. Software Maintenance, 1993.

[16] F. Buschmann, R. Meunier, H. Rohnert, P. Sommerlad, and M. Stal, Pattern-Oriented Software Architecture: A System of Patterns. Chichester: Wiley & Sons, 1996.

[17] R. Day, Quality Function Deployment: Linking a Company with Its Customers. Milwaukee, Wisc.: ASQC Quality Press, 1993.

[18] J.C. Duenas, W.L. de Oliveira, and J.A. de la Puente, "A Software Architecture Evaluation Model," Proc. Second Int'l ESPRIT ARES Workshop, pp. 148-157, Feb. 1998.

[19] E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software. Reading, Mass.: Addison-Wesley, 1995.

[20] A. Iannino, "Software Reliability Theory," Encyclopedia of Software Eng., J.J. Marciniak, ed., vol. 2, pp. 1237-1253, 1994.

[21] IEEE Standard Glossary of Software Engineering Terminology, IEEE Std. 610.12-1990, 1990.

[22] IEEE Standard 1061-1992, Standard for Software Quality Metrics Methodology. New York: Institute of Electrical and Electronics Engineers, 1992.

[23] ISO/IEC, Information Technology - Software Product Evaluation: Quality Characteristics and Guidelines for Their Use, ISO/IEC 9126:1991(E), 1991.

[24] Software Reuse: A Holistic Approach, E. Karlsson, ed., Chichester: Wiley & Sons, 1995.

[25] R. Kazman, M. Barbacci, M. Klein, S.J. Carriere, and S.G. Woods, "Experience with Performing Architecture Tradeoff Analysis," Proc. Int'l Conf. Software Eng. (ICSE '99), pp. 54-63, May 1999.

[26] R. Kazman, G. Abowd, L. Bass, and P. Clements, "Scenario-Based Analysis of Software Architecture," IEEE Software, pp. 47-55, Nov. 1996.

[27] R. Kazman, G. Abowd, L. Bass, and M. Webb, "Analyzing the Properties of User Interface Software Architectures," Technical Report CMU-CS-93-201, Carnegie Mellon Univ., School of Computer Science, 1993.

[28] R. Kazman, L. Bass, G. Abowd, and M. Webb, "SAAM: A Method for Analyzing the Properties of Software Architectures," Proc. 16th Int'l Conf. Software Eng., pp. 81-90, 1994.

[29] R. Kazman, M. Klein, M. Barbacci, H. Lipson, T. Longstaff, and S.J. Carrière, "The Architecture Tradeoff Analysis Method," Proc. Fourth Int'l Conf. Eng. of Complex Computer Systems (ICECCS '98), Aug. 1998.

[30] R. Kazman, S.J. Carriere, and S.G. Woods, "Toward a Discipline of Scenario-Based Architectural Engineering," Annals of Software Eng., vol. 9, 2000, http://www.cgl.uwaterloo.ca/~rnkazman/SE-papers.html.

[31] M. Klein, T. Ralya, B. Pollak, R. Obenza, and M. González Harbour, A Practitioner's Handbook for Real-Time Analysis. Boston: Kluwer Academic, 1993.

[32] M. Klein, R. Kazman, L. Bass, S.J. Carriere, M. Barbacci, and H. Lipson, "Attribute-Based Architectural Styles," Proc. First Working IFIP Conf. Software Architecture (WICSA 1), pp. 225-243, Feb. 1999.

[33] P.B. Kruchten, "The 4+1 View Model of Architecture," IEEE Software, pp. 42-50, Nov. 1995.

[34] N.H. Lassing, D.B.B. Rijsenbrij, and J.C. van Vliet, "Flexibility in ComBAD Architecture," Proc. First Working IFIP Conf. Software Architecture (WICSA 1), Feb. 1999.

[35] N. Lassing, D. Rijsenbrij, and H. van Vliet, "On Software Architecture Analysis of Flexibility, Complexity of Changes: Size Isn't Everything," Proc. Second Nordic Software Architecture Workshop (NOSA '99), pp. 1103-1581, 1999.

[36] N. Lassing, D. Rijsenbrij, and H. van Vliet, "The Goal of Software Architecture Analysis: Confidence Building or Risk Assessment," Proc. First Benelux Conf. State-of-the-Art of ICT Architecture, 1999.

[37] N. Lassing, D. Rijsenbrij, and H. van Vliet, "Towards a Broader View on Software Architecture Analysis of Flexibility," Proc. Asia-Pacific Software Eng. Conf. (APSEC '99), 1999.

[38] W. Li and S. Henry, "Object-Oriented Metrics that Predict Maintainability," J. Systems and Software, vol. 23, no. 2, pp. 111-122, 1993.

[39] J.W.S. Liu and R. Ha, "Efficient Methods of Validating Timing Constraints," Advances in Real-Time Systems, S.H. Son, ed., pp. 199-223, 1995.

[40] L. Lundberg, J. Bosch, D. Häggander, and P.O. Bengtsson, "Quality Attributes in Software Architecture Design," Proc. IASTED Third Int'l Conf. Software Eng. and Applications, pp. 353-362, Oct. 1999.

[41] C. Lung, S. Bot, K. Kalaichelvan, and R. Kazman, "An Approach to Software Architecture Analysis for Evolution and Reusability," Proc. CASCON '97, Nov. 1997.

[42] J.A. McCall, "Quality Factors," Encyclopedia of Software Eng., J.J. Marciniak, ed., vol. 2, pp. 958-971, 1994.

[43] G. Molter, "Integrating SAAM in Domain-Centric and Reuse-Based Development Processes," Proc. Second Nordic Workshop Software Architecture (NOSA '99), pp. 1103-1581, 1999.

[44] D. Parnas, "On the Criteria to Be Used in Decomposing Systems into Modules," Comm. ACM, vol. 15, no. 12, pp. 1053-1058, 1972.

[45] D. Perry and A. Wolf, "Foundations for the Study of Software Architecture," SIGSOFT Software Eng. Notes, vol. 17, no. 4, pp. 40-52, 1992.

[46] J.S. Poulin, "Measuring Software Reusability," Proc. Third Int'l Conf. Software Reuse, Nov. 1994.

[47] B.M. Reed and D.A. Jacobs, Quality Function Deployment for Large Space Systems. Nat'l Aeronautics and Space Administration, 1993.

[48] P. Runeson and C. Wohlin, "Statistical Usage Testing for Software Reliability Control," Informatica, vol. 19, no. 2, pp. 195-207, 1995.

[49] M. Shaw and D. Garlan, Software Architecture: Perspectives on an Emerging Discipline. Upper Saddle River, N.J.: Prentice Hall, 1996.

[50] C. Smith, Performance Engineering of Software Systems. Reading, Mass.: Addison-Wesley, 1990.

[51] C. Smith and L. Williams, "Software Performance Engineering: A Case Study Including Performance Comparison with Design Alternatives," IEEE Trans. Software Eng., vol. 19, no. 7, pp. 720-741, July 1993.

[52] W.P. Stevens, G.J. Myers, and L.L. Constantine, "Structured Design," IBM Systems J., vol. 13, no. 2, pp. 115-139, 1974.

Liliana Dobrica received the MSc degree in process control software engineering in 1991 and the PhD degree in control systems in 1998 from the Politehnica University of Bucharest, Romania. She is an associate professor in the Department of Control and Industrial Informatics of the Faculty of Automation and Computer Science at the Politehnica University of Bucharest, Romania. In 2000, she joined the Software Architectures Research Group, VTT Electronics, Oulu, Finland, where she was a postdoctoral research associate for nine months. Her research interests include software design and analysis for embedded, real-time, and distributed systems, software architecture and product-line architecture, and quality attribute analysis techniques, with emphasis on integrating quality attribute analysis techniques into the software development process. Her current research projects include modeling and analysis of software product-line architecture for the middleware services domain. She has published several journal and conference technical papers.

Eila Niemelä received the MSc degree in information processing science from the University of Oulu, Finland, in 1995. Between 1995 and 1998, she worked as a researcher in the Software Architectures Group at VTT Electronics. She was a visiting researcher at Napier University, Edinburgh, UK, in 1998-99. Since October 1999, she has worked as a group manager in the Software Architectures Group of the Embedded Software research area. In 2000, she obtained the PhD degree in information processing science from the University of Oulu, with a component framework for a distributed control systems family as the topic. Since 2001, she has worked as a research professor at VTT Electronics. She has published several conference papers about software architectures and components, as well as embedded middleware services. She is a member of the IEEE Computer Society.

. For more information on this or any computing topic, please visit our Digital Library at http://computer.org/publications/dlib.
