
Process Maturity and Inspector Proficiency: Feedback Mechanisms for Software Inspections

Thomas Lee Rodgers and Douglas L. Dean
[email protected] [email protected]

University of Arizona, Tucson, Arizona

Abstract

Recent research suggests that people related issues account for as much as 50% of explained sources of variation in software code inspections and that process related issues account for less than 30%. This paper examines the impact of two feedback mechanisms (process maturity and inspector proficiency) on software inspections. Results of a survey of thirty-one experienced software developers and follow-up interviews are presented. Key findings include that significant process variations appear to exist within relatively mature organizations and that teams own inspection processes. The paper also addresses the significance and potential of feedback mechanism research. The findings extend to other formal technical reviews and provide insights into managing different-place, different-time inspections.

1. Introduction"Understanding the Sources of Variation in

Software Inspections" is the topic of a recent researcharticle by Porter, Siy, Mockus and Votta [13]. Theyfind that about 50% of variation in defect detectionrelates to people input factors (reviewers, coders andcode units) and very little variation relates to processtreatment factors (team size, type of session, and repairstrategy). The implication is that the "real beef" is inpeople-related and not process-related factors. Whilesuch assertion has intuitive appeal, further definitionand exploration of underlying factors is warranted.

This paper explores perceptions of experienced software inspectors concerning two feedback mechanisms, process maturity and inspector proficiency. Similar to control mechanisms in state machines such as speedometers or gauges, feedback mechanisms measure both causes and effects. For process maturity, the focus is at the inspection team level and not at the overall organizational level. For inspector proficiency, the focus is on individual ability and motivation to work effectively during an inspection. Properly defined, the two mechanisms represent major components of people and process related factors. Is it possible to measure and use feedback mechanisms to increase effectiveness and efficiency of inspections? Although this paper does not completely answer this question, it provides a foundation for further exploration.

Recognition of feedback mechanisms should enable better allocation of (people) resources and assignment of (process) tasks. The expected benefits include improved quality, increased productivity, and faster time-to-market. Within distributed contexts, feedback mechanisms should enable different-time, different-place work processes.

2. Previous Research

During the past twenty-two years, much has been written about inspections and various types of formal technical reviews. In a 1976 article within the IBM Systems Journal, Michael Fagan described formal inspections of design and of code that are now referred to as Fagan inspections. Fagan asserted "substantial net improvements in programming quality and productivity" [5]. In subsequent years, over four hundred academic and practitioner articles have been written about inspections. Two teams of widely recognized "how-to" experts are Tom Gilb and Dorothy Graham [7], as well as Robert Ebenau and Susan Strauss [3]. Inspections are now integrated into many software engineering practices including the Capability Maturity Model [10] and Personal Software Processes [9]. Inspections have been widely adopted by world-class organizations including AT&T [6] and Hewlett Packard [8]. In short, inspections are widely recognized as a means for increasing software quality.

In spite of technology advances, little has changed concerning the format and content of inspections. Fagan inspections are a six-step process consisting of planning, overview, preparation, inspection, rework, and follow-up. Participants have pre-defined roles including moderator, scribe, inspector, and source author [4]. Although similar to Fagan inspections, Gilb inspections add defect prevention and process improvement objectives [7]. Within IBM, Fagan inspections were the basis for defect prevention efforts [12] and subsequently for orthogonal defect classification [2].

During recent years, inspections remain a topic of continued research interest. Lawrence Votta, from AT&T, suggests that the value of an inspection comes from preparation and that the need for face-to-face meetings might be diminishing [20]. Adam Porter, from the University of Maryland, finds that people-input factors are most important [14] and that traditional checklist and ad hoc review approaches are ineffective [15]. Based on industrial experience in Europe, Michiel van Genuchten finds that inspection defect yields improve significantly when using an electronic meeting system [19]. Recently, Philip Johnson, from the University of Hawaii, envisions radical inspection changes on the horizon including method-specific inspections, minimal meetings, defect-correction emphasis, organizational guideline knowledge bases, outsourcing review, computer-mediation, and review mega-groups [11]. Do advances in technology including object-orientation and wider information access call for radical changes in inspections? Technology can enable wider participation in asynchronous settings; however, the question remains as to whether the inspection process needs to be reengineered.

Feedback mechanisms are widely recognized within engineering literature and to a lesser extent within the inspection literature. Defect prevention efforts are based on causal analysis, which is considered a feedback mechanism [12]. Similar causal analysis forms the basis for the second level of Personal Software Processes [9]. Analyzing prior defect patterns helps focus inspections and remove recurring errors.

This paper focuses on two feedback mechanisms that have not been considered in prior research. They were initially recognized in a conference paper and shown in the causal model below [17]. The model suggested that inspection processes are cognitive processes in which people (players) are allocated along with causes in order to identify defects. The two feedback mechanisms both cause and affect the cognitive processes. The conference paper posits that feedback mechanisms help explain sources of variation in inspection processes. This paper explores the assertion by surveying experienced inspectors.


[Figure: Causal Code Inspection Model (source: [17]). Causes (motivation, code and documentation, tools, management, coders, reviewers) drive cognitive processes (goals, scenarios, and scripts; search productions; defect discovery) performed by people (players) and produce identified defects as effects. Process MATURITY and reviewer PROFICIENCY act as feedback mechanisms governing the allocation of coding, tool, and process resources.]

4. Research Objectives

As a beginning to a study of feedback mechanisms, this paper explores process maturity and inspector proficiency. The primary issues are how to define and measure both constructs. Two specific research objectives are considered.

Research Objective #1: Does process maturity significantly affect results of software inspections?

Maturity is the condition of being fully grown or developed. Inspection processes can be considered more mature as they become established, predictable, and ultimately managed. Process maturity does not necessarily imply formality or predictability. Formality is unnecessary if participants understand their roles and expectations. A process can be mature in the sense of being well established, yet be ineffective and lack predictable outcomes. For example, a mature process would be one in which the participants know exactly how many defects they are expected to find and are given opportunity to manipulate the results.

Three distinct, overlapping phases of process maturity are shown in the following graph. The first is a process definition phase in which a team forms a consensus about how and what to do during the process. During this first phase, the team is learning and establishing the process. The second phase focuses on productivity and doing the process effectively. The third phase focuses on proficiency and process efficiency. During this latter phase, the team realizes that the process must be more selective because the issues are more complex.


[Figure: Process Maturity, showing three overlapping phases: Process Definition, Productivity, and Proficiency.]

An alternative method of representing process maturity is to use a framework similar to the Capability Maturity Model [10], which posits five progressive levels: ad hoc, repeatable, defined, managed, and optimized. Although intended for organizational assessment of key process areas, CMM levels can be adapted to inspection processes. Level one, ad hoc, applies to new or infrequent use of inspections within a professional business environment. Level two, repeatable, refers to inspections conducted on a somewhat regular basis. Level three, defined, refers to repeatable inspections in which the process, standards and defect types are well understood. Level four, managed, refers to defined inspections in which the number of major issues to be found is anticipated based on experience. Level five, optimized, refers to managed inspections which are or can be tailored to find specific types of defects or the results of which can be manipulated by the reviewers. In part, this paper explores whether such a classification can be intuitively perceived or predicted based on inspection practices.

Research Objective #2: Does inspector proficiency significantly affect results of software inspections? If so, should inspector proficiency be assessed formally or informally?

Proficiency refers to the ability to do something well. Inspector proficiency refers to the ability to find defects in another person's work. This paper explores perceptions concerning whether inspector proficiency is important and how acceptable it is to formally record proficiency information.

Prior knowledge of inspector proficiency has value when inspection teams are formed. For example, assume use of a scenario-based code inspection in which three inspectors are assigned different roles as suggested by Porter, Votta, and Basili [16]. The first reviewer focuses on data type consistencies including coding and documentation standards; the second focuses on incorrect functionality; and the third focuses on ambiguity or missing functionality. Each assignment is progressively more difficult and may suggest assigning individuals with different skill levels to each role. The first inspector might be a junior programmer for whom the review process serves as a learning experience. The second inspector might be a programmer or analyst with proficiency in matching required functionality with existing code. The third inspector should be the most proficient and capable of discovering unexpected and ambiguous defects. Is it necessary to identify the levels of inspector proficiency formally? Is it sufficient to formally identify proficient inspectors with recognized areas of expertise? This paper explores whether perceptions are more acceptable than formally recorded performance.
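To make the role-proficiency pairing concrete, the following is a minimal Python sketch; it is not taken from the paper or any existing tool, and the Inspector class, the numeric 1-5 proficiency scale, and the role wording are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class Inspector:
        name: str
        proficiency: int  # assumed scale: 1 = junior .. 5 = expert

    # Roles ordered from least to most demanding, mirroring the text above.
    ROLES = [
        "data type consistency, coding and documentation standards",
        "incorrect functionality",
        "ambiguous or missing functionality",
    ]

    def assign_roles(team):
        """Pair the least proficient of the chosen inspectors with the easiest
        role and the most proficient with the hardest, as the progression in
        the paper suggests. Assumes one inspector per role."""
        ranked = sorted(team, key=lambda i: i.proficiency)[-len(ROLES):]
        return {role: inspector.name for role, inspector in zip(ROLES, ranked)}

    if __name__ == "__main__":
        team = [Inspector("senior engineer", 5),
                Inspector("junior programmer", 1),
                Inspector("analyst", 3)]
        for role, person in assign_roles(team).items():
            print(person, "->", role)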

Broader research objectives are to identify feedback mechanisms within primary processes and to determine their impact on collaborative processes. Although not precisely defined, primary processes are integrated within normal and recurring practices. Michiel van Genuchten used the term "primary process" to illustrate technology diffusion of collaborative systems [18]. First generation collaborative systems focused on research interests and established the importance of factors such as anonymity, process losses and process gains. The second generation focused on widespread organizational diffusion by consultants and facilitators using commercially available software such as Ventana's GroupSystems and IBM's Lotus Notes. The third generation will focus on primary processes that support workgroups engaged in recurring activities. Van Genuchten identified software inspections as a primary process and a rich area for exploring collaborative issues. This paper explores whether two feedback mechanisms are important within the inspection process. If significance is established, a broader question is whether feedback mechanisms have importance for other collaborative primary processes.

5. Method

This paper is based on observation of experienced inspectors through a survey and qualitative interviews. The findings also help develop a theoretical foundation for the study of collaboratively enabled software inspections.

The three-part survey is available on request. The first section covers background information including years of relevant experience and number of inspections during the last year and over the career. The second section covers process maturity related issues. The third section covers inspector proficiency related issues.

Three experienced inspectors reviewed a preliminary version of the survey. Based on their suggestions, changes were made, including replacing the term "reviewer" with "inspector" and adding question choices such as an expectation to "learn the inspection practice of the company".


This survey is primarily an exploratory device. Most questions are open-ended or contain open-ended answer choices. Only one question (#10) measures a single construct (process maturity using a CMM framework). All other questions explore perceptions about aspects of the two primary constructs, process maturity and inspector proficiency.

The process maturity section contains twelve questions. Question #8 solicits the type and team composition of the participant's last inspection. Question #10 solicits an assessment of the process maturity using a CMM framework. Questions #18 and #19 solicit open-ended discussion about the impact and variability of process maturity. The remaining eight questions cover nine different aspects: (1) expected accomplishments, (2) type of inspection, (3) inspector roles, (4) process formality, (5) volume inspected, (6) team composition, (7) type of defects, (8) number of defects originally expected to be found, and (9) number of defects actually found. For question #17 (number of defects), choices are given to indicate that the quantities are unknown.

The inspector proficiency section contains five questions. Questions #23 and #24 solicit open-ended discussion about past performance and formal recording of proficiency information. The other three questions use seven-point Likert scales to assess perceptions. Question #20 asks how important various factors are to the inspection proficiency construct. Question #21 asks whether specific metrics indicate proficiency. Question #22 asks whether perceptions about proficiency should be considered when inspectors are selected and assigned to inspections. Most question choices are closely associated and labeled with similar wordings between the three questions. The choices are loosely categorized into (1) experience, (2) productivity, (3) review rate, and (4) efficiency.

6. Results

This section summarizes survey results. First, it establishes that experienced inspectors participated in the survey. Second, quantitative survey responses are summarized. Last, factor analysis is used to identify underlying dimensions of the two primary constructs.

6.1. Survey Participation

Thirty-one experienced software developers were surveyed. Two participants were interviewed concerning survey contents and specific issues suggested by the tabulated results. Survey participants represent eleven organizations ranging from small software development teams to large international firms.


• Thirteen represent an international hardware and software development firm with a reputation for maturity (CMM level 4).

• Six represent small United States software development firms with international markets.

• Four represent European software development firms, most of which have multinational development efforts.

• Eight are graduate students who also work for software development firms. A large international hardware and software development firm employs five of the eight students.

On average, those surveyed have 12.6 years of software development experience and 12.2 years of employment with their current employer. Individuals were asked to participate based on having software inspection experience. On average within the last twelve months, they participated in 19.8 code reviews, 21.6 design reviews and 4.5 other formal technical reviews. Out of sixty individuals asked to participate, thirty-one responded (a 52% return rate). The rate was significantly higher for the twenty-three individuals who were directly asked to participate (18 responses, a 78% return rate). This compares to thirty-seven individuals who were asked to participate by a quality/process leader within a single large international firm (13 responses, a 35% return rate). In summary, the individuals surveyed have significant relevant experience.

6.2. Survey Responses

Detailed survey responses are available upon request. Major results are presented.

Significant process variations exist within mature development organizations. The thirteen individuals from the same international firm stated that inspection practices for their last inspection process ranged across the entire spectrum of process maturity (2 ad hoc, 1 repeatable, 4 defined, 3 managed, and 3 optimized). The volume reviewed was bipolar, with six reporting that they last inspected less than 50 lines of code or 5 pages and the other seven reporting that they inspected over 4,000 lines of code or 25 pages. Only three had expectations about the number of major issues to be found and only five reported the actual number of issues uncovered. Only one reported a significant number of issues uncovered (141 total issues, of which 54 were major; 25 majors were anticipated based on inspection of 5,000 lines of code).

Following are quantitative results for all survey participants. The majority surveyed expect to find general defects (29 of 31) and specific types of defects (19 of 31). Additional expectations include requirement assurance, product usability, and verification of known defect fixes. Similar to the findings reported for the single firm, the last inspection process ranged across the entire spectrum of process maturity (6 ad hoc, 9 repeatable, 9 defined, 3 managed, and 4 optimized). For most, inspectors came from the same development team (22 of 31) and the same project (20 of 31). Methods for assigning inspector roles varied widely (8 informally, 14 to the same tasks, 8 to different main responsibilities, 2 involved only one inspector, and 1 used a checklist). For most, written standards and guidelines exist (17 of 31); however, the formality of the process definition was subject to wide variations (10 informally defined, 11 written processes, 8 defect codes and type classification, 14 checklists, 1 script, and 2 scenarios). Most worked with individual inspection members on similar reviews numerous times previously (28 of 31, with 7 reporting an average of 2.4 times during the last twelve months).

Table 1 summarizes perceptions about whether inspector proficiency can and should be assessed formally, informally, or not at all. Respondents were asked to use a seven-point Likert scale (with 1 = no importance, 3 = some importance, 5 = important, and 7 = extremely important and expected). The responses are ranked from most important down to least important.

6.3. Factor Analysis

Quantitative answers were used to assess underlying dimensions of the two primary constructs. Five different types of quantitative answers are included or derived from the survey. Multiple choice answers are treated as qualitative variables and coded with a nominal value (true = "1" or false = "0"). Likert scale answers are coded with the corresponding ordinal value (from 1 to 7). Numeric counts (such as the number of inspections or defects expected) are coded with ratio scales; however, these counts have little value within the following analysis other than to indicate whether the quantity is known or equal to zero. Instead, aggregated ordinal scales (unknown = "-1", zero value = "0", and positive value = "1") are used to characterize numeric count questions. Finally, aggregated counts of nominal values are used for several questions with similar answer choices and for which a count of similar choices is presumed to infer a stronger relationship (such as within question #11 to distinguish whether the reviewer comes from the same project, company, division, or development team).
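As an illustration only, the coding scheme just described might be implemented as follows; the helper names are assumptions and the snippet is not the authors' instrument or analysis code.

    def code_choice(selected):
        """Nominal coding for a multiple-choice answer: true = 1, false = 0."""
        return 1 if selected else 0

    def code_likert(value):
        """Ordinal coding: keep the raw 1..7 Likert response."""
        if not 1 <= value <= 7:
            raise ValueError("Likert responses are expected in the range 1..7")
        return value

    def code_count(count):
        """Collapse a numeric count to unknown (-1), zero (0), or positive (1)."""
        if count is None:  # respondent did not know the quantity
            return -1
        return 1 if count > 0 else 0

    def code_aggregate(choices):
        """Aggregate several related choices (e.g., reviewer from same project,
        company, division, or team) into a single strength-of-relationship count."""
        return sum(code_choice(c) for c in choices)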

Given thirty-one participants and seventy-one quantitative answers, it is difficult to make statistical inferences or generalize findings. A larger sample size would provide greater statistical power; however, given selective participation in the survey, large sample sizes might not enhance the ability to generalize findings. For statistical analysis, the number of factors to be analyzed is limited by the sample size of thirty-one participants. In addition, using a number of factors approaching the sample size prevents obtaining some standardized statistical metrics such as Cronbach's alpha. Nevertheless, given the exploratory nature of the survey, some insight can be gained by exploring latent factors.

Process maturity construct measurement is more complex than participant perceptions. Attempts to use regression to predict the process maturity level (from question #10) resulted in models with very low statistical significance (adjusted R² < .15), and the only factor having some significance is the number of defects expected (probability value = .032). As previously stated, participants within relatively mature organizations perceive wide variations in inspection process maturity. During a follow-up interview, it was suggested that the classification wording might imply "goodness" of personal practices and thus is difficult to assess. Inspector proficiency is even more difficult to measure and subject to measurement dysfunction [1], as illustrated in the following participant quotes. "Yes, although this is difficult to quantify." "People will be less open about the inspections and their results." "People will not log if they think management will base performance evaluation of author on the results." "Counting is ridiculous especially if you have to justify what you find is over or under an expected amount."

Using a heuristic based on inspection practices might be more stable than using perceptions of maturity level. A heuristic was adopted based on the CMM assessment method. The heuristic requires demonstration of competence in all key process areas at the current and lower levels before being designated at that current level. For analysis purposes, qualitative answers were aggregated by how relevant they were to each process maturity level. Process maturity level was determined based on the presence of at least one indicator within the level and the lower levels (with the exception of level one, ad hoc, for which no key process area is assumed). The only apparent difficulty with the heuristic occurred with an indication of a level five (optimized) process and none for level four (managed). In this case, the process was classified as a level five. For the thirteen participants from the single firm, the distribution changed significantly. Ten of the thirteen participants indicate either level four or five (which is more similar to their organization's CMM level 4 designation). Three participants indicated a level two (repeatable). Perhaps process maturity should be assessed based on inspection practices within key process areas in a manner similar to CMM.
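A minimal sketch of this classification heuristic, under the stated assumptions (level one requires no indicator, a higher level requires at least one indicator at that level and every lower level, and a lone level-five indicator with no level-four indicator is still classified as level five), could look as follows; the function and variable names are illustrative.

    def maturity_level(indicators):
        """Return the inspection process maturity level (1..5) implied by the
        number of practice indicators observed at each level (a dict mapping
        level -> indicator count)."""
        level = 1  # ad hoc: no key process area assumed
        for candidate in range(2, 6):
            if all(indicators.get(lvl, 0) > 0 for lvl in range(2, candidate + 1)):
                level = candidate
        # Exception noted above: optimized practices present with no managed ones.
        if indicators.get(5, 0) > 0 and indicators.get(4, 0) == 0:
            level = 5
        return level

    if __name__ == "__main__":
        print(maturity_level({2: 3, 3: 1, 4: 2}))  # indicators at levels 2-4 -> 4
        print(maturity_level({2: 1, 5: 1}))        # lone level-5 indicator -> 5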


                                                                     Average   Standard
                                                                               Deviation
In order to be a proficient inspector, how important are the following?
• programming language proficiency (for code inspections)              6.1       1.0
• cognitive ability to find defects                                    5.9       1.0
• experience as an inspector                                           5.3       1.5
• development environment proficiency                                  5.2       1.4
• experience as an author being reviewed                               4.1       1.7

Assuming the following information can be recorded or derived about prior individual
performance, how acceptable is it to use the following information to select and
assign inspectors to an inspection team?
• experience (number of prior inspections)                              5.0       1.4
• efficiency (percent of defects found per defects suggested)           4.5       2.2
• experience (pages or lines of code inspected)                         3.9       1.5
• review rate (pages/lines of code reviewed per hour)                   3.7       1.9
• productivity (number of defects per page/lines of code)               3.7       2.1
• productivity (defects found per preparation hour)                     3.6       1.9
• productivity (defects found per meeting hour)                         3.5       1.9

Assume the inspector proficiency information is not formally recorded or available;
what factors should be considered when inspectors are selected and assigned to
inspections?
• experience with programming language                                  5.5       1.4
• experience as an inspector                                            5.1       1.5
• productivity as a person who can find defects                         5.1       1.9
• efficiency as a finder of defects                                     5.0       1.8
• experience with the development environment                           4.9       1.8
• review rate as a thorough and detailed inspector                      4.3       2.0
• experience as an author of reviewed materials                         4.0       1.7

Table 1 - Ranking of Inspector Proficiency Perceptions

Exploratory factor analysis reveals significant underlying dimensions. Factor analysis was used for both major constructs. For process maturity, analysis was done at summary and detailed levels. For inspector proficiency, analysis was done at the detailed level. The factors identified might not generalize; however, they give insight into perceptions of those surveyed.

Using summary level analysis, process maturity contains two major dimensions (with Cronbach's alpha = .63). The first relates to process formation based on defined roles, process formality, and team interactions. The second relates to anticipated and actual results based on known volumes inspected and number of issues found. However, aggregated coding might cause distortion. Therefore, detailed level analysis was also conducted.
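For readers who wish to reproduce this kind of analysis, the following sketch shows one way to compute Cronbach's alpha and fit an exploratory factor model over a coded response matrix; the data shown are randomly generated placeholders, not the survey data, and the use of NumPy and scikit-learn is an assumption rather than the authors' tooling.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    def cronbach_alpha(items):
        """items: respondents x items matrix of coded answers."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Hypothetical coded responses: 31 respondents x 6 items.
    rng = np.random.default_rng(0)
    responses = rng.integers(0, 2, size=(31, 6)).astype(float)

    print("alpha =", round(cronbach_alpha(responses), 2))

    # Two-factor exploratory solution; loadings hint at underlying dimensions.
    fa = FactorAnalysis(n_components=2, random_state=0).fit(responses)
    print("loadings:")
    print(np.round(fa.components_, 2))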


Using detailed level analysis, process maturity contains nine significant dimensions. Each dimension is briefly described along with significantly positively and negatively correlated variables with larger than a ± .40 correlation. (1) Immature process relates to an expectation of establishing standards and unknown numbers of issues expected or actually found. (2) Informal process relates to informally defined roles, a perception of an ad hoc level-one maturity level, and working together previously. Informal processes are negatively related to written processes, standards, guidelines, working on similar reviews, and working on the same discovery task. (3) Specific or targeted issues relate to an expectation of finding specific issues, reviewing an unknown volume, finding an unknown number of issues, and working within the same development team. (4) Corporate practices relate to working within the same company or development team, and written standards, defect codes and types. Corporate practices are negatively related to an expectation of establishing standards. (5) Recurring issues relate to an expectation of finding general defects, use of checklists, and written standards and guidelines. Recurring issues are negatively related to looking for unknown defect types. (6) Issue recognition relates to first-time experiences, use of scenarios, written defect codes and types, and a perception of a managed level-four maturity level. (7) Learning expectation relates to an expectation of learning and assignment to different main tasks. Learning expectation is negatively related to working within the team and assignment of the same task. (8) Project formality relates to working together, with reviewers from the same project, using a written process, and not being a first-time experience. (9) Non-recurring issues relate to looking for pre-designated issue types and a perception of an optimized level-five process maturity. The first factor (immature process) from the detailed analysis is similar in nature to the second factor (anticipated and actual results) from the summary analysis. Also, the remaining eight factors from the detailed analysis appear to be dimensions of the first factor (process formation) from the summary analysis.

Using detailed level analysis, inspector proficiency contains five significant dimensions. (1) Problem solving skills relate to the perception of being a finder of defects through productivity, review rate, and efficiency, as well as having a high percentage of defects found per defects suggested. (2) Inspection productivity relates to prior individual performance in terms of the number of defects found in total, in preparation, and during meetings. (3) Inspection experience relates to prior inspection experience, whether perceived or actually counted, and as either an author or inspector. (4) Programming skills relate to the importance of programming language proficiency, and perceptions of experience with the programming language or development environment. (5) Environmental awareness relates to the importance of development environment proficiency and the cognitive ability to find defects.

7. Discussion

This section discusses the stated research objective of exploring whether the two feedback mechanisms significantly affect software inspection results. This paper does not prove the research objectives; rather it explores perceptions of experienced inspectors. Nevertheless, the findings generally support the research objectives and suggest additional avenues of inquiry.

The first objective is to determine whether process maturity significantly affects results of software inspections. Question #18 most directly addresses the stated objective. This open-ended question asks what impact, if any, process maturity has on inspections. More specifically, does the nature of the inspection change as the process matures? Twenty-eight of the thirty-one participants responded to the question. Only five of the responses suggested minimal or no impact. Most of the remaining comments supported the assertion that process maturity impacts inspections. One of the more interesting lines of reasoning was established in a follow-up interview. The participant asserted that the team composition is a given constraint, and "although people are more important, you must manage the process." Another insightful comment follows: "This is a soft issue; if the organization has experience, culture, and willingness to work along defined routes, inspections are taken seriously and actually carried out and followed-up. In a low maturity organization it is difficult to actually prove the effects of inspections and they tend to get postponed or canceled."

Measuring process maturity presents problems. As previously documented, use of a CMM framework might provide a more consistent assessment, especially given establishment of key practice areas. Can process maturity be assessed based on overlapping phases such as definition, productivity, and proficiency? In this alternate phase-related framework, the category is not as important as understanding the support requirements for each phase and identifying how much associated support is required for the current inspection. One quote supports the notion that processes mature in a non-linear manner, specifically "Our process matures chaotically." As shown in Table 2, factor analysis and previous qualitative observations can be categorized within such a framework. Such categorization serves as a foundation for further defining and exploring each phase. Whether three phases are sufficient or necessary remains unresolved.

A potentially important aspect of process maturity is the inspection review rate. Most experts consider review rates very important. Inspection expert Tom Gilb states, "check at your organization's optimum rates to find major defects" (Gilb, 1998). Michiel van Genuchten finds that review rate must be controlled in order for electronic meeting system inspections to be more effective than traditional Fagan inspections. He also finds that very few defects are found with either method when the review rate is too fast. Survey participants appear to place relatively little emphasis on review rates. Use of review rate is given a relatively low 3.9 ranking with a relatively large 1.9 standard deviation (see Table 1). The observation is similar for the related issue of perceived thoroughness, which is given a 4.3 ranking and a 2.0 standard deviation. The only quote related to thoroughness and indirectly to review rates follows: "You want people who spend time and find problems."


Definition Phase
  Factor analysis: process formation; immature process; informal process; learning experience
  Qualitative responses: team ownership; training; project commitment

Productivity Phase
  Factor analysis: anticipated and actual results; corporate practices; recurring issues
  Qualitative responses: inspection rates; workload

Proficiency Phase
  Factor analysis: specific or targeted issues; issue recognition; non-recurring issues
  Qualitative responses: problem-solving skills

Table 2 – Categorization of Factors and Responses by Process Maturity Phases

Apparently, this is an issue whose importance many inspectors either do not remember or have forgotten.

Further, process maturity should focus on how to support teams. Teams own inspection processes, as supported by the following quotes. "Our goal is to find defects ourselves as a development team, not to create professional super-inspectors. Every member should inspect once in a while." "The team typically knows who are the good inspectors. The data about it should be owned by the individual or the team." Follow-up interviews supported this assertion. One participant observed that there are at most ten people in the world who have sufficient expertise to inspect their code. This observation appears to apply for both large and small organizations. In large organizations, inspection responsibility falls on development teams that usually consist of ten or fewer members. In small organizations, there might only be a few developers in the entire organization.

Process maturity is directly affected by workload and project commitments. This assertion is supported by the following quotes. "We barely have time for reviews, much less all the formalization." "Our environment is changing given shortened time-to-market constraints. We need to tailor processes to fit the life cycle." "Don't include a person that is not motivated to participate." Do not "increase overhead." "Inspection is not a popular pastime. Many of us find it drudgery even though it is of critical importance to quality. People may decide to find fewer defects if they know the more they find, the more review time they must spend."

As processes mature, opportunity must be given for training of new team members and transferring institutional knowledge. This assertion is supported by the following quotes. "Others ought to be included to improve their skills." "Other members can be in training." "Everybody needs to be in the game." For an organization with responsibility for maintaining large amounts of legacy code, the observation was made that inspections provide a significant source for training and knowledge transfer even when few defects are found. Individuals who maintain the code will eventually retire, and inspections provide a structured opportunity to transfer knowledge from senior to junior level team members.

The inspector proficiency research objective has two parts. The first is to establish significance; the second addresses whether proficiency should be assessed formally or informally.

There appears to be support for the assertion that inspector proficiency affects results of software inspections. Question #23 most directly addresses the stated objective. This open-ended question asks whether a person's past performance should be considered when an inspection review team is formed. If so, how? If not, why not? Only three of the twenty-eight responses suggest that inspector proficiency does not affect the results, and one of these merely states concerns about privacy and formal recording of performance information. However, the tenor of the discussion is supportive. Two supportive quotes follow. "The team typically knows who are the good reviewers." "Only certain people can be considered good reviewers, or can learn to be good reviewers."

A significant aspect of inspector proficiency is problem-solving skill. Both quantitative and qualitative support exists for this assertion. As previously stated and based on factor analysis, problem solving is the most significant proficiency factor. The factor relates to the perception of being a finder of defects through productivity, review rate, and efficiency, as well as having a high percentage of defects found per defects suggested. The qualitative support comes from the following quotes. "Are they good engineers?" Inspectors need a "knack" and ability to "see to the heart of the problem." "At least one person who can find defects is a must to make the review worthwhile." "Only certain people can be considered good reviewers, or can learn to be good reviewers."

The issue of formally recording inspector proficiency information appears to depend on the organizational culture. Question #24 asks "what concerns, if any, do you have about formally recording inspector proficiency information?" The thirty responses, out of a possible total of thirty-one, are divided into ten strongly supportive comments and thirteen strongly opposing comments. Many expressed concerns like "it is valuable information for performance evaluation. But I don't think it needs to be a formal process to record. A close working relationship among the team members would yield better information." The assertion that the response depends on the organizational culture is best stated in the following quote. "As with all data concerning a person, it can be used constructively or destructively. Whether formal recording should be used is a function of the person's organization personality." This finding is also supported by Robert Austin's research [1]. Austin recommends that, for difficult-to-measure performance constructs, organizations need to either develop reliable measurement instruments or else rely on the team to do what is right without quantitative measures.

Perceptions about individual performance might be preferable to formal recordings of actual performance. Maintaining formal records adds overhead, forces formally defined metrics, and raises privacy concerns. Can perceptions sufficiently measure individual proficiency? Most individuals are painfully aware of their inadequacies. To formally record painful information might not be advantageous. Also, most team members have a sense of who to rely upon within their team. The assertion is supported by the following quotes. "If you can record the original error maker, you can record who can find the error." But recording "can hurt a person's privacy." "Sure, if I know that they are not checking details, I probably won't use them." "It is valuable information for performance evaluation. But I don't think it needs to be a formal process to record. A close working relationship among the team members would yield better information." "Don't waste time doing recording." Although not statistically significant, rankings for the second question (see Table 1) related to formal recordings are lower than for the other two questions related to perceptions.

Perhaps the focus of inspector proficiency should shift to how best to support recognized expertise within an inspection process. Inspectors know when they are beginners or even productive team members who are expected to contribute. What might not be known is how to recognize expertise and constructively use the expertise without burning it out. Thus there might be a need to manage a list of corporate or outside consulting inspection expertise. One of the follow-up interviews brought out the need for a non-team member with expertise in the Windows NT operating system. What if the inspection was tailored to maximize knowledge transfer for specific questions asked of recognized experts?

8. Significance

This paper explores perceptions of two feedback mechanisms, process maturity and inspector proficiency. Some feedback mechanisms might be intuitive and used by teams to enhance their performance. For example, the team might sense who the good inspectors are and assign important inspection tasks accordingly. As Robert Austin suggests, performance measures are best left unmeasured if the team collectively moves along the best performance path [1]. The challenge is to identify the impact of feedback mechanisms or calibrate their effects.

Feedback mechanisms might impact the development of tools. Providing access to easily quantifiable factors such as the number of major issues expected and actually found should have a major impact on group performance. For more difficult-to-quantify factors like process maturity and inspector proficiency, the question is whether workgroups will recognize and support their integration within a collaborative tool.

Process maturity might impact how inspection tasks are defined and shared among inspectors. Task templates might be used to match process maturity expectations. More specifically, the tool might prompt for inspection type, objectives and anticipated inspectors. The system would then provide a base template of inspection tasks assigned to inspectors. The moderator can modify inspectors and task assignments. The tool would assess the optimum review rates and identify any potential mismatches of task and proficiency.
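The template idea can be sketched as follows; all class, template, and threshold names are assumptions for illustration and do not describe an existing tool.

    from dataclasses import dataclass

    @dataclass
    class Task:
        description: str
        min_proficiency: int  # assumed 1..5 scale

    @dataclass
    class Inspector:
        name: str
        proficiency: int

    # Hypothetical base template keyed by inspection type.
    TEMPLATES = {
        "code": [Task("standards and documentation check", 1),
                 Task("functional correctness check", 3),
                 Task("ambiguity / missing functionality check", 4)],
    }

    def propose_assignments(kind, inspectors):
        """Pair tasks with inspectors in template order and report mismatches
        between task difficulty and inspector proficiency for the moderator."""
        assignments, mismatches = [], []
        for task, inspector in zip(TEMPLATES[kind], inspectors):
            assignments.append((task.description, inspector.name))
            if inspector.proficiency < task.min_proficiency:
                mismatches.append(inspector.name + " may lack proficiency for '"
                                  + task.description + "'")
        return assignments, mismatches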

Inspection processes might be tailored to support expert inspectors. Perhaps the inspection tool should interface with corporate personnel skill databases or with an external cottage industry of expert inspection consultants. If a particular expertise is needed, the inspection team should be able to identify and obtain appropriate expertise.

9. Conclusion

This paper explored perceptions of experienced software engineers concerning the impact of process maturity and inspector proficiency on software inspections. The findings generally support assertions that process maturity and inspector proficiency affect results of software inspections. In some cases, teams might prefer using perceptions of performance instead of formally recording actual prior performance.

Feedback mechanisms might provide a means to measure and manage performance at the workgroup level. Doing so in an unobtrusive and effective manner is a challenge. The goal is to enable teams to self-define tasks and assign responsibilities in a manner that encourages productivity and decreases time to market. At a minimum, developing a better understanding provides insight into asynchronous work processes.

References

[1] Austin, R. D. (1996). Measuring and Managing Performance in Organizations. New York, NY: Dorset House Publishing.

[2] Chillarege, R., Bhandari, I. S., Chaar, J. K., Halliday, M. J., Moebus, D. S., Ray, B. K., & Wong, M.-Y. (1992). Orthogonal defect classification - a concept for in-process measurements. IEEE Transactions on Software Engineering, 18(11), 943-956.

[3] Ebenau, R. G., & Strauss, S. H. (1994). Software Inspection Process. New York: McGraw-Hill.

[4] Fagan, M. E. (1976). Design and Code Inspections and Process Control in the Development of Programs (Technical Report TR 00.2763). Poughkeepsie, NY: IBM Corp.

[5] Fagan, M. E. (1976). Design and code inspections to reduce errors in program development. IBM Systems Journal, 15(3), 182-211.

[6] Fowler, P. J. (1986). In-process inspections of workproducts at AT&T. AT&T Technical Journal, 65(2), 102-112.

[7] Gilb, T., & Graham, D. (1993). Software Inspections. Reading, MA: Addison-Wesley.

[8] Grady, R. B., & Van Slack, T. (1994). Key lessons in achieving widespread inspection use. IEEE Software (July), 46-57.

[9] Humphrey, W. (1995). A Discipline for Software Engineering (SEI Series in Software Engineering). Reading, Massachusetts: Addison-Wesley Publishing Company.

[10] Humphrey, W. S. (1989). Managing the Software Process. Reading, Mass.: Addison-Wesley.

[11] Johnson, P. M. (1998). Reengineering inspection. Communications of the ACM, 41(2), 49-52.

[12] Jones, C. L. (1985). A process-integrated approach to defect prevention. IBM Systems Journal, 24(2), 150-166.

[13] Porter, A., Siy, H., Mockus, A., & Votta, L. (1998). Understanding the sources of variation in software inspections. ACM Transactions on Software Engineering and Methodology, 7(1), 41-79.

[14] Porter, A. A., Siy, H. P., Toman, C. A., & Votta, L. G. (1997). An experiment to assess the cost-benefits of code inspections in large-scale software development. IEEE Transactions on Software Engineering, 23(6), 329-346.

[15] Porter, A. A., & Votta, L. G. (1994, May). An experiment to assess different defect detection methods for software requirements inspections. Paper presented at ICSE-16, the 16th International Conference on Software Engineering, Sorrento, Italy, 103-112.

[16] Porter, A. A., Votta, L. G., Jr., & Basili, V. R. (1995). Comparing detection methods for software requirement inspections: a replicated experiment. IEEE Transactions on Software Engineering, 21(6), 563-575.

[17] Rodgers, T. L., Vogel, D. R., Purdin, T., & Saints, B. (1998, January). In search of theory and tools to support code inspections. Proceedings of the 31st Hawaii International Conference on System Sciences, IEEE Computer Society, vol. 3, 370-379.

[18] van Genuchten, M., Vogel, D., & Nunamaker, J. (1998, January). Group support systems in primary processes. Proceedings of the 31st Hawaii International Conference on System Sciences, Big Island, vol. 1, 580-587.

[19] van Genuchten, M., Cornelissen, W., & van Dijk, C. (1997). Supporting inspections with an electronic meeting system. Proceedings of the 30th Hawaii International Conference on System Sciences.


[20] Votta, L. G. (1993, December). Does every inspection need a meeting? Paper presented at the SIGSOFT Symposium on Foundations of Software Engineering.