
Intent Specifications: An Approach to Building Human-Centered Specifications

Nancy G. Leveson, Member, IEEE

Abstract—This paper examines and proposes an approach to writing software specifications, based on research in systems theory, cognitive psychology, and human-machine interaction. The goal is to provide specifications that support human problem solving and the tasks that humans must perform in software development and evolution. A type of specification, called intent specifications, is constructed upon this underlying foundation.

Index Terms—Requirements, requirements specification, safety-critical software, software evolution, human-centered specifications, means-ends hierarchy, cognitive engineering.


1 THE PROBLEM

SOFTWARE is a human product and specification languages are used to help humans perform the various problem-solving activities involved in requirements analysis, software design, review for correctness (verification and validation), debugging, maintenance and evolution, and reengineering. This paper describes an approach, called intent specifications, to designing system and software specifications that potentially enhances human processing and use by grounding specification design on psychological principles of how humans use specifications to solve problems, as well as on basic system engineering principles. Using such an approach allows us to design specification languages with some confidence that they will be usable and effective.

A second goal of intent specifications is to integrate formal and informal aspects of software development and enhance their interaction. While mathematical techniques are useful in some parts of the development process and are crucial in developing software for critical systems, informal techniques will always be a large part (if not most) of any complex software development effort: Our models have limits in that the actual system has properties beyond the model, and mathematical methods cannot handle all aspects of system development. To be used widely in industry, our approach to specification must be driven by the need 1) to systematically and realistically balance and integrate mathematical and nonmathematical aspects of software development and 2) to make the formal parts of the specification easily readable, understandable, and usable by everyone involved in the development and maintenance process.

Specifications should also enhance our ability to engineer for quality and to build evolvable and changeable systems.

Essential system-level properties (such as safety and security) must be built into the design from the beginning; they cannot be added on or simply measured afterward. Up-front planning and changes to the development process are needed to achieve particular objectives. These changes include using notations and techniques for reasoning about particular properties, constructing the system and the software in it to achieve them, and validating (at each step, starting from the very beginning of system development) that the evolving system has the desired qualities. Our specifications must reflect and support this process. In addition, systems and software are continually changing and evolving; they must be designed to be changeable, and the specifications must support evolution without compromising the confidence in the properties that were initially verified.

Many of the ideas in this paper are derived from attempts by cognitive psychologists, engineers, and human factors experts to design and specify human-machine interfaces. The human-machine interface provides a representation of the state of the system that the operator can use to solve problems and perform control, monitoring, and diagnosis tasks. Just as the control panel in a plant is the interface between the operator and the plant, system and software requirements and design specifications are the interface between the system designers and builders or between builders and maintainers. The specifications help the designer, builder, tester, debugger, or maintainer understand the system well enough to create a physical form or to find problems in or change the physical form.

The paper is divided into two parts: The first part describes some basic ideas in systems theory and cognitive engineering.1 The second part describes a type of specification method called intent specifications, built upon these basic ideas, that is designed to satisfy the goals listed above, i.e., to enhance human processing and problem solving, to integrate formal and informal aspects of software development, and to enhance our ability to engineer for quality and to build evolvable and changeable systems.

IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, VOL. 26, NO. 1, JANUARY 2000

The author is with the Aeronautics and Astronautics Department, Massachusetts Institute of Technology, Room 33-406, 77 Massachusetts Ave., Cambridge, MA 02139-4307. E-mail: [email protected].

Manuscript received 13 Aug. 1997; revised 22 July 1998; accepted 17 Nov. 1998. Recommended for acceptance by H.A. Muller. For information on obtaining reprints of this article, please send e-mail to: [email protected], and reference IEEECS Log Number 105499.

1. Cognitive engineering is a term that has come to denote the combination of ideas from systems engineering, cognitive psychology, and human factors to cope with the challenges of building high-tech systems composed of humans and machines. These challenges have necessitated augmenting traditional human factors approaches to consider the capabilities and limitations of the human element in complex systems.

0098-5589/00/$10.00 © 2000 IEEE

2 SPECIFICATIONS AND HUMAN PROBLEM SOLVING

To be useful to and usable by humans to solve problems, specification language and system design should be based on an understanding of the problem or task that the user is solving. The systems we design and the specifications we use impose demands on humans. We need to understand those demands and how humans use specifications to solve problems if we are to design specifications that reflect reasonable demands and that assist humans in carrying out their tasks.

Not only does the language in which we specify problems have an effect on our problem-solving ability, it also affects the errors we make while solving those problems. Our specification language design needs to reflect what is known about human limitations and capabilities.

A problem-solving activity involves achieving a goal by selecting and using strategies to move from the current state to the goal state. Success depends on selecting an effective strategy or set of strategies and obtaining the information necessary to carry out that strategy successfully. Specifications used in problem-solving tasks are constructed to provide assistance in this process. Cognitive psychology has firmly established that the representation of the problem provided to problem solvers can affect their performance (see Norman [30] for a survey of this research). In fact, Woods claims that there are no neutral representations [48]: The representations available to the problem solver either degrade or support performance. To provide assistance for problem solving, then, requires that we develop a theoretical basis for deciding which representations support effective problem-solving strategies. For example, problem-solving performance can be improved by providing representations that reduce the problem solver's memory load [21] and that display the critical attributes needed to solve the problem in a perceptually salient way [20].

A problem-solving strategy is an abstraction describing one consistent reasoning approach characterized by a particular mental representation and interpretation of observations [31]. Examples of strategies are hypothesis and test, pattern recognition, decision tree search, reasoning by analogy, and topological search.

Some computer science researchers have proposed theories about the mental models and strategies used in program understanding tasks (examples of such models are [4], [22], [32], [38], [40]). Although this approach seems useful, it may turn out to be more difficult than it appears on the surface. Each of the users of a specification may (and probably will) have different mental models of the system, depending on such factors as prior experience, the task for which the model is being used, and their role in the system [1], [13], [28], [36]. The same person may have multiple mental models of a system, and even having two contradictory models of the same system does not seem to constitute a problem for people [28].

Strategies also seem to be highly variable. A study that used protocol analysis to determine the troubleshooting strategies of professional technicians working on electronic equipment found that no two sequences of actions were identical, even though the technicians were performing the same task every time (i.e., finding a faulty electronic component) [34]. Not only do search strategies vary among individuals for the same problem, but a person may vary his or her strategy dynamically during a problem-solving activity: Effective problem solvers change strategies frequently to circumvent local difficulties encountered along the solution path and to respond to new information that changes the objectives and subgoals or the mental workload needed to achieve a particular subgoal.

It appears, therefore, that specifications should support all possible strategies that may be needed for a task: to allow for multiple users of the representation, for shedding mental workload by shifting strategies during problem solving, and for different cognitive and problem-solving styles. We need to design specifications such that users can easily find or infer the information they need regardless of their mental model or preferred problem-solving strategies. That is, the specification design should be related to the general tasks users need to perform with the information, but not be limited to specific predefined ways of carrying out those tasks.

One reason why many software engineering tools and environments are not readily accepted or easily used is that they imply a particular mental model and force potential users to work through problems using only one or a very limited number of strategies, usually the strategy or strategies preferred by the designer of the tool. The goal of specification language design should be to make it easy for users to extract and focus on the important information for the specific task at hand without assuming particular mental models or limiting the problem-solving strategies employed by the users of the document. The rest of this paper describes an approach to achieve this goal.

3 COMPONENTS OF A SPECIFICATION METHODOLOGY TO SUPPORT PROBLEM-SOLVING

Underlying any methodology is an assumed process. In our case, the process must support the basic system and software engineering tasks. A choice of an underlying system engineering process is the first component of a specification methodology. In addition, cognitive psychologists suggest that three aspects of interface design must be addressed if the interface is to serve as an effective medium:

1. content (what semantic information should be contained in the representation, given the goals and tasks of the users),

2. structure (how to design the representation so that the user can extract the needed information), and

3. form (the notation or format of the interface) [46].

The next sections examine each of these four aspects ofspecification design in turn.


3.1 Process

Any system specification method should support the systems engineering process. This process provides a logical structure for problem solving (see Fig. 1). First, a need or problem is specified in terms of objectives that the system must satisfy and criteria that can be used to rank alternative designs. Then, a process of system synthesis takes place that results in a set of alternative designs. Each of these alternatives is analyzed and evaluated in terms of the stated objectives and design criteria, and one alternative is selected to be implemented. In practice, the process is highly iterative: The results from later stages are fed back to early stages to modify objectives, criteria, design alternatives, and so on.
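This iterative loop can be sketched in code. The following Python is illustrative only: the `synthesize` and `refine` steps, the `Design` class, and the example criteria are invented placeholders standing in for real engineering activities, not anything defined in the paper.

```python
# Illustrative sketch of the iterative systems engineering loop:
# specify objectives, synthesize alternatives, evaluate against criteria,
# select one, and feed results back to modify the objectives.
from dataclasses import dataclass

@dataclass
class Design:
    name: str
    cost: float

def synthesize(objectives):
    # Placeholder for system synthesis: enumerate candidate designs.
    return [Design(f"alt-{i}", cost=10.0 - i * len(objectives)) for i in range(3)]

def refine(objectives, chosen):
    # Placeholder for feedback: results from later stages modify objectives.
    return objectives + [f"lesson learned from {chosen.name}"]

def engineer(objectives, criteria, rounds=3):
    chosen = None
    for _ in range(rounds):
        alternatives = synthesize(objectives)     # system synthesis
        chosen = max(alternatives, key=criteria)  # analysis and selection
        objectives = refine(objectives, chosen)   # iterate: feed back to early stages
    return chosen

best = engineer(["detect conflicting traffic"], criteria=lambda d: -d.cost)
```

The point of the sketch is structural: selection happens inside the loop, and the objectives themselves are mutable outputs of later stages, matching the "highly iterative" character described above.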

Design alternatives are generated through a process of system architecture development and analysis. The system engineers break down the system into a set of subsystems, together with the functions and constraints imposed upon the individual subsystem designs, the major system interfaces, and the subsystem interface topology. These aspects are analyzed with respect to desired system performance characteristics and constraints, and the process is iterated until an acceptable system design results. The preliminary design at the end of this process must be described in sufficient detail that subsystem implementation can proceed independently.

The software requirements and design processes are simply subsets of the larger system engineering process. System engineering views each system as an integrated whole, even though it is composed of diverse, specialized components, which may be physical, logical (software), or human. The objective is to design subsystems that, when integrated into the whole, provide the most effective system possible to achieve the overall objectives. The most challenging problems in building complex systems today arise in the interfaces between components. One example is the new highly automated aircraft, where most incidents and accidents have been blamed on human error but more properly reflect difficulties in the collateral design of the aircraft, the avionics systems, the cockpit displays and controls, and the demands placed on the pilots.

What types of specifications are needed to support humans in this system engineering process and to specify the results? Design decisions at each stage must be mapped into the goals and constraints they are derived to satisfy, with earlier decisions mapped (traced) to later stages of the process, resulting in a seamless (gapless) record of the progression from high-level system requirements down to component requirements and designs. The specifications must also support the various types of formal and informal analysis used to decide between alternative designs and to verify the results of the design process. Finally, they must assist in the coordinated design of the components and the interfaces between them.
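A data-structure sketch of such a seamless record follows. The class, the field names, and the collision-avoidance strings are my own illustrations, not notation from the paper: each decision points upward at the goals or constraints it was derived to satisfy and downward at the later-stage decisions that refine it.

```python
# Hypothetical traceability record: decisions link both up (to goals and
# constraints) and down (to more detailed, later-stage decisions).
from dataclasses import dataclass, field

@dataclass
class Decision:
    text: str
    satisfies: list = field(default_factory=list)   # goals/constraints above
    refined_by: list = field(default_factory=list)  # detailed decisions below

goal = "G1: prevent near midair collisions"
high = Decision("issue a resolution advisory when a threat is detected", [goal])
low = Decision("poll nearby transponders once per second", [high.text])
high.refined_by.append(low)

def trace_up(decision):
    """Map a later-stage decision back to what it was derived to satisfy."""
    return decision.satisfies
```

Because every decision carries its upward links, the progression from system requirements down to component design can be walked in either direction without gaps.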

3.2 Content

The second component of a specification methodology is the content of the specifications. Determining appropriate content requires considering what the specifications will be used for, that is, the problems that humans are trying to solve when they use specifications. Previously, we looked at a narrow slice of this problem: what should be contained in blackbox requirements specifications for process control software to ensure that the resulting implementations are internally complete [19], [24]. This paper again considers the question of specification content, but within a larger context.

This question is critical because cognitive psychologists have determined that people tend to ignore information during problem solving that is not represented in the specification of the problem. In experiments where some problem solvers were given incomplete representations while others were not given any representation at all, those with no representation did better [14], [39]. An incomplete problem representation actually impaired performance because the subjects tended to rely on it as a comprehensive and truthful representation; they failed to consider important factors deliberately omitted from the representations. Thus, being provided with an incomplete problem representation (specification) can actually lead to worse performance than having no representation at all [46].

One possible explanation for these results is that some problem solvers did worse because they were unaware of important omitted information. However, both novices and experts failed to use information left out of the diagrams with which they were presented, even though the experts could be expected to be aware of this information. Fischhoff et al., who did such an experiment involving fault tree diagrams, attributed it to an "out of sight, out of mind" phenomenon [14].

One place to start in deciding what should be in a system specification is with basic systems theory, which defines a system as a set of components that act together as a whole to achieve some common goal, objective, or end. The components are all interrelated and are either directly or indirectly connected to each other. This concept of a system relies on the assumptions that the system goals can be defined and that systems are atomistic, that is, capable of being separated into component entities such that their interactive behavior mechanisms can be described.


Fig. 1. The basic systems engineering process.


The system state at any point in time is the set of relevant properties describing the system at that time. The system environment is a set of components (and their properties) that are not part of the system but whose behavior can affect the system state. The existence of a boundary between the system and its environment implicitly defines as inputs or outputs anything that crosses that boundary.

It is very important to understand that a system is always a model: an abstraction conceived by the analyst. For the same man-made system, an observer may see a different purpose than the designer and may also focus on different relevant properties. Thus, there may be multiple "correct" system models or specifications. To ensure consistency and enhance communication, a common specification is required that defines the:

. system boundary,

. inputs and outputs,

. components,

. structure,

. relevant interactions between components and the means by which the system retains its integrity (the behavior of the components and their effect on the overall system state), and

. purpose or goals of the system that makes it reasonable to consider it to be a coherent entity [8].

All of these properties need to be included in a complete system model or specification, along with a description of the aspects of the environment that can affect the system state. Most of these aspects are already included in our current specification languages. However, the last, information about purpose or intent, is often not.
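The properties listed above could be read as the fields of a record type. The sketch below is one possible rendering under that assumption; all names are hypothetical, and inputs/outputs are computed from the boundary rather than stored, since the text notes that the boundary implicitly defines them.

```python
# Minimal sketch of a "common specification": the properties a complete
# system model must define, plus the relevant environment.
from dataclasses import dataclass, field

@dataclass
class SystemModel:
    purpose: str                 # goals that make the system a coherent entity
    boundary: set                # names of components inside the system
    components: dict             # component name -> behavior description
    interactions: list           # (source, target, effect) triples
    environment: dict = field(default_factory=dict)  # external components affecting state

    def crosses_boundary(self, source, target):
        # Any flow with exactly one endpoint inside the system is an
        # input or output, per the implicit-boundary definition.
        return (source in self.boundary) != (target in self.boundary)

plant = SystemModel(
    purpose="maintain safe separation",
    boundary={"sensor", "controller"},
    components={"sensor": "measures altitude", "controller": "commands actuator"},
    interactions=[("sensor", "controller", "altitude reading")],
    environment={"atmosphere": "affects sensor accuracy"},
)
```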

One of the most important limitations of the models underlying most current specification languages, both formal and informal, is that they do not allow us to infer what is not explicitly represented in the model, including the intention of doing something a particular way. This intentional information is critical in the design and evolution of software. As Harman has said, practical reasoning is concerned with what to intend, while formal reasoning is concerned with what to believe [18].

Formal logic arguments are a priori true or false with reference to an explicitly defined model, whereas functional reasoning deals with relationships between models, and truth depends on correspondence with the state of affairs in the real world [18].

In the conclusions to our paper describing our experiences specifying the requirements for TCAS II (an aircraft collision avoidance system), we wrote:

In reverse engineering TCAS, we found it impossible to derive the requirements specification strictly from the pseudocode and an accompanying English language description. Although the basic information was all there, the intent was largely missing and often the mapping from goals or constraints to specific design decisions. Therefore, distinguishing between requirements and artifacts of the implementation was not possible in all cases. As has been discovered by most people attempting to maintain such systems, an audit trail of the decisions and the reasons why decisions were made is absolutely essential. This was not done by TCAS over the 15 years of its development, and those responsible for the system today are currently attempting to reconstruct decision-making information from old memos and corporate memory. For the most part, only one person is able to explain why some decisions were made or why things were designed in a particular way [26].

There is widespread agreement about the need for design rationale (intent) information in order to understand complex software or to correctly and efficiently change or analyze the impact of changes to it. Without a record of intent, important decisions can be undone during maintenance: Many serious accidents and losses can be traced to the fact that a system did not operate as intended because of changes that were not fully coordinated or fully analyzed to determine their effects [24]. What is not so clear is the content and structure of the information that is needed.

Simply keeping an audit trail of decisions and the reasons behind them as they are made is not practical. The number of decisions made in any large project is enormous. Even if it were possible to write them all down, finding the proper information when needed seems to be a hopeless task if not structured appropriately. What is needed is a specification of the intent (goals, constraints, and design rationale) from the beginning, and it must be specified in a usable and perceptually salient manner. That is, we need a framework within which to select and specify the design decisions that are needed to develop and maintain software.
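The contrast between a flat audit trail and intent structured for retrieval can be sketched as follows; the class and the example strings are hypothetical, not from the paper. Indexing decisions under the goal or constraint they serve means rationale is found by intent rather than by searching a linear log.

```python
# Illustrative only: decisions recorded under the goal or constraint they
# serve, so rationale can be retrieved by intent.
from collections import defaultdict

class IntentRecord:
    def __init__(self):
        self._by_intent = defaultdict(list)  # goal/constraint -> (decision, rationale)

    def record(self, intent, decision, rationale):
        self._by_intent[intent].append((decision, rationale))

    def why(self, intent):
        """Retrieve the decisions made to serve a goal, with their rationale."""
        return self._by_intent[intent]

spec = IntentRecord()
spec.record("C1: avoid unnecessary advisories",
            "suppress alerts below a threshold altitude",
            "nuisance alerts near airports would erode operator trust")
```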

3.3 Structure

The third aspect of specifications, structure, is the basis for organizing information in the specification. The information may all be included somewhere, but it may be hard to find or to determine its relationship to information specified elsewhere.

Problem solving in technological systems takes place within the context of a complex causal network of relationships [12], [34], [35], [46], and those relationships need to be reflected in the specification. The information needed to solve a problem may all be included somewhere in the assorted documentation used in large projects, but it may be hard to find when needed or to determine its relationship to information specified elsewhere. Psychological experiments in problem solving find that people attend primarily to perceptually salient information [20]. The goal of specification language design should be to make it easy for users to extract and focus on the important information for the specific task at hand, which includes all potential tasks related to use of the specification.

Cognitive engineers speak of this problem as "information pickup" [48]. Just because the information is in the interface does not mean that the operator can find it easily. The same is true for specifications. The problem of information pickup is compounded by the fact that there is so much information in system and software specifications, while only a small subset of it may be relevant in any given context.

3.3.1 Complexity

The problems in building and interacting with systems correctly are rooted in complexity and intellectual manageability. A basic and often noted principle of engineering is to keep things simple. This principle, of course, is easier to state than to do. Ashby's Law of Requisite Variety [2] tells us that there is a limit to how simple we can make control systems, including those designs represented in software, and still have them be effective. In addition, basic human ability is not changing. If humans want to build and operate increasingly complex systems, we need to increase what is intellectually manageable. That is, we will need to find ways to augment human ability.

The situation is not hopeless. As Rasmussen observes, the complexity of a system is not an objective feature of the system [33]. Observed complexity depends upon the level of resolution at which the system is being considered. A simple object becomes complex if observed through a microscope. Complexity, therefore, can only be defined with reference to a particular representation of a system, and then can only be measured relative to other systems observed at the same level of abstraction.

Thus, a way to cope with complex systems is to structure the situation such that the observer can transfer the problem being solved to a level of abstraction with less resolution. The complexity faced by the builders or users of a system is determined by their mental models (representations) of the internal state of the system. We build such mental models and update them based on what we observe about the system, that is, by means of our interface to the system. Therefore, the apparent complexity of a system ultimately depends upon the technology of the interface system [33].

The solution to the complexity problem is to take advantage of the most powerful resources people have for dealing with complexity. Newman has noted:

People don't mind dealing with complexity if they have some way of controlling or handling it... If a person is allowed to structure a complex situation according to his perceptual and conceptual needs, sheer complexity is no bar to effective performance [29], [33].

Thus, complexity itself is not a problem if humans are presented with meaningful information in a coherent, structured context.

3.3.2 Hierarchy Theory

Two ways humans cope with complexity are to use top-down reasoning and stratified hierarchies. Building systems bottom-up works for relatively simple systems. But, as the number of cases and objects that must be considered increases, this approach becomes unworkable: we go beyond the limits of human memory and logical ability to cope with the complexity. Top-down reasoning is a way of managing that complexity. At the same time, we have found that pure top-down reasoning alone is not adequate; humans need to combine top-down with bottom-up reasoning. Thus, the structure of the information must allow reasoning in both directions.

In addition, humans cope with complexity by building stratified hierarchies. Models of complex systems can be expressed in terms of a hierarchy of levels of organization, each more complex than the one below, where a level is characterized by having emergent properties. The concept of emergence is the idea that, at any given level of complexity, some properties characteristic of that level (emergent at that level) are irreducible. Such properties do not exist at lower levels in the sense that they are meaningless in the language appropriate to those levels. For example, the shape of an apple, although eventually explainable in terms of the cells of the apple, has no meaning at that lower level of description.

Regulatory or control action involves imposing constraints upon the activity at one level of a hierarchy. Those constraints define the "laws of behavior" at that level that yield activity meaningful at a higher level (emergent behavior). Hierarchies are characterized by control processes operating at the interfaces between levels. Checkland explains it:

Any description of a control process entails an upper level imposing constraints upon the lower. The upper level is a source of an alternative (simpler) description of the lower level in terms of specific functions that are emergent as a result of the imposition of constraints [8, p. 87].

Hierarchy theory deals with the fundamental differences between one level of complexity and another. Its ultimate aim is to explain the relationships between different levels: what generates the levels, what separates them, and what links them. Emergent properties associated with a set of components at one level in a hierarchy are related to constraints upon the degree of freedom of those components. In the context of this paper, it is important to note that describing the emergent properties resulting from the imposition of constraints requires a language at a higher level (a metalevel) different than that describing the components themselves. Thus, different description languages are required at each hierarchical level.

The problem then comes down to determining appropriate types of hierarchical abstraction that allow both top-down and bottom-up reasoning. In computer science, we have made much use of part-whole abstractions, where each level of a hierarchy represents an aggregation of the components at a lower level, and of information-hiding abstractions, where each level contains the same conceptual information but hides some details about the concepts, that is, each level is a refinement of the information at a higher level. Each level of our software specifications can be thought of as providing what information, while the next lower level describes how.

Such hierarchies, however, do not provide information about why. Higher-level emergent information about purpose or intent cannot be inferred from what we normally include in such specifications. Design errors may result when we either guess incorrectly about higher-level intent or omit it from our decision-making process. For example, while specifying the system requirements for TCAS II [26], we learned from experts that crossing maneuvers are avoided in the design for safety reasons. The analysis on which this decision is based comes partly from experience during TCAS system testing on real aircraft and partly as a result of an extensive safety analysis performed on the system. This design constraint would not be apparent in most design or code specifications unless it were added in the form of comments, and it could easily be violated during system modification unless it was recorded and easily located.
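The kind of intent record this example calls for can be sketched in a few lines. The structure and names below are illustrative, not part of the TCAS specification; the point is only that the "why" travels with the design element it constrains:

```python
from dataclasses import dataclass, field

@dataclass
class DesignDecision:
    """A design element together with the intent ('why') behind it."""
    element: str
    rationale: list[str] = field(default_factory=list)

# The crossing-maneuver example: the constraint plus the analyses behind it.
crossing = DesignDecision(
    element="Advisory selection avoids crossing maneuvers",
    rationale=[
        "safety analysis identified crossing maneuvers as hazardous",
        "experience during TCAS system testing on real aircraft",
    ],
)

def why(decision: DesignDecision) -> list[str]:
    """Answer the question a plain what/how specification cannot."""
    return decision.rationale

print(why(crossing))
```

A maintainer contemplating a modification can then locate the rationale before violating it, rather than reconstructing it from scattered comments.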

But, there are abstractions that can be used in stratified hierarchies other than part-whole abstraction. While investigating the design of safe human-machine interaction,



Rasmussen studied protocols recorded by people working on complex systems (process plant operators and computer maintainers) and found that they structured the system along two dimensions: 1) a part-whole abstraction, in which the system is viewed as a group of related components at several levels of physical aggregation, and 2) a means-ends abstraction [34].

3.3.3 Means-Ends Hierarchies

In a means-ends abstraction, each level represents a different model of the same system. At any point in the hierarchy, the information at one level acts as the goals (the ends) with respect to the model at the next lower level (the means). Thus, in a means-ends abstraction, the current level specifies what, the level below how, and the level above why [34]. In essence, this intent information is emergent in the sense of system theory:

When moving from one level to the next higher level, the change in system properties represented is not merely removal of details of information on the physical or material properties. More fundamentally, information is added on higher-level principles governing the coordination of the various functions or elements at the lower level. In man-made systems, these higher-level principles are naturally derived from the purpose of the system, i.e., from the reasons for the configurations at the level considered [34].

A change of level involves both a shift in concepts and in the representation structure, as well as a change in the information suitable to characterize the state of the function or operation at the various levels [34].

Each level in a means-ends hierarchy describes the system in terms of a different set of attributes or "language." Models at the lower levels are related to a specific physical implementation that can serve several purposes, while those at higher levels are related to a specific purpose that can be realized by several physical implementations. Changes in goals will propagate downward through the levels, while changes in the physical resources (such as faults or failures) will propagate upward. In other words, states can only be described as errors or faults with reference to their intended functional purpose. Thus, reasons for proper function are derived "top-down." In contrast, causes of improper function depend upon changes in the physical world (i.e., the implementation) and, thus, they are explained "bottom-up" [46].

Mappings between levels are many-to-many: Components of the lower levels can serve several purposes, while purposes at a higher level may be realized using several components of the lower-level model. These goal-oriented links between levels can be followed in either direction, reflecting either the means by which a function or goal can be accomplished (a link to the level below) or the goals or functions an object can affect (a link to the level above). So, the means-ends hierarchy can be traversed in either a top-down (from ends to means) or a bottom-up (from means to ends) direction.
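These bidirectional, many-to-many links can be sketched as a pair of mappings: recording only the top-down (ends-to-means) links and inverting them yields the bottom-up direction. The goal and component names below are illustrative:

```python
# Ends -> means links ("how is this goal accomplished?").
means_of = {
    "detect threats": ["surveillance logic", "traffic display"],
    "resolve threats": ["advisory logic", "traffic display"],
}

# Invert the links to obtain the means -> ends direction ("why is this here?").
ends_of = {}
for goal, components in means_of.items():
    for component in components:
        ends_of.setdefault(component, []).append(goal)

def how(goal):
    """Top-down traversal: from ends to means."""
    return means_of.get(goal, [])

def why(component):
    """Bottom-up traversal: from means to ends."""
    return ends_of.get(component, [])

print(how("detect threats"))   # ['surveillance logic', 'traffic display']
print(why("traffic display"))  # ['detect threats', 'resolve threats']
```

Note that the traffic display serves two purposes, illustrating the many-to-many character of the mapping.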

As stated earlier, our representations of problems have an important effect on our problem-solving ability and the strategies we use, and there is good reason to believe that representing the problem space as a means-ends mapping provides useful context and support for decision making and problem solving. Consideration of purpose or reason (top-down analysis in a means-ends hierarchy) has been shown to play a major role in understanding the operation of complex systems [33].

Rubin's analysis of his attempts to understand the function of a camera's shutter (as cited in [35]) provides an example of the role of intent or purpose in understanding a system. Rubin describes his mental efforts in terms of conceiving all the elements of the shutter in terms of their function in the whole rather than explaining how the individual parts worked: How they worked was immediately clear when their function was known. Rasmussen argues that this approach has the advantage that solutions of subproblems are identifiable with respect to their place in the whole picture and it is immediately possible to judge whether a solution is correct or not. In contrast, arguing from the parts to the way they work is much more difficult because it requires synthesis: Solutions of subproblems must be remembered in isolation and their correctness is not immediately apparent.

Support for this argument can be found in the difficulties AI researchers have encountered when modeling the function of mechanical devices "bottom-up" from the function of the components. DeKleer and Brown found that determining the function of an electric buzzer solely from the structure and behavior of the parts requires complex reasoning [10]. Rasmussen suggests that the resulting inference process is very artificial compared to the top-down inference process guided by functional considerations as described by Rubin.

In the DeKleer-Brown model, it will be difficult to see the woods for the trees, while Rubin's description appears to be guided by a bird's-eye perspective [35].

Glaser and Chi suggest that experts and successful problem solvers tend to focus first on analyzing the functional structure of the problem at a high level of abstraction and then narrow their search for a solution by focusing on more concrete details [16]. Representations that constrain search in a way that is explicitly related to the purpose or intent for which the system is designed have been shown to be more effective than those that do not because they facilitate the type of goal-directed behavior exhibited by experts [44]. Therefore, we should be able to improve the problem solving required in software development and evolution tasks by providing a representation (i.e., specification) of the system that facilitates goal-oriented search by making explicit the goals related to each component.

Viewing a system from a high level of abstraction is not limited to a means-ends hierarchy, of course. Most hierarchies allow one to observe systems at a less detailed level. The difference is that the means-ends hierarchy is explicitly goal oriented and, thus, assists goal-oriented problem solving. With other hierarchies (such as the part-whole hierarchies often used in computer science), the links between levels are not necessarily related to goals. So, although it is possible to use higher levels of abstraction to select a subsystem of interest and to constrain search, the subtree of the hierarchy connected to a particular subsystem does not necessarily contain system components relevant to the goals the problem solver is considering.

20 IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, VOL. 26, NO. 1, JANUARY 2000


3.4 Form (Notation)

The final aspect of specification design is the actual form of the specification. Although this is often where we start when designing languages, the four aspects actually have to be examined in order: first defining the process to be supported, then determining what the content should be, then how the content will be structured to make the information easily located and used, and, finally, the form the language should take. All four aspects need to be addressed, not only in terms of the analysis to be performed on the specification, but also with respect to human perceptual and cognitive capabilities.

Note that the form itself must also be considered from a psychological standpoint: The usability of the language will depend on human perceptual and cognitive strategies. For example, Fitter and Green describe the attributes of a good notation with respect to human perception and understanding [15]. Casner [7] and others have argued that the utility of any information presentation is a function of the task that the presentation is being used to support. For example, a symbolic representation might be better than a graphic for a particular task, but worse for others.

No particular specification language is being proposed here. We first must clarify what needs to be expressed before we can design languages that express that information appropriately and effectively. In addition, different types of systems require different types of languages. All specifications are abstractions: they leave out unimportant details. What is important will depend on the problem being solved. For different types of systems, the important and difficult aspects differ. For example, specifications for embedded controllers may emphasize control flow over data flow (which is relatively trivial for these systems), while data transformation or information management systems might place more emphasis on the specification of data flow than control flow. Attempts to include everything in the specification are not only impractical, but involve wasted effort and are unlikely to fit the budgets and schedules of industry projects. Because of the importance of completeness, as argued earlier, determining exactly what needs to be included becomes the most important problem in specification design.

This paper deals with process, content, and structure, but not form (notation). We are defining specification languages built upon the foundation laid in this paper and on other psychological principles, but they will be described in future papers.

4 INTENT SPECIFICATIONS

These basic ideas provide the foundation for what can be called intent specifications. They have been developed and successfully used in cognitive engineering by Vicente and Dinadis for the design of operator interfaces, a process they call ecological interface design [11], [43].

The exact number and content of the means-ends hierarchy levels may differ from domain to domain. Here, a structure is presented for process systems with shared software and human control. In order to determine the feasibility and scalability of this approach when specifying a complex system, we extended the formal TCAS II aircraft collision avoidance system requirements specification previously written [26] to include intent information and other information that cannot be expressed formally, but is needed in a complete system requirements specification. We are currently applying the approach to other examples, including a NASA robot and part of the U.S. Air Traffic Control System. The TCAS II specification is used as an example in this paper.2 The table of contents for the example TCAS II System Requirements Specification (shown in Fig. 3) may be helpful in following the description of intent specifications. Note that the only part of TCAS that we specified previously is Section 3.4 and parts of Section 3.3.


Fig. 2. The structure of an intent specification for software systems.

2. Our TCAS II Intent Specification (complete system specification) is over 800 pages long. Obviously, the entire specification cannot be included in this paper. It can be accessed from http://sunnyday.mit.edu.


In the intent specifications we have built for real systems, we have found the approach to be practical; in fact, most of the information in an intent specification is already located somewhere in the often voluminous documentation for large systems. The problem in these systems usually lies in finding specific information when it is needed, in tracing the relationships between information, and in understanding the system design and why it was designed that way. Intent specifications are meant to assist with these tasks.

System and software specifications of the type being proposed (see Fig. 2) are organized along a vertical dimension using intent abstraction and two horizontal dimensions using two types of part-whole abstraction (refinement and decomposition). These three dimensions constitute the problem space in which the human navigates. The horizontal refinement and decomposition dimensions allow users to change their focus of attention to more or less detailed views within each intent level or model. The vertical dimension (based on intent abstraction) specifies the level of intent at which the problem is being considered, i.e., the language or model that is currently being used.
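One way to picture this three-dimensional problem space is as a lookup keyed by the three coordinates: intent level (vertical), part (decomposition), and refinement depth (horizontal). The entries and the depth-based refinement scheme below are a simplified, illustrative sketch:

```python
# Each entry in the problem space is addressed by
# (intent level, decomposition part, refinement depth).
spec = {
    (1, "system", 0): "G1: provide affordable collision avoidance options",
    (1, "system", 1): "R1: protection up to 1,200 knots closing speed",
    (2, "system", 0): "design principles realizing R1",
}

def refine(level, part, depth):
    """Horizontal move: a more detailed view at the same intent level."""
    return spec.get((level, part, depth + 1))

def descend(level, part):
    """Vertical move: the model at the intent level below ('how')."""
    return spec.get((level + 1, part, 0))

print(refine(1, "system", 0))  # refinement of G1 at the same intent level
print(descend(1, "system"))    # the Level 2 model that realizes Level 1
```

The two functions correspond to the two kinds of navigation the text describes: changing focus within a level versus changing the model (and language) being used.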

4.1 Part-Whole Dimension

Computer science commonly uses two types of part-whole abstractions. Parallel decomposition (or its opposite, aggregation) separates units into (perhaps interacting) components of the same type. In Statecharts, for example, these components are called orthogonal components and the process of aggregation results in an orthogonal product. Each of the pieces of the parallel decomposition of Statecharts is a state machine, although each state machine will in general be different.

The second type of part-whole abstraction, refinement, takes a function and breaks it down into more detailed steps. An example is the combining of a set of states into a superstate in Statecharts. In Petri nets, such abstractions have been applied both to states and to transitions; they provide a higher-level name for a piece of the net. In programming, refinement abstractions are represented by procedures or subprograms.

Note that neither of these types of abstraction is an "emergent-property" or means-ends abstraction: the whole is simply broken up into a more detailed description. Additional information, such as intent, is not provided at the higher level.

Along these horizontal dimensions, intent specifications are broken up into four parts. The first part (the first column in Fig. 2) contains information about characteristics of the environment that affect the ability to achieve the system goals and design constraints. For example, in TCAS, the designers need information about the operation of the ground-based ATC system in order to fulfill the system-level constraint of not interfering with it. Information about the environment is also needed for some types of hazard analysis and for normal system design. For example, the design of the surveillance logic in TCAS depends on the characteristics of the transponders carried on the aircraft with which the surveillance logic interacts.

The second column of the horizontal dimension is information about human operators or users. Too often, human factors design and software design are done independently. Many accidents and incidents in aircraft with advanced automation have been blamed on human error that has been induced by the design of the automation. For example, Wiener introduced the term clumsy automation to describe automation that places additional and unevenly distributed workload, communication, and coordination demands on pilots without adequate support [47]. Sarter et al. [37] describe additional problems associated with new attentional and knowledge demands and breakdowns in mode awareness and "automation surprises," which they attribute to technology-centered automation: Too often, the designers of the automation focus exclusively on technical aspects, such as the mapping from software inputs to outputs, on mathematical models of requirements functionality, and on the technical details and problems internal to the computer; they do not devote enough attention to the cognitive and other demands of the automation design on the operator.

One goal of intent specifications is to integrate the information needed to design "human-centered automation" into the system requirements specification. We are also working on analysis techniques to identify problematic system and software design features in order to predict where human errors are likely to occur [27]. This information can be used both in the automation design and in the design of the operator procedures, tasks, interface, and training.

The third part of the horizontal dimension is the system itself and its decomposition into subsystems or components. Finally, each level also includes information about the verification and validation activities and results appropriate for that specification level.

4.2 Intent Dimension

The intent (vertical) dimension has five hierarchical levels, each providing intent ("why") information about the level below. Each level is mapped to the appropriate parts of the intent levels above and below it, providing traceability of high-level system requirements and constraints down to code (or physical form) and vice versa.

Each level also supports a different type of reasoning about the system, with the highest level assisting systems engineers in their reasoning about system-level goals, constraints, priorities, and trade-offs. The second level, System Design Principles, allows engineers to reason about the system in terms of the physical principles and laws upon which the design is based. The Blackbox Behavior level enhances reasoning about the logical design of the system as a whole and the interactions between the components, as well as the functional state, without being distracted by implementation issues. The lowest two levels provide the information necessary to reason about individual component design and implementation issues. The mappings between levels provide the relational information that allows reasoning across hierarchical levels.
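As a rough sketch of the traceability these mappings provide, consider following a Level 1 requirement down toward its implementation. The level names for the lowest two levels and all item identifiers below are hypothetical, chosen only to illustrate the chain:

```python
# The five intent levels, top to bottom (lowest two names are paraphrases).
LEVELS = [
    "System Purpose",
    "System Design Principles",
    "Blackbox Behavior",
    "Component Design",
    "Implementation",
]

# Each item maps to the item(s) at the next level down that realize it.
down = {
    "R1": ["DP-4"],
    "DP-4": ["BB-12"],
    "BB-12": ["D-3"],
    "D-3": ["code:cas_logic"],
}

def trace_down(item):
    """Follow the inter-level mappings from a requirement to code."""
    chain = [item]
    while chain[-1] in down:
        chain.append(down[chain[-1]][0])  # follow the first link at each step
    return chain

print(trace_down("R1"))  # ['R1', 'DP-4', 'BB-12', 'D-3', 'code:cas_logic']
```

Storing the inverse mapping as well would give the bottom-up ("why does this code exist?") direction the text requires.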

4.2.1 System Purpose

Along the vertical dimension, the highest specification level, System Purpose, contains:




Fig. 3. The contents of the sample TCAS Intent Specification.


. system goals,
. design constraints,
. assumptions,
. limitations,
. design evaluation criteria and priorities, and
. results of analyses for system-level qualities.

Examples of high-level goals (purpose) for TCAS II are to:

G1. Provide affordable and compatible collision avoidance system options for a broad spectrum of National Airspace System users.

G2. Detect potential midair collisions with other aircraft in all meteorological conditions.

Usually, in the early stages of a project, goals are stated in very general terms. One of the first steps in defining system requirements is to refine the goals into testable and achievable high-level requirements. For G1 above, a refined subgoal is:

R1. Provide collision avoidance protection for any two aircraft closing horizontally at any rate up to 1,200 knots and vertically up to 10,000 feet per minute.

This type of refinement and reasoning is done at the System Purpose level, using an appropriate specification language (most likely English).

Requirements (and constraints) are also included for the human operator, for the human-computer interface, and for the environment in which TCAS will operate. Requirements on the operator (in this case, the pilot) are used to guide the design of the TCAS-pilot interface, flightcrew tasks and procedures, aircraft flight manuals, and training plans and program. Links are provided to show the relationships. Example TCAS II operator requirements are:

O1. After the threat is resolved, the pilot shall return promptly and smoothly to his/her previously assigned flight path.

O2. The pilot must not maneuver on the basis of a Traffic Advisory only.

Design constraints are restrictions on how the system can achieve its purpose. For example, TCAS is not allowed to interfere with the ground-level air traffic control system while it is trying to maintain adequate separation between aircraft. Avoiding interference is not a goal or purpose of TCAS; the best way to achieve it is not to build the system at all. It is, instead, a constraint on how the system can achieve its purpose, i.e., a constraint on the potential system designs. Because of the need to evaluate and clarify trade-offs among alternative designs, separating these two types of intent information (goals and design constraints) is important.

For safety-critical systems, constraints should be further separated into normal and safety-related. Examples of nonsafety constraints for TCAS II are:

C1. The system must use the transponders routinely carried by aircraft for ground ATC purposes.

C2. No deviations from current FAA policies and philosophies must be required.

Safety-related constraints should have two-way links to the system hazard log and, perhaps, links to any analysis results that led to that constraint being identified. Hazard analyses specified on this level are linked to Level 1 requirements and constraints on this level, to design features on Level 2, and to system limitations (or accepted risks). Example safety constraints are:

SC1. The system must generate advisories that require as little deviation as possible from ATC clearances.

SC2. The system must not disrupt the pilot and ATC operations during critical phases of flight.
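The two-way linking described above might be kept as a single relation queried in either direction. The hazard-log identifiers below are illustrative (pointers of this kind, into fault-tree boxes, appear later in the limitations examples):

```python
# Two-way links between safety constraints and hazard-log entries.
links = {("SC1", "H-1"), ("SC2", "H-1"), ("SC2", "H-2")}

def hazard_entries(constraint):
    """Constraint -> hazard log: why does this constraint exist?"""
    return sorted(h for c, h in links if c == constraint)

def constraints_for(hazard_entry):
    """Hazard log -> constraints: what guards against this hazard?"""
    return sorted(c for c, h in links if h == hazard_entry)

print(hazard_entries("SC2"))    # ['H-1', 'H-2']
print(constraints_for("H-1"))   # ['SC1', 'SC2']
```

Because the relation is symmetric in its storage, a change on either side (a constraint modified, a hazard re-analyzed) can be traced to everything it affects.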

Note that refinement occurs at the same level of the intent specification (see Fig. 2). For example, safety constraint SC3 can be refined:

SC3. The system must not interfere with the ground ATC system or other aircraft transmissions to the ground ATC system.

SC3.1. The system design must limit interference with ground-based secondary surveillance radar, distance-measuring equipment channels, and with other radio services that operate in the 1030/1090 MHz frequency band.

SC3.1.1. The design of the Mode S waveforms used by TCAS must provide compatibility with Modes A and C of the ground-based secondary surveillance radar system.

SC3.1.2. The frequency spectrum of Mode S transmissions must be controlled to protect adjacent distance-measuring equipment channels.

SC3.1.3. The design must ensure electromagnetic compatibility between TCAS and...

SC3.2. Multiple TCAS units within detection range of one another (approximately 30 nmi) must be designed to limit their own transmissions. As the number of such TCAS units within this region increases, the interrogation rate and power allocation for each of them must decrease in order to prevent undesired interference with ATC.

Environment requirements and constraints may lead to restrictions on the use of the system or to the need for system safety and other analyses to determine that the requirements hold for the larger system in which the system being designed is to be used. Examples for TCAS include:

E1. Among the aircraft environmental alerts, the hierarchy shall be: Windshear has first priority, then the Ground Proximity Warning System (GPWS), then TCAS.

E2. The behavior or interaction of non-TCAS equipment with TCAS must not degrade the performance of the TCAS equipment or the performance of the equipment with which TCAS interacts.

E3. The TCAS alerts and advisories must be independent of those using the master caution and warning system.

Assumptions are specified, when appropriate, at all levels of the intent specification to explain a decision or to record fundamental information on which the design is based.



These assumptions are often used in the safety or other analyses or in making lower-level design decisions. For example, operational safety depends on the accuracy of the assumptions and models underlying the design and hazard analysis processes. The operational system should be monitored to ensure:

1. that it is constructed, operated, and maintained in the manner assumed by the designers,

2. that the models and assumptions used during initial decision making and design were correct, and

3. that the models and assumptions are not violated by changes in the system, such as workarounds or unauthorized changes in procedures, or by changes in the environment [24].
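These monitoring obligations can be made concrete as simple checks of operational data against the specified assumptions. The sketch below is hypothetical; its thresholds are taken from two of the example Level 1 assumptions given later in this section (R3's traffic density and EA4's altitude precision):

```python
ASSUMED_MAX_DENSITY = 0.3        # aircraft per square nautical mile (R3)
ASSUMED_ALT_PRECISION_FT = 100   # minimum altitude precision in feet (EA4)

def violated_assumptions(observed_density, observed_alt_precision_ft):
    """Compare operational observations against the specified assumptions."""
    violated = []
    if observed_density > ASSUMED_MAX_DENSITY:
        violated.append("R3 traffic-density assumption")
    if observed_alt_precision_ft > ASSUMED_ALT_PRECISION_FT:
        violated.append("EA4 altitude-precision assumption")
    return violated  # a nonempty result should trigger re-analysis

print(violated_assumptions(0.25, 100))  # [] -- within the assumed envelope
print(violated_assumptions(0.35, 250))  # both assumptions violated
```

Linking such monitors to the assumptions they check (and, through them, to the hazard analysis) is what makes the re-analysis trigger traceable.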

Operational feedback on trends, incidents, and accidents should trigger re-analysis when appropriate. Linking the assumptions throughout the document with the hazard analysis (for example, to particular boxes in the system fault trees) will assist in performing safety maintenance activities.

Examples of assumptions associated with requirements on the first level of the TCAS intent specification:

R1. Provide collision avoidance protection for any two aircraft closing horizontally at any rate up to 1,200 knots and vertically up to 10,000 feet per minute.

Assumption. This requirement is derived from the assumption that commercial aircraft can operate up to 600 knots and 5,000 fpm during vertical climb or controlled descent (and, therefore, two planes can close horizontally up to 1,200 knots and vertically up to 10,000 fpm).
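The arithmetic behind this assumption is the worst case of two aircraft closing head-on, one climbing while the other descends, each at the assumed per-aircraft maxima:

```python
MAX_SPEED_KT = 600             # assumed maximum aircraft speed (knots)
MAX_VERTICAL_RATE_FPM = 5_000  # assumed maximum climb/descent rate (fpm)

closing_horizontal_kt = 2 * MAX_SPEED_KT          # head-on closure, as in R1
closing_vertical_fpm = 2 * MAX_VERTICAL_RATE_FPM  # opposing vertical rates

print(closing_horizontal_kt, closing_vertical_fpm)  # 1200 10000
```

Recording the derivation with the requirement means that, if the per-aircraft assumption changes, R1's limits can be re-derived rather than rediscovered.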

R3. TCAS shall operate in enroute and terminal areas with traffic densities up to 0.3 aircraft per square nautical mile (i.e., 24 aircraft within 5 nmi).

Assumption. Traffic density may increase to this level by 1990, and this will be the maximum density over the next 20 years.

An example of an assumption associated with a safety constraint is:

SC5. The system must not disrupt the pilot and ATC operations during critical phases of flight nor disrupt aircraft operation.

SC5.1. The pilot of a TCAS-equipped aircraft must have the option to switch to the Traffic-Advisory-Only mode, where TAs are displayed but display of resolution advisories is inhibited.

Assumption. This feature will be used during final approach to parallel runways, when two aircraft are projected to come close to each other and TCAS would call for an evasive maneuver.

Assumptions may also apply to features of the environment. Examples of environment assumptions for TCAS are that:

EA1. All aircraft have legal identification numbers.

EA2. All aircraft carry transponders.

EA3. The TCAS-equipped aircraft carries a Mode-S air traffic control transponder whose replies include encoded altitude when appropriately interrogated.

EA4. Altitude information is available from intruding targets with a minimum precision of 100 feet.

EA5. Threat aircraft will not make an abrupt maneuver that thwarts the TCAS escape maneuver.

System limitations are also specified at Level 1 of an intent specification. Some may be related to the basic functional requirements, such as:

L1. TCAS does not currently indicate horizontal escape maneuvers and, therefore, does not (and is not intended to) increase horizontal separation.

Limitations may also relate to environment assumptions. For example, system limitations related to the environment assumptions above include:

L2. TCAS provides no protection against aircraft with nonoperational transponders.

L3. Aircraft performance limitations constrain the magnitude of the escape maneuver that the flight crew can safely execute in response to a resolution advisory. It is possible for these limitations to preclude a successful resolution of the conflict.

L4. TCAS is dependent on the accuracy of the threat aircraft's reported altitude. Separation assurance may be degraded by errors in intruder pressure altitude as reported by the transponder of the intruder aircraft.

Assumption: This limitation holds for existing airspace, where many aircraft use pressure altimeters rather than GPS. As more aircraft install GPS systems with greater accuracy than current pressure altimeters, this limitation will be reduced or eliminated.

Limitations are often associated with hazards or hazard causal factors that could not be completely eliminated or controlled in the design. Thus, they represent accepted risks. For example:

L5. TCAS will not issue an advisory if it is turned on or enabled to issue resolution advisories in the middle of a conflict (→ FTA-405).³

L6. If only one of two aircraft is TCAS equipped while the other has only ATCRBS altitude-reporting capability, the assurance of safe separation may be reduced (→ FTA-290).

In our TCAS intent specification, both of these system limitations have pointers to boxes in the fault tree generated during the hazard analysis of TCAS II.

Finally, limitations may be related to problems encountered or trade-offs made during the system design process

LEVESON: INTENT SPECIFICATIONS: AN APPROACH TO BUILDING HUMAN-CENTERED SPECIFICATIONS 25

3. The pointer to FTA-405 denotes the box labeled 405 in the Level-1 fault tree analysis.


(recorded on lower levels of the intent specification). For example, TCAS has a Level 1 performance monitoring requirement that led to the inclusion of a self-test function in the system design to determine whether TCAS is operating correctly. The following system limitation relates to this self-test facility:

L7. Use by the pilot of the self-test function in flight will inhibit TCAS operation for up to 20 seconds, depending upon the number of targets being tracked. The ATC transponder will not function during some portion of the self-test sequence.

Most of these system limitations will be traced down in the intent specification levels to the user documentation. In the case of an avionics system like TCAS, this specification includes the Pilot Operations (Flight) Manual on Level 4 of our TCAS intent specification. An example is shown in Section 4.2.2.

Evaluation criteria and priorities are used to resolve conflicts among goals and design constraints and to guide design choices at lower levels. This information has not been included in the TCAS example specification as I was unable to find out how these decisions were made during the TCAS design process.

Finally, Level 1 contains the analysis results for system-level (emergent) properties such as safety or security. For the TCAS specification, a hazard analysis (including fault tree analysis and failure modes and effects analysis) was performed and is included and linked to the safety-critical design constraints on this level and to lower-level design decisions based on the hazard analysis. Whenever changes are made in safety-critical systems or software (during development or during maintenance and evolution), the safety of the change needs to be evaluated. This process can be difficult and expensive. By providing links throughout the levels of the intent specification, it should be easy to assess whether a particular design decision or piece of code was based on the original safety analysis or a safety-related design constraint.
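The cross-level links just described can be sketched as a small data structure. This is an illustrative sketch only, assuming a simple in-memory index; the item names and the safety-labeling rule (identifiers beginning with SC or FTA) are hypothetical conventions, not part of the paper's tooling:

```python
from dataclasses import dataclass, field

@dataclass
class SpecItem:
    ident: str                                  # e.g., "SC5.1", "PR36.2", "FTA-405"
    level: int                                  # 1 (system purpose) .. 5 (physical)
    up: list = field(default_factory=list)      # links to higher-level items
    down: list = field(default_factory=list)    # links to lower-level items

def link(parent: SpecItem, child: SpecItem) -> None:
    """Record a two-way intent link between a higher- and a lower-level item."""
    parent.down.append(child.ident)
    child.up.append(parent.ident)

def traces_to_safety(item: SpecItem, index: dict) -> bool:
    """Walk the upward links to see whether a design decision traces back
    to a Level 1 safety constraint or hazard-analysis result."""
    if item.level == 1:
        return item.ident.startswith(("SC", "FTA"))
    return any(traces_to_safety(index[i], index) for i in item.up)
```

Under this sketch, assessing whether a proposed change is safety-relevant reduces to walking the up links from the changed item.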

4.2.2 System Design Principles

The second level of the specification contains System Design Principles: the basic system design and scientific and engineering principles needed to achieve the behavior specified in the top level. The horizontal dimension again allows abstraction and refinement of the basic system principles upon which the design is predicated.

For TCAS, this level includes such general principles as the basic tau concept, which is related to all the high-level alerting goals and constraints:

PR1. Each TCAS-equipped aircraft is surrounded by a protected volume of airspace. The boundaries of this volume are shaped by the tau and DMOD criteria.

PR1.1. TAU: In collision avoidance, time-to-go to the closest point of approach (CPA) is more important than distance-to-go to the CPA. Tau is an approximation of the time in seconds to CPA. Tau equals 3,600 times the slant range in nmi, divided by the closing speed in knots.

PR1.2. DMOD: If the rate of closure is very low, a target could slip in very close without crossing the tau boundaries and triggering an advisory. In order to provide added protection against a possible maneuver or speed change by either aircraft, the tau boundaries are modified (called DMOD). DMOD varies depending on own aircraft's altitude regime. See Table 2.
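The tau formula in PR1.1 and the DMOD modification in PR1.2 can be illustrated directly. This is a sketch only: the threshold arguments are placeholders, since the actual tau thresholds and DMOD distances depend on sensitivity level and the altitude regime (Table 2):

```python
def tau_seconds(slant_range_nmi: float, closing_speed_kt: float) -> float:
    """Tau: approximate time in seconds to the closest point of approach
    (PR1.1): 3,600 times slant range in nmi, divided by closing speed in knots."""
    return 3600.0 * slant_range_nmi / closing_speed_kt

def inside_protected_volume(slant_range_nmi: float, closing_speed_kt: float,
                            tau_threshold_s: float, dmod_nmi: float) -> bool:
    """An intruder penetrates the protected volume if either the tau boundary
    or the DMOD distance boundary (PR1.2) is crossed.  Threshold values are
    illustrative placeholders, not values from the TCAS specification."""
    if slant_range_nmi <= dmod_nmi:      # DMOD: protection against slow closure
        return True
    if closing_speed_kt <= 0:            # diverging geometry: no tau alarm
        return False
    return tau_seconds(slant_range_nmi, closing_speed_kt) <= tau_threshold_s
```

Note how DMOD handles the PR1.2 case: an intruder closing at only a few knots never produces a small tau, so the range test alone triggers the advisory.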

The principles are linked to the related higher-level requirements, constraints, assumptions, limitations, and hazard analysis, as well as to lower-level system design and documentation. Assumptions used in the formulation of the design principles may also be specified at this level. For example, the TCAS design has a built-in bias against generating advisories that would result in the aircraft crossing paths (called altitude crossing advisories).

PR36.2. A bias against altitude crossing RAs is also used in situations involving intruder level-offs at least 600 feet above or below the TCAS aircraft. In such a situation, an altitude-crossing advisory is deferred if an intruder aircraft that is projected to cross own aircraft's altitude is more than 600 feet away vertically (↓ Alt_Separation_Test m-351).

Assumption: In most cases, the intruder will begin a level-off maneuver when it is more than 600 feet away and, so, should have a greatly reduced vertical rate by the time it is within 200 feet of its altitude clearance (thereby, either not requiring an RA if it levels off more than ZTHR⁴ feet away or requiring a noncrossing advisory for level-offs begun after ZTHR is crossed, but before the 600 foot threshold is reached).
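The deferral condition stated in PR36.2 amounts to a small predicate. The sketch below is a loose paraphrase with invented names; the authoritative logic is the Alt_Separation_Test macro on Level 3:

```python
CROSSING_BIAS_FT = 600  # vertical threshold stated in PR36.2

def defer_crossing_advisory(projected_to_cross_own_altitude: bool,
                            vertical_separation_ft: float) -> bool:
    """Defer an altitude-crossing RA while an altitude-crossing intruder is
    still more than 600 ft away vertically, on the assumption that it will
    usually begin its level-off before crossing (see the Assumption above)."""
    return projected_to_cross_own_altitude and vertical_separation_ft > CROSSING_BIAS_FT
```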

The example above includes a pointer down to the part of the blackbox requirements specification (Alt_Separation_Test) that embodies the design principle. As another example of the type of links that may be found between Level 2 and the levels above and below it, consider the following: TCAS II advisories may need to be inhibited because of inadequate climb performance of the particular aircraft on which TCAS II is installed. The collision avoidance maneuvers posted as advisories (called RAs or Resolution Advisories) by TCAS II assume an aircraft's ability to safely achieve them. If they are likely to be beyond the capability of the aircraft, then TCAS II must know beforehand so it can change its strategy and issue an alternative advisory. The performance characteristics are provided to TCAS II through the aircraft interface. An example design principle (related to this problem) found on Level 2 of the intent specification is:

PR39. Because of the limited number of inputs to TCAS for aircraft performance inhibits, in some instances where inhibiting RAs would be appropriate it is not possible to do so (↑ L3). In these cases, TCAS may command maneuvers that may significantly

26 IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, VOL. 26, NO. 1, JANUARY 2000

4. The vertical dimension, called ZTHR, used to determine whether advisories should be issued varies from 750 to 950 feet, depending on the TCAS aircraft's altitude.


reduce stall margins or result in stall warning (↑ SC9.1). Conditions where this may occur include... The aircraft flight manual or flight manual supplement should provide information concerning this aspect of TCAS so that flight crews may take appropriate action (↓ [Pilot procedures on Level 3 and Aircraft Flight Manual on Level 4]).

Finally, principles may reflect trade-offs between higher-level goals and constraints. As examples:

PR3. Trade-offs must be made between necessary protection (G1) and unnecessary advisories (SC5). This is accomplished by controlling the sensitivity level, which controls the tau and, therefore, the dimensions of the protected airspace around each TCAS-equipped aircraft. The greater the sensitivity level, the more protection is provided, but the higher is the incidence of unnecessary alerts. Sensitivity level is determined by...

PR38. The need to inhibit CLIMB RAs because of inadequate aircraft climb performance will increase the likelihood of TCAS II 1) issuing crossing maneuvers, which in turn increases the possibility that an RA may be thwarted by the intruder maneuvering (↑ SC7.1, FTA-1150), 2) causing an increase in DESCEND RAs at low altitude (↑ SC8.1), and 3) providing no RAs if below the descend inhibit level (1,200 feet above ground level on takeoff and 1,000 feet above ground level on approach).

4.2.3 Blackbox Behavior

Beginning at the third level, or Blackbox Behavior level, the specification starts to contain information more familiar to software engineers. Above this level, much of the information, if located anywhere, is found in system engineering specifications. The Blackbox Behavior model at the whole-system viewpoint specifies the system components and their interfaces, including the human components (operators). Fig. 4 shows a system-level view of TCAS II and its environment. Each system component behavioral description and each interface is refined in the normal way along the horizontal dimension.

The environment description includes the assumed behavior of the external components (such as the altimeters and transponders for TCAS), including, perhaps, failure


Fig. 4. System viewpoint showing the system interface topology for the Blackbox Behavior level of the TCAS specification.


behavior, upon which the correctness of the system design is predicated, along with a description of the interfaces between the TCAS system and its environment. Fig. 5 shows part of a state-machine description of an environment component, in this case an altimeter.

Remember that the boundaries of a system are purely an abstraction and can be set anywhere convenient for the purposes of the specifier. In this case, any component that was already on the aircraft or in the airspace control system and was not newly designed or built as part of the TCAS effort was included as environment.

Going along this level to the right, each arrow in Fig. 4 represents a communication and needs to be described in more detail. Each box (component) also needs to be refined. What is included in the decomposition of a component will depend on whether the component is part of the environment or part of the system being constructed. The language used to describe the components may also vary. Here, a state-machine language called SpecTRM-RL (Specification Tools and Requirements Methodology-Requirements Language), a successor to the language (RSML) used in our official TCAS II specification [36], was used. Fig. 6 shows part of the SpecTRM-RL description of the behavior of the CAS (collision avoidance system) subcomponent. SpecTRM-RL specifications are intended to be both easily readable with minimum instruction and formally analyzable (we have a set of analysis tools that work on these specifications).

Note that the behavioral descriptions at this level are purely blackbox: They describe the inputs and outputs of each component and their relationships only in terms of externally visible variables, objects, and mathematical functions. Any of these components (except the humans, of course) could be implemented either in hardware or software (and, in fact, some of the TCAS surveillance functions are implemented using analog devices by some vendors). Decisions about physical implementation, software design, internal variables, and so on are limited to levels of the specification below this one.

Other information at this level might include flight crew requirements, such as descriptions of tasks and operational procedures, interface requirements, and the testing requirements for the functionality described on this level. We have developed a visual operator task modeling language that can be translated to SpecTRM-RL and, thus, permits integrated simulation and analysis of the entire system, including human-computer interactions [5].

4.2.4 Design Representation

The two lowest levels of an intent specification provide the information necessary to reason about component design and implementation. The fourth level, Design Representation, contains design information. Its content will depend on whether the particular function is being implemented using analog or digital devices or both. In any case, this level is the first place where the specification should include information about the physical or logical implementation of the components.

For functions implemented on digital computers, the fourth level might contain the usual software design documents or it might contain information different from that normally specified. Again, this level is linked to the higher-level specification.

The design intent information may not all be completely linked and traceable upward to the levels above the Design Representation: for example, design decisions based on performance or other issues unrelated to requirements or constraints, such as the use of a particular graphics package because the programmers are familiar with it or it is easy to learn. Knowing that these decisions are not linked to higher-level purpose is important during software maintenance and evolution activities.

The fourth level of the example TCAS intent specification simply contains the official pseudocode design specification. But this level might contain information different from what we usually include in design specifications. For example, Soloway et al. [41] describe the problem of modifying code containing delocalized plans (plans or schemas with pieces spread throughout the software). They recommend using pointers to chain the pieces together, but a more effective approach might be to put the plan or schema at the higher design representation level and point to the localized pieces in the lower-level code or physical representation. The practicality of this approach, of course, needs to be determined.

Soloway et al. [41] also note that reviewers have difficulty reviewing and understanding code that has been optimized. To assist in code reviews and walkthroughs, the unoptimized code sections might be shown in the refinement of the Design Representation, along with mappings to the actual optimized code at the lower implementation level.

The possibilities for new types of information and representations at this level of the intent hierarchy are the subject of long-term research.


Fig. 5. Part of the SpecTRM-RL description of an environment component (a radio altimeter). Modeling failure behavior is especially important for safety analyses. In this example, 1) the altimeter may be operating correctly, 2) it may have failed in a way that the failure can be detected by TCAS II (i.e., it fails a self-test and sends a status message to TCAS or it is not sending any output at all), or 3) the malfunctioning is undetected and it sends an incorrect radio altitude.
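The three failure modes in the Fig. 5 caption can be sketched as a simple state set. This is only an illustration of the modeling idea (the real model is a SpecTRM-RL state machine, not Python), and the names are invented:

```python
from enum import Enum, auto

class AltimeterState(Enum):
    OPERATING = auto()          # correct radio altitude reported
    FAILED_DETECTED = auto()    # fails self-test or sends no output at all
    FAILED_UNDETECTED = auto()  # sends an incorrect radio altitude

def output_looks_valid(state: AltimeterState) -> bool:
    """From TCAS's point of view, only a detected failure is distinguishable:
    an undetected malfunction produces output that looks just as valid as
    correct operation, which is why it matters for the safety analysis."""
    return state is not AltimeterState.FAILED_DETECTED
```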


Other information at this level might include hardware design descriptions, the human-computer interface design specification, the pilot operations (flight) manual, and verification requirements for the requirements and design specified on this level.

4.2.5 Physical Representation

The lowest level includes a description of the physical implementation of the levels above. It might include the software itself, hardware assembly instructions, training requirements (plan), etc.

4.2.6 Example

To illustrate this approach to structuring specifications, a small example is used related to generating resolution advisories. TCAS selects a resolution advisory (vertical escape maneuver) against other aircraft that are considered a threat to the aircraft on which the TCAS system resides. A resolution advisory (RA) has both a sense (upward or downward) and a strength (vertical rate), and it can be positive (e.g., CLIMB) or negative (e.g., DON'T CLIMB). In the software to evaluate the sense to be chosen against a particular threat, there is a procedure to compute what is called a "Don't-Care-Test." The software itself (Level 5) would contain comments about implementation decisions and also a pointer up to the Level 4 design documentation and from there up to the Level 3 blackbox description of this test, shown in Fig. 7.

In turn, the blackbox (Level 3) description of the Don't-Care-Test would be linked to Level 2 explanations of the intent of the test and the rationale behind its design. For example, our Level 2 TCAS intent specification contains the following:

PR35. Don't-Care-Test: When TCAS is displaying an RA against one threat and then attempts to choose a sense against a second threat, it is often desirable to choose the same sense against the second threat as was chosen against the first, even if this sense is not optimal for the new threat. One advantage is display continuity (↑ SC6). Another advantage is that the pilot may maneuver more sharply to increase separation against both threats. If a dual-sense advisory is given, such as DON'T CLIMB AND DON'T DESCEND, a vertical maneuver to increase separation against one threat reduces separation against the other threat. The most important advantage, however, is to avoid sacrificing separation inappropriately against the first threat in order to gain a marginal advantage against the second threat.

The don't-care test determines the relative advantages of optimizing the sense against the new threat versus selecting the same sense for both threats. When the former outweighs the latter, the threat is called a do-care threat; otherwise, the threat is a don't-care threat.


Fig. 6. Part of a SpecTRM-RL Blackbox Behavior level description of the criteria for downgrading the status of an intruder (into our protected volume) from being labeled a threat to being considered simply as other traffic. Intruders can be classified in decreasing order of importance as a threat, a potential threat, proximate traffic, and other traffic. In the example, the criterion for taking the transition from state Threat to state Other Traffic is represented by an AND/OR table, which evaluates to TRUE if any of its columns evaluates to TRUE. A column is TRUE if all of its rows that have a "T" are TRUE and all of its rows with an "F" are FALSE. Rows containing a dot represent "don't care" conditions. The subscripts denote the type of expression (e.g., v for input variable, m for macro, t for table, and f for function) as well as the page in the document on which the expression is defined. A macro is simply an AND/OR table used to implement an abstraction that simplifies another table.
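The AND/OR table semantics described in the caption (OR across columns, AND within a column, dot as don't-care) can be captured in a few lines. A minimal evaluator sketch, with row names invented for illustration:

```python
def eval_and_or_table(rows, columns, values):
    """Evaluate an AND/OR table.

    rows:    list of condition names.
    columns: list of strings, one char per row, each 'T', 'F', or '.' .
    values:  dict mapping condition name -> bool.

    The table is TRUE if any column is TRUE; a column is TRUE if every
    'T' row is TRUE and every 'F' row is FALSE ('.' rows are ignored)."""
    def column_true(col):
        for name, entry in zip(rows, col):
            if entry == 'T' and not values[name]:
                return False
            if entry == 'F' and values[name]:
                return False
        return True
    return any(column_true(col) for col in columns)

# Invented two-condition example: the table holds if a is TRUE and b is
# FALSE (first column), or if b is FALSE regardless of a (second column).
rows = ["a", "b"]
columns = ["TF", ".F"]
```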


This Level 2 description in turn points up to high-level goals to maintain separation between aircraft and to constraints (both safety-related and nonsafety-related) on how this can be achieved. We found while constructing the TCAS intent specification that having to provide these links identified goals and constraints that did not seem to be documented anywhere but were implied by the design and some of the design documentation.


Fig. 7. This macro is used in defining which resolution advisory will be chosen when multiple aircraft (threats) are involved, among the most complicated aspects of the collision avoidance logic. Abbreviations are used to enhance readability.


Understanding the design of the Don't-Care-Test also requires understanding other concepts of sense selection and aircraft separation requirements that are used in the blackbox description (and in the implementation) of the Don't-Care-Test procedure. For example, the separation between aircraft in Fig. 7 is defined in terms of ALIM. The concept is used in the Level 3 documentation, but the meaning and intent behind using the concept is defined in the basic TCAS design principles at Level 2:

PR2. ALIM is the desired or "adequate" amount of separation between aircraft that TCAS is designed to meet. This amount varies from 400 to 700 feet, depending on own aircraft's altitude. ALIM includes allowances to account for intruder and own altimetry errors and vertical tracking uncertainties that affect track projections (see PR22.3). The value of ALIM increases with altitude to reflect increased altimetry error (↑ SC4.5) and the need to increase tracked separation at higher altitudes.
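The altitude dependence in PR2 amounts to a lookup. The sketch below preserves only the stated property that ALIM grows from 400 to 700 feet with own altitude; the breakpoints in the table are hypothetical, not the values in the TCAS specification:

```python
# Hypothetical (altitude ceiling ft, ALIM ft) breakpoints, illustrative only.
ALIM_TABLE_FT = [
    (10_000, 400),
    (20_000, 500),
    (30_000, 600),
    (float("inf"), 700),
]

def alim_ft(own_altitude_ft: float) -> int:
    """Return the adequate-separation threshold for own aircraft's altitude:
    higher altitude means larger altimetry error, so a larger ALIM (PR2)."""
    for ceiling, alim in ALIM_TABLE_FT:
        if own_altitude_ft < ceiling:
            return alim
    return ALIM_TABLE_FT[-1][1]
```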

The blackbox behavioral specification shown in Fig. 7 also points to the module that implements this required behavior in the design specification on Level 4. For TCAS II, pseudocode was used for the design specification. Fig. 8 shows the pseudocode provided by MITRE for the Don't-Care-Test.

The structure of intent specifications has advantages in solving various software engineering problems, such as changing requirements, program understanding, maintaining and changing code, and validation, as discussed in the next section.

5 INTENT SPECIFICATION SUPPORT FOR SOFTWARE ENGINEERING PROBLEM SOLVING

As stated earlier, our representations of problems have an important effect on our problem-solving ability and the strategies we use. A basic hypothesis of this paper is that intent specifications will support the problem solving required to perform software engineering tasks. This hypothesis seems particularly relevant with respect to tasks involving education and program understanding, search, design, changing requirements, fault tolerance, safety assurance, maintenance, and evolution.

5.1 Education and Program Understanding

Curtis et al. [9] did a field study of the requirements and design process for 17 large systems. They found that substantial design effort in projects was spent coordinating a common understanding among the staff of both the application domain and of how the system should perform within it. The most successful designers understood the application domain and were adept at identifying unstated requirements, constraints, or exception conditions and mapping between these and the


Fig. 8. The pseudocode for the Don't-Care-Test.


computational structures. This is exactly the information that is included in the higher levels of intent specifications and in the mappings to the software. Thus, using intent specifications should help with education in the most crucial aspects of the system design for both developers and maintainers and augment the abilities of both, i.e., increase the intellectual manageability of the task.

5.2 Search Strategies

Vicente and Rasmussen have noted that means-ends hierarchies constrain search in a useful way by providing traceability from the highest-level goal statements down to implementations of the components [46]. By starting the search at a high level of abstraction and then deciding which part of the system is relevant to the current goals, the user can concentrate on the subtree of the hierarchy connected to the goal of interest: The parts of the system not pertinent to the function of interest can easily be ignored. This type of "zooming-in" behavior has been observed in a large number of psychological studies of expert problem solvers. Recent research on problem-solving behavior consistently shows that experts spend a great deal of their time analyzing the functional structure of a problem at a high level of abstraction before narrowing in on more concrete details [3], [6], [16], [34], [42].

With other hierarchies, the links between levels are not necessarily related to goals. So, although it is possible to use higher levels of abstraction in a standard decomposition or refinement hierarchy to select a subsystem of interest and to constrain search, the subtree of the hierarchy connected to a particular subsystem does not necessarily contain system components that are relevant to the goals and constraints that the problem solver is considering.

Upward search in the hierarchy, such as that required for debugging, is also supported by intent specifications. Vicente and Rasmussen claim (and have experimental evidence to support the claim) that, in order for operators to correctly and consistently diagnose faults, they must have access to higher-order functional information, since this information provides a reference point defining how the system should be operating. States can only be described as errors or faults with reference to the intended purpose. Additionally, causes of improper functioning depend upon aspects of the implementation. Thus, they are explained bottom-up. The same argument seems to apply to software debugging, and there is evidence to support this hypothesis: Using protocol analysis, Vessey found that the most successful debuggers had a "system" view of the software [42].

5.3 Design Criteria and Evaluation

An interesting implication of intent specifications is their potential effect on system and software design. Such specifications might not only be used to understand and validate designs, but also to guide them.

An example of a design criterion appropriate to intent specifications might be to minimize the number of one-to-many mappings between levels in order to constrain downward search and limit the effects of changes in higher levels upon the lower levels. Minimizing many-to-many (or many-to-one) mappings would, in addition, ease activities that require following upward links and minimize the side effects of lower-level changes.

Intent specifications assist in identifying intent-related structural dependencies (many-to-many mappings across hierarchical levels) to allow minimizing them during design, and they clarify the trade-offs being made between conflicting goals. Software engineering attempts to define coupling between modules have been limited primarily to the design level. Perhaps an intent specification can provide a usable definition of coupling with respect to emergent properties and assist in making design trade-offs between various types of high-level coupling.

5.4 Minimizing the Effects of Requirements Changes

Hopefully, the highest levels of the specification will not change, but sometimes they do, especially during development, as system requirements become better understood. Functional and intent aspects are represented throughout an intent specification, but in increasingly abstract and global terms at the higher levels. The highest levels represent more stable design goals that are less likely to change (such as detecting potential threats in TCAS), but, when they do, they have the most important (and costly) repercussions on the system and software design and development, and they may require analysis and changes at all the lower levels. We need to be able to determine the potential effects of changes and, proactively, to design to minimize them.

Reversals in TCAS are an example of this. About four years after the original TCAS specification was written, experts discovered that it did not adequately cover requirements involving the case where the pilot of an intruder aircraft does not follow his or her TCAS advisory and, thus, TCAS must change the advisory given to its own pilot. This change in basic requirements caused extensive changes in the TCAS design, some of which introduced additional subtle problems and errors that took years to discover and rectify.

Anticipating exactly what changes will occur and designing to minimize the effects of those changes is difficult, and the penalties for being wrong are great. Intent specifications theoretically provide the flexibility and information necessary to design for ease of high-level requirements changes without having to predict exactly which changes will occur: The abstraction and design are based on intent (system requirements) rather than on part-whole relationships (which are the least likely to change with respect to requirement or environment changes).

5.5 Design of Run-Time Assertions

Intent specifications may assist software engineers in designing effective fault-tolerance mechanisms. Detecting unanticipated faults during execution has turned out to be a very difficult problem. For example, in one of our empirical studies, we found that programmers had difficulty writing effective assertions for detecting errors in executing software [25]. We have suggested that using results from safety analyses might help in determining which assertions are required and where to place them to detect the most important errors [23].



The information in intent specifications, tracing intent from requirements, design constraints, and hazard analyses through the system and software design process to the software module (and back), might assist with writing effective and useful assertions to detect general violations of system goals and constraints.
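To make the idea concrete, a run-time assertion derived from a high-level constraint rather than from local code logic might look like the hedged sketch below. The function name and the use of limitation L3 as the traced constraint are illustrative, not taken from the TCAS software:

```python
def check_ra_within_performance(commanded_rate_fpm: float,
                                max_safe_rate_fpm: float) -> None:
    """Run-time assertion tracing (hypothetically) to limitation L3: a
    commanded escape maneuver must stay within the performance envelope
    of the aircraft on which TCAS is installed."""
    assert abs(commanded_rate_fpm) <= max_safe_rate_fpm, (
        "safety constraint violated: commanded RA exceeds aircraft "
        "performance envelope (traces to L3)")
```

An assertion of this kind checks a system-level property, so it can catch classes of faults that assertions written from the surrounding module's logic would miss.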

5.6 Safety Assurance

A complete safety analysis and methodology for building safety-critical systems requires identifying the system-level safety requirements and constraints and then tracing them down to the components [24]. After the safety-critical behavior of each component has been determined (including the implications of its behavior when the components interact with each other), verification is required that the components do not violate the identified safety-related behavioral constraints. In addition, whenever any change is made to the system or new information is obtained that brings the safety of the design into doubt, revalidation is required to ensure that the change does not degrade system safety. To make this verification (and reverification) easier, safety-critical parts of the software should be isolated and minimized.

This analysis cannot be performed efficiently unless those making decisions about changes and those actually making the changes know which parts of the system affect a particular safety design constraint. Specifications need to include a record of the design decisions related to basic safety-related system goals, constraints, and hazards (including both general design principles and criteria and detailed design decisions), the assumptions underlying these decisions, and why the decisions were made and particular design features included. Intent specifications capture this information and provide the ability to trace design features upward to specific high-level system goals and constraints.

5.7 Software Maintenance and Evolution

Although intent specifications provide support for a top-down, rational design process, they may be even more important for the maintenance and evolution process than for the original designers, especially of smaller or less complex systems. Software evolution is challenging because it involves many complex cognitive processes that require sophisticated problem-solving strategies: understanding the system's structure and function, understanding the code and documentation and the mapping between the two, and locating inconsistencies and errors.

Intent specifications provide the structure required for recording the most important design rationale information, i.e., that related to the purpose and intent of the system, and locating it when needed. They, therefore, can assist in the software change process.

While trying to build a model of TCAS, we discovered that the original conceptual model of the TCAS system design had degraded over the years as changes were made to the pseudocode to respond to errors found, new requirements, better understanding of the problem being solved, enhancements of various kinds, and errors introduced during previous changes. The specific changes made often simplified the process of making the change or minimized the amount of code that needed to be changed, but complicated or degraded the original model. Not having any clear representation of the model also contributed to its degradation over the 10 years of changes to the pseudocode.

By the time we tried to build a representation of the underlying conceptual model, we found that the system design was unnecessarily complex and lacked conceptual coherency in many respects, but we had to match what was actually flying on aircraft. I believe that making changes without introducing errors or unnecessarily complicating the resulting conceptual model would have been simpler if the TCAS staff had had a blackbox requirements specification of the system. Evolution of the pseudocode would have been enhanced even more if the extra intent information had been specified or organized in such a way that it could easily be found and traced to the code.

Tools for restructuring code have been developed to cope with this common problem of increasing complexity and decreasing coherency of maintained code [17]. Using intent specifications will not eliminate this need, but we hope it will be reduced by providing specifications that assist in the evolution process and, more important, assist in building software that is more easily evolved and maintained. Such specifications may allow for backing up and making changes in a way that will not degrade the underlying conceptual model because the model is explicitly described and its implications traced from level to level. Intent specifications may also allow controlled changes to the higher levels of the model if they become necessary.

Maintenance and evolution research has focused on ways to identify and capture information from legacy code. While useful for solving important short-term problems, our long-term goal should be to specify and design systems that lend themselves to change easily, that is, evolvable systems. Achieving this goal requires devising methodologies that support change throughout the entire system life cycle, from requirements and specification to design, implementation, and maintenance. For example, we may be able to organize code in a way that minimizes the amount of code that needs to be changed or that needs to be evaluated when deciding whether a change is safe or reasonable.

In summary, the author believes that effective support for such evolvable systems will require a new paradigm for specification and design and hypothesizes that such a paradigm might be rooted in abstractions based on intent. Intent specifications provide the framework to include the information maintainers need in the specification. They increase the information content so that less inferencing (and guessing) is required. Intent specifications not only support evolution and maintenance, but they may also be more evolvable themselves, which would ease the problem of keeping documentation and implementation consistent. In addition, they provide the possibility of designing for evolution so that the systems we build are
more easily maintained and evolved.
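One way to organize code so that less of it must be re-evaluated when deciding whether a change is safe, as suggested above, is to isolate the safety-critical logic behind a narrow interface. The following sketch illustrates that design choice with invented names and numbers; it is not the TCAS logic:

```python
# Hypothetical illustration: the safety-critical check is isolated in
# one small function so that only it must be re-verified after a
# change elsewhere. All names and thresholds are invented.

def is_maneuver_safe(own_alt_ft: float, intruder_alt_ft: float,
                     min_separation_ft: float = 300.0) -> bool:
    """Safety-critical kernel: the only code needing re-verification
    when non-safety parts of the system change."""
    return abs(own_alt_ft - intruder_alt_ft) >= min_separation_ft

def advise(own_alt_ft: float, intruder_alt_ft: float) -> str:
    """Non-critical advisory logic; free to evolve as long as every
    decision is routed through the verified kernel."""
    if is_maneuver_safe(own_alt_ft, intruder_alt_ft):
        return "clear"
    return "climb" if own_alt_ft >= intruder_alt_ft else "descend"
```

A change to the advisory wording or presentation then touches only `advise`, leaving the verified kernel, and its traced safety constraints, untouched.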

LEVESON: INTENT SPECIFICATIONS: AN APPROACH TO BUILDING HUMAN-CENTERED SPECIFICATIONS 33


6 CONCLUSIONS

Specifications are constructed to help us solve problems. Any theory of specification design, then, should be based on fundamental concepts of problem-solving behavior. It should also support the basic systems engineering process. This paper has presented one such approach to system and software specifications based on underlying ideas from psychology, systems theory, human factors, system engineering, and cognitive engineering.

The choice of content, structure, and form of specifications has a profound effect on the kind of cognitive processing that the user must bring to bear to use a specification for the tasks involved in system and software design and construction, maintenance, and evolution. Intent specifications provide a way of coping with the complexity of the cognitive demands on the builders and maintainers of automated systems by basing our specifications on means-ends as well as part-whole abstractions. The author believes that the levels of the means-ends hierarchy reflect a rational design philosophy for the systems engineering of complex systems and, thus, a rational way to specify the results of the process. They provide mapping (tracing) of decisions made earlier into the later stages of the process. Design decisions at each level are linked to the goals and constraints they are derived to satisfy. A seamless (gapless) progression is recorded from high-level system requirements down to component requirements, design, and implementation.

In addition, intent specifications provide a way of integrating formal and informal aspects of specifications. Completely informal specifications of complex systems tend to be unwieldy and difficult to validate. Completely formal specifications provide the potential for mathematical analysis and proofs, but omit necessary information that cannot be specified formally. Some formal approaches require building special models in addition to the regular system specifications. The author believes that the widespread use of formal specifications in industry will require the development of formal specifications that are readable with minimal training and that are integrated with informal specifications. Ideally, formal analysis should not require building special models that duplicate the information included in the specification, or it is unlikely that industry will find the use of formal methods to be cost effective.

An example intent specification for TCAS II has been constructed and was used as an example in this paper. The reader is cautioned, however, that intent specifications are a logical abstraction that can be realized in many different physical ways. That is, the particular organization used for the TCAS specification is simply one possible physical realization of the general logical organization inherent in intent specifications.

ACKNOWLEDGMENTS

This work was partially supported by NASA Grant NAG-1-1495 and by U.S. National Science Foundation Grant CCR-9396181.

REFERENCES

[1] Mental Models and Human-Computer Interaction, D. Ackermann and M.J. Tauber, eds. Amsterdam: North-Holland, 1990.

[2] W.R. Ashby, "Principles of the Self-Organizing System," Principles of Self-Organization, H. Von Foerster and G.W. Zopf, eds., Pergamon, 1962.

[3] M. Beveridge and E. Parkins, "Visual Representation in Analogical Problem Solving," Memory and Cognition, vol. 15, 1987.

[4] R. Brooks, "Towards a Theory of Comprehension of Computer Programs," Int'l J. Man-Machine Studies, vol. 18, pp. 543-554, 1983.

[5] M. Brown and N.G. Leveson, "Modeling Controller Tasks for Safety Analysis," Proc. Second Workshop Human Error and System Development, Apr. 1998.

[6] M.A. Buttigieg and P.M. Sanderson, "Emergent Features in Visual Display Design for Two Types of Failure Detection Tasks," Human Factors, vol. 33, 1991.

[7] S.M. Casner, "A Task-Analytic Approach to the Automated Design of Graphic Presentations," ACM Trans. Graphics, vol. 10, no. 2, Apr. 1991.

[8] P. Checkland, Systems Thinking, Systems Practice. John Wiley & Sons, 1981.

[9] B. Curtis, H. Krasner, and N. Iscoe, "A Field Study of the Software Design Process for Large Systems," Comm. ACM, vol. 31, no. 11, pp. 1268-1287, 1988.

[10] J. DeKleer and J.S. Brown, "Assumptions and Ambiguities in Mechanistic Mental Models," Mental Models, D. Gentner and A.L. Stevens, eds., Lawrence Erlbaum, 1983.

[11] N. Dinadis and K.J. Vicente, "Ecological Interface Design for a Power Plant Feedwater Subsystem," IEEE Trans. Nuclear Science, 1996.

[12] D. Dorner, "On the Difficulties People Have in Dealing with Complexity," New Technology and Human Error, J. Rasmussen, K. Duncan, and J. Leplat, eds., pp. 97-109, New York: John Wiley & Sons, 1987.

[13] K.D. Duncan, "Reflections on Fault Diagnostic Expertise," New Technology and Human Error, J. Rasmussen, K. Duncan, and J. Leplat, eds., pp. 261-269, New York: John Wiley & Sons, 1987.

[14] B. Fischhoff, P. Slovic, and S. Lichtenstein, "Fault Trees: Sensitivity of Estimated Failure Probabilities to Problem Representation," J. Experimental Psychology: Human Perception and Performance, vol. 4, 1978.

[15] M.J. Fitter and T.R.G. Green, "When Do Diagrams Make Good Programming Languages?" Int'l J. Man-Machine Studies, vol. 11, pp. 235-261, 1979.

[16] R. Glaser and M.T.H. Chi, "Overview," The Nature of Expertise, R. Glaser, M.T.H. Chi, and M.J. Farr, eds., Hillsdale, N.J.: Erlbaum, 1988.

[17] W. Griswold and D. Notkin, "Architectural Tradeoffs for a Meaning-Preserving Program Restructuring Tool," IEEE Trans. Software Eng., vol. 21, no. 3, pp. 275-287, Mar. 1995.

[18] G. Harman, "Logic, Reasoning, and Logical Form," Language, Mind, and Brain, T.W. Simon and R.J. Scholes, eds., Lawrence Erlbaum, 1982.

[19] M.S. Jaffe, N.G. Leveson, M.P.E. Heimdahl, and B. Melhart, "Software Requirements Analysis for Real-Time Process-Control Systems," IEEE Trans. Software Eng., vol. 17, no. 3, Mar. 1991.

[20] C.A. Kaplan and H.A. Simon, "In Search of Insight," Cognitive Psychology, vol. 22, 1990.

[21] K. Kotovsky, J.R. Hayes, and H.A. Simon, "Why Are Some Problems Hard? Evidence from Tower of Hanoi," Cognitive Psychology, vol. 17, 1985.

[22] S. Letovsky, "Cognitive Processes in Program Comprehension," Proc. First Workshop Empirical Studies of Programmers, pp. 58-79, 1986.

[23] N.G. Leveson, "Software Safety in Embedded Computer Systems," Comm. ACM, vol. 34, no. 2, Feb. 1991.

[24] N.G. Leveson, Safeware: System Safety and Computers. Addison-Wesley, 1995.

[25] N.G. Leveson, S.S. Cha, J.C. Knight, and T.J. Shimeall, "The Use of Self-Checks and Voting in Software Error Detection: An Empirical Study," IEEE Trans. Software Eng., vol. 16, no. 4, Apr. 1990.

[26] N.G. Leveson, M.P.E. Heimdahl, H. Hildreth, and J.D. Reese, "Requirements Specification for Process-Control Systems," IEEE Trans. Software Eng., vol. 20, no. 9, Sept. 1994.

[27] N.G. Leveson, L.D. Pinnel, S.D. Sandys, S. Koga, and J.D. Reese, "Analyzing Software Specifications for Mode Confusion Potential," Proc. Workshop Human Error and System Development, 1997.

34 IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, VOL. 26, NO. 1, JANUARY 2000


[28] D.A. Lucas, "Mental Models and New Technology," New Technology and Human Error, J. Rasmussen, K. Duncan, and J. Leplat, eds., pp. 321-325, New York: John Wiley & Sons, 1987.

[29] J.R. Newman, "Extension of Human Capability through Information Processing and Display Systems," Technical Report SP-2560, System Development Corp., 1966.

[30] D.A. Norman, Things That Make Us Smart. Addison-Wesley, 1993.

[31] J. Rasmussen and A. Pejtersen, "Virtual Ecology of Work," An Ecological Approach to Human Machine Systems I: A Global Perspective, J.M. Flach, P.A. Hancock, K. Caird, and K.J. Vicente, eds., Hillsdale, N.J.: Erlbaum, 1995.

[32] N. Pennington, "Stimulus Structures and Mental Representations in Expert Comprehension of Computer Programs," Cognitive Psychology, vol. 19, pp. 295-341, 1987.

[33] J. Rasmussen, "The Role of Hierarchical Knowledge Representation in Decision Making and System Management," IEEE Trans. Systems, Man, and Cybernetics, vol. 15, no. 2, Mar./Apr. 1985.

[34] J. Rasmussen, Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering. North-Holland, 1986.

[35] J. Rasmussen, "Mental Models and the Control of Action in Complex Environments," Mental Models and Human-Computer Interaction, D. Ackermann and M.J. Tauber, eds., North-Holland: Elsevier, pp. 41-69, 1990.

[36] J. Reason, Human Error. Cambridge Univ. Press, 1990.

[37] N.D. Sarter, D.D. Woods, and C.E. Billings, "Automation Surprises," Handbook of Human Factors/Ergonomics, second ed., G. Salvendy, ed., New York: Wiley, 1995.

[38] B. Shneiderman and R. Mayer, "Syntactic/Semantic Interactions in Programmer Behavior: A Model and Experimental Results," Computer and Information Sciences, vol. 8, no. 3, pp. 219-238, 1979.

[39] G.F. Smith, "Representational Effects on the Solving of an Unstructured Decision Problem," IEEE Trans. Systems, Man, and Cybernetics, vol. 19, pp. 1083-1090, 1989.

[40] E. Soloway and K. Ehrlich, "Empirical Studies of Programming Knowledge," IEEE Trans. Software Eng., vol. 10, no. 5, pp. 595-609, 1984.

[41] E. Soloway, J. Pinto, S. Letovsky, D. Littman, and R. Lampert, "Designing Documentation to Compensate for Delocalized Plans," Comm. ACM, vol. 31, no. 11, pp. 1259-1267, 1988.

[42] I. Vessey, "Expertise in Debugging Computer Programs: A Process Analysis," Int'l J. Man-Machine Studies, vol. 23, 1985.

[43] K.J. Vicente, "Supporting Knowledge-Based Behavior through Ecological Interface Design," PhD thesis, Univ. of Illinois at Urbana-Champaign, 1991.

[44] K.J. Vicente, K. Christoffersen, and A. Pereklit, "Supporting Operator Problem Solving through Ecological Interface Design," IEEE Trans. Systems, Man, and Cybernetics, vol. 25, no. 4, pp. 529-545, 1995.

[45] K.J. Vicente and J. Rasmussen, "The Ecology of Human-Machine Systems II: Mediating Direct Perception in Complex Work Domains," Ecological Psychology, vol. 2, no. 3, pp. 207-249, 1990.

[46] K.J. Vicente and J. Rasmussen, "Ecological Interface Design: Theoretical Foundations," IEEE Trans. Systems, Man, and Cybernetics, vol. 22, no. 4, July/Aug. 1992.

[47] E.L. Wiener, "Human Factors of Advanced Technology ('Glass Cockpit') Transport Aircraft," NASA Contractor Report 177528, NASA Ames Research Center, June 1989.

[48] D.D. Woods, "Toward a Theoretical Base for Representation Design in the Computer Medium: Ecological Perception and Aiding Human Cognition," An Ecological Approach to Human Machine Systems I: A Global Perspective, J.M. Flach, P.A. Hancock, K. Caird, and K.J. Vicente, eds., Hillsdale, N.J.: Erlbaum, 1995.

Nancy G. Leveson is a professor of aerospace information systems in the Aeronautics and Astronautics Department at the Massachusetts Institute of Technology, Cambridge. Previously, she was Boeing Professor of Computer Science and Engineering at the University of Washington, Seattle. She has served as editor-in-chief of the IEEE Transactions on Software Engineering and on the board of directors of the International Council on Systems Engineering. Dr. Leveson is a fellow of the ACM and is currently an elected member of the board of directors of the Computing Research Association, a member of the U.S. National Research Council (NRC) Commission on Engineering and Technical Systems, as well as liaison to the U.S. NRC Aeronautics and Space Engineering Board. She received the 1995 AIAA Information Systems Award for "developing the field of software safety and for promoting responsible software and system engineering practices where life and property are at stake" and also the 1999 ACM Allen Newell Award. Recently, Dr. Leveson was elected to the National Academy of Engineering. She is the author of the book Safeware: System Safety and Computers, published by Addison-Wesley, and publishes, speaks, and consults widely.
