
1541-1672/07/$25.00 © 2007 IEEE. IEEE INTELLIGENT SYSTEMS. Published by the IEEE Computer Society.

Argumentation Technology

Argumentation-Based Inference and Decision Making—A Medical Perspective

John Fox, Oxford University

David Glasspool, Edinburgh University

Dan Grecu, Sanjay Modgil, and Matthew South, Cancer Research UK

Vivek Patkar, Royal Free Hospital London

Argumentation—which used to be an informal topic, of interest primarily to lawyers and philosophers—has become in recent years an area with a strong formal foundation, with numerous applications in cognitive science and information technology. The first communities concerned with argumentation were those interested in its relationship to reasoning by individuals (notably Stephen Toulmin and his followers) and those interested in natural dialogue and rhetoric (notably the Amsterdam school and Douglas Walton and Erik Krabbe’s work). With the growing interest in argumentation in computer science, researchers have started to explore both theoretical and practical aspects in fields as diverse as mathematical logic, software engineering, and AI.

From a theoretical perspective, argumentation offers a novel way of understanding and formalizing human cognition. Logicians and computer scientists are also using argumentation systems for expanding their investigations into classical forms of reasoning such as deduction and reasoning under uncertainty, and into nonclassical forms of reasoning such as nonmonotonic, modal, and epistemic logics. Another promising application area for argumentation theory is distributed AI and multiagent systems, where the dialectical nature of argumentation helps structure and reason about the exchange of knowledge and information related to claims. From a practical perspective, argumentation provides a versatile computational model for developing advanced services for decision support and for computer-human dialogues in which explanation and rationale play a central role.

Argumentation is a potentially important paradigm for developing commercial and public services that are flexible and easily understood by human users. In the first part of this article, we present our work on developing argumentation-based services for biomedical applications, with a particular focus on argument-based inference, decision making, and planning. The second part of the article concerns the theoretical and practical lessons learned from this initial work and its follow-on as part of the Argumentation Services Platform with

The Argumentation Services Platform with Integrated Components (ASPIC) project aims to provide advanced argumentation-based computational capabilities.


Integrated Components project. ASPIC was a 45-month EU-funded collaborative project that concluded in September 2007, and its aim was to establish a consensual model of argumentation that can be translated into a formal framework, technical standards, and widely applicable software components embedding argumentation-based computational capabilities.

Argumentation in biomedical applications

Research on medical judgment has raised deep questions about how clinicians make decisions and plan treatment, particularly under conditions of uncertainty and information overload. To address the challenges clinicians face, recent work has proposed applying normative numerical methods such as probabilistic inference to diagnosis and risk assessment and expected utility theory to treatment selection.

However, in the real world, clinicians frequently have difficulty acquiring or estimating the numbers required to model decisions. As John Maynard Keynes put it, “As living and moving beings, we are forced to act … [even when] our existing knowledge does not provide a sufficient basis for a calculated mathematical expectation.”1

So it is not surprising that in medicine, although subjective probabilities certainly underlie some decision making, clinicians often appear to solve problems through qualitative reasoning rather than calculation, using primarily domain-specific knowledge. For example, a clinician might make deductions based on detailed knowledge of disease processes, their effects, and the implications of abnormalities for diagnosis and treatment planning. However, once knowledge is abstracted to the level of purely statistical relationships, all information about the processes underlying the data is lost, and there is no way to incorporate this latter type of knowledge into decision making. These limitations have led to new qualitative and logic-based methods, among them our group’s argumentation-based inference and decision-making approaches.

LA, a logic of argument

Since the mid-1980s, we have investigated

the use of argumentation techniques in various reasoning and decision-making problems in science and medicine. Because the clinical use of such systems has significant safety implications, an important goal of our work has been the development of a formal foundation for argumentation-based services. Much of this work has been based on a formal logic of argument, LA, which is a variant of standard first-order logic in which an argument has the form of a proof but does not prove its conclusion.2,3

In standard logic, we are justified in deducing a proposition p from a database Δ when we can construct a valid proof under the rules of a particular logic L (Δ ⊢L p). We cannot infer both p and ¬p from the same database in standard logic, but in LA, different arguments can simultaneously support or oppose propositions. This is because in complex domains such as medicine and science, knowledge can be inconsistent. To deal with this inconsistency, we can divide a knowledge base into subsets (theories) that are internally but not necessarily mutually consistent. These subsets can be the basis of arguments that are individually sound but justify contradictory conclusions in particular contexts.

LA identifies three parts of an argument:

• the claim being made,
• the internally consistent grounds of the argument, and
• a representation of confidence (Theory ∪ Context ⊢LA (Claim, Grounds, Confidence)).
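The triple format above can be illustrated with a small sketch. This is a toy encoding of ours, not the published LA implementation: theories are internally consistent rule sets, the context is a set of observed facts, and confidence is represented symbolically.

```python
# Toy illustration (ours, not the published LA system): each theory is an
# internally consistent list of rules (grounds, conclusion, confidence),
# where a conclusion may be a literal or its negation ("not", literal).

def build_arguments(theories, context, claim):
    """Collect arguments for and against `claim`: a rule contributes an
    argument when all of its grounds hold in the current context."""
    arguments = []
    for theory in theories:
        for grounds, conclusion, confidence in theory:
            if set(grounds) <= context and conclusion in (claim, ("not", claim)):
                arguments.append((conclusion, grounds, confidence))
    return arguments

# Two internally consistent theories that disagree about carcinogenicity.
theory_a = [(("alert_substructure",), "carcinogenic", "+")]
theory_b = [(("no_human_enzyme",), ("not", "carcinogenic"), "+")]
context = {"alert_substructure", "no_human_enzyme"}

args = build_arguments([theory_a, theory_b], context, "carcinogenic")
# Unlike a classical proof, arguments for and against the claim coexist.
```

Here each theory is consistent on its own, yet the two together yield one argument supporting the claim and one opposing it, which is exactly what LA permits and classical logic forbids.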

In principle, a knowledge base can yield any number of arguments about a claim. One can represent confidence as a numerical value (such as a probability or an ad hoc score), or symbolically, when an argument simply supports or opposes a claim and quantifying a confidence level is not possible.2,3

From inference to decision making

A decision is a choice between competing beliefs about the world or between alternative courses of action. Making a decision requires several steps beyond simple inference. Figure 1 summarizes the main steps in this process. In general, a decision starts with an analysis of the current situation, which gives rise to one or more goals. A goal might have one or more candidate solutions that would achieve it, and these form the decision’s frame. Inference processes generate arguments for and against each candidate. Decision making then ranks and evaluates candidates based on the underlying arguments and selects one candidate as the final decision. Finally, the decision commits to a new belief about a situation, or an intention to act in a particular way.

In practical medicine it is therefore not sufficient to just evaluate evidence or arguments. The other steps in this cycle—the upstream circumstances that lead to the need for and framing of a decision, and the downstream processes that implement the decision (such as treatment actions or plans)—are potential sources of error that might need to be remedied.

To support the entire decision life cycle, we have developed a theoretical framework that combines argumentation with ideas from modal logic, logic programming, and agent theory.4 In this approach, a decision process is reified as a special kind of object—a task—whose attributes include

• a situation—the logical conditions requiring a decision,
• a goal—what the decision is intended to achieve,
• candidates—a schema for proposing candidate decision options,
• arguments—propositional rules or first-order schemas that specify how to argue for or against competing candidates, and
• commitments—rules and procedures for selecting the most preferred candidates.
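As a rough illustration of how such a task object might look in software, the attributes above can be bundled into a single class. The class and field names are ours, not actual PROforma syntax.

```python
# Hypothetical sketch of a decision task (names are ours, not PROforma's).
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class DecisionTask:
    situation: Callable[[dict], bool]        # logical conditions requiring a decision
    goal: str                                # what the decision should achieve
    candidates: List[str]                    # candidate decision options
    argue: Callable[[str, dict], List[int]]  # signed arguments per candidate
    commit: Callable[[Dict[str, int]], str]  # rule selecting the preferred candidate

    def decide(self, state: dict) -> Optional[str]:
        if not self.situation(state):
            return None                      # the decision is not in frame
        scores = {c: sum(self.argue(c, state)) for c in self.candidates}
        return self.commit(scores)

# Toy imaging decision: +1 supports and -1 opposes a candidate.
task = DecisionTask(
    situation=lambda s: s.get("breast_lump", False),
    goal="choose_imaging_method",
    candidates=["ultrasound", "mammography"],
    argue=lambda c, s: ([+1] if c == "ultrasound" and s["age"] < 35 else [])
                     + ([-1] if c == "mammography" and s["age"] < 35 else []),
    commit=lambda scores: max(scores, key=scores.get),
)
choice = task.decide({"breast_lump": True, "age": 30})   # "ultrasound"
```

Bundling all five attributes into one object mirrors the benefit described later in the article: the whole decision becomes a single structure that can be scheduled and composed in software.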

The PROforma agent-specification language adopts a variant of this scheme. PROforma extends standard logic programming with specialized objects representing decisions, plans, actions, and data-acquisition

NOVEMBER/DECEMBER 2007 www.computer.org/intelligent 35

Figure 1. A synoptic view of decision making. The situation in which an agent finds itself might raise a goal to understand some aspect of the situation or to change it in some way. The goal might have several possible solutions (candidates), and the decision process involves committing to one of these. This might involve accepting a new belief about the situation or adopting an intention to act in some way.


tasks. PROforma has been used to build a wide range of medical decision-support and workflow-management applications.

Figure 2 shows an example PROforma model for triple assessment, an important process in making diagnosis and treatment decisions for patients with suspected breast cancer.5 The example includes three upstream decisions (genetic risk assessment, imaging method selection, and choice of biopsy). Only after making these decisions and completing the resulting actions will a clinician be able to make a management decision—for example, whether to refer the patient for treatment of a tumor.

One benefit of encapsulating argumentation within decision objects is that it consolidates the different types of inference and assessment required for decision making in a single structure that is easily understood and manipulated in software. One can schedule decisions in workflows, as in the triple-assessment example, and embed them in complex networks of interrelated decisions using both data- and control-flow methods.

PROforma’s argumentation-based decision model supports a natural form of explanation based on arguments for and against decision candidates. Figure 3 shows the argumentation for the imaging decision in the triple-assessment process,5 with each decision candidate’s status. A checkmark indicates that ultrasound is recommended, and a cross indicates that mammography is not recommended. The application determines the decision’s status by weighing the arguments for and against each candidate. The argument presentation is based on the Toulmin argumentation schema: A candidate is a claim, the pros and cons (indicated by plus and minus signs) are the claim’s warrants, and the text (revealed by clicking on “More…”) provides the argument backing.

Example applications

Three applications of the LA model for inference and decision making in biomedicine illustrate how we can use argumentation methods in practical applications and highlight some of these methods’ advantages.

Qualitative reasoning in toxicology. Figure 4 illustrates Star, a toxicology application6 that lets toxicologists draw a (possibly novel) compound of interest in standard chemical notation. The application examines the molecular structure for substructures known to be associated with cancer. For each alert, a reasoning component constructs arguments for and against the compound being carcinogenic. The system can use multiple forms of reasoning in constructing arguments. For example, it might use knowledge of human biochemistry and mammalian metabolism to infer a possible pathway by which the compound could be transformed from a benign into a potentially carcinogenic form. Other arguments could oppose this hypothesis. For example, an argument for toxicity in humans on the basis of trials of similar substances in animals might be counterbalanced by an argument that toxicity depends on an enzyme not present in humans.

The Star system demonstrates that in the absence of statistical information, an argumentation approach can draw on different sources of evidence to provide useful levels of advice on new chemical compounds’ carcinogenic nature.

Assessment and explanation of genetic risk. Risk Assessment in Genetics (RAGs)7 helps doctors assess whether people with a family history of cancer are genetically predisposed to the disease. RAGs uses qualitative rules to formulate arguments for and against the person having a potential genetic predisposition. The right panel in figure 5 shows a family tree for a 44-year-old woman whose mother and grandmother had cancers at relatively young ages. The left panel shows four arguments that separately support the hypothesis that she is a gene carrier, and two arguments against the claim. Rules in RAGs are weighted to represent different risk factors’ relative



Figure 2. Task network representation of triple assessment in breast cancer.5 Diamonds represent queries, rounded rectangles represent plans, and circles represent decisions.

Figure 3. Presentation of a decision about alternative imaging studies in the triple-assessment application.5 Checkmarked candidate decisions are recommended, while crossed candidates are recommended against. A green “+” is a supporting argument for a candidate, and a red “-” is an opposing argument.


importance. The application determines the patient’s overall risk by adding the pros and cons.
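A toy version of this weighted pro/con aggregation might look as follows; the rules, weights, and thresholds are invented for illustration and are not the actual RAGs knowledge base.

```python
# Toy sketch of RAGs-style weighted aggregation; rules, weights, and
# thresholds are invented for illustration, not the actual RAGs knowledge.

RULES = [
    # Positive weights argue for a genetic predisposition, negative against.
    ("relative_with_early_onset_cancer", +2),
    ("multiple_affected_relatives", +1),
    ("affected_relatives_elderly_at_onset", -1),
    ("cancers_of_unrelated_types", -1),
]

def risk_score(family_history):
    """Add the weights of every rule whose condition holds."""
    return sum(w for key, w in RULES if family_history.get(key))

def risk_category(score):
    """Map the summed pros and cons to an overall risk category."""
    if score >= 2:
        return "high"
    if score >= 1:
        return "moderate"
    return "low"

history = {"relative_with_early_onset_cancer": True,
           "multiple_affected_relatives": True}
category = risk_category(risk_score(history))   # "high" (score 3)
```

Because each rule is an explicit, weighted argument, the same structure that produces the classification also produces the explanation, which is the property the evaluation below highlights.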

In evaluation tests, RAGs significantly improved physicians’ management decisions and won their praise for the clarity of its explanations. This clarity didn’t come at the expense of accuracy: the RAGs risk classifications were no less accurate than those of a standard statistical risk-assessment package. Genetic risk is a relatively new and complex field in which physicians do not feel comfortable, and they especially appreciated the argument-based explanations in this context. In particular, patients are often concerned about a family history of disease when the pattern of incidence suggests several chance occurrences, rather than a genetic link. Breast cancer, for example, is common, and most people have affected relatives, but a genetic cause is less common. RAGs was particularly helpful in explaining why a patient was not at risk, even though they had affected relatives, and thus can help reduce the number of inappropriate management decisions.

Understanding and explaining plans. Developing a plan of action requires an understanding of interactions and dependencies between various kinds of actions and events and requires the planner to track this information as a plan evolves. REACT (risks, events, actions and consequences over time)8 is a planning-support system that provides immediate feedback based on constraints, interactions, and dependencies between events and actions over periods of care. It dynamically shows “what if” scenarios in terms of quantitative outcome measures, and presents the arguments for and against possible clinical interventions together with their justifications (see figure 6). REACT represents information in argumentation logic terms and tracks the changing set of arguments as the user manipulates a plan (adding, moving, and deleting actions). The system has been evaluated with promising results in the field of genetic counseling and was helpful in structuring the conversation with patients.

Limitations of the PROforma inference and decision model

Despite the effectiveness of argumentation-based reasoning demonstrated in these applications, the embedded argumentation models only partially capture the behavior that can be observed in natural patterns of argumentation. Phan Minh Dung’s seminal theory of argumentation,9 which is based on attack relations and preferences among arguments, brings in key additional elements in computational argumentation. Dung defines his theory in terms of arguments that are justified or winning, at the expense of overruled or rejected arguments. This supports the definition of a dialectical calculus of opposition—a capability that would be beneficial in a wide range of applications.

The PROforma decision model commits to a decision-making criterion, in which arguments for a decision candidate are aggregated into a single argument that is then compared with similarly aggregated arguments about other decision candidates. To extend this model’s generality and scope, decision criteria other than aggregation might be needed to expand the application potential to a wider range of decision-making tasks in the medical and other domains.

The ASPIC project described in the following section has addressed these and other issues. It has sought to develop consensus theoretical models for argumentation-based


Figure 4. The Star risk-assessment and reporting system showing a chemical compound of interest with a highlighted alert (red) and arguments for and against possible carcinogenicity.

Figure 5. Risk Assessment in Genetics. The right panel shows a family tree for Karen, whose mother and grandmother both had cancer. The left panel shows the arguments for and against her being a carrier of a gene that might predispose her to breast and ovarian cancer.


inference, decision making, and dialogue, and to implement these models in software components.

The ASPIC project

Since our early work on argumentation-based reasoning in medicine, the field of argumentation has gained significant momentum. Several communities within computer science have explored different approaches to argumentation, many of them based on the idea of defeasible reasoning, in which arguments win or are defeated. The diversity of emerging argumentation models and the apparently growing fragmentation of the argumentation research community have motivated several research groups to join efforts as part of the ASPIC project (www.argumentation.org).

ASPIC has developed consensus theoretical models on argumentation and has transitioned them into practical argumentation services. The resulting software components implement computational argumentation capabilities in four areas: inference, decision making, dialogue, and learning. An equally noteworthy outcome is the argumentation interchange format (AIF), which has been developed as a potential standard for supporting argument exchange.10 Developers can use the ASPIC components as stand-alone services, integrate them into host and legacy applications, or combine them to build complex practical applications.

Inference in ASPIC

ASPIC’s defeasible model of argument-based inference consists of four steps:

1. Argument construction supports the construction of tree-structured arguments from a knowledge base K of facts, a set S of strict rules of the form φ1, …, φn → φ, and a set R of defeasible rules of the form φ1, …, φn ⇒ φ. These are expressed in a language consisting of first-order literals and their negations. The inference rules being used are strict and defeasible modus ponens.

2. Argument valuation assigns weights to arguments using valuation schemes that depend on the application domain.

3. Argument interaction is based on binary conflict relations of attack and defeat between constructed arguments.

4. Argument status evaluation determines winning or justified arguments based on the graph of interacting arguments and using Dung’s calculus of opposition.9
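The four steps might be sketched, in drastically simplified form, as follows. This toy encoding is ours, not the ASPIC engine: it restricts attacks to rebuts between complementary literals and folds valuation into a weakest-link support score.

```python
# Toy sketch of the four inference steps (our simplified encoding, not the
# ASPIC engine). Literals are strings; "-p" is the negation of "p".
# Rules are (body, head, DOB) triples; strict rules have DOB 1.0.

def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def construct_and_value(facts, rules):
    """Steps 1-2: forward-chain by strict/defeasible modus ponens, valuing
    each derived claim with weakest-link support in (0, 1]."""
    support = {f: 1.0 for f in facts}
    changed = True
    while changed:
        changed = False
        for body, head, dob in rules:
            if all(lit in support for lit in body):
                s = min([dob] + [support[lit] for lit in body])
                if support.get(head, 0.0) < s:
                    support[head] = s
                    changed = True
    return support

def grounded_justified(support):
    """Steps 3-4: complementary claims rebut each other; strict claims
    (support 1.0) cannot be rebutted, and a defeasible claim is justified
    only if it strictly out-values every rebutter (sceptical flavour)."""
    return {a for a, s in support.items()
            if neg(a) not in support
            or s == 1.0
            or (support[neg(a)] < 1.0 and s > support[neg(a)])}

facts = {"smoker", "takes_statins"}
rules = [
    (["smoker"], "raised_risk", 0.8),           # defeasible rule
    (["takes_statins"], "-raised_risk", 0.6),   # weaker defeasible rule
]
support = construct_and_value(facts, rules)
claims = grounded_justified(support)   # "raised_risk" wins; its rebutter does not
```

When two defeasible claims rebut each other with equal support, neither is justified, which matches the cautious spirit of the sceptical (grounded) semantics discussed below.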

The inferences derivable from a knowledge base are represented by the justified arguments’ claims. The ASPIC model’s key contribution is that, unlike other approaches to argument-based inference, the model satisfies several quality postulates representing a set of minimal requirements that should be satisfied by any rational model of argument-based inference.11

A survey of existing argumentation tools and technologies indicated that existing implementations of argumentation-based inference were not robust enough to be reusable and portable for standard deployment in applications. Hence, a key objective of the project was to develop a rigorously engineered argumentation engine implementing the model of inference. The ASPIC project has developed efficient algorithms for evaluating arguments under two different acceptability semantics9: a grounded semantics that is inherently skeptical and a credulous semantics.

The engine API lets an application builder instantiate an engine with an associated defeasible knowledge base, set various reasoning parameters, and generate queries. For any claim that matches an input query, the argumentation engine uses the knowledge base to construct and evaluate the argument status (justified or not). It provides users with a graphical visualization of the proof argument network associated with the claim so they can examine each argument’s status. The engine also provides a version of the proof and results via AIFXML, an XML implementation of the AIF abstract model.10

Application developers provide a default argument source in the form of a knowledge



Figure 6. The REACT (risks, events, actions and consequences over time) system for planning under risk. The top panel shows a timeline of the events and actions making up a plan, and the second panel shows a timeline of the quantitative risk. The third panel displays the arguments for and against any selected action. As the user changes the plan, the associated risk and arguments are updated dynamically in the lower two panels.


base of facts and rules, each having a numeric degree of belief in the range (0,1]. A DOB of 1 (the default) indicates a strict rule or fact, and a DOB smaller than 1 indicates a defeasible rule or fact. This argument source supports the development of arguments that are valuated over their constituent rules and facts. Each argument has a claim and a numeric support (also in the range (0,1]) that the argumentation engine uses to resolve mutual attacks between arguments. The engine can create atomic arguments from atomic facts. It can also develop arguments by applying rules. The argumentation engine can valuate these tree-structured arguments using various strategies. Weakest-link valuation determines an argument’s support as the minimum support over all of its subarguments. Last-link valuation determines the argument’s support as the DOB of the highest defeasible rule in the argument tree. If multiple highest-level defeasible rules are at the same level in the tree, last-link valuation determines the argument’s support to be those rules’ minimum DOB.
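The two valuation strategies can be sketched over a toy tree representation of arguments (our encoding, not the engine's internal one): an argument is either an atomic fact or a rule application with subarguments, each carrying a DOB.

```python
# Sketch of the two valuation strategies over tree-structured arguments
# (toy representation of ours, not the engine's): an argument is either
# ("fact", claim, dob) or ("rule", claim, dob, [subarguments]).

def weakest_link(arg):
    """Support = minimum DOB over every fact and rule in the tree."""
    if arg[0] == "fact":
        return arg[2]
    _, _, dob, subs = arg
    return min([dob] + [weakest_link(s) for s in subs])

def last_link(arg):
    """Support = DOB of the topmost defeasible rule (DOB < 1.0); if the top
    rule is strict, descend and take the minimum over the subarguments."""
    if arg[0] == "fact":
        return arg[2]
    _, _, dob, subs = arg
    if dob < 1.0:
        return dob
    return min(last_link(s) for s in subs)

# Defeasible top rule (0.9) applied to a weaker defeasible subargument (0.5).
arg = ("rule", "give_treatment", 0.9,
       [("rule", "has_disease", 0.5, [("fact", "symptom_present", 1.0)])])

weakest = weakest_link(arg)   # 0.5: the weakest element anywhere in the tree
last = last_link(arg)         # 0.9: the DOB of the highest defeasible rule
```

The example shows why the choice matters: the same argument is valued at 0.5 under weakest-link but 0.9 under last-link, so the two strategies can resolve the same mutual attack differently.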

To define an argument’s acceptability, we must define attack and defeat relations between arguments. ASPIC defines three types of attack relations:

• A rebut is a premise attack.
• An undercut is an attack on a rule application. Strict arguments cannot be rebutted.
• Restrictive rebutting ensures that an argument whose top rule is defeasible cannot rebut an argument whose top rule is strict. Restricted rebutting is required to ensure that the quality postulates are satisfied.

Every rule in the knowledge base is automatically associated with a name, which is a fact acting as a hidden premise for that rule. Writing a fact or rule whose head contradicts that name undercuts the rule. If argument A undercuts argument B, A claims that some rule in B is not applicable.
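A minimal sketch of undercutting via rule names, under our own toy encoding: each rule's name acts as a hidden premise, and deriving the name's negation disables the rule. For simplicity the sketch resolves undercuts in a single pass, so chains of undercuts are out of scope.

```python
# Toy illustration of undercutting via rule names (our encoding, not the
# engine's): each rule name acts as a hidden premise, and deriving "-name"
# disables the rule.

def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def derive(facts, rules, disabled=frozenset()):
    """Forward-chain to a fixpoint, skipping disabled rules."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for name, body, head in rules:
            if name not in disabled and head not in known \
                    and all(lit in known for lit in body):
                known.add(head)
                changed = True
    return known

def derive_with_undercuts(facts, rules):
    """Derive tentatively, find rules whose names were contradicted, then
    re-derive with those rules disabled (a single round of undercutting)."""
    tentative = derive(facts, rules)
    disabled = {name for name, _, _ in rules if neg(name) in tentative}
    return derive(facts, rules, disabled)

rules = [
    ("r1", ["penicillin_indicated"], "give_penicillin"),
    # r2's head contradicts r1's name: it undercuts r1 rather than
    # rebutting r1's conclusion.
    ("r2", ["penicillin_allergy"], "-r1"),
]
with_allergy = derive_with_undercuts(
    {"penicillin_indicated", "penicillin_allergy"}, rules)
without_allergy = derive_with_undercuts({"penicillin_indicated"}, rules)
```

Note the distinction the article draws: r2 does not argue that the patient should not receive penicillin; it argues that r1 does not apply, which is exactly an undercut.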

Figure 7 is a screen capture of the argumentation engine when run in stand-alone mode. It shows a proof diagram for an inference from which users can see how the knowledge is used to build two arguments, the support for those arguments, and the attack and defeat relations between them. The argumentation engine and sample knowledge bases are available for download on the ASPIC project Web site.

ASPIC and decision making

One of the objectives of the ASPIC project was to support an extended range of decision criteria in conjunction with its decision-making model.

In ASPIC, one can specify decision candidates using elements of the underlying logical language. For example, one can declare symbols denoting beliefs, plans, or actions to be candidates. Some decision theories limit their scope to deciding between alternative actions, but in medical or engineering decision making, for example, options might be alternative beliefs (for example, diseases or fault diagnoses) or plans (such as treatments or repair plans). Like the LA-based decision model, the ASPIC model does not place any such restrictions on decision options.

The inference component constructs acceptable arguments for and against decision options, and the decision-making component processes these arguments. One of several possible decision-making criteria is applied to the arguments to determine the preferred decision option. These criteria might or might not use preference orderings on arguments (the LA decision model does not). For example, if a1 and a2 are actions, a goal-based decision criterion ranks an argument A1 for action a1 above an argument A2 for action a2 if A1’s claim is a positive goal g1, A2’s claim is a positive goal g2, and g1 is prioritized above g2 in a partial ordering on goals. (A positive or negative goal is a belief or set of beliefs describing a state of the world that is desirable or undesirable, respectively.)
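The goal-based criterion just described might be sketched as follows; the goals, priorities, and actions are invented for illustration.

```python
# Sketch of the goal-based decision criterion (toy data of ours): each
# argument claims its action achieves a positive goal, and candidates are
# ranked by the priority of the best goal they achieve (0 = highest).

goal_priority = {"save_life": 0, "relieve_pain": 1, "reduce_cost": 2}

# Arguments as (action, positive goal claimed to be achieved by the action).
arguments = [
    ("surgery", "save_life"),
    ("medication", "relieve_pain"),
    ("medication", "reduce_cost"),
]

def goal_based_choice(arguments, goal_priority):
    """Prefer the action supported by the argument whose goal ranks
    highest in the ordering on goals."""
    best = {}
    for action, goal in arguments:
        rank = goal_priority[goal]
        if rank < best.get(action, len(goal_priority)):
            best[action] = rank
    return min(best, key=best.get)

choice = goal_based_choice(arguments, goal_priority)   # "surgery"
```

Unlike the aggregation criterion, this ranking never counts arguments: a single argument tied to a top-priority goal outweighs any number of arguments tied to lower-priority ones.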

The forms of argument that are appropriate for making decisions can be specific to a particular domain or goal. If an agent’s goal is to decide what to believe about a given situation, or which plan to follow, it will require appropriate specifications (schemas) declaring what it means for an argument to be for or against particular kinds of beliefs or plans. If such schemas are available, the inference engine can be invoked using domain schemas that assemble arguments relevant to the type of option under consideration. For diagnosis decisions, the inference component might use relevant domain schemas to guide the construction of causal or statistical arguments for or against candidate diagnoses. For decisions about plans, the same inference component might use schemas that guide the construction of arguments about efficacy,


Figure 7. A screen capture of the argumentation engine GUI showing a proof network for the query “outcome(patient, increase_risk_of_stroke).” The diagram shows one argument supporting the query’s claim (red box) being undercut by another argument (green box). The blue and red arrows in the graph indicate undercut and defeat relations, respectively, between argument trees.


safety, cost, resource availability, and so on. The construction of arguments for and against decision options must be accompanied by a method for combining the arguments and arriving at a commitment to one option. Commitments can be arrived at by some generic decision rule or criterion (such as the goal-based decision criterion or the aggregation criterion used in LA) or by domain-specific criteria.

Support models and tools

Figure 8 shows the inference and decision-making components used in conjunction with an online Argumentation Schema Editing and Visualization tool (ASEV, http://editor.argumentation.org/editor). This Web application lets users build and share knowledge bases, queries, and decisions, so they can explore the ASPIC argumentation model. These knowledge bases use the default knowledge representation, which is extended so that facts and decisions can have captions and descriptions. Captions let users provide a canned text representation of a fact or rule, and the description lets them comment on that fact or rule.

To evaluate a decision, users need a knowledge base, a set of candidates, and (optionally) a set of ordered goals. Once they have defined the decisions, the decision-making component computes an outcome and lets the user examine the recommended decision candidates’ status.

The engine generates proof graphs based on AIF. Proofs can also be generated as an XML document using an AIFXML interface, which we will build into ASEV in the near future.

Our experience with argumentation has followed a path focused on evidential reasoning and decision making. The work has been strongly anchored in clinical and scientific domains but has sought to develop an abstract model of argumentation for reasoning and decision making under uncertainty that can be used in other domains and settings.

The ASPIC argumentation engine and decision-making component are key to developing a range of advanced AI systems, such as workflow management, planning, and design tools. Future work will extend and refine ASPIC results and exploit them in these and other types of application domains. We anticipate that argumentation will confer the same benefits of naturalness, theoretical soundness, practical effectiveness, and robustness to these tasks as it has for decision making.

References

1. J.M. Keynes, The Collected Writings of John Maynard Keynes, Vol. 4, The General Theory and After: Part II, Macmillan, 1973.

2. J. Fox, P.J. Krause, and M. Elvang-Goransson, "Argumentation as a General Framework for Uncertain Reasoning," Proc. Int'l Conf. Uncertainty in Artificial Intelligence, Morgan Kaufmann, 1993, pp. 428–434.

3. P.J. Krause, S. Ambler, and J. Fox, "A Logic of Argumentation for Uncertain Reasoning," Computational Intelligence, vol. 11, no. 1, 1995, pp. 113–131.

4. S. Das et al., "A Flexible Architecture for a General Intelligent Agent," J. Experimental and Theoretical Artificial Intelligence, vol. 9, no. 4, 1997, pp. 407–440.

5. V. Patkar et al., "Evidence-Based Guidelines and Decision Support Services: A Discussion and Evaluation in Triple Assessment of Suspected Breast Cancer," British J. Cancer, vol. 95, no. 11, 2006, pp. 1490–1496.

6. P. Krause, J. Fox, and P. Judson, "Is There a Role for Qualitative Risk Assessment?" Proc. 11th Conf. Uncertainty in Artificial Intelligence, Morgan Kaufmann, 1995, pp. 386–439.

7. A.S. Coulson et al., "RAGs: A Novel Approach to Computerized Genetic Risk Assessment and Decision Support from Pedigrees," Methods of Information in Medicine, vol. 40, no. 4, 2001, pp. 315–322.

8. D.W. Glasspool et al., "Interactive Decision Support for Medical Planning," Proc. 9th Conf. Artificial Intelligence in Medicine in Europe (AIME 03), M. Dojat, E. Keravnou, and P. Barahona, eds., Springer, 2003, pp. 335–339.

9. P.M. Dung, "On the Acceptability of Arguments and Its Fundamental Role in Nonmonotonic Reasoning, Logic Programming and n-Person Games," Artificial Intelligence, vol. 77, no. 2, 1995, pp. 321–357.

10. C. Chesnevar et al., "Towards an Argument Interchange Format," Knowledge Eng. Rev., vol. 21, no. 4, 2007, pp. 293–316.

11. M. Caminada and L. Amgoud, "On the Evaluation of Argumentation Formalisms," Artificial Intelligence, vol. 171, nos. 5–6, 2007, pp. 286–310.

For more information on this or any other computing topic, please visit our Digital Library at www.computer.org/csdl.


Figure 8. A screen capture of the argument schema editing and visualization Web application. ASEV lets users run ad hoc queries against a knowledge base or use saved queries. They can examine the proof of these queries in graph format.


NOVEMBER/DECEMBER 2007 www.computer.org/intelligent 41

The Authors

John Fox is a professor of engineering science at the University of Oxford. His research interests include theories of reasoning and decision making as well as the development of technologies for designing and implementing cognitive systems. He has a PhD in cognitive psychology from Cambridge University. Contact him at the Dept. of Engineering Science, Parks Rd., Univ. of Oxford, Oxford OX1 3PJ, UK; [email protected].

David Glasspool is a senior research fellow in the School of Informatics at the University of Edinburgh. His research interests include executive control, routine behavior, and planning in the face of uncertainty, in both computational systems and human cognitive psychology. He received his PhD in computational modeling from the psychology department of University College London. Contact him at the Univ. of Edinburgh School of Informatics, Appleton Tower, Crichton St., Edinburgh EH8 9LE, UK; [email protected].

Dan Grecu is the engineering coordinator of the European Union Sixth Framework project, Argumentation Service Platform with Integrated Components. His research interests include decision-support systems, human-computer interaction, geospatial information systems, and machine learning. He received his PhD in artificial intelligence in design from the Worcester Polytechnic Institute in Worcester, Massachusetts. He is a member of the AAAI, the ACM, the IEEE, and Sigma Xi. Contact him at the Advanced Computation Lab., Cancer Research UK, 25 Bedford Square, London WC1B 3HW, UK; [email protected].

Sanjay Modgil is a senior research fellow on the European Union Sixth Framework project, Argumentation Service Platform with Integrated Components. His research interests include formal models of argumentation and argumentation theory applied in a medical domain. He received his PhD in computational logic from Imperial College London. Contact him at the Advanced Computation Lab., Cancer Research UK, 25 Bedford Square, London WC1B 3HW, UK; [email protected].

Matthew South is an R&D engineer on the European Union Sixth Framework project, Argumentation Service Platform with Integrated Components. His research interests include medical informatics, workflow systems, and argumentation. He received his PhD in applied mathematics from the University of the West of England in the UK. Contact him at the Advanced Computation Lab., Cancer Research UK, 25 Bedford Square, London WC1B 3HW, UK; [email protected].

Vivek Patkar is a clinical research fellow at the Department of Academic Oncology at the Royal Free Hospital in London. His research interests include evidence-based medical decision making and clinical decision support systems. He received his master's degree in surgery from Bombay University. He's a member of the Royal College of Surgeons, Edinburgh. Contact him at Academic Oncology Dept., Royal Free Hospital, London NW3 2QG, UK; [email protected].
