
King’s Research Portal

DOI: 10.20532/cit.2019.1004411

Document Version: Peer reviewed version

Link to publication record in King's Research Portal

Citation for published version (APA): Cristani, M., Olivieri, F., Tomazzoli, C., Viganò, L., & Zorzi, M. (Accepted/In press). Diagnostics as a Reasoning Process: From Logic Structure to Software Design. CIT. Journal of Computing and Information Technology, 27(Special Issue), 43-57. https://doi.org/10.20532/cit.2019.1004411

Citing this paper: Please note that where the full-text provided on King's Research Portal is the Author Accepted Manuscript or Post-Print version this may differ from the final Published version. If citing, it is advised that you check and use the publisher's definitive version for pagination, volume/issue, and date of publication details. And where the final published version is provided on the Research Portal, if citing you are again advised to check the publisher's website for any subsequent corrections.

General rights: Copyright and moral rights for the publications made accessible in the Research Portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognize and abide by the legal requirements associated with these rights.

• Users may download and print one copy of any publication from the Research Portal for the purpose of private study or research.
• You may not further distribute the material or use it for any profit-making activity or commercial gain.
• You may freely distribute the URL identifying the publication in the Research Portal.

Take down policy: If you believe that this document breaches copyright please contact [email protected] providing details, and we will remove access to the work immediately and investigate your claim.

Download date: 24. Aug. 2022


Diagnostics as a Hybrid Reasoning Process: From Logic Model to Software Design

Matteo Cristani1, Francesco Olivieri2, Claudio Tomazzoli1, Luca Viganò3, Margherita Zorzi1

1 Department of Computer Science, University of Verona, Italy
2 Data61, CSIRO, Australia
3 Department of Informatics, King's College London, UK

Abstract. Diagnostic tests are used to determine anomalies in complex systems such as organisms or built structures. Once a set of tests is performed, the experts interpret their results and make decisions based on them. This process is named diagnostic reasoning: a process in which a decision is established using rules and general knowledge about the tests and the domain. The artificial intelligence community has focused on devising and automating different methods of diagnosis for medicine and engineering but, to the best of our knowledge, the decision process in logical terms has not yet been investigated thoroughly. The automation of the diagnostic process would be helpful in a number of contexts, in particular when the number of test sets needed to make a decision is too large to be dealt with manually. To tackle such challenges, we shall study logical frameworks for diagnostic reasoning, automation methods and their computational properties, and technologies implementing these methods. In this paper, we present the formalization of a hybrid reasoning framework TL that hosts tests and deduction rules on tests, and an algorithm that transforms a TL theory into defeasible logic, for which an implemented automated deduction technology (called SPINdle) exists. We evaluate the methodology by means of a real-world example related to the Open Web Application Security Project requisites. The full diagnostic process is driven from the definition of the issue to the decision.

Keywords: Tests, Experiments, Hybrid Reasoning, Labelled Logic, Temporal Logic, Defeasible Logic, SPINdle Engine

1 Introduction

Diagnostic reasoning is the process of evaluating the results of operations

(questions or practical actions) in order to establish which specific conditions

hold on an individual or, generically, a sample. Such operations are

usually called tests. A number of scientific fields exploit test-based knowledge

acquisition: computer science, engineering, earth sciences, biology, medicine

and many others.

A test commonly reveals a property, usually in search of anomalies, with a

margin of error, and provides information about causes of the anomaly. As a


consequence, by establishing “cause and effect” relationships, tests provide

information about possible solutions. Consider medical diagnosis: specific

symptoms suggest which tests are to be done, and the results of such tests

help the specialists in identifying which disease is present and, consequently, which therapies are needed. The steps of this process are test-driven and knowledge-driven decisions and, therefore, hybrid reasoning

processes.

The result of a test is not exact. Tests are prone to errors, for they reveal a

property without proving it in a logical way. This is in contrast to what happens

in deductive systems, where a reasoning process starts from premises

considered true and, through derivation rules, infers consequences. In other

words, tests reveal truth on tested conditions in a provisional way.

The diagnostic reasoning processes that we mentioned above are

commonly executed on a huge number of data for several specific diagnostic

processes, for instance:

1. In information security, when vast system logs and security data are

issued and tested.

2. In geology, where the process of testing data for decisions related to anti-earthquake protections is named with the portmanteau word geognostic investigations.

3. In medicine, especially in epidemic control population tests, where the number of tests executed and controlled is large.

In the situations listed above, the decisions to be made are complex and

the diagnostic processes per se can be time-consuming. It would therefore be worthwhile to assist the specialist (information engineer, earth scientist, medical

scientist, etc.) in the process with a computer-assisted diagnostic reasoning

system.

In particular, when a diagnostic test is performed, there are numerous

configurations of the results that may be complex to treat simultaneously, for a

medical doctor or another specialist, and basic automated learning

techniques, such as data mining ones, cannot be used for this purpose

satisfactorily. Being able to employ reasoning techniques for computer-

assisted diagnosis has been one of the main goals of the research efforts on

AI in medicine (see [3, 6, 18, 5] for references and specific approaches). In

these cases, it would be useful to assist the medical doctor or another

specialist in decision making by providing an automated tool able to decide

about the logical consistency and the logical consequences of a set of test

results, on top of general issues of those tests themselves, including statistical

behaviors, temporal relationships and revealing capabilities.

To tackle these challenges, we started a research program that comprises

the development of a mechanized reasoning technology, able to decide the


validity of test sets, the assessment of the technology on real-world cases and

the comparison with human behaviors in these cases. In this paper, we take the first steps in the research cycle, providing both the definition of a logical

framework for the diagnostic reasoning, the TL logic, and the development of

an architecture that applies the reasoning process of TL to a real-world case

study in information security.

We use the expressiveness of Labelled Modal Logic [14,38,42], with

temporal and statistical information added to a basic propositional language.

Experiments are modeled in terms of tests viewed as Bayesian classifiers,

which reveal one or more properties of a sample.

We define the syntax of formulae and relational rules between labels in TL

and sketch ideas about a full deduction system à la Prawitz, by presenting

the deduction rules; however, we do not provide soundness and

completeness results as these are beyond the scope of this paper. We

propose examples of how TL works and discuss technical issues in the

construction of the mentioned experimental technology; we also show how to

build an architecture to host the developed mechanization of reasoning.

The remainder of the paper is organized as follows. Section 2 discusses

some background of this research. Section 3 reviews relevant related

literature. Section 4 introduces the logic TL, defining the basic alphabets, the

syntax of formulae, test labels (procedures applied during an experiment), and

labelled formulae. In particular, Section 4.2 formalizes central notions of the

diagnostic-based reasoning, provides a specific analysis of the non-monotonic

aspects of the logic itself and focuses also on the relations between tests. In

Section 5, we investigate the structure of an architecture for diagnostic

reasoning that we further discuss in a real world setting in Section 6. We

conclude with Section 7 by summarizing research, discussing some open

problems, and sketching future lines of research.

2 Background

In this section we briefly recall some basic notions from statistical information

retrieval and learning [9], and we discuss concrete examples of diagnostic

procedure.

From a mathematical perspective, a test is naturally interpreted as a

statistical classifier, i.e., a function f that, fed with an input a, is able to predict

a probability distribution over a set of classes. Oversimplifying, f assigns to a a

label y that represents the answer (the classification of a). This classification

is not exact and therefore the answer given by the classifier can be wrong.

For example, if f encodes the problem “Does x enjoy property P?”, the answer

“Yes” to this question classifies a as an element of the set of objects that enjoy


P, and this can be described by an assertion such as f : P(x). There is an implicit epistemic meaning of this assertion, corresponding

to the ability of f to assert P(x) as happens, for instance, in announcement

logics or in Agent Communication Languages, where agents make assertions,

or in the pure epistemic interpretation of the classical modal logic K, where

agents know (or believe) assertions. Also in those systems, truth of sentences

may not be guaranteed by the assertion, belief or knowledge of the

sentences. Someone may assert, believe or know something, but this

something might actually be false.

A large taxonomy of probabilistic classifiers has been developed. In this

paper, we focus on the simplest type of classifiers, called (Naive) Bayes (or

Bayesian) classifiers, which exploit some strong statistical assumptions [13].

Bayesian classifiers work well in many complex real-world situations and thus

represent the execution of tests in an acceptable way.

Classifiers are prone to error. In this context, errors are described either as

false positive results, or false negative results¹. In the remainder of this paper,

we omit the word result(s) whenever it is clear from context; we also speak of

true positive and true negative for those answers that coincide with the

answers given by a logical formula.
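For concreteness, a test of this kind can be sketched in code as a noisy binary classifier with given false positive and false negative rates; the following Python fragment is only an illustration (the class, its fields and the numeric rates are ours, not part of the framework).

    import random

    class Test:
        """A diagnostic test seen as a noisy binary classifier (illustrative only)."""
        def __init__(self, name, fp_rate, fn_rate):
            self.name = name          # e.g. "Elisa"
            self.fp_rate = fp_rate    # P(positive answer | property absent)
            self.fn_rate = fn_rate    # P(negative answer | property present)

        def run(self, property_holds):
            """Return the (possibly wrong) answer of the test on a sample."""
            if property_holds:
                return random.random() >= self.fn_rate   # false negative with prob fn_rate
            return random.random() < self.fp_rate        # false positive with prob fp_rate

    # Hypothetical rates: a test with no false negatives but many false positives.
    elisa = Test("Elisa", fp_rate=0.20, fn_rate=0.0)
    print(elisa.run(property_holds=False))   # True here would be a false positive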

Scientific research in this area aims to reduce errors in Bayesian

classifiers, obtaining better methods to derive knowledge from experiments.

The following example both provides a concrete instance of diagnostic

reasoning and permits us to introduce the notions of error, and their

taxonomy, based on the relation between properties and the revelation of

them.

Example 1. Western-Blot is a technique used in biology to confirm the

existence of antibodies against a particular pathogenic factor. This is

determined by the application of the test in a manner that can be considered

without false negatives. Western-Blot, however, has a number of false

positives. Similarly, the Elisa test (or, simply, Elisa) also lacks false negatives, but it exhibits a larger number of false positives than Western-Blot

when applied to the same pathogenic factor.

¹ False positives and false negatives are concepts analogous to type I and type II errors in statistical hypothesis testing, where a positive result corresponds to rejecting the null hypothesis and a negative result corresponds to not rejecting the null hypothesis. Roughly speaking, a false positive, commonly called a "false alarm", is a result that indicates that a given condition exists while in fact it does not, whereas a false negative is a test result that indicates that a condition does not hold while in fact it does. In principle, tests can be considered without false negatives when the number of false negative results is irrelevant to the decision process as happens, for instance, for those tests that present 1 case of false negative in 1 million. For the purpose of our logical framework, we can assume that this means that there are no false negatives.


Usually, the sequence of tests depends upon their cost more than their

reliability. For instance, Elisa is a cheaper procedure than Western-Blot, and

thus Elisa is typically applied before Western-Blot.

To illustrate this, assume that Elisa answers positively on a given sample. We

cannot conclude with certainty that the pathogenic factor is present in the

tested organism, due to the high number of false positives exhibited by Elisa.

Thus, we apply the Western-Blot test to confirm the validity of Elisa’s result.

Suppose that we now obtain a negative answer. Since it is assumed that Western-Blot is

without false negatives, we can conclude that the pathogenic factor is not

present in the organism, against the evidence provided by Elisa.

Example 1 shows a way of deriving truth from tests that is common in

those systems. It is straightforward to see that tests with no false negatives

that give a negative answer, as well as tests with no false positives that give a

positive answer, are always truthful.
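The truthfulness principle just stated can be phrased as a small decision rule; the sketch below (Python, ours; the function and parameter names are hypothetical) accepts an answer as established truth only in the two cases above.

    def certain_conclusion(has_false_negatives, has_false_positives, answer):
        """Return the established truth value of the property, or None if the
        test result alone is not conclusive (cf. Example 1)."""
        if answer is False and not has_false_negatives:
            return False   # a no-false-negative test answering "no" is truthful
        if answer is True and not has_false_positives:
            return True    # a no-false-positive test answering "yes" is truthful
        return None        # otherwise further tests are needed

    # Western-Blot assumed without false negatives: a negative answer settles the matter.
    print(certain_conclusion(has_false_negatives=False, has_false_positives=True, answer=False))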

3 Related Work

The notion of assisted diagnosis and the usage of intelligent systems in

medicine for diagnostic purposes have been a mainstream research topic in

artificial intelligence in medicine.

Since the pioneering works of Reiter [6] and Davis [3] these studies have

been focusing on two methods: case-based reasoning (see [5] for several

recent references to this approach) and statistical methods applied to

reasoning (inspired by the original work of Johnson et al. [4]; see [1, 2] for

recent investigations).

The nature of errors in tests for diagnostic reasoning has been studied to

support the idea that a test has some intrinsic probability of revealing the

property it has been devised for. Therefore, the majority of these

investigations have focused upon the ideas that a test can be erroneous in

making decisions and that, based on a potentially erroneous decision, we

have a tree of possible decisions that have a degree of validity, depending

strictly on the validity of the starting decision. There is a long stream of

investigations based upon fuzzy and probabilistic reasoning methods of

computer science applied to medicine, started by Píš et al. in [25] and

followed by many other investigations, notably [19] in rheumatology. There

have also been some comparative studies, such as [18] and [24]. These

methodological studies have given rise to a series of architectural proposals

making use of probabilistic methods (see, e.g., [20] and [21]).


4 Focusing on Experimental Knowledge: The Logic TL

We introduce the logic TL, which is devised to perform approximate reasoning on

tests. Informally, a (well-formed) formula of TL represents a property of a

sample (or individual) that can be revealed, with a margin of error, by a

suitable experiment, built out from a sequence of tests. Information about

tests is represented by labels, which are metalinguistic logical objects that

“adorn” the pure syntactical level of formulae.

4.1 Syntax of TL

The alphabet of TL is built out of the variable symbol x, a denumerable set

of symbols for constants each denoted by lowercase Latin letters (possibly

indexed), and a denumerable set of unary predicates, denoted by capital Latin

letters P, Q ..., possibly indexed.

Predicates represent properties. When applied to an individual constant, a

predicate returns an element of a given domain. Properties are revealed by

tests, which are not included in the syntax of formulae; rather, we introduce

tests in the syntax of labels that we give below.

A ground atomic formula (ground formula, hereafter) is an atomic formula

of the form P(c), where c is a constant. We write gF to denote the set of

ground formulae.

Formulae in TL are built from the set of atomic formulae by means of the

usual logical connectives: ⊥, ¬, ∧, →. Formally, the set aF of well-formed

assertion formulae is the smallest set such that:

(i) gF ⊆ aF;
(ii) ⊥ ∈ aF;
(iii) if A ∈ aF then ¬A ∈ aF;
(iv) if A, B ∈ aF then (A ∧ B) ∈ aF; and
(v) if A, B ∈ aF then (A → B) ∈ aF.

We denote well-formed assertion formulae by A, B, C ..., possibly indexed,

and call them formulae or assertions for short.

Basic literals are formed by letters or negations of letters, applied to

constants, i.e., P(c) and ¬P(c). For example, if Fever is a predicate and John

is a constant, then Fever(John) is a literal.

Following the tradition of labelled deduction systems [14, 10, 11], we

extend the syntax above by introducing a class of labels that represent

experiments, i.e., instants of time in which tests of properties are performed

on a sample, under some environmental conditions. Labels are built from a

set R of symbols for tests, denoted by τ and possibly indexed. Tests in

label symbols carry information about the execution time (the instant in which

the test is performed) and the experimental condition (condition, for short),


which is the history of actions performed during the experiment and (possibly)

additional information provided/known during the diagnostic process. This

reflects the fact that a particular test can be conditioned by a specific situation

(like the environment, a medical condition, etc.). For instance, when a

geologist conducts a forecast of the position of underground water, among

other examinations there is an extraction of a vertical cylinder of ground: if the

terrain is very humid, then the stratification of the underground can be

different than usual, leading to a change in the forecast itself.

To formalize these ideas, we introduce the set T of symbols for time

instants t, possibly indexed, and the set A of experimental conditions denoted

by φ, possibly indexed. In this paper, which provides a first investigation, we

define A simply as the set that contains finite compositional sequences of

tests τ1... τk, where we assume that τi+1 ∈ R has been applied after τi ∈ R on

the same sample. Clearly, we can have φ =∅.

We fix a denumerable set LabT of labels of the form τ(t;φ), where τ is a

test able to reveal one or more properties, t represents a time instant (of a

given timeline) and φ is the experimental condition. Labels are denoted by l

and r, possibly indexed. A test label is a construct that is more expressive than

a test symbol: a test label represents a test put into a context, i.e., equipped

with additional information such as its time (when it is applied) and the history

of the experiment, i.e., the trace of previously applied tests (in the same

experiment).

In this paper, we focus on diagnostic reasoning about ground formulae and

leave the extension to propositional or first-order logic to future work (see

Section 7).

We define labelled formulae as follows. A labeled (well-formed) formula is

a formula of the form τ(t;φ): A, where A ∈ gF. Intuitively, τ(t;φ): P(c) denotes the

assertion “τ reveals P at time t on the sample c, under conditions φ”. For

instance, we can write Elisa(Monday;Fever) : Ebola(John) to express that we

execute the Elisa test on a sample on Monday, with the patient John having a

Fever, to reveal the existence of an infection of Ebola.

Ground facts are ground formulae without labels. We need to introduce an epistemic negation to denote the fact that a formula is not revealed by a test, which is conceptually different from stating that a test reveals the negation of a formula. We thus introduce the negation ∼, which ranges over labelled formulae, in contrast to the logical connective ¬ that we already introduced above. Note that neither ∼τ(t;φ): A implies τ(t;φ): ¬A, nor τ(t;φ): ¬A implies ∼τ(t;φ): A.
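As a data-structure reading of the syntax just introduced, a labelled formula τ(t;φ): P(c) can be represented along the following lines; this is only an illustrative Python sketch, and the field names are ours rather than part of TL.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class Ground:
        predicate: str          # e.g. "Ebola"
        constant: str           # e.g. "John"
        negated: bool = False   # the logical connective ¬

    @dataclass(frozen=True)
    class TestLabel:
        test: str                   # e.g. "Elisa"
        time: int                   # discrete time instant t
        history: Tuple[str, ...]    # experimental condition phi (previously applied tests / extra info)

    @dataclass(frozen=True)
    class LabelledFormula:
        label: TestLabel
        formula: Ground
        revealed: bool = True       # False encodes the epistemic negation ~

    # Elisa(Monday; Fever) : Ebola(John), with Monday encoded as day 0.
    lf = LabelledFormula(TestLabel("Elisa", 0, ("Fever",)), Ground("Ebola", "John"))
    print(lf)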

4.2 Orders and Relations for Tests and Observable Properties

We now discuss the mechanization of experimental reasoning and how to

provide a logical foundation of test-based knowledge. In this paper, we mainly

focus on test labels and on the reasoning processes performed during a


procedure that aims at extracting experimental knowledge from some

resources (typically, a sample).

We can define partial orders between test labels, related both to temporal information and to statistical measures for test performances. We start by defining some temporal orders between labels. We write t1 < t2 to denote the usual temporal order between time instants, and φ1 ▹ φ2 to denote the order between conditions: φ1 ▹ φ2 indicates that φ1 is a prefix of φ2.

Following the tradition of labelled deduction systems [14, 10, 11], we

define relational formulae by lifting the orders to labels.

Definition 1 (Temporal Relational Formulae).

– τ1(t1;φ1) ≪ τ2(t2;φ2) iff t1 < t2 and φ1 ▹ φ2.

– τ1(t1;φ1) → τ2(t2;φ2) iff t1 < t2, φ2 = φ1 · τ1 and there is no t such that t1 < t < t2, where φ1 · τ1 denotes the condition obtained by performing τ1 after the events described in φ1.

Note that we are modeling the notion of temporal composition of tests. In particular, ≪ represents a general temporal application sequence, whereas τ1 → τ2 represents the execution of the test τ2 immediately after the execution of the test τ1. Note also that the above formula requires the introduction of a logic with branching future time (see Section 7).
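A minimal sketch of the two temporal relations of Definition 1, under the simplifying assumption that a label carries its time instant and its condition as a sequence of test names (Python, ours; it follows the definition literally):

    from typing import NamedTuple, Tuple

    class Label(NamedTuple):
        test: str
        time: int
        history: Tuple[str, ...]   # the condition phi, as the sequence of previously applied tests

    def much_less(l1: Label, l2: Label) -> bool:
        # l1 << l2: l1 is earlier and l1's condition is a prefix of l2's condition
        return l1.time < l2.time and l2.history[:len(l1.history)] == l1.history

    def immediately_after(l1: Label, l2: Label, all_times) -> bool:
        # l1 -> l2: phi2 = phi1 . tau1 and no test is executed strictly between t1 and t2
        return (l1.time < l2.time
                and l2.history == l1.history + (l1.test,)
                and not any(l1.time < t < l2.time for t in all_times))

    eli = Label("Elisa", 0, ())
    wb = Label("WesternBlot", 1, ("Elisa",))
    print(much_less(eli, wb), immediately_after(eli, wb, all_times={0, 1}))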

With a slight abuse of notation, we write τ1 → τ2 to denote the test obtained by composing τ1 and τ2; we treat τ1 → τ2 as a symbol in R, and we then use it as a label. We now introduce three orders based on test metrics for elements in LabT.

Definition 2 (Metric-based Relational Formulae).

We write

– (τ1(t1;φ1) >a τ2(t2;φ2)) [A] if τ1 at time t1 and under condition φ1 is more accurate in revealing A than τ2 at time t2 under condition φ2.

– (τ1(t1;φ1) >p τ2(t2;φ2)) [A] if τ1 at time t1 and under condition φ1 is more precise in revealing A than τ2 at time t2 under condition φ2.

– (τ1(t1;φ1) >r τ2(t2;φ2)) [A] if τ1 at time t1 and under condition φ1 has greater recall in revealing A than τ2 at time t2 under condition φ2.

The base of empirical reasoning about tests is the deduction of truth on tests that are correct (with no false positives) or complete (with no false negatives). We introduce the modal operator □+ to denote the fact that a test has no false positives, and the modal operator □− to denote that it has no false negatives. The modal operators □+ and □− relate to accuracy, precision and recall. We use these terms with the usual meaning they have in machine learning and specifically in the theory of Bayesian classifiers. Accuracy is the probabilistic complement of the error rate of a test, precision is the probabilistic complement of the negative error rate (namely the probability of the test giving a correct positive answer), and recall is the probabilistic complement of the positive error rate (thus the probability of a test giving a correct negative answer). If a test is both correct and complete, then so is the property it reveals.
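Read against the standard binary contingency table of true/false positives and negatives (TP, FP, TN, FN) that the machine-learning reading above appeals to, these quantities are usually computed as follows (this is the standard textbook definition, recalled here for convenience; it is not taken from the paper's own formulas):

    accuracy  = (TP + TN) / (TP + TN + FP + FN)
    precision = TP / (TP + FP)
    recall    = TP / (TP + FN)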

This can be expressed by means of logical rules. For example, when two tests are differently accurate, and both lack false positives, then they are also ordered in the same way by precision. Analogously, when they lack false negatives, they are also ordered in the same way with respect to recall. We formalize these concepts as follows, where MAR stands for Map Accuracy to Recall, and MAP stands for Map Accuracy to Precision:

Interference between (τ1(t;φ) >a τ2(t;φ)) [A] assertions and (τ1(t;φ) >p τ2(t;φ)) [A] or (τ1(t;φ) >r τ2(t;φ)) [A] assertions is managed by means of rules like the following one:

    (τ1(t;φ) >p τ2(t;φ)) [A]    (τ1(t;φ) >r τ2(t;φ)) [A]
    ----------------------------------------------------  P−R
                 (τ1(t;φ) >a τ2(t;φ)) [A]

This rule can be reproduced, analogously, for the accuracy as related to recall and to precision.

The modal interplay between different metrics is an interesting problem from both the proof-theoretical viewpoint and the practical one (related to the software design). We leave the implementation of the interplay between different metrics to future work.

In Example 2 we introduce the key idea that we will exploit in the following: the result of a test is measured by the accuracy hypothesis we assume for the test. For instance, when a test is valued 0.8 accurate, we mean that we believe the test result to be true in 80% of the cases, whilst we think that the test gives a wrong answer in 20% of the cases.


Example 2. Assume that we execute Elisa (Eli) on sample John (J) to test for HIV. We execute the test on Monday (Mon), under the history of no previous test. The test result is positive. Now, since Elisa has no false negatives but has false positives (and is not particularly accurate), we execute Western-Blot (WB) on Tuesday (Tue) to confirm/refute Elisa's result. Western-Blot, obviously, is executed with the history of Elisa, which does not interfere with it. The test result is negative. Now, since Western-Blot has no false negatives, we conclude that the sample is HIV-free.

Western-Blot is more accurate than Elisa, so WB(Tue;Eli) >a Eli(Mon;∅).

The example shows that the better accuracy of a test τi with respect to a test τj induces a first, intuitive notion of prevalence: if revealed formulae are contradictory, we trust the more reliable experiment. This will become central in Section 4.3, when we move toward defeasible theories.
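As an illustration of this prevalence principle, the following sketch (Python, ours; the accuracy values are hypothetical) resolves two contradictory revealed formulae by keeping the result of the more accurate test:

    def resolve_by_accuracy(results):
        """results: list of (test_name, accuracy, revealed_value) for the same property.
        Returns the value revealed by the most accurate test (the prevailing one)."""
        best = max(results, key=lambda r: r[1])
        return best[0], best[2]

    # Elisa says the property holds, Western-Blot (more accurate) says it does not.
    conflicting = [("Elisa", 0.80, True), ("WesternBlot", 0.95, False)]
    print(resolve_by_accuracy(conflicting))   # ('WesternBlot', False)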

It is well known that, when using tests for revealing properties employed in empirical sciences, a given test can interfere with the result of other tests. For instance, certain therapeutic tests (such as the attempt at solving a dangerous potential bacterial infection by the prophylaxis with antibiotics) can make the results of other tests unreliable.

We say that test τ1 obfuscates test τ2 if performing τ1 on a sample before τ2 diminishes τ2's ability to reveal a given property. On the other hand, τ1 gifts a property on a test τ2 when its application extends the ability of τ2 to reveal the property itself.

This reasoning is based on the application of tests in sequence, which is the reason why we have introduced an implicit notion of time. We assume that time is discrete, and that tests are executed at a given instant of time. We introduce a notion of absolute time and associate temporal instants directly to test executions only. Partial obfuscation and partial gift can be intuitively described as follows:

– We say that a test τ1 (for a property A) a-obfuscates (↘a) the test τ2 of a property B if, when τ1 is executed before τ2, then the accuracy of τ2 : B is less than it would have been if the test τ1 on A was not executed.

– We say that a test τ1 (for a property A) a-gifts (↗a) the test τ2 of a property B if, when τ1 is executed before τ2, then, contrary to a-obfuscation, the accuracy of τ2 : B increases.


We can similarly define p-obfuscation, p-gift, r-obfuscation and r-gift, referring to obfuscation and gift for precision and recall instead of accuracy.

More formally, we can provide the following relations, exploiting metric-based relational formulae (Definition 2).

Definition 3 (Obfuscation and Gift).

– (τ1(t1;φ1) ↘a τ2(t2;φ2)) [B] iff t1 < t2 and τ2(t;φ) >a τ2(t2;φ2) for t < t1 or for φ s.t. τ1 ∉ φ.

– (τ1(t1;φ1) ↗a τ2(t2;φ2)) [B] iff t1 < t2 and τ2(t2;φ2) >a τ2(t;φ) for t < t1 or for φ s.t. τ1 ∉ φ.

Similar rules for recall and precision can be obtained by replacing the relations ↗a and ↘a with the counterparts ↗p, ↗r, ↘p and ↘r.

Total obfuscation and total gift have a specific logical interpretation. A test τ1 (revealing a property A) totally obfuscates another test τ2 (revealing a property B) if after the execution of τ1 it is no longer possible to reveal B by means of τ2:

    τ1(t1;φ1) : A(c)    (τ1(t1;φ1) ↓a τ2(t2;φ2)) [B]    t1 < t2
    ------------------------------------------------------------ totalObf
                        ∼ τ2(t2;φ2) : B(c)

Dually, a test τ1 (revealing a property A) totally gifts another test τ2 (revealing a property B) if after the execution of τ1 it is no longer necessary to reveal B, since the information that B holds for the sample is obtained as a side effect of the execution of τ1. Since B does not require to be revealed but, after the execution of τ1, it becomes ground knowledge, we can classify B as a fact.

    τ1(t1;φ1) : A(c)    (τ1(t1;φ1) ↑a τ2(t2;φ2)) [B]    t1 < t2
    ------------------------------------------------------------ totalGift
                        τ1(t1;φ1) : B(c)

From now on, for the sake of simplicity, when writing the total gift and total obfuscation symbols we will omit the "a" symbol, so that ↓a and ↑a will simply be ↓ and ↑.
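Operationally, the two rules above can be read as a post-processing step over an experiment trace; the sketch below (Python, ours, with hypothetical test names) drops conclusions that are totally obfuscated by an earlier test and records gifted properties as facts, mirroring totalObf and totalGift:

    def apply_total_interference(trace, total_obf, total_gift):
        """trace: list of (time, test, revealed_props) in execution order.
        total_obf / total_gift: sets of triples (earlier_test, later_test, property B).
        Returns (conclusions per test, gifted facts), following totalObf / totalGift."""
        conclusions = {test: set(props) for _, test, props in trace}
        facts = set()
        for t1, test1, props1 in trace:
            if not props1:            # the rules need tau1 : A(c) as a premise
                continue
            for t2, test2, _ in trace:
                if t1 < t2:
                    for (a, b, prop) in total_obf:
                        if a == test1 and b == test2:
                            conclusions[test2].discard(prop)   # ~ tau2(t2;phi2) : B(c)
                    for (a, b, prop) in total_gift:
                        if a == test1 and b == test2:
                            facts.add(prop)                    # B(c) becomes a fact
        return conclusions, facts

    # Hypothetical OWASP-style trace: a test for A9 totally obfuscates a test for A10.
    trace = [(1, "scanA9", {"A9"}), (2, "scanA10", {"A10"})]
    print(apply_total_interference(trace, total_obf={("scanA9", "scanA10", "A10")}, total_gift=set()))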

Clinical diagnostics is a useful setting to show what kind of hybrid knowledge we are modeling, but it is not the only context in which this knowledge can be found. We discuss here an example related to information security. In particular, we consider the Open Web Application Security Project (OWASP)³ Top Ten Most Critical Web Application Security Risks. OWASP's Top Ten is updated regularly and the latest edition includes A1-Injection, A2-Broken Authentication, A3-Sensitive Data Exposure, A4-XML External Entities (XXE), A5-Broken Access Control, A6-Security Misconfiguration, A7-Cross-Site Scripting (XSS), A8-Insecure Deserialization, A9-Using Components with Known Vulnerabilities, A10-Insufficient Logging and Monitoring. These risks are not independent from each other: being exposed to one risk sometimes entails being also exposed to another one in the list. Being exposed to Injection (A1) means that untrusted data is sent to an interpreter as part of a command or query and this data can trick the interpreter into executing unintended commands or accessing data without proper authorization; this can also imply being exposed to the risk of Broken Authentication (A2). In the following example, we interpret vulnerability scanning tools as tests to reveal a given risk of the OWASP Top Ten.

We write τ(Ai)(t;φ) : Ai to say that the security risk Ai is revealed by a test (a suitable scanning tool) τ(Ai). To classify test/scanning tools, i.e., to measure software performances, we adopt the standard binary classification of algorithm behavior.

Example 3 (Total obfuscation). Let C be a web application. Risk A9 (Using Components with Known Vulnerabilities) obfuscates risk A10 (Insufficient Logging and Monitoring).

Example 4 (Total gift). Let C be a web application. Risk A1 (Injection) totally gifts risk A2 (Broken Authentication).


An interference between tests τ1 and τ2 may occur. We write τ1 ⊥ τ2 if τ1 and τ2 are non-interfering, i.e., if they do not obfuscate or gift each other in either direction.

4.3 Defeasible Logic and diagnostic reasoning

One of the most characterising aspects of experiment-based reasoning is the possibility that a property revealed positively by a test is revealed negatively by another test. Generally speaking, we want to devise a method of reasoning that allows us to accommodate contradictory assertions, in a non-monotonic fashion.

In [17,37,39,40,41,43], some of us have investigated the use of Defeasible

Logic as a means for managing data coming from external sources and

validated by means of data mining methods.

Non-monotonic reasoning accommodates conclusions when dealing with potential conflicts. When derivations may lead to potentially contradictory conclusions, we typically have two strategies to avoid inconsistencies. In a credulous approach, we branch by creating two distinct sets of conclusions: one for each of the contradictory conclusions. By contrast, with a skeptical approach, we need a (preference) mechanism to establish whether one conclusion is preferred to the other one (in the literature, this is typically referred to as a superiority, or preference, relation; see [34] for a systematic analysis). If such a mechanism is not able to solve the conflict, no conclusion is derived, unless exceptions are given. Exceptions can be seen as particular conditions preventing the drawing of a specific conclusion². In this paper, we shall not consider credulous settings.

The formalism that we employ here to encode the intrinsic non-monotonic aspects of the TL logic is Defeasible Logic (DL), a skeptical non-monotonic reasoning framework that accommodates assertions, priorities and negative exceptions as introduced above.

There are three distinct sources of non-monotonicity when reasoning about tests:

1. Two different tests may give different results on the same sample.

2. One test cannot be used to conclude a diagnosis, because another test has modified the sample or created a condition that prevents the use of the sample.

3. One test can be used to conclude a diagnosis, because another test has modified the sample or created a condition that allows the use of the sample for concluding on the diagnosis without performing the test at all.

² Such exceptions are known as negative exceptions. In credulous settings, another type of exception is possible: positive exceptions, whose purpose is to force particular derivations.


We emphasize these aspects in Section 5, where we introduce a rewriting

algorithm that transforms a set of labelled TL rules into a defeasible theory that

can be processed, in turn, by a defeasible engine.

In this perspective, DL can be viewed as a meta-logic: its rules enable the expression of diagnostic reasoning in a natural way and, thanks to the rewriting algorithm, a defeasible theory is produced. Once we have produced such a defeasible theory, we can process it by means of the reasoning technology SPINdle. In Defeasible Logic [30, 32] we indeed have rules for opposite conclusions, although not all of them lead to conclusions being drawn. In the situation where rules for opposite literals are activated, the logic does not produce any inconsistency, but it does not draw any conclusion unless a preference (or superiority) relation states that one rule prevails over the other.

A defeasible theory D is defined as a structure (F, R, >), where:

– F is the set of facts, a set of atomic assertions (literals) considered to be always true (e.g., a fact is that "the stove is ON", formally stoveON);

– R is the set of rules, which in turn contains three finite sets of rules: strict rules (denoted by →), defeasible rules (denoted by ⇒), and defeaters (denoted by ~>);

– > is a binary relation over R, restricted to defeasible rules with opposite conclusions.

A defeasible rule can be defeated by contrary evidence; defeaters are special rules whose only purpose is to defeat defeasible rules by producing contrary evidence. Our framework does not use strict rules or defeaters, but only defeasible rules. The superiority relation establishes that some rules override the conclusion of another one with the opposite conclusion.

Like in [32], we consider only a version of this logic that can be reduced to a propositional theory and does not contain defeaters.

In DL, a proof P of length n is a finite sequence P(1), ..., P(n) of tagged literals of the type ±Δp and ±∂p. The idea is that, at every step of the derivation, a literal is either proven or disproven. The set of positive and negative conclusions is called extension. The meaning of the tagged literals is as follows:

– +Δq, which means that there is a definite proof for q in D; such a proof uses strict rules and facts only.
– −Δq, which means that q is definitely refuted in D.
– +∂q, which means that q is defeasibly proven in D.
– −∂q, which means that q is defeasibly refuted in D.

Formalisation of the proof tags is out of the scope of the present paper. An idea of how the derivation mechanism works is proposed in the following example.

Example. Let D = ({a, b}, R, >) be a defeasible theory such that

R = { r1: a ⇒ c,
      r2: b ⇒ d,
      r3: c ⇒ ¬d }

> = {(r2, r3)}.

Then we derive +∂c via r1 and +∂d since r2 is stronger than r3. Note that it does not matter whether the antecedents of a rule have been proven as strict conclusions: if such literals are used to allow a defeasible rule to fire (as for r2), the conclusion will be defeasible.
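To make the derivation mechanism tangible, here is a deliberately naive Python sketch of skeptical defeasible reasoning (ours, not SPINdle; it handles only facts, defeasible rules and superiority, and ignores strict rules, defeaters, team defeat and negative proof tags). It reproduces the defeasible conclusions of the example, with ~ used for negation:

    def defeasible_extension(facts, rules, superiority):
        """facts: set of literals; rules: dict name -> (antecedent literals, conclusion);
        superiority: set of pairs (stronger_rule, weaker_rule).
        Returns the defeasibly proven literals, computed naively to a fixpoint."""
        def neg(lit):
            return lit[1:] if lit.startswith("~") else "~" + lit

        proven = set(facts)
        changed = True
        while changed:
            changed = False
            for name, (body, head) in rules.items():
                if not all(b in proven for b in body) or head in proven:
                    continue
                # rules for the opposite conclusion that are also applicable ("activated")
                attackers = [n for n, (b2, h2) in rules.items()
                             if h2 == neg(head) and all(x in proven for x in b2)]
                # skeptical: conclude only if every attacker is beaten by superiority
                if all((name, att) in superiority for att in attackers):
                    proven.add(head)
                    changed = True
        return proven

    facts = {"a", "b"}
    rules = {"r1": (["a"], "c"), "r2": (["b"], "d"), "r3": (["c"], "~d")}
    print(defeasible_extension(facts, rules, superiority={("r2", "r3")}))   # a, b, c, d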

5 An architecture for diagnostic reasoning

5.1 An algorithm for transforming TL assertions into defeasible theories

In this section, we define an algorithm that translates diagnostic results into a DL theory. We employ three kinds of objects: (a) facts, (b) rules and (c) priorities, namely superiority relations that establish which rule prevails when

conflicting conclusions arise. We treat TL as formed by two layers, the first

formed by the defeasible objects and the second formed by meta-rules, establishing how to deal with the rules themselves. Essentially, we consider the facts as known truths, incontrovertible data, such as the results of tests without errors, or direct anamnestic data, for instance the age of a patient. Rules are instead the central part of the logical structure and relate the tests to the diagnosis, providing defeasible derivations.

The relations between tests, both temporal and evaluation ones, including gift and obfuscation, are treated as meta-rules, namely rules providing room for derived priorities. In the application of the translation algorithm we show that the meta-rules can be synchronized, translated into defeasible rules, and

then used to make a decision about the meaning of a TL theory in linear time.

First of all, we introduce the intended meaning of the elements of the system. The architecture of the solution is specified in detail in Section 5.2.

Meta-rules are written with two constraints: time and experimental evaluation. In particular, these rules are transformed into defeasible rules, extended with a temporal label, expressing the initial time instant t of the (open) interval in which the rule is available, to be put before the literals appearing in the rule itself, and with criteria based on measures, again mapped onto labels in the form p+ or p− above the derivation operation sign. The transformation algorithm SincroCutII is introduced below. The algorithm takes as input a set of meta-rules and a set of evaluations of an experiment and transforms them into a defeasible theory by checking the temporal constraints, interference, and modal relations among tests. The result of the algorithm is a defeasible theory. The model of these meta-rules is inspired by previous studies of one of the authors [8].


We now describe how the algorithm works. It takes as input a finite set of TL assertions, and gives, as output, a defeasible theory. The first cycle initializes basic data structures, used to host the converted tokens. The second cycle of the algorithm computes the facts in the theory, and therefore determines the base for the subsequent derivations by the SPINdle reasoner. The third cycle reads meta-rules and priorities and translates them into the defeasible theory under construction. The notation r:t extracts the temporal information of a rule. The procedure evalExperiment extracts the result of an experiment. The procedure createNewRule creates a new empty rule.

Algorithm SincroCutII
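The algorithm listing appears as a figure in the original document and is not reproduced here. Relying only on the textual description above, a high-level sketch of the three cycles might look as follows (Python, ours: the stand-in types, field names and checks are assumptions; evalExperiment and createNewRule are the helper names mentioned in the text, reduced here to inline operations):

    from dataclasses import dataclass
    from typing import List, Set, Tuple

    @dataclass
    class Evaluation:          # result of an experiment (our stand-in type)
        literal: str
        error_free: bool

    @dataclass
    class MetaRule:            # a meta-rule with its temporal validity r:t (our stand-in type)
        name: str
        head: str
        valid_from: int

    def sincro_cut_ii(evaluations: List[Evaluation],
                      meta_rules: List[MetaRule],
                      priorities: Set[Tuple[str, str]],
                      clock_time: int):
        """Reconstruction from the prose description, not the authors' listing."""
        # Cycle 1: initialize the data structures hosting the converted tokens.
        facts, def_rules, superiority = set(), [], set()
        # Cycle 2: compute the facts of the theory from error-free evaluations.
        for ev in evaluations:                       # cf. evalExperiment in the text
            if ev.error_free:
                facts.add(ev.literal)
        # Cycle 3: read meta-rules and priorities, check temporal validity (r:t),
        # and translate them into the defeasible theory under construction.
        for r in meta_rules:
            if r.valid_from <= clock_time:           # cf. createNewRule in the text
                def_rules.append((r.name, r.head))
        superiority |= set(priorities)
        return facts, def_rules, superiority

    print(sincro_cut_ii([Evaluation("phi1", True)], [MetaRule("r1", "A1", 1)], {("r1", "r2")}, clock_time=2))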


Observe, moreover, that obfuscation and gift, the relations between rules that provide room for temporal re-processing of the rules themselves, are treated as modals in the second cycle of the algorithm itself.

A certain rule is a candidate for rewriting only if the synchronizer acknowledges that its clock time falls within the validity interval of the rule, when the rule has a validity interval explicitly specified, or at the exact instant of the rule if the rule is not tagged so and is therefore considered instantaneous.


The algorithm executes the translation. At this stage of our research, we do not provide proofs of soundness and completeness of the deduction system introduced in Section 4, and, consequently, we will not discuss properties of correctness and completeness of the implemented solution. All these investigations are left to future work, along with the discussion of the semantics of the logical framework, and the corresponding canonical models. Since the framework is based on DL, the semantics of the former clearly depends upon the semantics of the latter, into which the TL assertions are translated.

What we can prove here is the complexity of the method, which is

independent of the issues discussed above (soundness of the TL deduction

rules, semantics of TL, completeness, canonical models, correctness of the

algorithm, completeness of the algorithm). In fact, the algorithm is linear in the

number of literals appearing in the TL set of assertions given as input to the

algorithm, since the number of cycle iterations that can be executed is bounded by the number of literals.

5.2 Architecture of a system implementing TL transformation

Fig. 1: Logic model of reference architecture

In this section, we briefly introduce an architecture for diagnostic reasoning that is based on four modules, some documented in this paper, some yet to come. The architecture is described in terms of the functions of the modules. We introduce here the DILP module, a module used to perform recommendations on the rules to


introduce that is based upon the Machine Learning methods of Inductive Logic Programming, an approach that is also applied to DL.

User Interface: allows the user to input "meta-rules" and provides visualization of all data coming from the DILP module;

Transformer: takes as input a set of "meta-rules" and gives as output a set of defeasible rules to be used by the Reasoner, according to the time given by its internal clock mechanism and an evaluation algorithm;

Reasoner: uses the rules and determines the "should be" conclusions;

Preliminary Output: is responsible for the delivery to the user;

DILP: gathers data and performs analyses, delivering summaries and possible rules to be displayed by the User Interface (future extension).

The output of the User Interface is an ordered set of "meta-rules", which are one of the inputs of the Transformer; the Transformer runs continuously and at given times uses the algorithm SincroCutII to produce a set of defeasible rules.

These rules are given to the Reasoner, whose output consists of the +∂ conclusions derivable from the set of rules coming from the Transformer. The Defeasible Logic rule engine used is a Prolog-like engine called SPINdle [33].
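As an illustration of this wiring (ours; the module names follow the list above, the rule names echo the case study of Section 6, and everything else is an assumption), the Transformer and a much simplified stand-in for the Reasoner can be composed as follows:

    class Transformer:
        """Turns timed meta-rules into a defeasible theory (facts, rules, superiority)."""
        def __init__(self, meta_rules, priorities):
            self.meta_rules, self.priorities = meta_rules, priorities

        def theory_at(self, clock_time):
            rules = [(name, head) for (name, head, valid_from) in self.meta_rules
                     if valid_from <= clock_time]
            return set(), rules, set(self.priorities)

    class Reasoner:
        """Stand-in for the defeasible engine (rule bodies omitted): it adds a rule's
        conclusion only if every rule for the opposite literal is weaker."""
        def conclusions(self, theory):
            facts, rules, sup = theory
            out = set(facts)
            for name, head in rules:
                rival = head[1:] if head.startswith("~") else "~" + head
                attackers = [n for n, h in rules if h == rival]
                if all((name, att) in sup for att in attackers):
                    out.add(head)
            return out

    meta = [("rb7", "~A10", 7), ("ra5", "A10", 11)]      # hypothetical validity times
    transformer, reasoner = Transformer(meta, {("ra5", "rb7")}), Reasoner()
    print(reasoner.conclusions(transformer.theory_at(8)))    # {'~A10'}: only rb7 is available yet
    print(reasoner.conclusions(transformer.theory_at(12)))   # {'A10'}: ra5 now applies and prevails

Running the same set of meta-rules at two different clock times yields different conclusions, which is exactly the behavior described next.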

At different times different conclusions are possible, due to the work of the Transformer. We now show how the algorithm works by means of a detailed example.


6 Case Study

We use a concrete example to show how our model can fit a real-life scenario. We consider a web application of a bank, located in Italy, which had to be tested against the OWASP Top Ten risks that we listed above.

The analysis is split between two different contractors, namely subjects in charge of analyzing a subset of the list. Contractor α uses a combination of automated testing and human validation and can cover risks {A1, A3, A4, A5, A10}, while contractor β performs only automated testing and can cover risks {A2, A4, A6, A7, A8, A9, A10}; α is considered to perform tests with a higher accuracy than the ones delivered by β. Both execute the tests sequentially, one per day, and α has been engaged after β has completed its task. For the sake of space, in the rest of this example we will write Ai instead of Ai(BankApp) to describe the ground formulas we are revealing.

There is an obfuscation on A10 given a test on A9, and a gift on A2 given a test on A1.

We also state that α:A1 > β:A1, α:A4 > β:A4 and α:A10 > β:A10 by knowledge on the accuracy of the tests, and α:A2 > β:A2 because of the gift specified above. The results are:

This is therefore the set of meta-rules. Once the Transformer has performed Algorithm SincroCutII we have, at any time after all the tests have been executed (t > 12):

F = { φ1, φ2, φ3, φ4, φ5, φ6, φ7, φ8, φ9, φ10, φ11, φ12 }

rα1: ⇒ A1      rβ1: ⇒ ¬A2
rα2: ⇒ ¬A3     rβ2: ⇒ A4
rα3: ⇒ ¬A4     rβ3: ⇒ A6
rα4: ⇒ ¬A5     rβ4: ⇒ ¬A7
rα5: ⇒ A10     rβ5: ⇒ A8
               rβ6: ⇒ A9
               rβ7: ⇒ ¬A10
oβ1: ⇒ A10
oβ2: ⇒ ¬A10
gα1: ⇒ A2

gα1 > rβ1     rα3 > rβ2     rα5 > rβ7     rα5 > oβ2

Given that theory, the Reasoner concludes +∂A1, +∂A2, +∂¬A3, +∂¬A4, +∂¬A5, +∂A6, +∂¬A7, +∂A8, +∂A9, +∂A10, as shown in Appendix A.

We can therefore conclude that the application is subject to risks A1-Injection, A2-Broken Authentication, A6-Security Misconfiguration, A8-Insecure Deserialization, A9-Using Components with Known Vulnerabilities, A10-Insufficient Logging and Monitoring, but not to A3-Sensitive Data Exposure, A4-XML External Entities (XXE), A5-Broken Access Control, A7-Cross-Site Scripting (XSS).

7 Discussion and Conclusions

In this paper, we have developed the logic TL [17], which is able to

formalize a form of diagnostic reasoning based both on deduction and on experimental knowledge. We introduced some notions about experiment-based deduction, following a perspective clearly oriented to reasoning mechanization. In comparison with [17], we also focused on the (natural) defeasible aspects of diagnostic knowledge.

To this end, we introduced the rewriting algorithm SincroCutII, which takes as input TL formulas and transforms them into a defeasible theory by checking temporal, accuracy and interference constraints between tests. The result of SincroCutII is a defeasible theory. By means of an example from a real-life scenario, we carried out a case study using the defeasible engine SPINdle.

We are currently working in three directions. First, the system TL can be improved as a (stand-alone) labelled temporal logic framework, and developing its proof theory seems to be a challenging and interesting task. On the semantic side, we observed that the natural interpretation for TL is related to some interpretations of the branching time logic UB [7]. The most suitable deduction style is Prawitz's natural deduction [12, 35, 36]. Following [16, 15], we are developing a labelled, non-monotonic natural deduction system.

Second, the defeasible flavor we pointed out in this paper seems to be the right perspective to move toward a more expressive automatic reasoner. In particular, we aim to extend the deduction system both to include more refined quantitative information about tests and to address more complex diagnostic-based deduction, including multi-level defeasible mechanisms.


Finally, there is a strong relation between tests and resources. In a more refined framework, a test could reasonably consume a resource in revealing a property. This reflects what effectively happens in a number of laboratory experiments, and we plan to investigate whether this could be captured by means of an approach inspired by linear logic.

References

1. A.E. Lawson and E.S. Daniel: Inferences of clinical diagnostic reasoning and diagnostic error. Journal of Biomedical Informatics, 44(3):402–412, 2011.

2. M. McShane, S. Beale, S. Nirenburg, B. Jarrell, and G. Fantry: Inconsistency as a diagnostic tool in a society of intelligent agents. Artificial Intelligence in Medicine, 55(3):137–148, 2012.

3. R. Davis: Diagnostic reasoning based on structure and behavior. Artificial Intelligence, 24(1-3):347–410, 1984.

4. P.E. Johnson, A.S. Duran, F. Hassebrock, J. Moller, M. Prietula, P.J. Feltovich, and D.B. Swanson: Expertise and error in diagnostic reasoning. Cognitive Science, 5(3):235–283, 1981.

5. D. McSherry: Conversational case-based reasoning in medical decisionmaking. Artificial Intelligence in Medicine, 52(2):59–66, 2011.

6. R. Reiter: A theory of diagnosis from first principles. Artificial Intelligence, 32(1):57–95, 1987.

7. Caleiro, C., Viganò L., Volpe, M.: A labeled deduction system for the logic UB. Proceedings of the 20th International Symposium on Temporal Representation and Reasoning, TIME, 45– 53 (2013).

8. Cristani, M., Burato, E., Gabrielli, N.: Ontology-Driven Compression of Temporal Series: A Case Study in SCADA Technologies. Proceedings of DEXA Workshop, Turin, Italy, May 2008.

9. Manning, C. D., Raghavan, P., Schütze, H., Introduction to Information Retrieval, Cambridge University Press, New York, NY, USA, 2008.

10. Masini, A., Viganò, L., Zorzi, M.: A Qualitative Modal Representation of Quantum Register Transformations. 38th IEEE International Symposium on Multiple-Valued Logic (ISMVL 2008), 22-23 May 2008, Dallas, Texas, USA, ISMVL 2008: 131-137.

11. Masini, A., Viganò, L., Zorzi, M.: Modal Deduction Systems for Quantum State Transformations, Multiple-Valued Logic and Soft Computing 17(5-6): 475- 519 (2011)

12. Prawitz, D., Natural Deduction: A Proof-Theoretical Study, Almquist and Wiskell, 1965.


13. Rish, I.: An empirical study of the naive Bayes classifier. IJCAI Workshop on Empirical Methods in AI (2001).

14. Viganò, L.: Labelled Non-Classical Logics. Kluwer Academic Publishers,2000.

15. Viganò, L., Volpe, M., Zorzi, M.: A branching distributed temporal logic for reasoning about entanglement-free quantum state transformations. Inf. Comput. 255: 311-333 (2017).

16. Viganò, L., Volpe, M., Zorzi, M.: Quantum State Transformations and Branching Distributed Temporal Logic. 21st International Workshop, WoLLIC 2014, Valparaiso, Chile, September 1-4, 2014, Lecture Notes in Computer Science, 8652, 1–19 (2014).

17. Cristani, M., Olivieri, F., Tomazzoli, C., Zorzi, M.: Towards a logical framework for diagnostic reasoning. Smart Innovation, Systems and Technologies 96, KES Conference on Agent and Multi-Agent Systems: Technologies and Applications, KES-AMSTA 2018, pp. 144-155 (2018).

18. J.E.C. Bellamy. Medical diagnosis, diagnostic spaces, and fuzzy systems. Journal of the American Veterinary Medical Association, 210(3):390–396, 1997.

19. M. Belmonte Serrano, C. Sierra, and R.L. de Mantaras. Renoir: An expert system using fuzzy logic for rheumatology diagnosis. International Journal of Intelligent Systems, 9(11):985–1000, 1994.

20. K. Boegl, K.-P. Adlassnig, Y. Hayashi, T.E. Rothenfluh, and H. Leitich. Knowledge acquisition in the fuzzy knowledge representation framework of a medical consultation system. Artificial Intelligence in Medicine, 30(1):1–26, 2004.

21. Q. Liu, F. Jiang, and D. Deng. Design and implement for diagnosis systems of hemorheology on blood viscosity syndrome based on grc. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2639:413–420, 2003.

22. B. Pandey and R.B. Mishra. Knowledge and intelligent computing system in medicine. Computers in Biology and Medicine, 39(3):215–230, 2009.

23. E.I. Papageorgiou, J.D. Roo, C. Huszka, and D. Colaert. Formalization of treatment guidelines using fuzzy cognitive maps and semantic web tools. Journal of Biomedical Informatics, 45(1):45–60, 2012.

24. N.H. Phuong and V. Kreinovich. Fuzzy logic and its applications in medicine. International Journal of Medical Informatics, 62(2-3):165–173, 2001.

25. P. Píš and R. Mesiar. Fuzzy model of inexact reasoning in medicine. Computer Methods and Programs in Biomedicine, 30(1):1–8, 1989.


26. G. Rau, K. Becker, R. Kaufmann, and H.J. Zimmermann. Fuzzy logic and control: Principal approach and potential applications in medicine. Artificial Organs, 19(1):105–112, 1995.

27. R. Seising. From vagueness in medical thought to the foundations of fuzzy reasoning in medical diagnosis. Artificial Intelligence in Medicine, 38(3):237–256, 2006.

28. P. Vineis. Methodological insights: Fuzzy sets in medicine. Journal of Epidemiology and Community Health, 62(3):273–278, 2008.

29. A. Yardimci: Soft computing in medicine. Applied Soft Computing Journal, 9(3):1029–1043, 2009.

30. Nute, D.: Defeasible logic. In: Handbook of Logic in Artificial Intelligence and Logic Programming, vol. 3. Oxford University Press (1987)

31. G. Antoniou, D. Billington, G. Governatori, M.J. Maher, and A. Rock: A family of defeasible reasoning logics and its implementation. In ECAI 2000, pages 459–463, 2000.

32. Grigoris Antoniou, David Billington, Guido Governatori, and Michael J. Maher: Representation results for defeasible logic. ACM Trans. Comput. Log., 2(2):255–287, 2001.

33. Lam, H.P., Governatori, G.: The making of SPINdle. In Paschke, A., Governatori, G., Hall, J., eds.: Proceedings of The International RuleML Symposium on Rule Interchange and Applications (RuleML 2009), Springer (2009) 315–322

34. P.M. Dung, P. Mancarella, and F. Toni: Argumentation-based proof procedures for credulous and skeptical non-monotonic reasoning. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2408(PART2):289–310, 2002.

35. Aschieri, F., Zorzi, M.: On natural deduction in classical first-order logic: Curry-Howard correspondence, strong normalization and Herbrand's theorem. Theoretical Computer Science 625: 125-146 (2016).

36. Aschieri, F., Zorzi, M.: Non-determinism, non-termination and the strong normalization of system T. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 7941 LNCS, pp. 31-47, 2013.

37. Cristani, M., Olivieri, F., Tomazzoli, C.: Automatic synthesis of best practices for energy consumptions, Proceedings - 2016 10th International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing, IMIS 2016, 7794456, pp. 154-161, 2016.


38. Combi, C., Masini, A., Oliboni, B., Zorzi, M.: A logical framework for XML reference specification. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 9262, pp. 258-267, 2015.

39. Cristani, M., Karafili, E., Tomazzoli, C.: Improving energy saving

techniques by ambient intelligence scheduling, Proceedings - International Conference on Advanced Information Networking and Applications, AINA, 2015

40. Cristani, M., Karafili, E., Tomazzoli, C.: Energy saving by ambient intelligence techniques, Proceedings - 2014 International Conference on Network-Based Information Systems, NBiS 2014.

41. Tomazzoli, C., Cristani, M., Karafili, E., Olivieri, F.: Non-monotonic reasoning rules for energy efficiency, Journal of Ambient Intelligence and Smart Environments, 9 (3), pp. 345-360, 2018.

42. Combi, C., Masini, A., Oliboni, B., Zorzi, M.: A hybrid logic for XML reference constraints. Data & Knowledge Engineering, 115, pp. 94-115 (2018)

43. Matteo Cristani, Claudio Tomazzoli, Erisa Karafili, Francesco Olivieri:Defeasible Reasoning about Electric Consumptions. AINA 2016: 885-892


A SPINdle conclusions for the rules of the reference implementation

****************************************
* SPINdle (version 2.2.4)
* Copyright (C) 2009-2013 NICTA Ltd.
* ...
* java -jar spindle-<version>.jar --app.license
****************************************
=========================== application start!! ===========================
Initialize application context - start
...

+d A1(X)  +d A10(X)  +d A2(X)  +d -A3(X)  +d -A4(X)  +d -A5(X)  +d A6(X)  +d -A7(X)  +d A8(X)  +d A9(X)
+d Phi1(X)  +d Phi10(X)  +d Phi11(X)  +d Phi12(X)  +d Phi2(X)  +d Phi3(X)  +d Phi4(X)  +d Phi5(X)  +d Phi6(X)  +d Phi7(X)  +d Phi8(X)  +d Phi9(X)
-d -A10(X)  -d -A2(X)  -d A4(X)

Calling the shutdown routine...
Terminate application context - start
Terminate application context - end
========================= Application shutdown completed! =========================