Page 1: Representing uncertainty in expert systems

Representing Uncertainty In Expert Systems

Presented By: Bhupendra Kumar
Integrated M.Tech.
Faculty of Engg.

Page 2: Representing uncertainty in expert systems

• Introduction to expert systems
• Merits & demerits of expert systems
• Introduction to uncertainty
• Sources of uncertain knowledge
• Bayesian rule
• Reasoning in expert systems
• Bias of the Bayesian method
• Certainty factors theory and evidential reasoning

OUTLINE

Page 3: Representing uncertainty in expert systems

• An Expert System is a computer program that simulates human intelligence and behavior in specific and limited domains.

• It is composed of three major modules:
  • A Knowledge Base
  • An Inference Engine
  • A User Interface

Expert system

Page 4: Representing uncertainty in expert systems

Expert system

Page 5: Representing uncertainty in expert systems

Major Components

• Knowledge base - a declarative representation of the expertise, often in IF-THEN rules

• Working storage - the data which is specific to a problem being solved

• Inference engine - the code at the core of the system; it derives recommendations from the knowledge base and the problem-specific data in working storage

• User interface - the code that controls the dialog between the user and the system
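
To make the division of labour between these components concrete, here is a minimal sketch of a toy rule-based system in Python; the rules, facts and names are illustrative and not from the original slides:

    # Knowledge base: a declarative set of IF-THEN rules (conditions -> recommendation).
    rules = [
        ({"engine cranks": False, "battery ok": False}, "replace the battery"),
        ({"engine cranks": True, "fuel present": False}, "add fuel"),
    ]

    # Working storage: facts specific to the problem currently being solved.
    working_storage = {"engine cranks": False, "battery ok": False}

    def inference_engine(rules, facts):
        """Derive recommendations by matching rule conditions against the facts."""
        recommendations = []
        for conditions, recommendation in rules:
            if all(facts.get(name) == value for name, value in conditions.items()):
                recommendations.append(recommendation)
        return recommendations

    # User interface: in a real system a dialog with the user; here just a print.
    print(inference_engine(rules, working_storage))   # ['replace the battery']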

Page 6: Representing uncertainty in expert systems

Advantages of expert systems

• Providing expert opinion at remote sites
• Enhancing the performance of tasks by applying heuristic expert knowledge

• Planning, Troubleshooting, Robotic manipulation, Exploration, Process Control

Page 7: Representing uncertainty in expert systems

• Expert systems are not good at:
  • Representing temporal knowledge
  • Representing spatial knowledge
  • Performing commonsense reasoning
  • Recognizing the limits of their ability
  • Handling inconsistent knowledge

Problems with expert systems

Page 8: Representing uncertainty in expert systems

• The information available to humans is often imperfect, but a human expert can cope with these defects.

• Classical logic permits only exact reasoning:
  IF A is true THEN A is ¬false
  and
  IF B is false THEN B is ¬true

• Most real-world problems do not provide exact information: it can be inexact, incomplete or even immeasurable.

Introduction to uncertainty

Page 9: Representing uncertainty in expert systems

• Uncertainty is defined as the lack of exact knowledge that would enable us to reach a perfectly reliable conclusion.

• Classical logic permits only exact reasoning. It assumes that perfect knowledge always exists and that the law of the excluded middle can always be applied:

¬TRUE = FALSE.

Definition

Page 10: Representing uncertainty in expert systems

• Weak implications:

• Domain experts and knowledge engineers face the difficult task of establishing concrete correlations between the IF and THEN parts of rules.

• Therefore, expert systems need the ability to handle vague associations, for example by accepting the degree of correlation as a numerical certainty factor.

Sources of uncertain knowledge

Page 11: Representing uncertainty in expert systems

• Imprecise Language: Our natural language is ambiguous and imprecise. As a result, it can be difficult to express knowledge in the precise IF-THEN form of production rules. Expert systems need quantified measures.

• Unknown Data: When the data is incomplete or missing, the only solution is to accept the value "unknown" and proceed to approximate reasoning with this value.

• Disagreement among experts: Weights must be attached to the opinions of different experts.

Sources of Uncertain Knowledge

Page 12: Representing uncertainty in expert systems

Probabilistic reasoning
• Probability Theory Basics
• Bayesian Reasoning

Examine the uncertainty

Page 13: Representing uncertainty in expert systems

Basics of probability theory

When examining uncertainty, we adopt probability as a model to predict future events. The probability of an event, p, is the proportion of possible outcomes in which the event occurs (a success); likewise q denotes the probability of failure.

Now let A be some event and B be some other event. These are not mutually exclusive. The conditional probability that event A will occur, given that B has occurred, is p(A|B).
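
The conditional probability is presumably defined on the original slide in the standard way, as the joint probability of A and B divided by the probability of B:

    p(A|B) = p(A ∩ B) / p(B)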

Page 14: Representing uncertainty in expert systems

Bayesian Rule

The probability of both A and B occurring, denoted p(A ∩ B), is the joint probability.

The joint probability is commutative, p(A ∩ B) = p(B ∩ A). This allows us to derive the famous Bayesian rule.

• If A is conditionally dependent on n other mutually exclusive events then:
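
The equations themselves did not survive in the transcript; presumably they are the standard Bayesian rule and the expansion of p(A) over mutually exclusive events B1 … Bn (numbered Equations 4 and 5 on the original slides):

    p(B|A) = p(A|B) × p(B) / p(A)                                  (Equation 4)

    p(A) = Σ(k=1..n) p(A|Bk) × p(Bk)                               (Equation 5)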

Page 15: Representing uncertainty in expert systems

• We shall now consider the case where A depends on two mutually exclusive events, B and ¬B. From Equation 5,

• substituting the resulting expression for p(A) into the Bayesian rule (Equation 4) gives Equation 6, shown below.

• Equation 6 is used in the management of uncertainty in expert systems.
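
The equations are not legible in the transcript; presumably they are the standard forms:

    p(A) = p(A|B) × p(B) + p(A|¬B) × p(¬B)                          (from Equation 5)

    p(B|A) = p(A|B) × p(B) / [p(A|B) × p(B) + p(A|¬B) × p(¬B)]      (Equation 6)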

Dependent Events that are mutually exclusive

Page 16: Representing uncertainty in expert systems

• Armed with Equation 6, we shall try to manage uncertainty in expert systems.

• Suppose rules in a knowledge base are represented as follows:

• Uncertain rules:
  IF H is true THEN E is true, with probability p

• If event E has occurred, then the probability of occurrence of H can be obtained from Equation 6 by replacing A and B. In this case, H is the hypothesis and E is the evidence. Rewriting Eq. 6 in terms of H and E:
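
The rewritten form is presumably the standard one, with the hypothesis H in place of B and the evidence E in place of A:

    p(H|E) = p(E|H) × p(H) / [p(E|H) × p(H) + p(E|¬H) × p(¬H)]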

Reasoning in Expert Systems

Page 17: Representing uncertainty in expert systems

• Single evidence E and m hypotheses imply:

• Suppose the expert, given multiple (n) evidences, cannot distinguish between m hypotheses:
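
The formulas these two statements refer to are missing from the transcript; presumably they are the standard generalisations. For a single evidence E and m mutually exclusive hypotheses H1 … Hm:

    p(Hi|E) = p(E|Hi) × p(Hi) / Σ(k=1..m) p(E|Hk) × p(Hk)

and for multiple evidences E1 … En (presumably the slides' Equation 9):

    p(Hi|E1 E2 … En) = p(E1 E2 … En|Hi) × p(Hi) / Σ(k=1..m) p(E1 E2 … En|Hk) × p(Hk)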

• An application of Eq. 9 requires us to obtain the conditional probabilities of all possible combinations of evidences for all hypotheses! This grows exponentially. Therefore, assume conditional independence if possible.

Generalising to m hypotheses and n evidences

Page 18: Representing uncertainty in expert systems

• Let the posterior probability of hypothesis Hi upon observing evidences E1… En be:
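
Under the conditional-independence assumption, the posterior referred to here presumably takes the standard product form:

    p(Hi|E1 E2 … En) = p(E1|Hi) × p(E2|Hi) × … × p(En|Hi) × p(Hi) / Σ(k=1..m) [p(E1|Hk) × p(E2|Hk) × … × p(En|Hk) × p(Hk)]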

• This is a far more tractable solution and assumes conditional independence among different evidences.

• Users provide information about the evidence observed and the expert system computes p(H|E) for hypothesis H in light of the user-supplied evidence E. Probability p(H|E) is called the posterior probability of hypothesis H upon observing evidence E.

Probabilistic reasoning in expert systems

Page 19: Representing uncertainty in expert systems

• Example

How does an expert system compute posterior probabilities and rank hypotheses?
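
The table of prior and conditional probabilities for this example did not survive in the transcript; the values below are recovered from the calculations on the following slides:

    Probability     i = 1    i = 2    i = 3
    p(Hi)           0.40     0.35     0.25
    p(E1|Hi)        0.3      0.8      0.5
    p(E2|Hi)        0.9      0.0      0.7
    p(E3|Hi)        0.6      0.7      0.9

The expert system observes the evidences in the order E3, E1, E2 and re-ranks the hypotheses after each observation.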

Page 20: Representing uncertainty in expert systems

Thus,

p(Hi|E3) = p(E3|Hi) × p(Hi) / Σ(k=1..3) p(E3|Hk) × p(Hk),   i = 1, 2, 3

p(H1|E3) = (0.6 × 0.40) / (0.6 × 0.40 + 0.7 × 0.35 + 0.9 × 0.25) = 0.34

p(H2|E3) = (0.7 × 0.35) / (0.6 × 0.40 + 0.7 × 0.35 + 0.9 × 0.25) = 0.34

p(H3|E3) = (0.9 × 0.25) / (0.6 × 0.40 + 0.7 × 0.35 + 0.9 × 0.25) = 0.32

After evidence E3 is observed, belief in hypothesis H1 decreases and becomes equal to belief in hypothesis H2. Belief in hypothesis H3 increases and even nearly reaches the beliefs in hypotheses H1 and H2.

Page 21: Representing uncertainty in expert systems

Suppose now that we observe evidence E1. The posterior probabilities are calculated as

Hence,

p(Hi|E1E3) = p(E1|Hi) × p(E3|Hi) × p(Hi) / Σ(k=1..3) p(E1|Hk) × p(E3|Hk) × p(Hk),   i = 1, 2, 3

p(H1|E1E3) = (0.3 × 0.6 × 0.40) / (0.3 × 0.6 × 0.40 + 0.8 × 0.7 × 0.35 + 0.5 × 0.9 × 0.25) = 0.19

p(H2|E1E3) = (0.8 × 0.7 × 0.35) / (0.3 × 0.6 × 0.40 + 0.8 × 0.7 × 0.35 + 0.5 × 0.9 × 0.25) = 0.52

p(H3|E1E3) = (0.5 × 0.9 × 0.25) / (0.3 × 0.6 × 0.40 + 0.8 × 0.7 × 0.35 + 0.5 × 0.9 × 0.25) = 0.29

Hypothesis H2 has now become the most likely one.

Page 22: Representing uncertainty in expert systems

After observing evidence E2, the final posterior probabilities for all hypotheses are calculated:

p(Hi|E1E2E3) = p(E1|Hi) × p(E2|Hi) × p(E3|Hi) × p(Hi) / Σ(k=1..3) p(E1|Hk) × p(E2|Hk) × p(E3|Hk) × p(Hk),   i = 1, 2, 3

p(H1|E1E2E3) = (0.3 × 0.9 × 0.6 × 0.40) / (0.3 × 0.9 × 0.6 × 0.40 + 0.8 × 0.0 × 0.7 × 0.35 + 0.5 × 0.7 × 0.9 × 0.25) = 0.45

p(H2|E1E2E3) = (0.8 × 0.0 × 0.7 × 0.35) / (0.3 × 0.9 × 0.6 × 0.40 + 0.8 × 0.0 × 0.7 × 0.35 + 0.5 × 0.7 × 0.9 × 0.25) = 0

p(H3|E1E2E3) = (0.5 × 0.7 × 0.9 × 0.25) / (0.3 × 0.9 × 0.6 × 0.40 + 0.8 × 0.0 × 0.7 × 0.35 + 0.5 × 0.7 × 0.9 × 0.25) = 0.55

Although the initial ranking was H1, H2 and H3, only hypotheses H1 and H3 remain under consideration after all evidences (E1, E2 and E3) were observed.
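
The whole calculation can be reproduced in a few lines of code. Below is a minimal sketch in Python using the prior and conditional probabilities assumed for this example; the function and variable names are illustrative, not part of the original slides.

    # Prior probabilities of the three hypotheses (values recovered from the slides).
    priors = {"H1": 0.40, "H2": 0.35, "H3": 0.25}

    # Conditional probabilities p(E|H) for each evidence and hypothesis.
    likelihoods = {
        "E1": {"H1": 0.3, "H2": 0.8, "H3": 0.5},
        "E2": {"H1": 0.9, "H2": 0.0, "H3": 0.7},
        "E3": {"H1": 0.6, "H2": 0.7, "H3": 0.9},
    }

    def posteriors(observed):
        """Return p(H | observed evidences), assuming conditional independence."""
        scores = {}
        for h, prior in priors.items():
            score = prior
            for e in observed:
                score *= likelihoods[e][h]   # multiply in each evidence's likelihood
            scores[h] = score
        total = sum(scores.values())         # the normalising denominator
        return {h: score / total for h, score in scores.items()}

    print(posteriors(["E3"]))                # posterior after observing E3 only
    print(posteriors(["E3", "E1"]))          # ... after E3 and E1
    print(posteriors(["E3", "E1", "E2"]))    # ... after all three evidences

Run as written, this reproduces the posterior probabilities quoted above (up to rounding).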

Page 23: Representing uncertainty in expert systems

• Bayesian reasoning requires probability values as inputs, and these rely on human judgement.

• Humans do not elicit probabilities completely accurately.
• Conditional probabilities may be inconsistent with the prior probabilities given by the expert.

• The expert makes different assumptions and can make inaccurate judgements.

Bias of the Bayesian method

Page 24: Representing uncertainty in expert systems

Certainty factors theory is a popular alternative to Bayesian reasoning.

A certainty factor (cf) is a number used to measure the expert's belief. The maximum value of the certainty factor is +1.0 (definitely true) and the minimum is -1.0 (definitely false). For example, if the expert states that some evidence is almost certainly true, a cf value of 0.8 would be assigned to this evidence.

Certainty factors theory and evidential reasoning

Page 25: Representing uncertainty in expert systems

Uncertain terms and their interpretation in MYCIN

Term                    Certainty Factor
Definitely not          -1.0
Almost certainly not    -0.8
Probably not            -0.6
Maybe not               -0.4
Unknown                 -0.2 to +0.2
Maybe                   +0.4
Probably                +0.6
Almost certainly        +0.8
Definitely              +1.0

Page 26: Representing uncertainty in expert systems

In expert systems with certainty factors, the knowledge base consists of a set of rules that have the following syntax:

IF <evidence> THEN <hypothesis> {cf }

where cf represents belief in hypothesis H given that evidence E has occurred.

Page 27: Representing uncertainty in expert systems

The certainty factors theory is based on two functions: a measure of belief, MB(H, E), and a measure of disbelief, MD(H, E).

p(H) is the prior probability of hypothesis H being true; p(H|E) is the probability that hypothesis H is true given evidence E.

MB(H, E) = 1                                                      if p(H) = 1
MB(H, E) = [max(p(H|E), p(H)) - p(H)] / [max(1, 0) - p(H)]        otherwise

MD(H, E) = 1                                                      if p(H) = 0
MD(H, E) = [min(p(H|E), p(H)) - p(H)] / [min(1, 0) - p(H)]        otherwise

Page 28: Representing uncertainty in expert systems

The values of MB(H, E) and MD(H, E) range between 0 and 1. The strength of belief or disbelief in hypothesis H depends on the kind of evidence E observed. Some facts may increase the strength of belief, but some increase the strength of disbelief.

The total strength of belief or disbelief in a hypothesis:

cf = [MB(H, E) - MD(H, E)] / [1 - min(MB(H, E), MD(H, E))]
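
To show how the two measures combine into a certainty factor, here is a small sketch of the three definitions above in Python (the helper names are illustrative, not from the slides):

    def mb(p_h, p_h_given_e):
        """Measure of belief MB(H, E)."""
        if p_h == 1:
            return 1.0
        return (max(p_h_given_e, p_h) - p_h) / (max(1, 0) - p_h)

    def md(p_h, p_h_given_e):
        """Measure of disbelief MD(H, E)."""
        if p_h == 0:
            return 1.0
        return (min(p_h_given_e, p_h) - p_h) / (min(1, 0) - p_h)

    def certainty_factor(p_h, p_h_given_e):
        """Total strength of belief or disbelief in hypothesis H."""
        b, d = mb(p_h, p_h_given_e), md(p_h, p_h_given_e)
        return (b - d) / (1 - min(b, d))

    # Evidence that raises p(H) from 0.4 to 0.7 yields a positive certainty factor.
    print(round(certainty_factor(0.4, 0.7), 2))   # 0.5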

Page 29: Representing uncertainty in expert systems

Example: Consider a simple rule:

IF A is X THEN B is Y

An expert may not be absolutely certain that this rule holds. Also suppose it has been observed that in some cases, even when the IF part of the rule is satisfied and object A takes on value X, object B can acquire some different value Z:

IF A is X THEN B is Y {cf 0.7}; B is Z {cf 0.2}

Page 30: Representing uncertainty in expert systems

The certainty factor assigned by a rule is propagated through the reasoning chain. This involves establishing the net certainty of the rule consequent when the evidence in the rule antecedent is uncertain:

cf (H,E) = cf (E) × cf

For example, given the rule

IF sky is clear THEN the forecast is sunny {cf 0.8}

and the current certainty factor of "sky is clear" is 0.5, then

cf (H,E) = 0.5 × 0.8 = 0.4

This result can be interpreted as “It may be sunny”.
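
As a minimal illustration (the names are hypothetical, not from the slides), this propagation step is a single multiplication:

    def propagate(cf_evidence, cf_rule):
        """Net certainty of the consequent: cf(H, E) = cf(E) x cf."""
        return cf_evidence * cf_rule

    # IF sky is clear THEN the forecast is sunny {cf 0.8}, with cf("sky is clear") = 0.5
    print(propagate(0.5, 0.8))   # 0.4 -> "It may be sunny"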

Page 31: Representing uncertainty in expert systems

• The certainty factors theory provides a practical alternative to Bayesian reasoning.

• The heuristic manner of combining certainty factors is different from the manner in which they would be combined if they were probabilities.

• The certainty factors theory is not "mathematically pure" but does mimic the thinking process of a human expert.

Page 32: Representing uncertainty in expert systems