Page 1: Measurements

Measurements

Meir Kalech

Partially based on slides of Brian Williams and Peter Struss

Page 2: Outline

Last lecture:

1. Justification-based TMS
2. Assumption-based TMS
3. Consistency-based diagnosis

Today's lecture:

1. Generation of tests/probes
2. Measurement Selection
3. Probabilities of Diagnoses

Page 3: Generation of tests/probes

Test: a test vector that can be applied to the system. Assumption: the behavior of the components does not change between tests. There are approaches to select the test that can discriminate between faults of different components (e.g. [Williams]).

Probe: selection of the probe is based on:

- the predictions generated by each candidate on the unknown measurable points
- the cost/risk/benefits of the different tests/probes
- the fault probability of the various components

Page 4: Generation of tests/probes (II)

Approach based on entropy [de Kleer, 87, 92]:

The a-priori probability of the faults is given (even a rough estimate suffices).

Given the set D1, D2, ..., Dn of candidates to be discriminated:

1. Generate predictions from each candidate.

2. For each probe/test T, compute the a-posteriori probability p(Di|T(x)) for each possible outcome x of T.

3. Select the test/probe for which the distribution p(Di|T(x)) has minimal entropy; this is the test that on average best discriminates between the candidates (see the sketch below).
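A minimal sketch of this selection loop, assuming a helper `predict(Di, probe)` that returns the value candidate Di predicts for the probe, or None if it predicts nothing (the function names and data layout are illustrative, not from the slides):

```python
import math
from collections import defaultdict

def expected_entropy(probe, candidates, predict):
    """Expected entropy of the candidate distribution after measuring `probe`.

    `candidates` maps each candidate Di to its prior probability p(Di);
    candidates that predict no value for the probe are ignored in this sketch.
    """
    p_outcome = defaultdict(float)
    for d, p in candidates.items():
        v = predict(d, probe)
        if v is not None:
            p_outcome[v] += p
    h = 0.0
    for x, px in p_outcome.items():
        # posterior over the candidates consistent with outcome x
        posterior = [p / px for d, p in candidates.items() if predict(d, probe) == x]
        h += px * -sum(q * math.log2(q) for q in posterior)
    return h

def best_probe(probes, candidates, predict):
    # Step 3: pick the probe whose expected posterior entropy is minimal,
    # i.e. the probe that on average best discriminates between the candidates.
    return min(probes, key=lambda t: expected_entropy(t, candidates, predict))
```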

Page 5: A Motivating Example

[Figure: a circuit in which multipliers M1, M2, M3 feed adders A1 and A2; inputs A, B, C, D, E; internal variables X, Y, Z; observed outputs F = 10 and G = 12.]

Minimal diagnoses: {M1}, {A1}, {M2, M3}, {M2, A2}

Where to measure next: X, Y, or Z? What measurement promises the most information? Which values do we expect?

Page 6: Outline

Last lecture:

1. Justification-based TMS
2. Assumption-based TMS
3. Consistency-based diagnosis

Today's lecture:

1. Generation of tests/probes
2. Measurement Selection
3. Probabilities of Diagnoses

Page 7: Measurement Selection - Discriminating Variables

[Figure: the same circuit as on page 5 (M1, M2, M3, A1, A2; variables X, Y, Z; observed F = 10, G = 12).]

Suppose single faults are more likely than multiple faults. Then probes that help discriminate between {M1} and {A1} are the most valuable.

Page 8: Discriminating Variables - Inspect ATMS Labels!

[Figure: the circuit of page 5 annotated with ATMS labels. The observed inputs and outputs are facts and carry the empty environment {{ }}; the candidate values of X, Y and Z carry the labels listed below; one predicted value carries an empty label { }.]

ATMS labels of the predicted values:

X = 6: {{M1}}
X = 4: {{M2, A1}, {M3, A1, A2}}
Y = 6: {{M2}, {M3, A2}}
Y = 4: {{M1, A1}}
Z = 6: {{M3}, {M2, A2}}
Z = 8: {{M1, A1, A2}}

Justifications of the forward computations: X = 6 from {A, C}, Y = 6 from {B, D}, Z = 6 from {C, E}.

Observations are facts, not based on any assumption: such a node has the empty environment (as its only minimal one) and is always derivable.

Note the difference: a node with an empty label is not derivable!

Derivations behind the other labels (Ci = v abbreviates "the output of component Ci is v"):

A1 = 10 and M1 = 6 ⟹ M2 = 4 (label of Y = 4: {{M1, A1}})
A2 = 12 and M2 = 4 ⟹ M3 = 8 (label of Z = 8: {{M1, A1, A2}})
A2 = 12 and M3 = 6 ⟹ M2 = 6 (label of Y = 6: {{M3, A2}})
A1 = 10 and M2 = 6 ⟹ M1 = 4 (label of X = 4: {{M2, A1}})
A1 = 10 and M2 = 6 (where M2 depends on M3 and A2) ⟹ M1 = 4 (label of X = 4: {{M3, A1, A2}})

Page 9: Fault Predictions

No fault models are used. Nevertheless, fault hypotheses make predictions! E.g. the diagnosis {A1} implies OK(M1), and OK(M1) implies X = 6.

[Figure: the circuit with the same ATMS labels as on page 8.]

If we measure X and find X = 6, then we can infer that {A1} is the diagnosis rather than {M1}.

Page 10: Predictions of Minimal Fault Localizations

ATMS labels:

X = 4: {{M2, A1}, {M3, A1, A2}}   X = 6: {{M1}}
Y = 6: {{M2}, {M3, A2}}           Y = 4: {{M1, A1}}
Z = 6: {{M3}, {M2, A2}}           Z = 8: {{M1, A1, A2}}

Minimal fault localization | Prediction X | Y | Z
{M1}                       | 4            | 6 | 6
{A1}                       | 6            | 6 | 6
{M2, A2}                   | 6            | 4 | 6
{M2, M3}                   | 6            | 4 | 8

X ≠ 6: M1 is broken.
X = 6: {A1} is the only single fault.
Y or Z: the same value is predicted for {A1} and {M1}.
⟹ X is the best measurement.

X = 4 ⟹ {M1} is the diagnosis, since only {M1} predicts X = 4 (M1 appears only in the label of X = 6).
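A small sketch (not from the slides) that reproduces this prediction table by propagating the equations of the OK components under each minimal fault localization. The concrete input values A = 3, B = 2, C = 2, D = 3, E = 3 are an assumption consistent with the labels, since the figure's exact input assignment is not preserved:

```python
# Circuit: M1: X = A*C, M2: Y = B*D, M3: Z = C*E, A1: F = X+Y, A2: G = Y+Z.
# Observed outputs F = 10, G = 12; assumed inputs (see lead-in above).
INPUTS = {"A": 3, "B": 2, "C": 2, "D": 3, "E": 3}
OBS = {"F": 10, "G": 12}
MULTIPLIERS = [("M1", "X", ("A", "C")), ("M2", "Y", ("B", "D")), ("M3", "Z", ("C", "E"))]
ADDERS = [("A1", "F", ("X", "Y")), ("A2", "G", ("Y", "Z"))]

def predictions(fault_loc):
    """Values of X, Y, Z derivable when every component outside fault_loc is OK."""
    v = dict(INPUTS, **OBS)
    ok = {"M1", "M2", "M3", "A1", "A2"} - set(fault_loc)
    changed = True
    while changed:
        changed = False
        for comp, out, (i1, i2) in MULTIPLIERS:            # forward rules only
            if comp in ok and out not in v and i1 in v and i2 in v:
                v[out] = v[i1] * v[i2]
                changed = True
        for comp, out, (i1, i2) in ADDERS:                 # forward and inverse rules
            if comp in ok and [out in v, i1 in v, i2 in v].count(True) == 2:
                if out not in v:
                    v[out] = v[i1] + v[i2]
                elif i1 not in v:
                    v[i1] = v[out] - v[i2]
                else:
                    v[i2] = v[out] - v[i1]
                changed = True
    return {w: v.get(w) for w in "XYZ"}

for loc in [("M1",), ("A1",), ("M2", "A2"), ("M2", "M3")]:
    print(loc, predictions(loc))
# {M1}: X=4, Y=6, Z=6   {A1}: X=6, Y=6, Z=6
# {M2,A2}: X=6, Y=4, Z=6   {M2,M3}: X=6, Y=4, Z=8
```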

Page 11: Outline

Last lecture:

1. Justification-based TMS
2. Assumption-based TMS
3. Consistency-based diagnosis

Today's lecture:

1. Generation of tests/probes
2. Measurement Selection
3. Probabilities of Diagnoses

Page 12: Probabilities of Diagnoses

Fault probability of component (type)s: pf. For instance, pf(Ci) = 0.01 for all Ci ∈ {A1, A2, M1, M2, M3}.

Normalization by α = Σ_FaultLoc p(FaultLoc).

[Figure: the same circuit as on page 5.]

Page 13: Probabilities of Diagnoses - Example

Assumption: independent faults. Heuristic: consider the minimal fault localizations only.

Minimal fault localization | p(FaultLoc)/α | Prediction X | Y | Z
{M1}                       | .495          | 4            | 6 | 6
{A1}                       | .495          | 6            | 6 | 6
{M2, A2}                   | .005          | 6            | 4 | 6
{M2, M3}                   | .005          | 6            | 4 | 8

For example: p({M1})/α = 0.01 / (0.01 + 0.01 + 0.01·0.01 + 0.01·0.01) = 0.01 / 0.0202 ≈ 0.495
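A few lines of Python (a sketch, not from the slides) reproducing this normalization under the independent-faults assumption:

```python
# p(FaultLoc) = product of the fault probabilities of its members (pf = 0.01 each),
# normalized by alpha = sum over the minimal fault localizations.
pf = 0.01
raw = {("M1",): pf, ("A1",): pf, ("M2", "A2"): pf * pf, ("M2", "M3"): pf * pf}
alpha = sum(raw.values())                        # 0.0202
p = {loc: w / alpha for loc, w in raw.items()}   # {M1}: .495, {A1}: .495, double faults: .005
```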

Page 14: Entropy-based Measurement Proposal

[Figure: entropy of a coin toss as a function of the probability of it coming up heads.]
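For reference, the curve in that figure is the binary entropy function (a standard formula; it is not written out on the slide):

```latex
H(p) = -p \log_2 p - (1-p) \log_2 (1-p), \qquad H(0) = H(1) = 0, \quad H(\tfrac{1}{2}) = 1 \text{ bit}
```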

Page 15: The Intuition Behind the Entropy

The cost of locating a candidate with probability pi is log(1/pi): a binary search through 1/pi objects, i.e. the number of cuts needed to find an object.

Example: for p(x) = 1/25 the number of cuts in the binary search is log2(25) ≈ 4.6; for p(x) = 1/2 it is log2(2) = 1.

Here pi is the probability of Ci being the actual candidate given a measurement outcome.

Page 16: The Intuition Behind the Entropy

The expected cost of identifying the actual candidate, given the measurement, is obtained by going over the possible candidates Ci and summing pi * log(1/pi), where pi is the probability that candidate Ci is faulty given the measurement outcome and log(1/pi) is the cost of searching for it:

1. pi close to 0: such candidates occur infrequently and are expensive to find, but pi * log(1/pi) → 0.

2. pi close to 1: such candidates occur frequently and are easy to find, so pi * log(1/pi) → 0.

3. pi in between: pi * log(1/pi) contributes the most (the entropy is largest for intermediate probabilities).
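Putting these annotations together, the quantity described is the entropy of the candidate distribution; a reconstruction of the formula the slide shows as an image:

```latex
H \;=\; \sum_i p_i \log \frac{1}{p_i} \;=\; -\sum_i p_i \log p_i,
\qquad p_i = p(C_i \mid \text{measurement outcome})
```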

Page 17: The Intuition Behind the Entropy

The expected entropy of measuring xi sums, over the m possible outcomes vik of the measurement xi, the probability that xi = vik times the entropy that results if xi = vik.

Intuition: the expected entropy of xi = Σk (the probability of the outcome vik) * (the entropy if xi = vik).

This formula is then approximated by the expression on the next page.
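A reconstruction of the expected-entropy formula from these annotations (the slide renders it as an image):

```latex
H_e(x_i) \;=\; \sum_{k=1}^{m} p(x_i = v_{ik}) \, H(x_i = v_{ik})
```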

Page 18: The Intuition Behind the Entropy

This formula is an approximation of the expected entropy above, where Ui is the set of candidates which do not predict any value for xi and m is the number of possible values of xi.

The goal is to find the measurement xi that minimizes this function.
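The approximation itself is shown as an image on the slide and is not preserved. One standard form, consistent with the definitions of Ui and m above (a reconstruction, not necessarily the slide's exact notation), is:

```latex
H_e(x_i) \;\approx\; \sum_{k=1}^{m} p(x_i = v_{ik}) \log p(x_i = v_{ik}) \;+\; p(U_i)\log m \;+\; \text{const},
\qquad
p(x_i = v_{ik}) \;=\; \sum_{C_l \text{ predicts } v_{ik}} p(C_l) \;+\; \frac{p(U_i)}{m}
```

The constant is the entropy of the current candidate distribution, which is the same for every xi, so the measurement minimizing the first two terms also minimizes the expected entropy.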

Page 19: Entropy-based Measurement Proposal - Example

vari | valik | p(vari = valik) | Σk p(vari = valik) * log p(vari = valik)
X    | 6     | .505            |
     | 4     | .495            | -.99993
Y    | 6     | .990            |
     | 4     | .010            | -.0808
Z    | 6     | .995            |
     | 8     | .005            | -.0454

For example, x = 6 under the diagnoses {A1}, {M2, A2}, {M2, M3}: 0.495 + 0.005 + 0.005 = 0.505.

Proposal: measure the variable which minimizes the entropy: X.
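A short sketch (not from the slides) reproducing this table from the normalized probabilities of page 13 and the prediction table of page 10; the exact decimals depend slightly on how early the probabilities are rounded:

```python
import math

# Normalized probabilities of the minimal fault localizations (page 13)
p = {("M1",): 0.495, ("A1",): 0.495, ("M2", "A2"): 0.005, ("M2", "M3"): 0.005}

# Predicted value of each variable under each fault localization (pages 10 and 13)
pred = {
    "X": {("M1",): 4, ("A1",): 6, ("M2", "A2"): 6, ("M2", "M3"): 6},
    "Y": {("M1",): 6, ("A1",): 6, ("M2", "A2"): 4, ("M2", "M3"): 4},
    "Z": {("M1",): 6, ("A1",): 6, ("M2", "A2"): 6, ("M2", "M3"): 8},
}

def score(var):
    """Sum_k p(var = v_k) * log2 p(var = v_k); the proposal measures the minimizer."""
    p_val = {}
    for loc, v in pred[var].items():
        p_val[v] = p_val.get(v, 0.0) + p[loc]
    return sum(q * math.log2(q) for q in p_val.values())

for var in "XYZ":
    print(var, round(score(var), 4))   # X: about -1.0, Y: about -0.081, Z: about -0.045
print(min("XYZ", key=score))           # -> X, the proposed measurement
```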

Page 20: Computing Posterior Probability

How do we update the probability of a candidate? Given a measurement outcome xi = uik, the probability of a candidate is computed via Bayes' rule: it gives the probability that Cl is the actual candidate given the measurement xi = uik. p(Cl) is known in advance.

The normalization factor is the probability that xi = uik: the sum of the probabilities of the candidates consistent with this measurement.
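Written out, the Bayes-rule update described here is (a reconstruction; the slide shows the formula as an image):

```latex
p(C_l \mid x_i = u_{ik}) \;=\; \frac{p(x_i = u_{ik} \mid C_l)\, p(C_l)}{p(x_i = u_{ik})},
\qquad
p(x_i = u_{ik}) \;=\; \sum_{l} p(x_i = u_{ik} \mid C_l)\, p(C_l)
```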

Page 21: Computing Posterior Probability

How to compute p(xi = uik | Cl)? Three cases:

1. If the candidate Cl predicts the output xi = uik, then p(xi = uik | Cl) = 1.

2. If the candidate Cl predicts an output xi ≠ uik, then p(xi = uik | Cl) = 0.

3. If the candidate Cl predicts no output for xi, then p(xi = uik | Cl) = 1/m (where m is the number of possible values for xi).
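A minimal sketch (not from the slides) of this likelihood and the resulting posterior update; `predict(Cl, xi)`, returning the predicted value or None, is an assumed helper as in the earlier sketch:

```python
def likelihood(predicted, observed, m):
    """p(xi = uik | Cl) for the three cases above."""
    if predicted is None:                            # case 3: Cl predicts no value for xi
        return 1.0 / m
    return 1.0 if predicted == observed else 0.0     # cases 1 and 2

def posterior(candidates, predict, xi, observed, m):
    """Bayes update of the candidate probabilities after observing xi = observed."""
    unnormalized = {c: likelihood(predict(c, xi), observed, m) * p
                    for c, p in candidates.items()}
    z = sum(unnormalized.values())                   # p(xi = observed), the normalization factor
    return {c: w / z for c, w in unnormalized.items()}
```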

Page 22: Example

[Figure not preserved: a cascade of inverters; A is the first inverter, with input a and output b, and e is a point further from the input.]

The initial probability of failure of an inverter is 0.01.

Assume the input a = 1. What is the best next measurement, b or e?

Assume the next measurement points to a fault. Measuring closer to the input produces fewer conflicts:

b = 1 ⟹ A is faulty.
e = 0 ⟹ some component is faulty.

Page 23: Example

On the other hand, measuring further away from the input is more likely to produce a discrepant value: the larger the number of components involved, the more likely it is that one of them is faulty.

Here, the probability of finding a discrepant value outweighs the expected cost of isolating the candidate from a larger set, so the best next measurement is e.

Page 24: Example

H(b) = p(b = true | all diagnoses with observation a) * log p(b = true | all diagnoses with observation a)
     + p(b = false | all diagnoses with observation a) * log p(b = false | all diagnoses with observation a)

Page 25: Example

Assume a = 1 and e = 0: then the next best measurement is c, which is equidistant from the previous measurements.

Assume a = 1 and e = 1 and p(A) = 0.025: then the next best measurement is b.