The Bayes theorem and the associated rules of probability are a consistent and powerful basis for algorithms manipulating probability mass functions and probability density functions directly.
The Bayes methodology is a statistical approach to modeling and simulating discrete-event systems under uncertainty.
! Why is it worth considering the Bayes methodology?
The basic assumption is that the state variables can be represented by probability mass functions (discrete variables) or probability density functions (continuous variables).
Based on the Bayes theorem, conclusions can be drawn to identify optimal decisions.
Review of relevant definitions and formulas from probability theory
Probability of event A: P(A)
Probability of event A on the condition of event B: P(A|B)
Bayes' formula (theoretical basis of Bayesian networks): P(A|B) = P(B|A) P(A) / P(B)
This formula enables the conversion of the probability of event A on the condition of event B into the probability of event B on the condition of event A.
Formula of the total probability: P(A) = Σ_i P(A|B_i) P(B_i)
The absolute probability of A can be calculated based on the conditional probability of A.
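A minimal numeric sketch of Bayes' formula combined with the total probability. The scenario and all numbers are hypothetical, chosen only for illustration: A = "machine has a fault", B = "test is positive".

```python
# Hypothetical probabilities for illustration only.
p_fault = 0.02                # P(A), prior probability of a fault
p_pos_given_fault = 0.95      # P(B|A)
p_pos_given_ok = 0.10         # P(B|not A), false-positive rate

# Total probability: P(B) = P(B|A) P(A) + P(B|not A) P(not A)
p_pos = p_pos_given_fault * p_fault + p_pos_given_ok * (1 - p_fault)

# Bayes' formula: P(A|B) = P(B|A) P(A) / P(B)
p_fault_given_pos = p_pos_given_fault * p_fault / p_pos
print(round(p_pos, 4), round(p_fault_given_pos, 4))   # 0.117 0.1624
```

Even with a fairly reliable test, the posterior fault probability stays modest because the prior P(A) is small.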
1. Product rule: The joint probability of A and B is: P(A,B) = P(B|A) P(A) = P(A|B) P(B)
2. Independence: The random variables A and B are independent if the joint probability distribution can be factorized as: P(A,B) = P(A) P(B)
3. Sum rule: If the hypotheses B1, ..., Bn are mutually exclusive and therefore form a partition of the set B, the marginal likelihood of the data is: P(A) = Σ_i P(A|B_i) P(B_i)
Hence, the Bayes theorem can be expanded: P(B_i|A) = P(A|B_i) P(B_i) / Σ_j P(A|B_j) P(B_j)
Rules of probability
Note: in the Bayesian methodology the random variables A and B are denoted h (hypothesis) and D (data), respectively.
P(A,B) = P(B|A) P(A) = P(A|B) P(B)   (B conditioned on A / A conditioned on B)
W (Winter): {true, false}
C (Slippery roads): {true, false}
D (Klaus has drunk alcohol): {true, false}
K (Klaus has an accident): {true, false}
M (Mike has an accident): {true, false}
Formalism to describe causal dependence within given situations. Consisting of:
Season of the year: Variable W | states (true, false) | has a significant impact on the condition of the street
Condition of the street: Variable C | states (true, false) | describes the slipperiness of the street and has a significant impact on the risk of an accident for Klaus (K) or Mike (M)
Occurrence of an accident: Variables K or M | states (true, false) | describe the occurrence of an accident of Klaus (K) or Mike (M)
Condition of Klaus: Variable D | states (true, false) | describes whether Klaus has drunk alcohol
Two variables A and B of a causal network are designated as dependent if the probabilities of the states of variable A depend on the state of variable B and vice versa.
Two variables A and B of a causal network are designated as conditionally dependent if A and B are dependent for specific states Z and independent for all other states Z̅.
W (Winter): {true, false}
C (Slippery roads): {true, false}
M (Mike has an accident): {true, false}
Variables W and M are independent if the condition of the road C is known.
If the conditions of the street are known the season has no impact on the probability of an accident.
[Diagram: branching structure C → K, C → M]
C (Slippery roads): {true, false}
K (Klaus has an accident): {true, false}
M (Mike has an accident): {true, false}
Variables K and M are independent if the condition of the road C is known. If K has an accident and the condition of the street is unknown, the probability that the street is slippery increases; consequently, the probability of an accident of M also increases.
[Diagram: converging structure D → K ← C]
D (Klaus has drunk alcohol): {true, false}
C (Slippery roads): {true, false}
K (Klaus has an accident): {true, false}
Variables D and C become dependent on each other if the state of variable K is known. If Klaus (K) has an accident and the street is not slippery, then the probability that he has drunk alcohol increases ("explaining away").
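This "explaining away" effect can be checked by direct enumeration over a small network D → K ← C. All CPT values below are hypothetical, chosen only to make the effect visible:

```python
from itertools import product

# Hypothetical CPT values for illustration only.
p_d = 0.2                    # P(D=true): Klaus has drunk alcohol
p_c = 0.3                    # P(C=true): road is slippery
p_k = {(True, True): 0.8, (True, False): 0.5,
       (False, True): 0.3, (False, False): 0.05}   # P(K=true | D, C)

def joint(d, c, k):
    pd = p_d if d else 1 - p_d
    pc = p_c if c else 1 - p_c
    pk = p_k[(d, c)] if k else 1 - p_k[(d, c)]
    return pd * pc * pk

# P(D=true | K=true): C is unknown, so it is summed out
num = sum(joint(True, c, True) for c in (True, False))
den = sum(joint(d, c, True) for d, c in product((True, False), repeat=2))
p_d_given_k = num / den

# P(D=true | K=true, C=false): the road is known not to be slippery
p_d_given_k_notc = joint(True, False, True) / (
    joint(True, False, True) + joint(False, False, True))

print(round(p_d_given_k, 3), round(p_d_given_k_notc, 3))   # 0.541 0.714
```

Ruling out the slippery road as a cause raises the posterior probability of the alternative cause (alcohol) beyond its value with C unknown.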
The Bayes theorem goes back to the seminal work of the English reverend Thomas Bayes in the 18th century on games of chance.
To answer this question, the Bayes theorem is used.
Formula: P(h|D) = P(D|h) P(h) / P(D)
P(h): A priori probability of a hypothesis h (or a model) representing the initial degree of belief
P(D): A priori probability of the data D (observations)
P(h|D): A posteriori probability of hypothesis h under the condition of given data D
P(D|h): Probability of data D under the condition of hypothesis h
known as the Bayesian methodology
Two meanings of probability: Frequencies of outcomes in random experiments, e.g. repeated rolling of a die
Degrees of belief in propositions that do not necessarily involve random experiments, e.g. probability that a certain production machine will fail, given the evidence of a poor surface quality of the workpiece
• The choice of P(h) and P(D|h) represents the a priori knowledge and assumptions of the modeler concerning the application domain.
• The hypotheses are regarded as functions of the observations, which can be adapted iteratively to the state of knowledge of an observer.
• If all hypotheses have the same a priori probability, the equation above can be simplified further and only the term P(D|h) has to be maximized. Each hypothesis maximizing P(D|h) is called the maximum likelihood hypothesis (hML) :
h_ML = argmax_{h ∈ H} P(D|h)
h_MAP = argmax_{h ∈ H} P(h|D) = argmax_{h ∈ H} P(D|h) P(h) / P(D) = argmax_{h ∈ H} P(D|h) P(h)
Objective function for Bayesian parameter estimation is the most likely hypothesis given the observations. The hypothesis hMAP representing the maximum of the probability mass is called the maximum a posteriori hypothesis:
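A small sketch of selecting h_ML and h_MAP over a finite hypothesis space. The priors P(h) and likelihoods P(D|h) are hypothetical values chosen so that the two criteria disagree:

```python
# Hypothetical hypothesis space with priors and likelihoods.
priors =      {"h1": 0.7, "h2": 0.2, "h3": 0.1}   # P(h)
likelihoods = {"h1": 0.2, "h2": 0.4, "h3": 0.9}   # P(D|h)

# h_ML maximizes P(D|h); h_MAP maximizes P(D|h) P(h) (P(D) cancels out).
h_ml = max(likelihoods, key=likelihoods.get)
h_map = max(priors, key=lambda h: likelihoods[h] * priors[h])
print(h_ml, h_map)   # h3 h1
```

With a uniform prior the two selections would coincide; here the strong prior on h1 overrules the higher likelihood of h3.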
Workpieces of only one type are stored in a pallet cage.
A produced workpiece is either faultless (index g for "good") or defective (index b for "bad").
Due to a new manufacturing process, the prior probability distribution of the frequency of faultless and defective workpieces is unknown.
? Calculate, step by step, the posterior distribution of the proportion of faultless workpieces (among the produced workpieces) on the basis of the Bayesian methodology.
The input data are a sample of N workpieces, randomly drawn from the line!
The workpieces in the sample are tested independently!
Example: If an accident occurs, every tenth person of the population is able to provide initial medical treatment
How large is the probability that 0, 1, …, up to 10 persons out of a total of 10 are able to provide initial medical treatment? E.g. P(X = 1) → one person is able to help.
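This is a binomial distribution with n = 10 and p = 0.1; the full probability mass function can be computed directly:

```python
from math import comb

# Binomial pmf: P(X = k) that exactly k of n = 10 persons can help,
# each independently with probability p = 0.1.
n, p = 10, 0.1
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
print(round(pmf[0], 4), round(pmf[1], 4))   # 0.3487 0.3874
```

P(X = 0) and P(X = 1) together already account for roughly three quarters of the probability mass.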
For each measurement observation the initial uniform distribution is transformed into the Beta-type posterior distribution for the independent parameter pg.
Example of Bayesian methodology (IV)
f_p(p_g | N, n_g) = Γ(N+2) / (Γ(n_g+1) Γ(N−n_g+1)) · p_g^{n_g} (1−p_g)^{N−n_g} ~ Beta(n_g+1, N−n_g+1)
Due to the Bayesian methodology we can define the posterior probability density given above.
Incremental measuring of the workpieces drawn from the production line leads to the samples:
after N = 5 measurements ng = 3 workpieces turned out to be faultless
after N = 10 measurements ng = 6 workpieces turned out to be faultless
after N = 15 measurements ng = 9 workpieces turned out to be faultless
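The Beta posterior for the three incremental samples can be summarized by its mean and mode; this is a minimal sketch using only the closed-form expressions of the Beta distribution:

```python
# Posterior over the proportion p_g of faultless workpieces: starting
# from a uniform prior, after testing N pieces with n_g faultless the
# posterior is Beta(n_g + 1, N - n_g + 1), with mean (n_g + 1)/(N + 2).
samples = [(5, 3), (10, 6), (15, 9)]   # (N, n_g) from the text
for N, ng in samples:
    mean = (ng + 1) / (N + 2)
    mode = ng / N          # posterior mode coincides with the ML estimate
    print(N, ng, round(mean, 3), round(mode, 3))
```

As N grows, the posterior mean approaches the observed fraction n_g/N = 0.6 and the density concentrates around it.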
Reason: Number of alternatives to factorize the joint probability distribution increases exponentially with the number of variables:
To classify and predict a discrete event system model with uncertainty, it is necessary to make assumptions about the statistical independence of variables.
… encode conditional independence assumptions among subsets of random system variables
… are represented by a directed acyclic graphical model, with:
- directed arcs between nodes (model structure)
- conditional probability tables related to the random system variables (model parameters)
Nodes: Random variables as state variables and observables of the system model
Directed arcs: Causal dependencies of the system model from which the conditional independence of the random system variables follows
If a directed arc is drawn from node X (“Rain”) to node Y (“Wet Road”), node X is called parent node of Y and Y is called the child node of X
Nodes without parent nodes are called root nodes
A directed path from node X to Y is said to exist, if one can find a valid sequence of nodes starting from X and ending in Y such that each node in the sequence is a parent of the following node in the sequence
Each random variable Y with the parent nodes X1, ..., Xn is associated with a conditional
probability table (CPT) encoding the conditional probability P(Y=y | X1=x1, ..., Xn=xn)
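A minimal sketch of how such a CPT can be represented, using the "Rain" → "Wet Road" example from the text; the probability values are hypothetical:

```python
# CPT for node Y ("Wet Road") with a single parent X ("Rain"),
# stored as a mapping from the parent state to P(Y = true | X = x).
cpt_wet_road = {True: 0.9, False: 0.1}   # hypothetical values

def p_y(y, x):
    """Look up P(Y = y | X = x) in the CPT."""
    p_true = cpt_wet_road[x]
    return p_true if y else 1 - p_true

print(round(p_y(True, True), 2), round(p_y(False, True), 2))   # 0.9 0.1
```

For a node with n parents, the key would be the tuple of parent states, so the table has one row per combination of parent states.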
1. Proposition: The joint probability distribution of a discrete Bayesian network with the random variables X1, X2, …, Xn can be factorized as follows:
Factorization of the joint probability distribution
Therefore, a transformation is only forward directed!
Note: Factorization mechanism is directly associated with the graphical model: Compared to a fully interlinked and structurally uninformative graph the number of
alternatives to factorize the joint probability distribution can be significantly reduced.
A graphical model can be developed from first principles and established theories about cause and effect relationships.
Note: Several valid factorizations can exist for a given joint probability distribution of a Bayesian model
P(M, E, D, G, T, S) = P(M|E,D) P(E|G,T) P(D|T) P(T|S) P(S) P(G)
Remember: The joint probability distribution encoded by a discrete Bayesian network with the random variables X1, X2, …, Xn can be factorized as follows: P(X1, X2, ..., Xn) = ∏_{i=1}^{n} P(Xi | Parents(Xi))
For the example the following parameter setting is developed:
Overall goal: Probability calculation with Bayesian networks, also referred to as "inference".
Estimation of the probability mass functions of non-observable (hidden) random variables in the network, if (some) states of observable variables are known.
[Diagram: example network with root (parent) nodes Season (S) and Grid (G); child nodes Temperature (T), Electronics (E), Drive (D), and Machine (M)]
! If due to the network structure the child nodes are observable and hidden causes have to be estimated, the inference is called a diagnosis or bottom-up inference.
Example: P(“significant grid voltage jitters” | ”low productivity of machine”)
! If root nodes or parent nodes are observable and effects have to be estimated, the inference is called a prognosis or top-down inference.
Example: P("low productivity of machine" | "over-heated drive")
! Inference in Bayesian networks is very flexible: The states of arbitrary network nodes can be defined and therefore the probability distributions of the other nodes can be updated.
Example: P(... | “winter season”, “significant grid voltage jitters“)
But… the exact calculation of probability values is, in general, an NP-hard problem.
Therefore, we only present closed-form solutions for chains of variables (like Markov chains) and a simple tree in this introductory course.
Case 4: Simple tree: X1, X2 → Y → Z, and {Z = z} is observed.
bel(x1) = p(x1|z) = p(x1) p(z|x1) / p(z) = c · p(x1) · λ(x1)
λ(x1) = p(z|x1) = Σ_y Σ_{x2} p(z|y) p(y|x1, x2) p(x2)
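A minimal sketch of this closed-form inference for the simple tree, with Z = true observed; all CPT values are hypothetical:

```python
from itertools import product

# Hypothetical CPTs for the tree X1, X2 -> Y -> Z.
p_x1 = {True: 0.3, False: 0.7}
p_x2 = {True: 0.5, False: 0.5}
p_y_true = {(True, True): 0.9, (True, False): 0.6,
            (False, True): 0.5, (False, False): 0.1}  # P(Y=true | x1, x2)
p_z_true = {True: 0.8, False: 0.2}                    # P(Z=true | y)

def lam(x1):
    """lambda(x1) = P(Z=true | x1) = sum_y sum_x2 p(z|y) p(y|x1,x2) p(x2)."""
    total = 0.0
    for y, x2 in product((True, False), repeat=2):
        py = p_y_true[(x1, x2)] if y else 1 - p_y_true[(x1, x2)]
        total += p_z_true[y] * py * p_x2[x2]
    return total

# bel(x1) = c * p(x1) * lambda(x1); c normalizes over the states of X1
unnorm = {x1: p_x1[x1] * lam(x1) for x1 in (True, False)}
c = 1 / sum(unnorm.values())
bel = {x1: c * v for x1, v in unnorm.items()}
print(round(bel[True], 3), round(bel[False], 3))   # 0.423 0.577
```

The normalizing constant c plays the role of 1/p(z), so p(z) never has to be computed explicitly.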
Moreover, it is possible to derive exact inference algorithms for trees with multiple layers as well as multiply connected trees. Multiply connected trees are converted into multiple layer trees. These algorithms are given in KOCH (2000).
In the previous lecture, the approach of static Bayesian networks with discrete random variables was introduced. It encodes prior knowledge and independence assumptions of the problem domain to be modelled both efficiently and consistently in a graphical model, and it allows the system state to be inferred from incomplete data.
In this lecture the primary question is how we can exploit the methodology of Bayesian networks to
model and simulate stochastic processes. These processes were already analyzed in the 7th and 8th lecture. As in the case of Markov chains we are only interested in the total probability p(x´, x) of a transition from state x to state x´ and do not distinguish the events triggering the state transition.
For instance, it is possible to represent a discrete-state and discrete-time Markov chain as a
Bayesian network. For this purpose, the time-indexed random variable Ot, defined over the integers 1, 2, …, encodes the observable state of the chain in each time step t of the process:
Clearly, we can make use of the structure of the graphical model according to proposition 1 of the previous lecture to factorize the joint probability distribution of the observables:
Furthermore, we showed in the previous lecture how to compute the bottom-up inference
(diagnosis) in such a Markov chain using the Bayes theorem. According to the factorization of the joint distribution the predictive power of this simple process
model is limited, because the state transition mechanism considers only two neighboring time slices. In other words, if we have modeled the state sequence {O1, ..., Ot} and we want to predict the future state of the stochastic process Ot+1, the simple Markovian chain model considers only the distribution of the probability mass related to Ot in conjunction with the single-step transition probabilities pij. The previous instances of the process are irrelevant, given the present state.
This minimum chain model is also called a first-order Markov chain, because only two consecutive
time slices are linked in the graphical process model. The first-order Markov chain can be considered as the minimum structure of a dynamic Bayesian network.
P(O1, O2, ..., OT) = P(O1) P(O2|O1) P(O3|O2) ··· P(OT|OT−1)
A significantly larger predictive power of the chain model is possible (without recoding states, see 8th lecture!), if the present (t) state of the chain does not only depend on the state in the previous time slice (t-1) but also on additional time slices in the past of the process (t-2, t-3, …). If the “memory depth” of the model is 2 it is called a second-order Markov chain and drawn as follows:
Clearly, the joint probability distribution of the second-order Markov chain can be factorized as:
P(O1, O2, ..., OT) = P(O1) P(O2|O1) P(O3|O2,O1) P(O4|O3,O2) ··· P(OT|OT−1,OT−2)
1. Proposition: The joint probability distribution of a discrete-state, discrete-time Markov chain of order k can be factorized in each time step T as: P(O1, ..., OT) = P(O1) ∏_{t=2}^{T} P(Ot | Ot−1, ..., O_max(1, t−k))
Markov chains (MC) of finite order k are able to simulate significant memory capacity, but the number of model parameters N = |λ| (λ represents the parameter tuple) that are stored in the prior and conditional probability tables grows exponentially with the order k.
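The parameter counts listed below follow the closed form N_k = (S−1) + S(S−1) + … + S^k(S−1) for S states; a quick sketch to evaluate it:

```python
# Number of free parameters of a k-th order Markov chain with S states:
# N_k = (S-1) + S(S-1) + S^2(S-1) + ... + S^k (S-1)
def n_params(S, k):
    return sum(S**j * (S - 1) for j in range(k + 1))

for k in (1, 2, 3, 4):
    print(k, n_params(3, k))   # 8, 26, 80, 242: exponential growth in k
```

Already at order 4 the three-state chain needs 242 parameters, which motivates the hidden-variable models introduced next.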
Consider a stochastic process with three states ot ∈ {1, 2, 3}. We have:
• First-order MC: N1 = (3−1) + 3(3−1) (initial state probabilities plus transition matrix; rows must sum to 1)
• Second-order MC: N2 = (3−1) + 3(3−1) + 3²(3−1)
• k-th order MC: Nk = (3−1) + 3(3−1) + … + 3^k(3−1)
In order to avoid this rapid growth of the number of parameters and to be able to model processes
with latent dependency structures leading to long-range correlations, the approach of Markov chains with hidden variables was invented in engineering science. These Hidden Markov Models (HMM) distinguish a not directly observable state process {Qt} that satisfies the Markov property and a non-Markovian observation process {Ot} that depends on the state process. This kind of dynamic Bayesian network with hidden (latent) state variables has the following structure:
2. Proposition: The joint probability distribution of a Hidden Markov Model can be factorized in each time step T as:
P(Q1, ..., QT, O1, ..., OT) = P(Q1) P(O1|Q1) ∏_{t=2}^{T} P(Qt|Qt−1) P(Ot|Qt)
1. Def.: A discrete-time, discrete-state Hidden Markov Model is represented by the parameter tuple
λHMM = (Q, O, Π, A, B), where
- Q is a set of hidden states being mapped in the following onto the integers {1, 2,..., J}
- O is a set of observable states being mapped in the following onto the integers {1, 2,..., K}
- Π = (π1, ..., πJ) encodes the start vector indicating the initial distribution of the probability mass over the hidden states with πj = P(Q1 = j) (j = 1...J)
- A = (aij) = P(Qt = j | Qt-1 = i) encodes the transition matrix of the hidden process (i, j = 1...J)
- B = (bjk) = P(Ot = k | Qt = j) encodes the emission matrix of the observable states given the hidden states (j = 1...J , k = 1... K).
Therefore, the distribution of the probability mass Π (t) in time step t given the initial distribution Π, transition matrix A and emission matrix B can be calculated as follows:
A fluid in a chemical reactor has two states Q = {1 (non-toxic), 2 (toxic)}. According to the molecular properties of the fluid its state can change spontaneously (e.g. due to temperature jitters) from the non-toxic state to the toxic state with probability p12 = 0.01 at any time instant. This state switching is irreversible. Laboratory studies have shown that the temporal unfolding of the state process can be represented with a sufficiently high level of accuracy by a first-order Markov chain model. Initially, the fluid is filled in the non-toxic state into the reactor.
The measurement of the state of the fluid can only be carried out with the help of an integrated
sensor. A direct state observation is not possible. The sensor is fast enough to finish the measurement in the same time instant. The sensor identifies the toxic state with a reliability of 99.9% and the non-toxic state with a reliability of 95%.
How is the probability mass distributed over the observable states in time step t = 4 when the system is initialized in the non-toxic state?
Solution: The initial distribution over the hidden states is Π(1) = [1.00, 0.00].
2. Def.: A discrete-state, discrete-time dynamic Bayesian network is represented by the parameter tuple λDBN = (G1, Gtr, {Πi}i∈{1,...,I}, {CPTj}j∈{1,...,J}), where
- G1 is a directed, acyclic graph of start nodes in the first time slice (t = 1) encoding the initial distribution of the probability mass, with the same meaning as in a static Bayesian network: "Each node is conditionally independent from its non-descendants, given its parents",
- Gtr is a directed, acyclic graph of transition nodes in replicated time slices encoding the transition probabilities between time steps, with the same meaning as in a static Bayesian network,
- Πi = (πi_km) encode the start vectors or start matrices of the observable as well as hidden random variables X1^i of the start nodes in the first time slice (t = 1), with components πi_1m = P(X1^i = m) (i = 1, ..., |G1|; m = 1, ..., |X1^i|) if X1^i is a root node, or πi_km = P(X1^i = m | Parents(X1^i) = w1_k) (k = 1, ..., |Parents(X1^i)|) if X1^i is not a root node,
- CPTj = (aj_km) encode the transition matrices regarding the observable as well as hidden random variables Xtr^j in the replicated time slices (t = 2, 3, …), with components aj_km = P(Xtr^j = m | Parents(Xtr^j) = wtr_k) (j = 1, ..., |Gtr(t=2)| − 1; k = 1, ..., |Parents(Xtr^j)|; m = 1, ..., |Xtr^j|).
3. Def.: A DBN restricted to two consecutive time slices, with the aggregated random state variables
X = (Xt^1, Xt^2, ..., Xt^n) and X′ = (Xt+1^1, Xt+1^2, ..., Xt+1^n),
is a net fragment Gtr and represents the probability distribution of a transition model according to
Ptr(X′ | X) ≡ ∏_{i=1}^{n} P(X′^i | Parents(X′^i))
3. Proposition: The joint probability distribution of the aggregated random state variables of a DBN can be factorized in each time slice T according to:
P(X1, X2, ..., XT) = Π1(X1) ∏_{t=2}^{T} Ptr(Xt | Xt−1)
Here, Π1(·) represents the initial probability distribution of the aggregated state variables in the first time slice and Ptr(·) represents the transition model defined by Def. 3.