Notice that sampling methods could in general be used even when we don’t know the Bayes net (and are just observing the world)! Given that we do know the Bayes net, we should strive to make the sampling more efficient.
Generating a Sample from the Network
<C, ~S, R, W>
Network → Samples → Joint distribution
Note that the sample is generated in causal order, so all the required probabilities can be read off from CPTs! (To see how useful this is, consider generating the sample in the order WetGrass, Rain, Sprinkler…: to sample WG we would first need P(WG), then P(Rain|WG), then P(Sp|Rain,WG), and NONE of these are directly given in the CPTs; they would have to be computed.) Note that in MCMC we do (re)sample a node given its Markov blanket, which is not in causal order, since the MB contains children and their parents.
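A minimal sketch of causal-order (prior) sampling for the Cloudy/Sprinkler/Rain/WetGrass network. The CPT numbers below are assumed for illustration, not taken from the slides:

```python
import random

# Illustrative CPTs (assumed numbers) for the Cloudy/Sprinkler/Rain/WetGrass net.
P_C = 0.5
P_S = {True: 0.1, False: 0.5}                      # P(Sprinkler=T | Cloudy)
P_R = {True: 0.8, False: 0.2}                      # P(Rain=T | Cloudy)
P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.0}   # P(WetGrass=T | Sprinkler, Rain)

def prior_sample(rng):
    """Generate one sample <C, S, R, W> in causal (topological) order:
    every probability needed is read directly off a CPT."""
    c = rng.random() < P_C
    s = rng.random() < P_S[c]
    r = rng.random() < P_R[c]
    w = rng.random() < P_W[(s, r)]
    return c, s, r, w

rng = random.Random(0)
samples = [prior_sample(rng) for _ in range(20000)]
# Sample frequencies approximate the joint; e.g. P(Rain) = 0.5*0.8 + 0.5*0.2 = 0.5
p_rain = sum(r for _, _, r, _ in samples) / len(samples)
```

Sampling in any non-causal order would require conditional probabilities that are not in the CPTs, exactly as the note above explains.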
Notice that to attach the likelihood to the evidence, we are using the CPTs in the bayes net. (Model-free empirical observation, in contrast, either gives you a sample or not; we can’t get fractional samples)
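This is likelihood weighting in miniature: evidence variables are clamped, and the sample carries the CPT likelihood of that evidence as a fractional weight. The CPT numbers and the choice of evidence (Sprinkler=T, WetGrass=T) are assumed for the sketch:

```python
import random

# Illustrative CPTs (assumed numbers), same network shape as above.
P_C = 0.5
P_S = {True: 0.1, False: 0.5}                      # P(Sprinkler=T | Cloudy)
P_R = {True: 0.8, False: 0.2}                      # P(Rain=T | Cloudy)
P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.0}   # P(WetGrass=T | Sprinkler, Rain)

def weighted_sample(rng):
    """Sample non-evidence vars; clamp evidence Sprinkler=T, WetGrass=T
    and multiply in its likelihood from the CPTs (a 'fractional' sample)."""
    w = 1.0
    c = rng.random() < P_C
    s = True                   # evidence: weight by P(Sprinkler=T | c)
    w *= P_S[c]
    r = rng.random() < P_R[c]
    wg = True                  # evidence: weight by P(WetGrass=T | s, r)
    w *= P_W[(s, r)]
    return (c, s, r, wg), w

rng = random.Random(0)
num = den = 0.0
for _ in range(20000):
    (c, s, r, wg), w = weighted_sample(rng)
    den += w
    if r:
        num += w
p_rain_given_ev = num / den    # estimates P(Rain=T | Sprinkler=T, WetGrass=T)
```

A model-free observer could only keep or reject whole samples; the weights here are exactly the fractional credit the CPTs make possible.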
Note that the other parents of Zj are part of the Markov blanket
P(rain | cl, sp, wg) ∝ P(rain | cl) * P(wg | sp, rain)
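A sketch of the corresponding Gibbs resampling step: score both values of Rain against its Markov blanket and normalize. The CPT numbers are assumed for illustration:

```python
# Resampling Rain given its Markov blanket {Cloudy, Sprinkler, WetGrass}:
#   P(rain | cl, sp, wg)  ∝  P(rain | cl) * P(wg | sp, rain)
# CPT numbers below are illustrative assumptions.
P_R = {True: 0.8, False: 0.2}                      # P(Rain=T | Cloudy)
P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.0}   # P(WetGrass=T | Sprinkler, Rain)

def p_rain_given_mb(cloudy, sprinkler, wetgrass):
    """Return the normalized probability of Rain=T given its Markov blanket."""
    def score(r):
        p_r = P_R[cloudy] if r else 1.0 - P_R[cloudy]
        p_w = P_W[(sprinkler, r)] if wetgrass else 1.0 - P_W[(sprinkler, r)]
        return p_r * p_w
    t, f = score(True), score(False)
    return t / (t + f)

# e.g. Cloudy=T, Sprinkler=T, WetGrass=T:
#   0.8*0.99 / (0.8*0.99 + 0.2*0.90) ≈ 0.815
p = p_rain_given_mb(True, True, True)
# to actually resample: rain = random.random() < p
```

Only the two CPT rows mentioning Rain are needed; everything outside the Markov blanket cancels in the normalization.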
Ontological and epistemological commitments of various logics:

  Language                       Ontological commitment            Epistemological commitment
  -----------------------------------------------------------------------------------------
  Prop. logic                    facts                             t/f/u
  First-order predicate
    logic (FOPC)                 facts, objects, relations         t/f/u
  First-order temporal logic     facts, objects, relations, time   t/f/u
  Prob. prop. logic              facts                             degree of belief
  First-order prob. logic        facts, objects, relations         degree of belief
  Fuzzy logic                    facts                             degree of truth
Atomic → Propositional → Relational → First-order
• Atomic representations: states as black boxes
• Propositional representations: states made up of state variables
• Relational representations: states made up of objects and relations between them
  – First-order: there are functions which “produce” objects (so, essentially, an infinite set of objects)
• Propositional can be compiled to atomic (with exponential blow-up)
• Relational can be compiled to propositional (with exponential blow-up) if there are no functions
  – With functions, we cannot compile relational representations into any finite propositional representation
“Higher-order” representations can (sometimes) be compiled to lower-order ones
Expressiveness of Representations
Connection to propositional logic: Think of “atomic sentences” as propositions…
general object referent
Can’t have predicates of predicates.. thus first-order
Important facts about quantifiers
• Forall and There-exists are related through negation:
  – ~[forall x P(x)] = exists x ~P(x)
  – ~[exists x P(x)] = forall x ~P(x)
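The duality can be checked exhaustively on a small finite domain, over every possible extension of a unary predicate P (a sketch, with an assumed three-element domain):

```python
from itertools import product

# Check the quantifier/negation duality over every extension of P
# on a small finite domain.
domain = [0, 1, 2]
count = 0
for extension in product([False, True], repeat=len(domain)):
    P = dict(zip(domain, extension))
    # ~[forall x P(x)] = exists x ~P(x)
    assert (not all(P[x] for x in domain)) == any(not P[x] for x in domain)
    # ~[exists x P(x)] = forall x ~P(x)
    assert (not any(P[x] for x in domain)) == all(not P[x] for x in domain)
    count += 1
```

`all` and `any` play the roles of forall and exists on a finite domain, which is exactly the Herbrand-style view used later in these notes.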
• Quantification is allowed only on variables – you can’t quantify over predicates; you can’t say
  – Forall P [Symmetric(P) <=> Forall x,y P(x,y) => P(y,x)]   (you have to write it out once per relation)
• Order of quantifiers matters
Family Values: Falwell vs. Mahabharata
• According to a recent CTC study,
“….90% of the men surveyed said they will marry the same woman..”
“…Jessica Alba.”
Caveat: Order of quantifiers matters
Consider a world with two objects, Fido and Tweety:

  Exists x Forall y loves(x,y)
     = [loves(Fido,Fido) & loves(Fido,Tweety)] V [loves(Tweety,Fido) & loves(Tweety,Tweety)]

  Forall y Exists x loves(x,y)
     = [loves(Fido,Fido) V loves(Tweety,Fido)] & [loves(Fido,Tweety) V loves(Tweety,Tweety)]
“either Fido loves both Fido and Tweety; or Tweety loves both Fido and Tweety”
“ Fido or Tweety loves Fido; and Fido or Tweety loves Tweety”
Loves(x,y) means x loves y
Intuitively, x depends on y as it is in the scope of the quantification on y (foreshadowing Skolemization)
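The asymmetry can be verified by brute force over all 16 possible loves-relations on the two-object world (a sketch; the world and relation names follow the example above):

```python
from itertools import product

# In the world {Fido, Tweety}, check that Exists x Forall y loves(x,y)
# implies Forall y Exists x loves(x,y), but not conversely.
D = ["Fido", "Tweety"]
pairs = [(x, y) for x in D for y in D]

implication_holds = True
converse_fails_somewhere = False
for bits in product([False, True], repeat=len(pairs)):
    loves = dict(zip(pairs, bits))
    ex_all = any(all(loves[(x, y)] for y in D) for x in D)   # Exists x Forall y
    all_ex = all(any(loves[(x, y)] for x in D) for y in D)   # Forall y Exists x
    if ex_all and not all_ex:
        implication_holds = False
    if all_ex and not ex_all:
        # e.g. Fido loves only Fido, Tweety loves only Tweety
        converse_fails_somewhere = True
```

The counterexample worlds are exactly those where each object is loved, but by different lovers, which is why the order of quantifiers matters.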
Caveat: Decide whether a symbol is predicate, constant or function…
• Make sure you decide what are your constants, what are your predicates and what are your functions
• Once you decide something is a predicate, you cannot use it in a place where a predicate is not expected! In the previous example, you cannot say
Small(Dog)
More on writing sentences
• Forall usually goes with implications (rarely with conjunctive sentences)
• There-exists usually goes with conjunctions—rarely with implications
Everyone at ASU is smart:
  Forall x At(x,ASU) => Smart(x)        [right]
  Forall x At(x,ASU) & Smart(x)         [wrong: claims everyone is at ASU, and smart]

Someone at UA is smart:
  Exists x At(x,UA) & Smart(x)          [right]
  Exists x At(x,UA) => Smart(x)         [wrong: vacuously true if anyone is not at UA]
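The right/wrong pairings can be seen by evaluating both forms in one small world. The people, locations, and smartness values below are assumed purely for illustration:

```python
# A small assumed world: everyone at ASU is smart; the only UA person is not.
people = {"Ann": ("ASU", True), "Bob": ("ASU", True), "Cid": ("UA", False)}
at = lambda p, u: people[p][0] == u
smart = lambda p: people[p][1]

# "Everyone at ASU is smart"
forall_impl = all((not at(p, "ASU")) or smart(p) for p in people)  # implication: right
forall_conj = all(at(p, "ASU") and smart(p) for p in people)       # conjunction: wrong

# "Someone at UA is smart"
exists_conj = any(at(p, "UA") and smart(p) for p in people)        # conjunction: right
exists_impl = any((not at(p, "UA")) or smart(p) for p in people)   # implication: wrong
```

In this world the implication form of "everyone at ASU is smart" is true and the conjunction form is false (Cid is not at ASU); the conjunction form of "someone at UA is smart" is correctly false, while the implication form comes out vacuously true.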
Apt-pet
• An apartment pet is a pet that is small
• Dog is a pet
• Cat is a pet
• Elephant is a pet
• Dogs and cats are small.
• Some dogs are cute
• Each dog hates some cat
• Fido is a dog

  Forall x aptPet(x) <=> pet(x) & small(x)
  Forall x dog(x) => pet(x)
  Forall x cat(x) => pet(x)
  Forall x elephant(x) => pet(x)
  Forall x dog(x) => small(x)
  Forall x cat(x) => small(x)
  Exists x dog(x) & cute(x)
  Forall x dog(x) => [Exists y cat(y) & hates(x,y)]
  dog(Fido)
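The apt-pet KB can be checked by brute-force evaluation in a finite model. The particular individuals (Fido, Tom, Dumbo) and extensions below are assumptions chosen so the KB holds:

```python
# A tiny assumed model in which the apt-pet KB holds.
dogs, cats, elephants = {"Fido"}, {"Tom"}, {"Dumbo"}
pets = dogs | cats | elephants
small = dogs | cats                 # dogs and cats are small; Dumbo is not
cute = {"Fido"}
hates = {("Fido", "Tom")}
apt_pets = {x for x in pets if x in small}   # aptPet(x) <=> pet(x) & small(x)

# Evaluate each KB sentence in the model:
kb_holds = (
    all(d in pets for d in dogs)
    and all(c in pets for c in cats)
    and all(e in pets for e in elephants)
    and all(x in small for x in dogs | cats)
    and any(d in cute for d in dogs)                              # some dogs are cute
    and all(any((d, c) in hates for c in cats) for d in dogs)     # each dog hates some cat
    and "Fido" in dogs
)
```

Note that the apt-pet definition then classifies Fido and Tom as apartment pets but not the elephant, matching the intended reading of the English.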
Notes on encoding English statements to FOPC
• You get to decide what your predicates, functions, constants etc. are. All you are required to do is be consistent in their usage.
• When you write an English sentence into FOPC sentence, you can “double check” by asking yourself if there are worlds where FOPC sentence doesn’t hold and the English one holds and vice versa
• Since you are allowed to make your own predicate and function names, it is quite possible that two people FOPCizing the same KB may wind up writing two syntactically different KBs
• If each of the KBs is used in isolation, there is no problem. However, if the knowledge written in one KB is supposed to be used in conjunction with that in another KB, you will need “Mapping axioms” which relate the “vocabulary” in one KB to the vocabulary in the other KB.
• This problem is PRETTY important in the context of Semantic Web
The “Semantic Web” Connection
Two different Tarskian Interpretations
This is the same as the one on the left, except we have a green guy for Richard
Problem: there are too darned many Tarskian interpretations. Given one, you can change it by just substituting new real-world objects.
Substitution-equivalent Tarskian interpretations give the same valuations to the FOPC statements (and thus do not change entailment).
So: think in terms of equivalence classes of Tarskian interpretations (Herbrand interpretations).
We had this in prop logic too: the real-world assertion corresponding to a proposition
Herbrand Interpretations
• Herbrand Universe
  – All constants
    • Rao, Pat
  – All “ground” functional terms
    • Son-of(Rao); Son-of(Pat); Son-of(Son-of(…(Rao)))…
• Herbrand Base
  – All ground atomic sentences made with terms in the Herbrand universe
    • Friend(Rao,Pat); Friend(Pat,Rao); Friend(Pat,Pat); Friend(Rao,Rao)
    • Friend(Rao,Son-of(Rao))
    • Friend(Son-of(Son-of(Rao)), Son-of(Son-of(Son-of(Pat))))
  – We can think of elements of HB as propositions; interpretations give T/F values to these. Given the interpretation, we can compute the value of the FOPC database sentences.
Forall x,y Friend(x,y) => Likes(x,y)
Friend(Rao,Pat)
Friend(Pat,Son-of(Rao))
If there are n constants and p k-ary predicates, then:
  -- size of HU = n
  -- size of HB = p·n^k
But if there is even one function symbol, then |HU| is infinite, and so is |HB|.
  -- So, when there are no function symbols, FOPC is really just syntactic sugaring for a (possibly much larger) propositional database
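The counting claim is easy to sketch concretely, using the Rao/Pat vocabulary from the slide (n = 2 constants, one binary predicate, no functions):

```python
from itertools import product

# With no function symbols: n constants and p k-ary predicates give
# |HU| = n and |HB| = p * n^k ground atoms.
constants = ["Rao", "Pat"]          # n = 2
predicates = {"Friend": 2}          # p = 1 predicate of arity k = 2

HU = list(constants)
HB = [(pred, args) for pred, k in predicates.items()
      for args in product(HU, repeat=k)]
# HB: Friend(Rao,Rao), Friend(Rao,Pat), Friend(Pat,Rao), Friend(Pat,Pat)
```

Adding even one function symbol (e.g. Son-of) makes the term-generation loop unbounded, which is exactly why |HU| and |HB| blow up to infinity.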
Let us think of interpretations for FOPC that are more like interpretations for prop logic