Topics
• Combining probability and first-order logic
– BLOG and DBLOG
• Learning very complex behaviors
– ALisp: hierarchical RL with partial programs
• State estimation for neurotrauma patients
– Joint w/ Geoff Manley (UCSF), Intel, Omron
• Metareasoning and bounded optimality
• Transfer learning (+ Jordan, Bartlett, MIT, SU, OSU)
• Knowing everything on the web
• Human-level AI
[Figure: ICU monitoring display — end expiratory pressure, respiratory rate, ventilation mode, cardiac output, sedation level, ICP wave]
State estimation: 3x5 index card
Patient 2-13
Dynamic Bayesian Networks
DBNs contd:
Research plan
• DBN model: ~200 core state variables, ~500 sensor-related variables
• Learn model parameter distributions from DB
• Infer patient-specific parameters online
• Goals:
– Improved alarms
– Diagnostic state estimation => improved treatment
– Solve the treatment POMDP
– Structure discovery => better understanding of physiology
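The "infer patient-specific state online" step can be illustrated with exact forward filtering in a miniature DBN. Everything here is an invented stand-in: a two-value hidden state ("stable"/"deteriorating") and a single noisy alarm sensor, versus the ~200 state and ~500 sensor variables of the real model.

```python
# Sketch of exact forward filtering in a tiny two-value DBN.
# State names, sensor, and all probabilities are hypothetical.

# P(state_t | state_{t-1})
TRANSITION = {
    "stable":        {"stable": 0.95, "deteriorating": 0.05},
    "deteriorating": {"stable": 0.10, "deteriorating": 0.90},
}
# P(alarm fires at t | state_t): the alarm is a noisy sensor.
ALARM_PROB = {"stable": 0.02, "deteriorating": 0.70}

def forward_step(belief, alarm_fired):
    """One predict-update cycle; belief is P(state | evidence so far)."""
    # Predict: push the belief through the transition model.
    predicted = {s: sum(belief[p] * TRANSITION[p][s] for p in belief)
                 for s in TRANSITION}
    # Update: weight by the sensor likelihood, then normalise.
    likelihood = {s: (ALARM_PROB[s] if alarm_fired else 1 - ALARM_PROB[s])
                  for s in predicted}
    unnorm = {s: predicted[s] * likelihood[s] for s in predicted}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

belief = {"stable": 0.99, "deteriorating": 0.01}
for alarm in [False, False, True, True]:
    belief = forward_step(belief, alarm)
print(belief)
```

After two consecutive alarms, the filtered belief shifts heavily toward "deteriorating" — the kind of evidence accumulation that an improved alarm would act on, instead of thresholding a single reading.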
Possible worlds
• Propositional
• First-order + unique names, domain closure
• First-order open-world
[Figure: diagrams of the possible worlds over objects A, B, C, D for each of the three cases]
Example: Citation Matching
[Lashkari et al 94] Collaborative Interface Agents, Yezdi Lashkari, Max Metral, and Pattie Maes, Proceedings of the Twelfth National Conference on Artificial Intelligence, MIT Press, Cambridge, MA, 1994.
Metral M. Lashkari, Y. and P. Maes. Collaborative interface agents. In Conference of the American Association for Artificial Intelligence, Seattle, WA, August 1994.
Are these descriptions of the same object? What authors and papers actually exist, with what attributes? Who wrote which papers?
General problem: raw data -> relational KB
Other examples: multitarget tracking, vision, NLP
Approach: formal language for specifying first-order open-world probability models
BLOG generative process
• Number statements describe steps that add some objects to the world
• Dependency statements describe steps that set the value of a function or relation on a tuple of arguments
• Includes setting the referent of a constant symbol (0-ary function)
• Both types may condition on existence and properties of previously added objects
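The two kinds of statements can be sketched as an ordinary sampler. This is a hypothetical Python stand-in, not BLOG syntax: the number statement adds an unknown number of paper objects to the world, and the dependency statements then set attributes and referents conditioned on the objects added so far.

```python
import random

random.seed(0)

def sample_world():
    """One draw from a toy BLOG-style generative process (all priors invented)."""
    # Number statement: add an unknown number of Paper objects to the world.
    n_papers = random.randint(1, 4)
    papers = [{"id": p, "title_len": random.randint(3, 10)}
              for p in range(n_papers)]

    # Dependency statements: set the referent of each citation, conditioning
    # on which papers exist in this world (setting the referent of a
    # constant symbol works the same way).
    citations = [{"cites": random.choice(papers)["id"],
                  "garbled": random.random() < 0.3}  # noisy text extraction
                 for _ in range(5)]
    return papers, citations

papers, citations = sample_world()
print(len(papers), [c["cites"] for c in citations])
```

Because citations can only refer to papers that the number statement has already added, every sampled world is internally consistent — the property the well-formedness condition below guarantees in general.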
• Theorem 1: Every well-formed* BLOG model specifies a unique distribution over possible worlds
• The probability of each (finite) world is given by a product of the relevant conditional probabilities from the model
• Theorem 2: For any well-formed BLOG model, there are algorithms (LW, MCMC) that converge to correct probability for any query, using finite time per sampling step
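Of the two algorithm families in Theorem 2, likelihood weighting (LW) is the simpler to sketch. The toy model and all probabilities below are invented for illustration: evidence variables are never sampled; instead each sampled world is weighted by the likelihood of the evidence, and the query is answered by a weighted average.

```python
import random

random.seed(1)

def likelihood_weighted_query(n_samples=20000):
    """LW sketch over a toy open-world model (invented priors).

    Query: P(two citations refer to the same paper | their titles match).
    """
    num, den = 0.0, 0.0
    for _ in range(n_samples):
        n_papers = random.choice([1, 2, 3])   # number statement
        c1 = random.randrange(n_papers)       # referent of citation 1
        c2 = random.randrange(n_papers)       # referent of citation 2
        same = (c1 == c2)
        # Evidence "titles match" is not sampled; the sample is weighted
        # by its likelihood instead -- the heart of LW.
        w = 0.9 if same else 0.1
        den += w
        if same:
            num += w
    return num / den

estimate = likelihood_weighted_query()
print(round(estimate, 2))
```

Each sampling step here touches finitely many random variables (one number variable, two referents), matching the finite-time-per-step claim; the estimate converges to the true posterior as the sample count grows.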
Citation Matching Results
Four data sets of ~300-500 citations, referring to ~150-300 papers
[Bar chart: error (fraction of clusters not recovered correctly, y-axis 0 to 0.25) on the four data sets — Reinforce, Face, Reason, Constraint — for three methods:]
• Phrase Matching [Lawrence et al. 1999]
• Generative Model + MCMC [Pasula et al. 2002]
• Conditional Random Field [Wellner et al. 2004]
DBLOG
• BLOG allows for temporal models – time is just a logical variable over an infinite set
• Inference works (only finitely many relevant random variables) but is grossly inefficient
• DBLOG includes time as a distinguished type and predecessor as a distinguished function; implements special-purpose inference:
– Particle filter for temporally varying relations
– Decayed MCMC for atemporal relations
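The first of those two mechanisms can be sketched as a bootstrap particle filter. The model below (1-D Gaussian random walk observed with Gaussian noise) is an invented stand-in for a temporally varying relation, not the actual DBLOG machinery.

```python
import math
import random

random.seed(2)

def particle_filter(observations, n_particles=500):
    """Bootstrap particle filter sketch: propagate, weight, resample.
    Transition and sensor models here are hypothetical."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    for obs in observations:
        # Propagate each particle through the transition model.
        particles = [p + random.gauss(0.0, 0.5) for p in particles]
        # Weight each particle by the observation likelihood
        # (unnormalised Gaussian density, sensor noise sigma = 0.5).
        weights = [math.exp(-0.5 * ((obs - p) / 0.5) ** 2)
                   for p in particles]
        # Resample in proportion to the weights.
        particles = random.choices(particles, weights=weights,
                                   k=n_particles)
    return sum(particles) / len(particles)

estimate = particle_filter([0.1, 0.4, 0.9, 1.3, 1.8])  # drifting signal
print(round(estimate, 1))
```

The resampling step keeps computation per time step constant, which is why the filter handles the temporally varying part; atemporal relations lack this structure, hence the separate decayed-MCMC treatment.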
Open Problems
• Inference
– Applying "lifted" inference to BLOG (like Prolog)
– Approximation algorithms for problems with huge/growing