22c:145 Artificial Intelligence, Fall 2005
Uncertainty
Cesare Tinelli, The University of Iowa
Copyright 2001-05 — Cesare Tinelli and Hantao Zhang. These notes are copyrighted material and may not be used in other course settings outside of the University of Iowa, in their current or modified form, without the express written permission of the copyright holders.
Agents almost never have access to the whole truth about their environments.
Very often, even in simple worlds, there are important questions for which there is no Boolean answer.
In that case, an agent must reason under uncertainty.
Uncertainty also arises because of an agent’s incomplete or incorrect understanding of its environment.
Uncertainty
Let action At = “leave for airport t minutes before flight”. Will At get me there on time?
Problems
partial observability (road state, other drivers’ plans, etc.)
noisy sensors (unreliable traffic reports)
uncertainty in action outcomes (flat tire, etc.)
immense complexity of modelling and predicting traffic
Hence a purely logical approach either
1. risks falsehood (“A25 will get me there on time”), or
2. leads to conclusions that are too weak for decision making (“A25 will get me there on time if there’s no accident on the way, it doesn’t rain, my tires remain intact, . . . ”)
Reasoning under Uncertainty
A rational agent is one that makes rational decisions (in order to maximize its performance measure).
A rational decision depends on
the relative importance of various goals,
the likelihood they will be achieved, and
the degree to which they will be achieved.
Handling Uncertain Knowledge
Reasons FOL-based approaches fail to cope with domains like, for instance, medical diagnosis:
Laziness: too much work to write complete axioms, or too hard to work with the enormous sentences that result.
Theoretical Ignorance: The available knowledge of the domainis incomplete.
Practical Ignorance: The theoretical knowledge of the domain is complete but some evidential facts are missing.
Degrees of Belief
In several real-world domains the agent’s knowledge can only provide a degree of belief in the relevant sentences.
The agent cannot say whether a sentence is true, but only that it is true x% of the time.
The main tool for handling degrees of belief is Probability Theory.
The use of probability summarizes the uncertainty that stems from our laziness or ignorance about the domain.
Probability Theory
Probability Theory makes the same ontological commitments as First-order Logic:
Every sentence ϕ is either true or false.
The degree of belief that ϕ is true is a number P between 0 and 1.
P(ϕ) = 1 → ϕ is certainly true.
P(ϕ) = 0 → ϕ is certainly not true.
P(ϕ) = 0.65 → ϕ is true with a 65% chance.
Probability of Facts
Let A be a propositional variable, a symbol denoting a proposition that is either true or false.
P(A) denotes the probability that A is true in the absence of any other information.
Similarly,
P(¬A) = probability that A is false
P(A ∧ B) = probability that both A and B are true
P(A ∨ B) = probability that either A or B (or both) are true
Examples: P(¬Blonde), P(Blonde ∧ BlueEyed), P(Blonde ∨ BlueEyed)
where Blonde (BlueEyed) denotes that a person is blonde (blue-eyed).
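These compound-fact probabilities can be sketched by summing entries of a small joint table. The joint distribution below over Blonde and BlueEyed is a made-up illustration, not real data:

```python
# A made-up joint distribution over the propositions (Blonde, BlueEyed);
# each key is a truth assignment, each value its assumed probability.
joint = {
    (True,  True):  0.10,   # Blonde and BlueEyed
    (True,  False): 0.15,   # Blonde, not BlueEyed
    (False, True):  0.20,   # not Blonde, BlueEyed
    (False, False): 0.55,   # neither
}

def prob(event):
    """Probability of an event: sum the entries where the event holds."""
    return sum(p for world, p in joint.items() if event(world))

p_not_blonde = prob(lambda w: not w[0])       # P(not Blonde) = 0.75
p_both       = prob(lambda w: w[0] and w[1])  # P(Blonde and BlueEyed) = 0.10
p_either     = prob(lambda w: w[0] or w[1])   # P(Blonde or BlueEyed) = 0.45
```

Note that p_either equals prob of Blonde plus prob of BlueEyed minus p_both, the usual inclusion-exclusion identity.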
Conditional/Unconditional Probability
P(A) is the unconditional (or prior) probability of fact A.
An agent can use the unconditional probability of A to reason about A only in the absence of further information.
If further evidence B becomes available, the agent must use the conditional (or posterior) probability:
P(A | B)
the probability of A given that all the agent knows is B.
Note: P(A) can be thought of as the conditional probability of A with respect to the empty evidence: P(A) = P(A | ).
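As a minimal sketch, the posterior can be computed from a joint table via the standard definition P(A | B) = P(A ∧ B) / P(B); the table below is a made-up illustration:

```python
# Made-up joint distribution over the propositions (Blonde, BlueEyed).
joint = {
    (True,  True):  0.10,
    (True,  False): 0.15,
    (False, True):  0.20,
    (False, False): 0.55,
}

def prob(event):
    """Sum the joint entries over the worlds where the event holds."""
    return sum(p for world, p in joint.items() if event(world))

def cond_prob(a, b):
    """P(a | b) = P(a and b) / P(b); defined only when P(b) > 0."""
    return prob(lambda w: a(w) and b(w)) / prob(b)

blonde    = lambda w: w[0]
blue_eyed = lambda w: w[1]

p_posterior = cond_prob(blonde, blue_eyed)       # 0.10 / 0.30, roughly 0.333
p_prior     = cond_prob(blonde, lambda w: True)  # empty evidence: equals P(Blonde) = 0.25
```

The second call illustrates the note above: conditioning on trivially true evidence recovers the unconditional probability.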
If X1, . . . , Xn are random variables, P(X1, . . . , Xn) denotes their joint probability distribution (JPD), an n-dimensional matrix specifying the probability of every possible combination of values for X1, . . . , Xn.
In terms of exponential explosion, conditional probabilities do not seem any better than JPDs for computing the probability of a fact given n > 1 pieces of evidence.
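The blow-up is easy to make concrete: a full joint table over n Boolean variables needs 2^n entries, one per combination of truth values. A minimal sketch (the uniform distribution here is just a placeholder for illustration):

```python
from itertools import product

def uniform_jpd(n):
    """Full joint table over n Boolean variables, here filled in uniformly."""
    worlds = list(product([False, True], repeat=n))
    return {w: 1.0 / len(worlds) for w in worlds}

print(len(uniform_jpd(3)))   # 2**3 = 8 entries
print(len(uniform_jpd(20)))  # 2**20 = 1048576 entries: already unwieldy to specify
```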