
Page 1

Knowledge-based agents: they keep track of the world by means of an internal state which is constantly updated by the agent itself

A Knowledge-Based Agent (KBA) is an agent that:
– has knowledge about the world in which it operates, and
– can reason about its possible courses of action.

To build KBAs, we must decide on:
– how to represent the agent's knowledge, i.e. how to address the so-called knowledge representation problem;
– how to carry out the agent's reasoning.

Page 2

Knowledge-based agent vs conventional computer program from a design point of view

KBA

1. Identify knowledge needed to solve the problem.

2. Select a representation framework in which this knowledge can be expressed.

3. Represent knowledge in the selected framework.

4. Run the problem, i.e. apply the reasoning mechanism of the selected logic to infer all possible consequences of the initial knowledge.

Computer program

1. Design an algorithm to solve the problem.

2. Decide on a programming language to encode the algorithm.

3. Write a program.

4. Run the program.

Page 3

Basic architecture of a knowledge-based agent.

[Figure: the agent couples a knowledge base with an inference engine. The knowledge base holds facts (Fact 1, Fact 2, ..., Fact n), represented as sentences in some KR language. The inference engine holds rules (Rule 1, Rule 2, ..., Rule k) that define what follows from the facts in the KB. New facts are added to the KB, and the engine returns actions.]
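This architecture can be sketched in Python (an illustrative skeleton, not AIMA's code; the fact strings and the example rule are made up for illustration):

```python
class KnowledgeBasedAgent:
    def __init__(self, rules):
        self.kb = set()        # Fact 1 ... Fact n, here plain strings
        self.rules = rules     # Rule 1 ... Rule k: functions kb -> new facts

    def tell(self, fact):
        """Add a new fact (e.g. a percept) to the KB."""
        self.kb.add(fact)

    def infer(self):
        """Apply every rule until no new facts are produced."""
        changed = True
        while changed:
            changed = False
            for rule in self.rules:
                new = rule(self.kb) - self.kb
                if new:
                    self.kb |= new
                    changed = True

    def ask(self, query):
        """Return whether the query follows from the KB (after inference)."""
        self.infer()
        return query in self.kb

# Example rule (invented): a breeze-free square makes a neighbour safe.
def no_breeze_rule(kb):
    return {"safe[2,1]"} if "no_breeze[1,1]" in kb else set()

agent = KnowledgeBasedAgent([no_breeze_rule])
agent.tell("no_breeze[1,1]")
print(agent.ask("safe[2,1]"))  # True
```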

Page 4

Building the KB: declarative vs evolutionary approach

Declarative approach: building the initial knowledge base (the so-called “background knowledge”) is part of the design process. This knowledge reflects the designer’s knowledge of the agent’s world. The agent will add new facts as it perceives the world or reasons about it by means of its own inference engine.

Evolutionary approach: the initial knowledge base is empty, but the agent possesses a learning capability by means of which it will gradually build its background knowledge. Such an agent can be fully autonomous, but research suggests that this process will be very inefficient.

The combination of declarative and evolutionary approaches may be a good compromise: the agent is provided with some background knowledge, which it gradually expands and refines by means of its learning capability.

Page 5

KBAs can be described at three different levels, depending on the particular aspect of their design that we want to concentrate on.

Knowledge (epistemological) level: defines an agent in terms of the knowledge that it possesses.

Logical level: defines how the agent’s knowledge is encoded into formulas from the selected knowledge representation language.

Implementation level: defines how the agent’s knowledge is stated in the selected implementation language (for example, in LISP).

Example: the Wumpus World. This is a grid of squares surrounded by walls, where a square may contain gold, which the agent is hunting for, or a pit, which is deadly for the agent and must therefore be avoided. A wumpus lives in this world, and the agent does not want to encounter it, because the encounter might turn deadly for the agent. The agent has some knowledge about this world that may allow it to get to the gold, grab it, and safely escape.

Page 6

The Wumpus World (WW) represented at the knowledge level (as shown in Fig. 7.2 AIMA). This is a static world with only one wumpus, one gold, and three pits.

– Percepts:
Stench, meaning that the wumpus is either in the same square as the agent or in one of the directly adjacent squares;
Breeze, meaning that there is a pit in a square directly adjacent to the one where the agent currently is;
Glitter, meaning that the gold is in the square where the agent is;
Scream, meaning that the wumpus was killed;
Bump, meaning that the agent has bumped into a wall.

– Actions: Go forward to the next square; Turn left 90 degrees and Turn right 90 degrees; Grab the gold; Shoot the wumpus, an action that can be performed only once; Climb out of the cave, which can happen only in square [1,1].

– Goals: Get to the gold, grab it, return to square [1,1], and get out of the cave. Stay alive.

Page 7

The role of reasoning in the WW: why can’t a search-based agent handle it?

Figures 7.3 and 7.4 represent an instance of the WW at the logical level.

Initial state [1,1]:
– percept: [None, None, None, None, None]. Following the rules of the game, the agent can conclude that [1,2] and [2,1] are safe, i.e. the following new facts will be added to the KB: not W[1,2], not P[1,2], not W[2,1], not P[2,1].
– possible actions: go forward; turn left and go forward. Assume that the agent has (arbitrarily) decided to go forward, thus moving to [2,1].

Next state [2,1]:
– percept: [None, Breeze, None, None, None]. Following the rules of the game, the agent can conclude that either [2,2] or [3,1] contains a pit, i.e. the following new fact will be added to the KB: P[2,2] v P[3,1]. Also, not W[2,2] and not W[3,1].
– possible actions: return to [1,1], because the agent remembers that there is another safe alternative there that it has not explored yet.

Page 8

Next state [1,1]:
– percept: [None, None, None, None, None].
– possible actions: go to [1,2] or return to [2,1]. Assuming that the agent is smart enough to avoid loops, this time it will choose to go to [1,2].

Next state [1,2]:
– percept: [Stench, None, None, None, None]. The agent concludes that the wumpus is either in [1,3] or in [2,2]. But since no stench was felt in [2,1], the wumpus cannot be in [2,2], and therefore it must be in [1,3] (W[1,3] is added to the KB). Also, the agent concludes that there is no pit in [2,2], and therefore the pit is at [3,1] (not P[2,2], P[3,1] are added to the KB).
– possible actions: turn right and go forward.

Next state [2,2]:
– percept: [None, None, None, None, None]. The agent concludes that [2,3] and [3,2] are safe (not P[3,2], not P[2,3], not W[3,2], not W[2,3] are added to the KB).
– possible actions: turn left and go forward; go forward. Assume that the agent has (arbitrarily) decided to turn left and go forward.

Next state [2,3]:
– percept: [Stench, Breeze, Glitter, None, None]. The agent concludes that it has reached one of its goals (finding the gold); now it must grab the gold and safely get back to [1,1], where the exit from the cave is.
– the actions that follow are trivial, because the agent remembers its path and simply retraces it backwards.
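The deduction of W[1,3] above can be sketched as candidate elimination over wumpus locations (an assumed encoding with squares as coordinate tuples; not code from the slides):

```python
def neighbours(sq):
    """Squares directly adjacent to sq."""
    x, y = sq
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

# All squares of the 4x4 grid are initially wumpus candidates.
candidates = {(x, y) for x in range(1, 5) for y in range(1, 5)}

# Visited squares contain no wumpus (the agent survived them).
for visited in [(1, 1), (2, 1), (1, 2)]:
    candidates.discard(visited)

# No Stench in [1,1] or [2,1]: the wumpus is not adjacent to them.
for quiet in [(1, 1), (2, 1)]:
    candidates -= neighbours(quiet)

# Stench in [1,2]: the wumpus must be adjacent to it.
candidates &= neighbours((1, 2))

print(candidates)  # {(1, 3)}
```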

Page 9

Knowledge representation: expressing knowledge in a form understandable by a computer.

Choosing an appropriate language in which to represent knowledge is the first and most important step in building an intelligent agent. Each language has two sides:

– Syntax: defines how to build sentences (formulas).

– Semantics: defines the meaning and the truth value of sentences by connecting them to the facts in the outside world.

If the syntax and the semantics of a language are precisely defined, we call that language a logic, i.e.

Logic = Syntax + Semantics

Page 10

Connection between sentences in the KR language and facts in the outside world

[Figure: in the internal representation, sentences entail other sentences; in the outside world, facts follow from other facts.]

There must exist an exact correspondence between the sentences entailed by the agent’s logic and the facts that follow from other facts in the outside world. If this requirement does not hold, our agent will be unpredictable and irrational.

Page 11

Entailment and inference

Entailment defines whether sentence A is true with respect to a given KB (written as KB |== A), while inference defines whether sentence A can be derived from the KB (written as KB |-- A).

Assume that the agent’s KB contains only true sentences, comprising its explicit knowledge about the world. To find all the consequences (“implicit” knowledge) that follow from what the agent already knows, it can “run” an inference procedure. If this inference procedure generates only entailed sentences, it is called sound; if it generates all entailed sentences, it is called complete. Ideally, we want to provide our agent with a sound and complete inference procedure.
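A minimal example of a sound inference procedure is forward chaining with Modus Ponens over atomic facts and implications (a sketch with invented atom names; it is sound, since Modus Ponens only produces entailed sentences, but it is not complete for full PL, e.g. it cannot derive disjunctive conclusions):

```python
def forward_chain(facts, implications):
    """facts: set of atoms; implications: list of (premise_atoms, conclusion).
    Repeatedly applies Modus Ponens until no new atom is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in implications:
            if set(premises) <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# KB: A, C, A => B, (B & C) => D
kb_rules = [({"A"}, "B"), ({"B", "C"}, "D")]
print(sorted(forward_chain({"A", "C"}, kb_rules)))  # ['A', 'B', 'C', 'D']
```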

Page 12

Proof theory: defines sound reasoning, i.e. reasoning performed by a sound inference procedure.

Assume that A is derivable from a given KB by means of inference procedure i (written as KB |--i A). If i is sound, the derivation process is called a proof of A.

Note that we do not require inference procedure i to be complete; in some cases this is impossible (for example, if the KB is infinite). Therefore, we have no guarantee that a proof of A will be found even if KB |== A.

Page 13

Knowledge representation languages: why do we need something different from programming languages or natural language?

Programming languages (Java, LISP, etc.)

1. Good for representing certain and concrete information. For example, “there is a wumpus in some square” cannot be represented.

2. Require a complete description of the state of the computer. For example, “there is a pit in either [2,2] or [3,1]” cannot be represented.

3. It follows from (1) and (2) that they are not expressive enough.

4. Good for describing well-structured worlds where sequences of events can be algorithmically described.

Natural language

1. Expressive enough, but too context-dependent.

2. Too ambiguous, because of the fuzzy semantics of connectives and modalities (or, some, many, etc.).

3. Because NL has a communication role as well as a representation role, knowledge sharing is often done without explicit knowledge representation.

4. Too vague for expressing logical reasoning.

Page 14

To formally express knowledge, we need a language which is expressive and concise, unambiguous and context-independent, and computationally efficient. Among the languages that at least partially fulfill these requirements are:

– Propositional Logic (PL). It can represent only facts, which are true or false.

– First-Order Logic (FOL). It can represent objects, facts, and relations between objects and facts, which are true or false.

– Temporal Logic. This is an extension of FOL which takes time into account.

– Probabilistic Logic. Limits the representation to facts only, but these facts can be uncertain as well as true or false. To express uncertainty, it attaches a degree of belief (0..1) to each fact.

– Truth Maintenance Logic. Represents facts only, but these can be unknown and uncertain, in addition to true and false.

– Fuzzy Logic. Represents facts which are not only true or false, but true to some degree (the degree of truth is represented as a degree of belief).

Page 15

Introduction to logic: basic terminology

Interpretation establishes a connection between sentences of the selected KR language and facts from the outside world.

Example: Assume that A, B and C are sentences of our logic. If we refer to the “Moon world”, A may have the interpretation “The moon is green”, B -- “There are people on the moon”, and C -- “It is sunny and nice on the moon, and people there eat a lot of green cheese”.

Given an interpretation, a sentence can be assigned a truth value. In PL, for example, it can be true or false, where true sentences represent facts that hold in the outside world, and false sentences represent facts that do not hold.

Sentences may have different interpretations depending on the meaning given to them.

Page 16

Example: Consider the English language. Suppose the word “Pope” is to be understood as “microfilm”, and the word “Denver” as “the pumpkin on the left side of the porch”. In this interpretation, the sentence “The Pope is in Denver” means “the microfilm is in the pumpkin”.

Assume that we can enumerate all possible interpretations, in all possible worlds, that can be given to the sentences of our representation. Then we have the following three types of sentences:

Valid sentences (or tautologies). These are true in all interpretations. Example: (A v not A) is always true, even if we refer to the “Moon world” (“There are people on the moon or there are no people on the moon”).

Satisfiable sentences. These are true in some interpretations and false in others. Example: “The snow is red and the day is hot” is a satisfiable sentence, if this happens to be the case on Mars.

Unsatisfiable sentences. These are false in all interpretations. Example: (A & not A).
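This three-way classification can be sketched by enumerating all interpretations of a sentence’s variables (an assumed helper, not from the slides; sentences are given as Python predicates over an interpretation):

```python
from itertools import product

def classify(variables, sentence):
    """sentence: a function mapping an interpretation dict to True/False.
    Evaluate it under every assignment of truth values to the variables."""
    values = [sentence(dict(zip(variables, vals)))
              for vals in product([False, True], repeat=len(variables))]
    if all(values):
        return "valid"          # true in all interpretations
    if any(values):
        return "satisfiable"    # true in some, false in others
    return "unsatisfiable"      # false in all interpretations

print(classify(["A"], lambda i: i["A"] or not i["A"]))    # valid
print(classify(["A", "B"], lambda i: i["A"] and i["B"]))  # satisfiable
print(classify(["A"], lambda i: i["A"] and not i["A"]))   # unsatisfiable
```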

Page 17

Propositional logic

To define any logic, we must address the following three questions:

1. How to make sentences (i.e. define the syntax).

2. How to relate sentences to facts (i.e. define the semantics).

3. How to generate implicit consequences (i.e. define the proof theory).

From the syntactic point of view, sentences are finite sequences of primitive symbols. Therefore, we must first define the alphabet of PL. It consists of the following classes of symbols:

– propositional variables A, B, C ...

– logical constants true and false

– parentheses (, )

– logical connectives &, v, <=>, =>, not

Page 18

Well-formed formulas (wff)

Given the alphabet of PL, a wff (or sentence, or proposition) is inductively defined as:

– a propositional variable;

– A v B, where A, B are sentences;

– A & B, where A, B are sentences;

– A => B, where A, B are sentences;

– A <=> B, where A, B are sentences;

– not A, where A is a sentence;

– true is a sentence;

– false is a sentence.

The following precedence hierarchy is imposed on the logical operators: not, &, v, =>, <=>. Composite sentences are evaluated with respect to this hierarchy, unless parentheses are used to alter it.

Example: ((A & B) => C) is equivalent to A & B => C, while (A & (B => C)) is a different sentence.

Page 19

The semantics of PL is defined by specifying the interpretation of wffs and the meaning of logical connectives.

If a sentence is composed of only one propositional symbol, then it may have any possible interpretation. Depending on the interpretation, the sentence can be either true or false (i.e. it is satisfiable).

If a sentence is a logical constant (true or false), then its interpretation is fixed:

– true has as its interpretation a true fact;

– false has as its interpretation a false fact.

If a sentence is composite (complex), then its meaning is derived from the meaning of its parts as follows (such semantics is called compositional, and this is known as the truth table method):

P | Q | not P | P & Q | P v Q | P => Q | P <=> Q
F | F |   T   |   F   |   F   |   T    |   T
F | T |   T   |   F   |   T   |   T    |   F
T | F |   F   |   F   |   T   |   F    |   F
T | T |   F   |   T   |   T   |   T    |   T
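The compositional semantics above can be sketched as a recursive evaluator over sentences encoded as nested tuples (an assumed encoding, not from the slides):

```python
def holds(sentence, interp):
    """Evaluate a PL sentence under interpretation interp (a dict var -> bool)."""
    if isinstance(sentence, str):        # a propositional variable
        return interp[sentence]
    op, *args = sentence
    if op == "not":
        return not holds(args[0], interp)
    a, b = (holds(s, interp) for s in args)
    if op == "&":
        return a and b
    if op == "v":
        return a or b
    if op == "=>":                       # material implication
        return (not a) or b
    if op == "<=>":                      # biconditional
        return a == b
    raise ValueError("unknown connective: " + op)

# P => Q is false only when P is true and Q is false:
print(holds(("=>", "P", "Q"), {"P": True, "Q": False}))  # False
```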

Page 20

Example: using a truth table, determine the validity of P & (Q & R) <=> (P & Q) & R

P | Q | R | Q & R | P & (Q & R) | (P & Q) & R | P & (Q & R) <=> (P & Q) & R
F | F | F |   F   |      F      |      F      |      T
T | F | F |   F   |      F      |      F      |      T
F | T | F |   F   |      F      |      F      |      T
T | T | F |   F   |      F      |      F      |      T
F | F | T |   F   |      F      |      F      |      T
T | F | T |   F   |      F      |      F      |      T
F | T | T |   T   |      F      |      F      |      T
T | T | T |   T   |      T      |      T      |      T

This formula is valid, because it is true in all possible interpretations of its propositional variables. It is known as the “associativity of conjunction” law.
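The same check can be done mechanically by enumerating all eight interpretations of P, Q and R, just as the truth table does:

```python
from itertools import product

# Associativity of conjunction: P & (Q & R) <=> (P & Q) & R
# The formula is valid iff both sides agree under every interpretation.
valid = all(
    (p and (q and r)) == ((p and q) and r)
    for p, q, r in product([False, True], repeat=3)
)
print(valid)  # True
```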

Page 21

Model and entailment

Any world in which a sentence is true under a particular interpretation is called a model of that sentence under that interpretation.

With the notion of a model at hand, we can give the following, more precise, definition of entailment:

Sentence A is entailed by a given KB if the models of the KB are all models of A; stated differently, whenever the KB is true, A is also true.

Models of complex sentences are defined in terms of the models of their parts (see the diagrams in Fig. 6.12 AIMA).