Introduction to AI & AI Principles (Semester 1), WEEK 7 – (all) (2008/09). John Barnden, Professor of Artificial Intelligence, School of Computer Science, University of Birmingham, UK

Transcript
Page 1:

Introduction to AI & AI Principles (Semester 1)

WEEK 7 – (all) (2008/09)

John Barnden, Professor of Artificial Intelligence

School of Computer Science, University of Birmingham, UK

Page 2:

Natural Language Issues in

Planning the Delivery of One Drink

Page 3:

Reminder “Two spoonfuls of sugar please”

VAGUENESS inherent in concepts used

“A bit of / a lot of / not too much / … milk in it please” VAGUENESS of (mass) quantification

“Rob, several people want decaff coffee” VAGUENESS of (discrete) quantification

“Everyone laughed when I came in!” “Did someone spill their coffee on the floor?”

CONTEXT-SENSITIVITY of universal and existential quantification (respectively)

Page 4:

“Do you want coffee or tea?” SCOPE and AMBIGUITY of disjunction

• “Want (drink coffee OR drink tea)?” Also: exclusive or inclusive?

• “(Want drink coffee) OR (want drink tea)?” Also: exclusive or inclusive?

• “Which single one of the following do you want: coffee, tea?”

“Carry the sugar to that guy with the moustache.” PREPOSITIONAL PHRASE ATTACHMENT

• The guy has the moustache.

This is clearly the intended interpretation.

• The moustache is to be passed along with the sugar??! Consider: “Carry this suitcase to the car with the backpack.”

• The moustache is to help with the carrying??! Consider: “Carry these suitcases to the car with Mary.”

“Can you take this over to Mike?” INDIRECT SPEECH ACT – an implicit REQUEST not a Y/N question

Page 5:

“Sally thinks that Pete’s friends can’t come.”

OPACITY/TRANSPARENCY within mental-state verb complements.

(The complement here is “Pete’s friends can’t come”.)

• Transparent interpretation: there are certain people who are Pete’s friends in reality. Sally thinks these specific people can’t come. But she is not claimed to be thinking of them as Pete’s friends.

• Opaque interpretation: Sally’s own thought is of the form “Pete’s friends can’t come.” She is not claimed to have any beliefs about (let alone correct knowledge about) who they are, in any other terms. Indeed, Pete may not even have any friends in reality.

Page 6:

Recall: “EU passports this way.”

METONYMY:

referring indirectly to something (here, people who have EU passports) by referring to something closely associated with it (here, EU passports)

“This is Susan” [pointing to a mug of coffee]

referring indirectly to Susan’s mug

by referring to Susan

Page 7:

“The room is overflowing with people.” “Let’s knock some ideas around about where to go tonight.”

METAPHOR: Viewing something or talking (etc.) about something as if it were something that it is not.

Page 8:

Other Exercises [15 mins] on PSs

2) If WM contains holds0(Mike, B), can the PS infer holds3(Mike, B)? If so, justify your answer, and, if not, create a rule that would allow the inference.

3) Provide a rule that would allow the PS to infer that an agent is holding something (somehow) if he/she/it has it.

4) Modify the rule to apply only to agents that are people. You will need to introduce a new predicate symbol. Show by means of a forward-chaining dependency diagram how it could apply to Mike.

5) What should happen if two different rule instances infer the same conclusion? Suggest a suitable piece of dependency diagram.

6) Suppose we introduce a predicate symbol distinct3 that can be used as in the following example: distinct3(M, B, Ego). This means that all three things are different. What rules would it be useful to have linking distinct3 to our existing predicate symbol distinct (which applies only to two things at a time)?

7) Suppose the WM contains next-to(M,Ego). How would/could the PS infer next-to(Ego,M) ?
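Exercise 7 can be pictured as one forward-chaining step: a symmetry rule fires on the stored fact and adds its mirror image to working memory. The sketch below is illustrative only; it is not the course's actual production-system notation, and the fact/rule encoding is an assumption.

```python
# Minimal forward-chaining sketch for exercise 7 (illustrative encoding,
# not the course's PS notation). Facts are tuples:
# ("next-to", "M", "Ego") stands for next-to(M, Ego).

def symmetry_rule(fact):
    """Rule: next-to(X, Y) => next-to(Y, X)."""
    pred, x, y = fact
    if pred == "next-to":
        return [(pred, y, x)]
    return []

def forward_chain(wm, rules):
    """Apply every rule to every fact until working memory stops growing."""
    wm = set(wm)
    changed = True
    while changed:
        changed = False
        for fact in list(wm):
            for rule in rules:
                for new_fact in rule(fact):
                    if new_fact not in wm:
                        wm.add(new_fact)
                        changed = True
    return wm

wm = forward_chain({("next-to", "M", "Ego")}, [symmetry_rule])
print(("next-to", "Ego", "M") in wm)  # True: the PS infers next-to(Ego, M)
```

In a real PS the same effect could come either from an explicit rule like this or from building symmetry into the matcher; the slide leaves that design choice open.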

Page 9:

SHOPPING TRIP CASE STUDY

Page 10:

A Search Problem in the Shopping Trip Case Study

Page 11:

Route Finding: Student Guild to Public Library

Locations on the map:

• SG: Student Guild
• BR: Bristol Rd bus stop
• U: University train station
• 5W: Five Ways train station
• NS: New Street train station
• PL: Public Library

[Map diagram: the locations above joined by WALK, TRAIN, and BUS links, labelled with lengths of 0.1, 0.5, 1, 1.5, 2.5, 3, and 3.5 miles.]

Page 12:

Operations (Actions)

WALK from X to Y: Cost = 20 × LinkLength(X,Y) minutes

Take BUS from X to Y: Cost = 3 × LinkLength(X,Y) + 15 minutes

Take TRAIN from X to Y: Cost = 2 × LinkLength(X,Y) + 10 minutes

The added amounts in the costs are average times for waiting. In reality the costs would be variable.

You can go between University and New Street by train without getting out at Five Ways. This counts as one action, with LinkLength 4 miles. So there is only a waiting-time at the beginning.

Page 13:

Example Routes (costs in minutes, totals on the right):

• SG →WALK→ U →TRAIN→ 5W →WALK→ PL:  10 + (5 + 10) + 30  = 55

• SG →WALK→ U →TRAIN→ 5W →TRAIN→ NS →WALK→ PL:  10 + (5 + 10 + 3) + 20  = 48

• SG →WALK→ PL:  70  = 70

• SG →WALK→ BR →BUS→ NS →WALK→ PL:  2 + (9 + 15) + 20  = 46
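The totals above can be checked directly against the cost formulas on the Operations slide. The per-link mileages used below are inferred from the map and example-route slides, so treat them as an assumption:

```python
# Cost formulas from the Operations slide (L = link length in miles).
def walk(L):  return 20 * L          # 20 minutes per mile
def train(L): return 2 * L + 10      # plus 10 minutes average waiting
def bus(L):   return 3 * L + 15      # plus 15 minutes average waiting

# Link lengths (miles) inferred from the slides -- an assumption.
r1 = walk(0.5) + train(2.5) + walk(1.5)   # SG -> U -> 5W -> PL
r2 = walk(0.5) + train(4.0) + walk(1.0)   # SG -> U -> NS (through train) -> PL
r3 = walk(3.5)                            # SG -> PL directly
r4 = walk(0.1) + bus(3.0) + walk(1.0)     # SG -> BR -> NS -> PL

print(r1, r2, r3, r4)  # 55.0 48.0 70.0 46.0
```

Note that the 48-minute route uses the through train (LinkLength 4 miles, one waiting period), exactly as the Page 12 remark describes.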

Page 14:

“Depth-First” Search Process

Starting from SG. At each state, first considering WALK, then TRAIN, then BUS, and quickest first within each case.

[Search-tree diagram: the depth-first tree rooted at SG, exploring branches through BR, NS, 5W, U, PL, etc.; branches whose end state has no successors are marked FAIL.]

Page 15:

Observations about Depth-First

• At a state, it’s sensible to prune out (= completely discard) any action that would lead to the same successor as a cheaper action.

• Depth-first is NOT in general very good for finding optimal (= cheapest = least-cost) solution paths.

• If we want to consider more than one solution path, and are interested in path cost, it’s sensible to keep track of the lowest solution cost found so far, and use it to curtail new paths that get worse.

• If just looking for a reasonable-cost path, depth-first may do well if the actions at each state are appropriately ordered (e.g., “cheapest first” – increasing order of cost). We may also want to prune out sufficiently bad actions. Depends on the problem.

• Ordering and pruning criteria are types of “heuristic” (rule of thumb).

• In applying depth-first to some types of problem (NOT ours) it’s a good idea to store all states encountered so far, to guard against repeating failed searches. To think about: why is this not good for our route-finding?

• If we wanted to keep track of exact scheduled times of trains, etc., we could take a state to be not just a place (e.g., 5W) but a place together with a time. We could also add a waiting action, rather than building the waiting into other actions.
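These observations can be sketched in a few lines: depth-first with cheapest-first action ordering, no looping back along the current path, and curtailment against the best solution cost found so far. The edge costs (minutes, waiting included) are inferred from the example-route slides, so the graph is an assumption, not the official map.

```python
# Route graph: costs in minutes, inferred from the example-route slides
# (an assumption). Each entry is (successor, action cost).
GRAPH = {
    "SG": [("BR", 2), ("U", 10), ("PL", 70)],
    "BR": [("SG", 2), ("NS", 24)],
    "U":  [("SG", 10), ("5W", 15), ("NS", 18)],
    "5W": [("NS", 13), ("U", 15), ("PL", 30)],
    "NS": [("5W", 13), ("U", 18), ("PL", 20), ("BR", 24)],
    "PL": [],
}

def dfs_best(graph, state, goal, path, cost, best):
    """Depth-first search that keeps the cheapest solution found so far in
    `best` and curtails any partial path already at least that expensive."""
    if cost >= best[0]:
        return                      # curtail: cannot beat best solution so far
    if state == goal:
        best[0], best[1] = cost, list(path)
        return
    for succ, c in sorted(graph[state], key=lambda e: e[1]):  # cheapest first
        if succ not in path:        # don't loop back along the current path
            dfs_best(graph, succ, goal, path + [succ], cost + c, best)

best = [float("inf"), None]
dfs_best(GRAPH, "SG", "PL", ["SG"], 0, best)
print(best)  # [46, ['SG', 'BR', 'NS', 'PL']]
```

With cheapest-first ordering the very first solution found is SG–BR–NS–5W–PL (69 minutes); the curtailment then quickly narrows the search down to the 46-minute route.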

Page 16:

“Breadth-First” Search Process

Starting from SG. At each state, first considering WALK, then TRAIN, then BUS, and quickest first within each case.

[Search-tree diagram: the breadth-first tree rooted at SG, expanded level by level through BR, NS, 5W, U, PL, etc.; states with no successors are marked FAIL.]

Page 17:

Observations about Breadth-First

• NB: Expanding a state means computing all its successors. Expansion can be (conceptually) all at once as in breadth-first, or incremental as in depth-first.

• As in depth-first, it’s sensible to prune out any action that would lead to the same successor as a cheaper action.

• Breadth-first is good for finding shortest paths (i.e., paths with the least number of steps) but not (in general) optimal paths. But it may be better than depth-first for finding reasonable-cost paths if cost is roughly correlated with length.

• Again, if we want to consider more than one solution path, it’s sensible to keep track of the lowest solution cost found so far, and use it to curtail new paths that get worse.

• Must store at least all unexpanded states encountered so far. In many cases, we may as well store all states encountered so far: it may not be that much more expensive (why not?).

• If we re-find a stored state, we probably wish to avoid duplicating the sub-search from that state, and if so may wish to choose the version of the state with the lesser-cost path to it. (See Best-First search for an idea of what this would entail. But actually we may as well now use Best-First unless just looking for shortest paths.)
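A small sketch makes the “shortest but not optimal” point concrete: on the route graph (edge costs in minutes, inferred from the example-route slides and so an assumption), breadth-first returns the fewest-actions path, which is the direct 70-minute walk rather than the 46-minute route.

```python
from collections import deque

# Route graph inferred from the example-route slides -- an assumption.
GRAPH = {
    "SG": [("BR", 2), ("U", 10), ("PL", 70)],
    "BR": [("SG", 2), ("NS", 24)],
    "U":  [("SG", 10), ("5W", 15), ("NS", 18)],
    "5W": [("NS", 13), ("U", 15), ("PL", 30)],
    "NS": [("5W", 13), ("U", 18), ("PL", 20), ("BR", 24)],
    "PL": [],
}

def bfs_shortest(graph, start, goal):
    """Expand states level by level; return the first path reaching goal,
    i.e. a path with the fewest actions (not the least cost)."""
    frontier = deque([[start]])
    seen = {start}                  # store all states encountered so far
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for succ, _cost in graph[path[-1]]:
            if succ not in seen:
                seen.add(succ)
                frontier.append(path + [succ])
    return None

print(bfs_shortest(GRAPH, "SG", "PL"))  # ['SG', 'PL'] -- one action, 70 minutes
```

The `seen` set is the “store all states encountered so far” bookkeeping from the bullet above; here it costs little more than storing only the unexpanded frontier.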

Page 18:

“Best-First” Search Process

Best-first: Have a heuristic evaluation function that estimates how “promising” any given state S is. This often estimates the lowest cost needed to get to a goal from S. Traditionally notated as h(S).

At any stage, if we haven’t finished, fully expand one of the unexpanded states that looks best (most promising) according to the evaluation function. So, need to remember all states created.

If interested in solution paths, remember a parent for each state. If also interested in solution-path costs, and it’s possible to get to some states by more than one path:

A state S’s remembered parent should be its “best parent” (parent that leads back on a cheapest path to the state). Also remember S’s “best cost so far” (the cost of a cheapest path to the state so far). Traditionally notated as g(S).

If an old state S is re-found during an expansion, update the cheapest parent of S if necessary, and in this case propagate best-cost and best-parent changes, as necessary, down through the descendants of S.

Page 19:

Best-First contd

A possible evaluation function for our route-finding:

h(S) = time it would take to traverse the STRAIGHT-LINE DISTANCE from S to the goal (PL) by TRAIN (ignoring any waiting time).

That is, h(S) = 2 × Distance(S, PL).

NB: this is an underestimator: its values are never greater than the actual least cost needed to get from S to PL. (Underestimators are important: explained later.)

If we included the train waiting time, we would have more accurate estimates in general, but we would no longer quite have an underestimator.

Page 20:

Best-First contd

Another possibility (no longer an underestimator, but may give better search because could be more accurate overall):

h(S) = time it would take to traverse the STRAIGHT-LINE DISTANCE to the goal (PL) by the quickest transport method available at S (now including any waiting time).

That is: h(S) = minimum of:

• 2 × Distance(S, PL) + 10 (when S is a train stop)

• 3 × Distance(S, PL) + 15 (when S is a bus stop but not a train stop)

• 20 × Distance(S, PL) (when walking is possible from S)
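The three-way minimum can be written directly as a function. The slides give no straight-line mileages, so the distance passed in below is hypothetical; note also that taking a plain minimum over the available modes gives the same answer as the slide's case split, since the train formula beats the bus formula at any distance.

```python
def h(S, dist_to_PL, train_stops, bus_stops):
    """Estimated minutes from S to PL using the quickest transport method
    available at S, waiting time included (second heuristic on the slide)."""
    options = [20 * dist_to_PL]              # walking is always possible
    if S in train_stops:
        options.append(2 * dist_to_PL + 10)  # train: 2L + 10
    if S in bus_stops:
        options.append(3 * dist_to_PL + 15)  # bus: 3L + 15
    return min(options)

# Illustrative call: a train stop a hypothetical 2 miles (straight line) from PL.
print(h("U", 2, train_stops={"U", "5W", "NS"}, bus_stops={"BR"}))  # min(40, 14) = 14
```

For a walking-only state the same hypothetical distance gives 40 minutes, showing why including waiting times can overestimate the true least cost from train stops close to the goal.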

Page 21:

Variants of Best-First:The A and A* Algorithms

The “A” algorithm: best-first, but instead of just using h(S) to choose which state to expand, use f(S) where:

f(S) = g(S) + h(S).

The A* algorithm is just A with an h that is an underestimator (an “admissible” h).

It is guaranteed that the (first) solution path that A* finds is optimal (provided the goal-test for a state is done just before expansion rather than on creation, and a certain technical condition on action costs is satisfied).
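A compact sketch of A* on the route graph ties the pieces together: expand the open state with the least f(S) = g(S) + h(S), goal-testing just before expansion. The graph costs are inferred from the example-route slides (an assumption), and since the slides give no straight-line distances we use h(S) = 0 here, which is trivially an underestimator; with it A* behaves like uniform-cost search and still returns an optimal path.

```python
import heapq

# Route graph inferred from the example-route slides -- an assumption.
GRAPH = {
    "SG": [("BR", 2), ("U", 10), ("PL", 70)],
    "BR": [("SG", 2), ("NS", 24)],
    "U":  [("SG", 10), ("5W", 15), ("NS", 18)],
    "5W": [("NS", 13), ("U", 15), ("PL", 30)],
    "NS": [("5W", 13), ("U", 18), ("PL", 20), ("BR", 24)],
    "PL": [],
}

def a_star(graph, start, goal, h):
    """Expand the open state with smallest f(S) = g(S) + h(S); the goal
    test happens just before expansion, as the slide requires."""
    open_heap = [(h(start), 0, start, [start])]   # entries: (f, g, state, path)
    best_g = {start: 0}                           # g(S): best cost so far
    while open_heap:
        f, g, state, path = heapq.heappop(open_heap)
        if state == goal:
            return g, path
        if g > best_g.get(state, float("inf")):
            continue                              # stale entry: cheaper path known
        for succ, cost in graph[state]:
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2                 # re-found state: update best g
                heapq.heappush(open_heap, (g2 + h(succ), g2, succ, path + [succ]))
    return None

print(a_star(GRAPH, "SG", "PL", lambda s: 0))  # (46, ['SG', 'BR', 'NS', 'PL'])
```

An admissible non-zero h (such as the train-based underestimator from Page 19) would expand fewer states while keeping the same optimality guarantee.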