Explanation-Based Learning (borrowed from Mooney et al.)
Transcript
Page 1: Explanation-Based Learning (borrowed from Mooney et al.)

Explanation-Based Learning

(borrowed from Mooney et al.)

Page 2: Explanation-Based Learning (borrowed from Mooney et al.)

Explanation-Based Learning (EBL)

One definition: Learning general problem-solving techniques by observing and analyzing solutions to specific problems.

Page 3: Explanation-Based Learning (borrowed from Mooney et al.)

SBL (vs. EBL): lots of data (examples)

• Similarity-based learning (SBL) is inductive:
– it generalizes from training data
– it empirically identifies patterns that distinguish between positive and negative examples of a target concept.

• Inductive results are justified empirically (e.g., by statistical arguments such as those used in establishing theoretical results in PAC learning).

• Generally requires a significant number of training examples in order to produce statistically justified conclusions.

• Generally does not require or exploit background knowledge.

Page 4: Explanation-Based Learning (borrowed from Mooney et al.)

EBL (vs. SBL): lots of knowledge

• Explanation-based learning (EBL) is (usually) deductive:
– it uses prior knowledge to "explain" each training example
– the explanation identifies which properties are relevant to the target function and which are irrelevant.

• Prior knowledge is used to reduce the hypothesis space and focus the learner on hypotheses that are consistent with prior knowledge about the target concept.

• Accurate learning is possible from very few training examples, in principle even zero, since the hypothesis already follows from the domain theory (typically one example is used per learned rule).

Page 5: Explanation-Based Learning (borrowed from Mooney et al.)

The EBL Hypothesis

• By understanding why an example is a member of a target concept, one can learn the essential properties of the concept.

• Trade-off: the need to collect many examples is traded for the ability to "explain" single examples (via a domain theory).

• This assumes the domain theory is competent:
– Correct: does not entail that any negative example is positive
– Complete: each positive example can be "explained"
– Tractable: an "explanation" can be found for each positive example.

Page 6: Explanation-Based Learning (borrowed from Mooney et al.)

SBL vs. EBL: entailment constraints

SBL: Hypothesis & Descriptions ⊨ Classifications

The hypothesis is selected from a restricted hypothesis space.

EBL: Hypothesis & Descriptions ⊨ Classifications

Background ⊨ Hypothesis

Page 7: Explanation-Based Learning (borrowed from Mooney et al.)

EBL Task

• In addition to a set of training examples, EBL takes as input a domain theory (background knowledge about the target concept, usually specified as a set of logical rules, i.e., Horn clauses) and an operationality criterion.

• The goal is to find an efficient, or operational, definition of the target concept that is consistent with both the domain theory and the training examples.

Page 8: Explanation-Based Learning (borrowed from Mooney et al.)

EBL Task: operationality (observable vs. unobservable)

• Operationality is often imposed by restricting the hypothesis space to using only certain predicates (e.g., those that are directly used to describe the examples).

• Observable: predicates used to describe examples
• Unobservable: the target concept

• In "classical EBL" the learned definition is
– logically entailed by the domain theory
– a more efficient definition of the target concept
– one that requires only "look-up" (pattern matching) using observable predicates, rather than search (logical inference) mapping observables to unobservables.

Page 9: Explanation-Based Learning (borrowed from Mooney et al.)

EBL Task

Given:
• Goal concept
• Training example
• Domain theory
• Operationality criterion

Find: a generalization of the training example that is a sufficient condition for the target concept and satisfies the operationality criterion.

Page 10: Explanation-Based Learning (borrowed from Mooney et al.)

EBL Example

• Goal concept: SafeToStack(x,y)

• Training Examples: One example
SafeToStack(Obj1,Obj2)
On(Obj1,Obj2)
Type(Obj1,Box)
Type(Obj2,Endtable)
Color(Obj1,Red)
Color(Obj2,Blue)
Volume(Obj1,2)
Density(Obj1,0.3)
Material(Obj1,Cardboard)
Material(Obj2,Wood)
Owner(Obj1,Molly)
Owner(Obj2,Muffet)
Fragile(Obj2)

Page 11: Explanation-Based Learning (borrowed from Mooney et al.)

EBL Example

• Domain theory:
SafeToStack(x,y) :- not(Fragile(y)).
SafeToStack(x,y) :- Lighter(x,y).
Lighter(x,y) :- Weight(x,wx), Weight(y,wy), wx < wy.
Weight(x,w) :- Volume(x,v), Density(x,d), w = v*d.
Weight(x,5) :- Type(x,Endtable).
Fragile(x) :- Material(x,Glass).

• Operational predicates: Type, Color, Volume, Owner, Fragile, Material, Density, On, <, >, =.
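For concreteness, the executable sketches on later pages assume this theory and example are encoded as Python data roughly as follows (an illustrative representation, not part of the original slides; the two Fragile-related clauses are omitted for brevity, and the arithmetic literal w = v*d is written as a Product literal to be special-cased by an interpreter):

# Illustrative encoding (assumed convention): a rule is
# (head, [body literals]); a literal is a tuple ('Pred', arg, ...);
# lowercase strings are variables, anything else is a constant.
DOMAIN_THEORY = [
    (("SafeToStack", "x", "y"), [("Lighter", "x", "y")]),
    (("Lighter", "x", "y"),
     [("Weight", "x", "wx"), ("Weight", "y", "wy"), ("<", "wx", "wy")]),
    (("Weight", "x", "w"),
     [("Volume", "x", "v"), ("Density", "x", "d"),
      ("Product", "w", "v", "d")]),          # i.e. w = v*d
    (("Weight", "x", 5), [("Type", "x", "Endtable")]),
]

# Ground facts describing the single training example.
EXAMPLE = [
    ("On", "Obj1", "Obj2"), ("Type", "Obj1", "Box"),
    ("Type", "Obj2", "Endtable"), ("Volume", "Obj1", 2),
    ("Density", "Obj1", 0.3), ("Fragile", "Obj2"),
]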

Page 12: Explanation-Based Learning (borrowed from Mooney et al.)

EBL Method

For each positive example not correctly covered by an “operational” rule do:

1. Explain: Use the domain theory to construct a logical proof that the example is a member of the concept.

2. Analyze: Generalize the explanation to determine a rule that logically follows from the domain theory given the structure of the proof and is operational.

Add the new rule to the concept definition.
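As pseudocode, the loop might look like this (a minimal sketch; covers, explain, and generalize are assumed helper functions, with explain building a proof from the domain theory and generalize regressing the target concept through it as described on page 15):

def ebl(examples, domain_theory, rules, covers, explain, generalize):
    """Minimal EBL loop: explain each uncovered positive example,
    generalize the explanation into an operational rule, store it."""
    for example in examples:
        if any(covers(rule, example) for rule in rules):
            continue                              # handled by look-up
        proof = explain(example, domain_theory)   # step 1: Explain
        new_rule = generalize(proof)              # step 2: Analyze
        rules.append(new_rule)          # extend the concept definition
    return rules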

Page 13: Explanation-Based Learning (borrowed from Mooney et al.)

EBL Example

Training example (the facts used in the proof):

SafeToStack(Obj1,Obj2)
Type(Obj2,Endtable)
Volume(Obj1,2)
Density(Obj1,0.3)

Domain theory (the rules used in the proof):

SafeToStack(x,y) :- Lighter(x,y).

Lighter(x,y) :- Weight(x,wx), Weight(y,wy), wx < wy.

Weight(x,w) :- Volume(x,v), Density(x,d), w = v*d.

Weight(x,5) :- Type(x,Endtable).

Page 14: Explanation-Based Learning (borrowed from Mooney et al.)

Example Explanation (Proof)

SafeToStack(Obj1,Obj2)
  Lighter(Obj1,Obj2)
    Weight(Obj1,0.6)
      Volume(Obj1,2)
      Density(Obj1,0.3)
      0.6 = 2*0.3
    Weight(Obj2,5)
      Type(Obj2,Endtable)
    0.6 < 5

Page 15: Explanation-Based Learning (borrowed from Mooney et al.)

Generalization

• Find the weakest preconditions A for a conclusion C such that A entails C by the given proof.

• The general target predicate is regressed through each rule used in the proof to produce generalized conditions at the leaves.

• To regress a set of literals P through a rule H :- B1,...,Bn (with B = {B1,...,Bn}) using a literal L ∈ P:
– let φ be the most general unifier of L and H
– apply the resulting substitution to all the literals in P and B
– return P = (Pφ − Lφ) ∪ Bφ
– also apply the substitution to update the conclusion: C = Cφ

• After regressing the general target concept through each rule used in the proof, return: C :- P1,...,Pn (where P = {P1,...,Pn})
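A compact executable sketch of this regression step, using the tuple encoding assumed on page 11 (illustrative code, not from the slides; the unifier deliberately prefers binding the rule's variables so that the results match the renamings shown on the next pages):

def is_var(t):
    # Assumed convention (page 11): lowercase strings are variables.
    return isinstance(t, str) and t[:1].islower()

def unify(a, b):
    """Most general unifier of two literals, as a substitution dict,
    or None if they do not unify (no occurs check: not needed here)."""
    s = {}
    stack = [(a, b)]
    while stack:
        x, y = stack.pop()
        while is_var(x) and x in s:
            x = s[x]
        while is_var(y) and y in s:
            y = s[y]
        if x == y:
            continue
        elif is_var(y):                 # bind the rule's variable first,
            s[y] = x                    # matching the slides' renamings
        elif is_var(x):
            s[x] = y
        elif (isinstance(x, tuple) and isinstance(y, tuple)
              and len(x) == len(y)):
            stack.extend(zip(x, y))
        else:
            return None
    return s

def subst(t, s):
    """Apply substitution s to a term or literal t."""
    while is_var(t) and t in s:
        t = s[t]
    return tuple(subst(u, s) for u in t) if isinstance(t, tuple) else t

def regress(P, L, head, body):
    """One regression step: with phi the mgu of L and the rule head,
    return (P.phi - {L.phi}) U body.phi, plus phi itself (the caller
    can apply phi to update the conclusion C)."""
    phi = unify(L, head)
    rest = [subst(p, phi) for p in P if p is not L]
    return rest + [subst(b, phi) for b in body], phi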

Page 16: Explanation-Based Learning (borrowed from Mooney et al.)

Generalization Example

Regress {SafeToStack(x,y)} through
SafeToStack(x1,y1) :- Lighter(x1,y1).

Unifier: φ = {x/x1, y/y1}

Result: {Lighter(x,y)}

[Remaining proof subtree below Lighter(Obj1,Obj2): Weight(Obj1,0.6), Weight(Obj2,5), 0.6 < 5]

Page 17: Explanation-Based Learning (borrowed from Mooney et al.)

Generalization Example

Regress {Lighter(x,y)} through
Lighter(x2,y2) :- Weight(x2,wx2), Weight(y2,wy2), wx2 < wy2.

Unifier: φ = {x/x2, y/y2}

Result: {Weight(x,wx), Weight(y,wy), wx < wy}

[Remaining proof leaves: Weight(Obj1,0.6), Weight(Obj2,5)]

Page 18: Explanation-Based Learning (borrowed from Mooney et al.)

Generalization Example

Regress {Weight(x,wx), Weight(y,wy), wx < wy} through
Weight(x3,w) :- Volume(x3,v), Density(x3,d), w = v*d.

Unifier: φ = {x/x3, wx/w}

Result: {Volume(x,v), Density(x,d), wx = v*d, Weight(y,wy), wx < wy}

[Remaining proof leaf: Weight(Obj2,5)]

Page 19: Explanation-Based Learning (borrowed from Mooney et al.)

Generalization Example

Regress {… Weight(y,wy) …} through
Weight(x4,5) :- Type(x4,Endtable).

Unifier: φ = {y/x4, 5/wy}

Result: {Volume(x,v), Density(x,d), wx = v*d, Type(y,Endtable), wx < 5}

Learned rule:

SafeToStack(x,y) :- Volume(x,v), Density(x,d), wx = v*d, Type(y,Endtable), wx < 5.
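The four steps on pages 16-19 can be replayed with the regress() sketch from page 15 (the numbered rule variables x1, y1, ... are written out by hand; the resulting literal order differs from the slides, but the set is the same):

P = [("SafeToStack", "x", "y")]

# Page 16: regress through SafeToStack(x1,y1) :- Lighter(x1,y1).
P, _ = regress(P, P[0],
               ("SafeToStack", "x1", "y1"), [("Lighter", "x1", "y1")])

# Page 17: regress through Lighter(x2,y2) :- Weight(x2,wx2), ...
P, _ = regress(P, P[0],
               ("Lighter", "x2", "y2"),
               [("Weight", "x2", "wx2"), ("Weight", "y2", "wy2"),
                ("<", "wx2", "wy2")])

# Page 18: regress the first Weight literal through
# Weight(x3,w) :- Volume(x3,v), Density(x3,d), w = v*d.
P, _ = regress(P, P[0],
               ("Weight", "x3", "w"),
               [("Volume", "x3", "v"), ("Density", "x3", "d"),
                ("Product", "w", "v", "d")])

# Page 19: regress the remaining Weight literal through
# Weight(x4,5) :- Type(x4,Endtable).
L = next(p for p in P if p[0] == "Weight")
P, _ = regress(P, L, ("Weight", "x4", 5), [("Type", "x4", "Endtable")])

print(P)
# [('<', 'wx2', 5), ('Volume', 'x', 'v'), ('Density', 'x', 'd'),
#  ('Product', 'wx2', 'v', 'd'), ('Type', 'y', 'Endtable')]
# i.e. the learned rule body, with wx2 in place of the slides' wx.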

Page 20: Explanation-Based Learning (borrowed from Mooney et al.)

Re: Generalization

• Simply substituting variables for constants in the proof will not work, because:
– Some constants (Endtable, 5) may come from the domain theory and cannot be generalized without sacrificing soundness.
– Two instances of the same constant may or may not generalize to the same variable, depending on the structure of the proof (e.g., the weight and the density might happen to be the same in the example, but they clearly do not have to be the same in general).

• Since generalization just performs a sequence of unifications and substitutions, it is a quick, linear-time process.

Page 21: Explanation-Based Learning (borrowed from Mooney et al.)

Knowledge as Bias

• The hypotheses produced by EBL are obviously strongly biased by the domain theory it is given.

• Being able to alter the bias of a learning algorithm by supplying prior knowledge in declarative form (declarative bias) is very useful (e.g., by adding new rules and predicates).

• EBL assumes a complete and correct domain theory, but theory refinement and other methods can be biased by incomplete and incorrect domain theories.

Page 22: Explanation-Based Learning (borrowed from Mooney et al.)

Perspectives on EBL

• EBL as theory-guided generalization of examples: explanations are used to distinguish relevant from irrelevant features.

• EBL as example-guided reformulation of theories: examples are used to focus learning on the operational reformulations of the concept that are "typical".

• EBL as knowledge compilation: deductive consequences that are particularly useful (e.g., for reasoning about the training examples) are "compiled out" to allow for more efficient subsequent reasoning.

Page 23: Explanation-Based Learning (borrowed from Mooney et al.)

Standard Approach to EBL

[Figure: before learning, the goal is connected to the facts by an explanation (a detailed, multi-step proof of the goal); after learning, a learned rule goes directly from the facts to the goal.]

Page 24: Explanation-Based Learning (borrowed from Mooney et al.)

Knowledge-Level Learning (Newell, Dietterich)

Knowledge closure: all things that can be inferred from a collection of rules and facts.

"Pure" EBL only learns how to solve problems faster, not how to solve problems that were previously insoluble.

Inductive learners make inductive leaps and hence can solve more after learning.

EBL is often called “Speed-up” learning (not knowledge-level learning)

What about considering resource-limits (e.g., time) on problem solving?

Page 25: Explanation-Based Learning (borrowed from Mooney et al.)

Utility of Knowledge Compilation

• Deductive reasoning is expensive, and similar conclusions frequently must be derived repeatedly.

• Some domains have complete and correct theories and learning involves deriving useful consequences that make reasoning more efficient, e.g. chess, mathematics, etc.

Page 26: Explanation-Based Learning (borrowed from Mooney et al.)

Utility of Knowledge Compilation

• Different types of knowledge compilation:
– Static: not example-based; reformulate the KB up front to make it more efficient for general inferences of a particular type.
– Dynamic: uses examples, perhaps incrementally, to tune a system to improve efficiency on a particular distribution of problems.

• Dynamic systems like EBL make the inductive assumption that improving performance on a set of training cases will generalize to improved performance on subsequent test cases.

Page 27: Explanation-Based Learning (borrowed from Mooney et al.)

Utility Problem

• After learning many macro-operators, macro-rules, or search-control rules, the time to match and search through this added knowledge may start to outweigh its benefits (Minton 1988).

• A learned rule must be useful in solving new problems frequently enough, and save enough processing time, to compensate for the time needed to attempt to match it on every problem:

Utility = (AvgSavings × ApplicFreq) − AvgMatchCost

• EBL methods can frequently result in learning a set of rules with negative overall utility, producing a slowdown rather than the intended speedup (see the sketch below).
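Read as arithmetic, the utility criterion is a simple filter for deciding whether to retain a rule; a minimal sketch with made-up numbers:

# Minton-style utility estimate for one learned rule (illustrative).
def utility(avg_savings, applic_freq, avg_match_cost):
    """Expected net benefit per problem: savings when the rule fires,
    weighted by how often it fires, minus the cost of attempting to
    match the rule on every problem."""
    return avg_savings * applic_freq - avg_match_cost

# A rule that saves 40 ms when it applies, applies to 5% of problems,
# and costs 3 ms to match has negative utility and should be dropped:
print(utility(40.0, 0.05, 3.0))   # -1.0  (net slowdown per problem)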

Page 28: Explanation-Based Learning (borrowed from Mooney et al.)

Addressing the Utility Problem

• Improve Efficiency of Matching: preprocess learned rules to improve their match efficiency.

• Restrict Expressiveness: Prevent learning of rules with combinatorial match costs.

• Selective Acquisition: Only learn rules whose expected benefit outweighs their cost.

• Selective Retention: Dynamically forget expensive rules that are rarely used.

• Selective Utilization: Restrict the use of learned rules to avoid undue cost of application.

Page 29: Explanation-Based Learning (borrowed from Mooney et al.)

Imperfect Theories and EBL

Incomplete Theory Problem: explanations of specific problems cannot be built because of missing knowledge.

Intractable Theory Problem: there is enough knowledge, but not enough computer time to build the specific explanation.

Inconsistent Theory Problem: inconsistent results can be derived from the theory (e.g., because of default rules).

Page 30: Explanation-Based Learning (borrowed from Mooney et al.)

Applications

• Planning (macro operators in STRIPS)

• Mathematics (search control in LEX)

Page 31: Explanation-Based Learning (borrowed from Mooney et al.)

Planning with Macro-Operators

• AI planning using STRIPS operators is search-intensive.

• People seem to use "canned" plans to achieve everyday goals.

• Such pre-packaged planning sequences (macro-operators) can be learned by generalizing specific constructed or observed plans.

• The method is analogous to composing Horn-clause rules by generalizing proofs.

• A problem is solved by first trying the learned macro-operators, falling back on general planning as a last resort.

Page 32: Explanation-Based Learning (borrowed from Mooney et al.)

STRIPS

The original planning system, which used means-ends analysis and theorem proving for robot planning.

Sample actions:

GoThru(A,D,R1,R2)
Preconditions: In(A,R1), Connects(D,R1,R2)
Effects: In(A,R2), ¬In(A,R1)

PushThru(A,O,D,R1,R2)
Preconditions: In(A,R1), In(O,R1), Connects(D,R1,R2)
Effects: In(A,R2), In(O,R2), ¬In(A,R1), ¬In(O,R1)
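An illustrative encoding of these two operators as Python data (names follow the slide; the add/delete split mirrors the positive and negated effects):

# STRIPS-style operators: literal templates over the parameters.
GOTHRU = {
    "name": "GoThru", "params": ("A", "D", "R1", "R2"),
    "pre": [("In", "A", "R1"), ("Connects", "D", "R1", "R2")],
    "add": [("In", "A", "R2")],
    "del": [("In", "A", "R1")],
}
PUSHTHRU = {
    "name": "PushThru", "params": ("A", "O", "D", "R1", "R2"),
    "pre": [("In", "A", "R1"), ("In", "O", "R1"),
            ("Connects", "D", "R1", "R2")],
    "add": [("In", "A", "R2"), ("In", "O", "R2")],
    "del": [("In", "A", "R1"), ("In", "O", "R1")],
}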

Page 33: Explanation-Based Learning (borrowed from Mooney et al.)

STRIPS

• Sample problem:
State: In(r,room1), In(box,room2), Connects(d1,room1,room2), Connects(d2,room2,room3)
Goal: In(box,room1)

• Sample solution:
GoThru(r,d1,room1,room2)
PushThru(r,box,d1,room2,room1)
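Using the operator encoding sketched on page 32, the sample solution can be checked by progressing the state through each action (illustrative; Connects is treated as symmetric here, as in the original STRIPS axioms, so both directions of each door are listed):

def apply_op(state, op, binding):
    """Progress a state (a set of ground literals) through one action."""
    ground = lambda lit: tuple(binding.get(t, t) for t in lit)
    assert all(ground(p) in state for p in op["pre"]), "precondition fails"
    removed = state - {ground(d) for d in op["del"]}
    return removed | {ground(a) for a in op["add"]}

state = {("In", "r", "room1"), ("In", "box", "room2"),
         ("Connects", "d1", "room1", "room2"),
         ("Connects", "d1", "room2", "room1"),   # doors work both ways
         ("Connects", "d2", "room2", "room3"),
         ("Connects", "d2", "room3", "room2")}

state = apply_op(state, GOTHRU,
                 {"A": "r", "D": "d1", "R1": "room1", "R2": "room2"})
state = apply_op(state, PUSHTHRU,
                 {"A": "r", "O": "box", "D": "d1",
                  "R1": "room2", "R2": "room1"})
print(("In", "box", "room1") in state)   # True: the goal is achieved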

Page 34: Explanation-Based Learning (borrowed from Mooney et al.)

Learned Macro-Operator

EBL generalization of this plan produces the following macro-operator:

GoThruPushThru(A,D1,R1,R2,O,D2,R3)
Preconditions: InRoom(A,R1), InRoom(O,R2), Connects(D1,R1,R2), Connects(D2,R2,R3), ¬(A=O & R1=R2)
Effects: InRoom(O,R3), InRoom(A,R3), ¬InRoom(A,R2), ¬InRoom(O,R2), ¬(R3=R1) → ¬InRoom(A,R1)

• The extra preconditions are needed to prevent precondition clobbering during execution of the generalized plan.

• The conditional effects come from possible deletions in the generalized plan.

Page 35: Explanation-Based Learning (borrowed from Mooney et al.)

Representing Plan MACROPS

STRIPS actually used a "triangle table" to implicitly store macros for every subsequence of the actions in the plan.

Plan: [State] OP1 → OP2 → OP3 → OP4 → OP5 [Goal]

Op1
Op1 Op2
Op1 Op2 Op3
Op1 Op2 Op3 Op4
Op1 Op2 Op3 Op4 Op5

The triangle table supports treating any of the 10 subsequences of the generalized plan as a MACROP in future problems.
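One way to arrive at the count of 10 (an illustrative enumeration, assuming the reusable subsequences are the contiguous runs of at least two operators):

ops = ["Op1", "Op2", "Op3", "Op4", "Op5"]
# Contiguous subsequences of length >= 2 (single operators are
# already available without the triangle table):
macros = [ops[i:j] for i in range(len(ops))
          for j in range(i + 2, len(ops) + 1)]
print(len(macros))   # 10, i.e. "5 choose 2" start/end positions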

Page 36: Explanation-Based Learning (borrowed from Mooney et al.)

Experimental Results

Planning time with and without learning (min:sec):

trial        1      2      3      4      5
no learning  3:05   9:42   7:03   14:09  --
learning     3:05   3:54   6:34   4:37   9:13

Page 37: Explanation-Based Learning (borrowed from Mooney et al.)

Learning Search Control

• Search control rules are used to select operators during search:

IF the state is of the form ∫ r f(x) dx,
THEN apply the operator MoveConstantOutsideIntegral

• Such search control rules can be learned by explaining how the application of an operator in a sample problem led to a solution:

∫ 3 sin(x) dx → 3 ∫ sin(x) dx → −3 cos(x) + C

• Positive examples of when to apply an operator are states in which applying that operator leads to a solution; negative examples are states in which applying the operator leads away from the solution (i.e., another operator leads to the solution).

• Induction and combinations of explanation and induction can also be used to learn search control rules.

Page 38: Explanation-Based Learning (borrowed from Mooney et al.)

EBL variations

• Generalizing to N: handling recursive rules in proofs

• Knowledge Deepening: explaining shallow rules

• Explanation-based induction and abductive generalization

Page 39: Explanation-Based Learning (borrowed from Mooney et al.)

Generalizing to N (Shavlik, BAGGER2)

Handling recursive or iterative concepts (recursive rules in proofs).

[Figure: a proof tree for the goal in which rule P is applied repeatedly, at nodes numbered 1-6.]

Learned rules:

Goal ← P & gen-2

P ← gen-3 ∨ gen-5 ∨ gen-6 ∨ recursive-gen-1 ∨ recursive-gen-2

Page 40: Explanation-Based Learning (borrowed from Mooney et al.)

Knowledge Deepening

When two proofs, A and B, exist for a proposition, proof A involves a single (shallow) rule P → Q, and the weakest preconditions of proof B are equivalent to P, then proof B "explains" the rule P → Q.

Shallow rule: "leaves are green"

Explanation: "leaves are green because they contain mesophylls, which contain chlorophyll, which is a green pigment."

Page 41: Explanation-Based Learning (borrowed from Mooney et al.)

Knowledge Deepening

Shallow rule: (leaf ?x) → (green ?x)

Deep proof:
(green ?x) ← Part(?x ?y) & (isa ?y Mesophyll) & (green ?y)
(green ?y) ← Part(?y ?z) & (isa ?z Chlorophyll) & (green ?z)

The weakest preconditions of both proofs are the same: (leaf ?x)

Use the more complicated proof to explain the shallow rule.

Page 42: Explanation-Based Learning (borrowed from Mooney et al.)

Explanation-Based Induction

Teleology: function suggests structure

• Identify a "teleological explanation", i.e., structural properties supporting a physiological goal:
"leaf dehydration is avoided by the cuticle covering the leaf's epidermis"

• Identify the weakest preconditions of the explanation.

• Separate them into:
– Structural preconditions: epidermis covered by cuticle
– Qualifying preconditions: performs transpiration

• Find other organs satisfying the qualifying conditions: stems, flowers, fruit.

• Hypothesize that they also have the structural conditions: "are the epidermises of stems, flowers, and fruit also covered by a cuticle?"