Page 1: Induction

CogSci 131

The problem of induction

Tom Griffiths

Page 2: Induction

thought  ≈  string = 'computation'; disp(string);

Minds and computers are both formal systems

Page 3: Induction

Computational problems

•  Problems of deduction and search:
   – arithmetic, algebra, chess

•  We know what the underlying formal system should be for these problems
   – we know how computers can solve these problems (at least in principle)
   – in many cases, computers can solve these problems better than people

Page 4: Induction

Computational problems

•  Problems of deduction and search:
   – arithmetic, algebra, chess

•  But what about:
   – learning and using language
   – sophisticated senses: vision, hearing
   – similarity and categorization
   – inferring causal relationships
   – scientific investigation

Page 5: Induction

Outline

Inductive problems

Break

The problem of induction

Page 6: Induction

Inductive problems

•  Evaluating a set of hypotheses whose truth is underdetermined by the available data

•  Examples:
   – learning and using language
   – sophisticated senses: vision, hearing
   – similarity and categorization
   – inferring causal relationships
   – scientific investigation

Page 7: Induction

Learning language

[Figure: target language shown in red, current hypothesis shown in blue]

Multiple hypotheses can be consistent with the data

Page 8: Induction

Learning language

“Gavagai!”

Hypotheses (all consistent):
•  Rabbit
•  Dinner
•  Rabbit before t, dinner after
•  Undetached rabbit parts
•  Momentary rabbit-stage
•  Mass of rabbithood
•  Temporal cross-section of a four-dimensional space-time extension of a rabbit

Page 9: Induction

Vision

•  Two consistent hypotheses:
   – a cube
   – a cunningly shaded 2D shape

Page 10: Induction

Vision

Page 11: Induction

Vision

Page 12: Induction

Vision

Page 13: Induction

Categorization

Page 14: Induction

Categorization

cat ⇔ small ∧ furry ∧ domestic ∧ carnivore

How do you find the appropriate rule?
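To make the slide’s question concrete, here is a minimal MATLAB sketch of a conjunctive rule of the kind shown above. The feature names come from the slide; the example objects and the count of candidate rules are illustrative assumptions, not part of the lecture.

% A hypothetical conjunctive rule: cat <=> small AND furry AND domestic AND carnivore
isCat = @(x) x.small && x.furry && x.domestic && x.carnivore;

% Two made-up objects described by the same four binary features
tabby = struct('small', true, 'furry', true, 'domestic', true, 'carnivore', true);
tiger = struct('small', false, 'furry', true, 'domestic', false, 'carnivore', true);

disp(isCat(tabby))   % 1: satisfies every conjunct
disp(isCat(tiger))   % 0: fails the "small" and "domestic" conjuncts

% Finding the appropriate rule is the hard part: with n binary features there are
% 3^n candidate conjunctions (each feature required true, required false, or ignored),
% and a handful of labelled examples rarely singles out just one of them.
n = 4;
fprintf('Candidate conjunctive rules over %d features: %d\n', n, 3^n);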

Page 15: Induction

Scientific discovery

Page 16: Induction

[Figure: Halley’s comet returning at intervals of roughly 76 years and 75 years (Halley, 1752)]

Page 17: Induction

Inductive problems

deduction:   P ⇒ Q,  P            ∴ Q
induction:   P Q,  P Q,  P Q      ∴ P ⇒ Q
abduction:   P ⇒ Q,  Q            ∴ P

induction and abduction are the inductive problems

Page 18: Induction

Causal induction

induction:   P Q,  P Q,  P Q   ∴ P ⇒ Q

PushSwitch Light,  PushSwitch Light,  PushSwitch Light   ∴ PushSwitch ⇒ Light

Page 19: Induction

Causal reasoning

abduction:   P ⇒ Q,  Q   ∴ P

PushSwitch ⇒ Light,  Light   ∴ PushSwitch

Page 20: Induction

Inductive problems

deduction:   P ⇒ Q,  P            ∴ Q
induction:   P Q,  P Q,  P Q      ∴ P ⇒ Q
abduction:   P ⇒ Q,  Q            ∴ P

Philosophers: “a puzzle”, “a scandal”, “a myth”

Page 21: Induction

Break

Up next: The problem of induction

Page 22: Induction

Three problems

•  Plato’s problem
   – how do we know so much?
   – why are our inductions so successful?

•  Hume’s problem
   – induction can only be justified by induction

•  Goodman’s “new riddle”
   – no simple syntactic rules for induction

Page 23: Induction

Three problems

•  Plato’s problem
   – how do we know so much?
   – why are our inductions so successful?

•  Hume’s problem
   – induction can only be justified by induction

•  Goodman’s “new riddle”
   – no simple syntactic rules for induction

Page 24: Induction

Hume’s problem

•  Inductive inferences assume that the future will be like the past

•  What is the basis for this assumption?

•  Induction can only be justified by induction

“It is impossible … that any arguments from experience can prove this resemblance from past to future; since all these arguments are founded on the supposition of that resemblance.”

Page 25: Induction

The No Free Lunch Theorem (Wolpert)

•  Averaged over all possible worlds, no learning algorithm is better than any other
   – e.g., sequence prediction: given x1, x2, predict x3

Worlds:           000  001  010  011  100  101  110  111
Data:              00   00   01   01   10   10   11   11
Correct answer:     0    1    0    1    0    1    0    1
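The eight-world table can be checked directly. Here is a small MATLAB sketch, assuming three illustrative prediction rules of my own choosing (they are not from the deck): averaged uniformly over all worlds, every rule predicts the third symbol correctly exactly half the time.

% All 2^3 possible worlds: binary sequences x1 x2 x3
worlds = dec2bin(0:7) - '0';          % 8 x 3 matrix of 0s and 1s

% Three illustrative rules, each mapping the observed data (x1, x2) to a guess for x3
predictors = {@(x1, x2) 0, ...        % always predict 0
              @(x1, x2) x2, ...       % "the future is like the past": repeat the last symbol
              @(x1, x2) 1 - x2};      % flip the last symbol

for p = 1:numel(predictors)
    guesses = arrayfun(@(k) predictors{p}(worlds(k,1), worlds(k,2)), (1:8)');
    acc = mean(guesses == worlds(:,3));
    fprintf('Predictor %d: average accuracy %.2f\n', p, acc);   % 0.50 for every rule
end

The reason is visible in the table: each data pattern is completed by 0 in one world and by 1 in another, so any fixed guess is right in exactly half of the worlds that produce that data.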

Page 26: Induction

The No Free Lunch Theorem (Wolpert)

•  In order for an algorithm to work better, the distribution over worlds must be constrained
   – e.g., the future is like the past

Worlds:           000  001  010  011  100  101  110  111
Data:              00   00   01   01   10   10   11   11
Correct answer:     0    1    0    1    0    1    0    1
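The same toy setup shows what a constraint buys. In this sketch the 3-to-1 weighting is an arbitrary illustrative choice: worlds in which the future resembles the past are made more probable, and the "repeat the last symbol" rule now beats the others.

% The same 8 worlds and the same three illustrative predictors as in the previous sketch
worlds = dec2bin(0:7) - '0';
predictors = {@(x1, x2) 0, @(x1, x2) x2, @(x1, x2) 1 - x2};

% Constrain the distribution over worlds: worlds where x3 == x2
% ("the future is like the past") get three times the weight of the others
w  = 1 + 2 * (worlds(:,3) == worlds(:,2));
pw = w / sum(w);                              % prior probability of each world

for p = 1:numel(predictors)
    guesses = arrayfun(@(k) predictors{p}(worlds(k,1), worlds(k,2)), (1:8)');
    acc = sum(pw .* (guesses == worlds(:,3)));
    fprintf('Predictor %d: expected accuracy %.2f\n', p, acc);   % 0.50, 0.75, 0.25
end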

Page 27: Induction

Anthropic argument

Page 28: Induction

Goodman’s response

•  Induction is no less justified than deduction
   – the formal system underlying deduction was refined to conform to our intuitions
   – the same process can yield rules for induction

•  Instead of searching for justification, we should search for the rules of induction
   – what learning algorithm do people use?

•  Some inductions are better than others…

Page 29: Induction

Better and worse inductions

PushSwitch Light,  PushSwitch Light,  PushSwitch Light   ∴ PushSwitch ⇒ Light

September17 Light,  September17 Light,  September17 Light   ∴ September17 ⇒ Light

quality differs, despite the same syntax

suggests we’re missing some premises…

Page 30: Induction

The “new riddle”

“Only a statement that is lawlike … is capable of receiving confirmation from an instance of it; accidental statements are not. Plainly, then, we must look for a way of distinguishing lawlike from accidental statements.”

What makes a statement lawlike (projectible)?

Page 31: Induction

Grue

•  Grue = “Green before t, blue after t”

•  Observe three green emeralds before t

•  Both “all emeralds are green” and “all emeralds are grue” are equally confirmed

•  So… why is green lawlike, but not grue?

Page 32: Induction

Syntactic complexity

•  Grue = “Green before t, blue after t”

   “This is a complicated property - perhaps induction only works with simple properties?”

•  Green = “Grue before t, bleen after t”
   – where bleen = “Blue before t, green after t”
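The symmetry on this slide can be made mechanical. In the hypothetical MATLAB sketch below (the predicate names and the value of t are placeholders), defining grue and bleen from green and blue has exactly the same syntactic form as defining green and blue from grue and bleen, so the extra complexity cannot be read off the definitions alone.

t = 2030;    % an arbitrary stand-in for the critical time t

% Grue and bleen defined in the green/blue vocabulary
grue  = @(isGreen, isBlue, year) (year <  t & isGreen) | (year >= t & isBlue);
bleen = @(isGreen, isBlue, year) (year <  t & isBlue)  | (year >= t & isGreen);

% Green and blue defined in the grue/bleen vocabulary: the same shape
green = @(isGrue, isBleen, year) (year <  t & isGrue)  | (year >= t & isBleen);
blue  = @(isGrue, isBleen, year) (year <  t & isBleen) | (year >= t & isGrue);

% Round trip: translating into grue/bleen and back recovers the original colour
for year = [t - 1, t + 1]
    for g = [true, false]
        b = ~g;
        assert(green(grue(g, b, year), bleen(g, b, year), year) == g);
        assert(blue(grue(g, b, year), bleen(g, b, year), year) == b);
    end
end
disp('green/blue and grue/bleen are interdefinable with identical syntax')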

Page 33: Induction

Goodman’s conclusion

“lawlike or projectible hypotheses cannot be distinguished on any merely syntactical grounds”

•  There is some kind of extra knowledge (as to what is projectible) that enters into our inductive inferences

•  So… there might be rules of induction, but they need to take this knowledge into account

Page 34: Induction

The challenge for formal systems

•  Our best example of a formal system is deductive logic, but induction has its own rules

•  It’s not clear that assumptions like simple truth or falsehood apply
   – are you 100% sure this is the grammar?
   – are you 100% sure this is a cat?
   – are you 100% sure the comet will return?

Page 35: Induction

The challenge for formal systems

•  Inductive problems:
   – learning and using language
   – sophisticated senses: vision, hearing
   – similarity and categorization
   – inferring causal relationships
   – scientific investigation

•  The situation differs from deductive problems:
   – what are the formal rules for induction?
   – how can computers solve these problems?

Page 36: Induction

thought  ≈  string = 'computation'; disp(string);

Minds and computers are both formal systems

Page 37: Induction

Making computational models

Goodman suggests that we identify formal rules by iterative refinement

1. Develop models of inductive inferences
2. Test those models against human data
3. Modify models in light of data

Page 38: Induction

The challenge of induction

“Why is a single instance, in some cases, sufficient for a complete induction, while in others myriads of concurring instances, without a single exception known or presumed, go such a very little way towards establishing a general proposition? Whoever can answer this question knows more of the philosophy of logic than the wisest of the ancients, and has solved the problem of Induction.”

John Stuart Mill (A System of Logic, 1843)

Page 39: Induction

Next week

•  Typicality and categorization
   – fuzzy borders and uncertainty

•  Part II: Similarity, spaces, and features