Page 1: Introduction to AI - Third Lecture

Introduction to AI – 3rd Lecture
1960’s – The Golden Years of AI

Wouter Beek
[email protected]

22 September 2010

Page 2: Introduction to AI - Third Lecture

OVERVIEW OF THE 1960’S
Part I

Page 3: Introduction to AI - Third Lecture

1964-1966 ELIZA

• Joseph Weizenbaum @ MIT
• Natural language processing scripts.
• Script DOCTOR implementing a Rogerian psychotherapist (still in Emacs).
• Pattern-matching techniques.
• Even though intended as a parody, many users took this chatterbot seriously.
– The first successful Turing test?
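The pattern-matching idea behind ELIZA can be sketched in a few lines. This is an illustrative toy, not Weizenbaum's original DOCTOR script: each rule pairs a regular expression with a response template that echoes the captured text back, Rogerian style.

```python
import re

# Each rule: (pattern, response template). The templates reuse the
# text captured by the pattern, which is all the "understanding" there is.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Do you often feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def respond(sentence):
    """Return a canned Rogerian reply by shallow pattern matching."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no pattern matches

print(respond("I am unhappy"))  # -> Why do you say you are unhappy?
```

The sketch makes the slide's point concrete: there is no model of meaning at all, only surface patterns, which is why users taking the chatterbot seriously was so striking.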

Page 4: Introduction to AI - Third Lecture

1966-1972, Shakey

• Shakey: the first mobile robot able to reason about its own actions.
• Charles Rosen (and many others), 1966-1972, SRI.
• Natural language processing: interpretation of the goal.
• Computer vision: e.g. the Hough transform for feature extraction.
• Robotics: e.g. the visibility graph method for Euclidean shortest path finding.
• Other techniques: LISP, A* search.
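A* search, which the Shakey project popularized, can be sketched on a small grid. This is a minimal illustration of the algorithm, not SRI's original code; the Manhattan-distance heuristic is admissible on a 4-connected grid, so the returned path is shortest.

```python
import heapq

def a_star(start, goal, walls, width, height):
    """Shortest path on a 4-connected grid via A* search."""
    def h(p):  # heuristic: Manhattan distance to the goal (admissible here)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f = g + h, g, node, path)
    seen = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in walls:
                heapq.heappush(frontier, (g + 1 + h((nx, ny)), g + 1,
                                          (nx, ny), path + [(nx, ny)]))
    return None  # goal unreachable

path = a_star((0, 0), (2, 2), walls={(1, 0), (1, 1)}, width=3, height=3)
print(path)  # a shortest route around the wall, 5 cells long
```

The priority queue always expands the node with the lowest f = g + h, which is what distinguishes A* from plain breadth-first or greedy search.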

Page 5: Introduction to AI - Third Lecture
Page 6: Introduction to AI - Third Lecture

Late 1960’s, Micro-Worlds

• Marvin Minsky @ MIT
• Choose limited problems in idealized settings (as, arguably, is the case in physics).
• Blocks World: the world consists of a limited number of blocks of different color and shape, and all actions in the world involve arranging these blocks.
• E.g. SHRDLU, Terry Winograd, 1968-1970.

Page 7: Introduction to AI - Third Lecture

Two SHRDLU examples

Person: Pick up a big red block.
Computer: OK.
Person: Grasp the pyramid.
Computer: I don't understand which pyramid you mean.
Person (changing their mind): Find a block which is taller than the one you are holding and put it into the box.
Computer: By "it", I assume you mean the block which is taller than the one I am holding.

Person: What does the box contain?
Computer: The blue pyramid and the blue block.
Person: What is the pyramid supported by?
Computer: The box.
Person: How many blocks are not in the box?
Computer: Four of them.
Person: Is at least one of them narrower than the one which I told you to pick up?
Computer: Yes, the red cube.
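Questions like "What does the box contain?" are easy to answer once the micro-world is held in a simple symbolic state. The toy sketch below is ours, for illustration only; SHRDLU itself was far richer and written in Lisp, and the object names here are made up.

```python
# A toy blocks-world state: objects are (name, kind, colour) triples,
# and `inside` records which objects are currently in the box.
objects = [
    ("b1", "block", "blue"),
    ("p1", "pyramid", "blue"),
    ("b2", "block", "red"),
    ("b3", "block", "green"),
]
inside = {"b1", "p1"}  # names of objects currently in the box

def box_contents():
    """Answer 'What does the box contain?'"""
    return [name for name, kind, colour in objects if name in inside]

def blocks_not_in_box():
    """Answer 'How many blocks are not in the box?'"""
    return [name for name, kind, colour in objects
            if kind == "block" and name not in inside]

print(box_contents())       # ['b1', 'p1']
print(blocks_not_in_box())  # ['b2', 'b3']
```

The point of the micro-world approach is visible here: because the world is so small and fully known, every question reduces to a trivial lookup.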

Page 8: Introduction to AI - Third Lecture

1956-1974, Golden Years of AI

• 1957, Sputnik launch
– The U.S. fears that the Russians are technologically ahead: huge amounts of Russian scientific articles are translated.
– The U.S. National Research Council starts funding automatic translation research.
• DARPA funding
• Based on ambitious claims:
– “In from three to eight years we will have a machine with the general intelligence of an average human being.” [Marvin Minsky, 1970, Life Magazine]

Page 9: Introduction to AI - Third Lecture

1974, first AI winter

• Too ambitious / overly big claims:
– “The vodka is good, but the meat is rotten”: a machine translation of “The spirit is willing, but the flesh is weak” (allusion to Mark 14:38).
– 1966, negative report by an advisory committee; government funding of automatic translation cancelled.
• Limited knowledge of the outside world:
– Restricted to micro-worlds (e.g. Blocks World)
– Restricted to pattern matching (e.g. ELIZA)
• Inherent limitations of computability:
– Intractability, combinatorial explosion (to be discussed next week).
– Undecidability.

Page 10: Introduction to AI - Third Lecture

Inherent limitations: halting problem

• Decision problem: any yes-no question over an infinite set of inputs.
• Halting problem: given a description of a program and a finite input, decide whether the program finishes running or runs forever.
• No resource limitations on space (memory) or time (processing power).
• Example of a program that will finish:
– writef('Hello, world!').
• Example of a program that will run forever:
– lala(X) :- lala(X). with query lala(a)
• Rephrasing the problem: is the function h computable, where h(i, x) = 1 if program i halts on input x, and h(i, x) = 0 otherwise?

Page 11: Introduction to AI - Third Lecture

Halting problem

• Take any total computable function f(x, y).
• Define a partial function g: g(x) = 0 if f(x, x) = 0, and g(x) is undefined otherwise.
• If f is computable, then g is partially computable.
• Let e be the algorithm that computes g.
• Two possibilities:
– If g(e) = 0, then f(e, e) = 0 (definition of g), but then h(e, e) = 1 (since e halts on input e), so f ≠ h.
– If g(e) is undefined, then f(e, e) ≠ 0 (definition of g), but then h(e, e) = 0 (since e does not halt when run on e), so f ≠ h.
• So no total computable function f can be h, i.e. the halting problem is undecidable.
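The diagonal argument above can be dramatized in code. This is only a sketch: `halts_candidate` stands in for a hypothetical halting decider, and the construction shows that any such candidate is wrong about the "contrarian" program built from it.

```python
# If a halting decider existed, we could build a 'contrarian' program
# that does the opposite of whatever the decider predicts about it,
# so no decider can be correct. (halts_candidate is hypothetical.)
def make_contrarian(halts_candidate):
    def contrarian(prog):
        if halts_candidate(prog, prog):
            while True:       # decider says 'halts' -> loop forever
                pass
        return 0              # decider says 'loops' -> halt at once
    return contrarian

# A candidate decider that answers 'does not halt' for every program:
always_no = lambda prog, inp: False
c = make_contrarian(always_no)

# c(c) returns 0, i.e. c halts on itself, contradicting always_no.
# (A candidate answering True instead would make c(c) loop forever,
# contradicting it the other way; we do not execute that branch.)
result = c(c)
print(result)  # 0
```

The code mirrors the slide exactly: the contrarian plays the role of g, and feeding it to itself is the diagonal step g(e).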

Page 12: Introduction to AI - Third Lecture

Some undecidable problems

• Halting problem
• But also: first-order logic (FOL)
– Used for the Blocks World, the Logic Theorist, etc.
• More generally: any logical language including the equality predicate and at least one other binary predicate.
• Entailment in FOL is semidecidable:
– For every sentence S: if S is entailed, then there exists an algorithm that says so.
– For some sentences S: if S is not entailed, then there does not exist an algorithm that says so.

Page 13: Introduction to AI - Third Lecture

PHYSICAL SYMBOL SYSTEMS
Part II

Page 14: Introduction to AI - Third Lecture

Physical Symbol System (PSS): Ingredients

• Symbols: physical patterns.
• Expressions / symbol structures: (certain) sequences of symbols.
• Processes: functions mapping expressions to expressions.
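The three ingredients can be made concrete with a tiny sketch. The representations below are ours for illustration (Newell and Simon's formulation is abstract): symbols are tokens, expressions are sequences of symbols, and a process is an ordinary function from expressions to expressions.

```python
# Symbols as string tokens, expressions as tuples of symbols.
Symbol = str
Expression = tuple

def substitute(expr, old, new):
    """A process: map an expression to a new expression by
    replacing every occurrence of one symbol with another."""
    return tuple(new if symbol == old else symbol for symbol in expr)

# An expression built from three symbols:
e = ("on", "block-a", "block-b")
print(substitute(e, "block-a", "block-c"))  # ('on', 'block-c', 'block-b')
```

Substitution is a natural first example of a process because it manipulates expressions purely by their form, without reference to what the symbols mean.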

Page 15: Introduction to AI - Third Lecture

PSS: Designation & interpretation

• E is an expression, P is a process, S is a physical symbol system.
• We call all physical entities objects, e.g. O.
– Symbols are objects.
– Expressions are objects, and are collections of objects that adhere to certain structural constraints.
– Processes are objects!
– Machines are objects, and are collections of the foregoing objects.
• E designates O according to S:
I. Given E, S can affect O, or
II. given E, S can behave according to O.
• S interprets E:
– E designates P, as in (II).
• Machines are experimental setups for designating and interpreting symbols.

Page 16: Introduction to AI - Third Lecture

PSS Hypothesis

• “A Physical Symbol System has the necessary and sufficient means for general intelligent action.”

• Necessary: if something is intelligent, then it must be a PSS.

• Sufficient: if something is a PSS, then it must be intelligent.

• General intelligent action: the same scope of intelligence as we see in human action.
– Behavioral or functional interpretation of intelligence (as in Turing 1950).

Page 17: Introduction to AI - Third Lecture

Remember: Church-Turing Thesis

• Church-Turing Thesis: any computation that is realizable can be realized by a Universal Machine (or Turing Machine, or general-purpose computer).
• This thesis is plausible because the following three formalizations of computability were developed independently and yet turned out to be equivalent:
– Post productions (Emil Post)
– Recursive (lambda-)functions (Alonzo Church)
– Turing Machines (Alan Turing)

Page 18: Introduction to AI - Third Lecture

PSS: Conceptual History

1. Reasoning as formal symbol manipulation (Frege, Whitehead, Russell, Shannon)
– Reasoning/information/communication theory abstracts away from content.
– Think of Shannon’s notion of information entropy and of logical deduction.
2. Automating (1): computation is a physical process.
3. Stored program concept: programs are represented and operated on as data.
– Think of the tape in a Turing Machine.
– Interpretation in a PSS.
4. List processing: patterns that have referents.
– Designation in a PSS.
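Steps 3 and 4 above can be illustrated together with a tiny list-processing evaluator. This is our sketch, not a historical system: the program is just a nested list (data), and the evaluator is an ordinary function that operates on that list.

```python
# 'Programs are data': a program is a nested list of the form
# [operator, arg, arg, ...], and evaluating it is list processing.
def evaluate(expr):
    if isinstance(expr, (int, float)):
        return expr               # a number evaluates to itself
    op, *args = expr              # destructure the program-as-list
    values = [evaluate(a) for a in args]
    if op == "+":
        return sum(values)
    if op == "*":
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError(f"unknown operator: {op}")

program = ["+", 1, ["*", 2, 3]]   # a program represented as data
print(evaluate(program))          # 7
```

Because the program is an ordinary data structure, the same machinery that builds and inspects lists can also build and inspect programs, which is the stored-program idea in miniature.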

Page 19: Introduction to AI - Third Lecture

PSS: Evaluating the hypothesis

• Remember the PSS Hypothesis: “A Physical Symbol System has the necessary and sufficient means for general intelligent action.”
• This is not a theorem.
– The connection between PSS and intelligence cannot be proven.
• This is an empirical generalization.
– Whether it is true or false is found out by creating machines and observing their behavior.
• This makes AI an empirical science (e.g. like physics).
– AI can corroborate hypotheses, but cannot prove theorems.