Page 1

AI Developments from 1963 – 1970

Micro-Worlds

Page 2

Minsky’s students’ progress at MIT…

STUDENT program

Bill's father's uncle is twice as old as Bill's father. Two years from now Bill's father will be three times as old as Bill. The sum of their ages is 92. Find Bill's age.

Could not solve standard textbook problems as written
But could solve most of them if they were rephrased for it
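The kind of algebra STUDENT performed, once the English had been translated into equations, can be sketched as follows. This is only a toy brute-force solver (Bobrow's actual program built and solved linear equations symbolically); the function name is invented:

```python
# A minimal sketch (not Bobrow's actual STUDENT code) of solving the
# age problem once it has been translated into equations:
#   uncle = 2 * father
#   father + 2 == 3 * (bill + 2)
#   uncle + father + bill == 92
def solve_age_problem():
    for bill in range(1, 93):
        father = 3 * (bill + 2) - 2          # from the second equation
        uncle = 2 * father                   # from the first equation
        if uncle + father + bill == 92:      # "the sum of their ages is 92"
            return bill
    return None

print(solve_age_problem())  # -> 8 (father is 28, the uncle 56)
```

Note that even this step already embodies a human interpretation of the text, e.g. reading "their ages" as the ages of all three people.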

Page 3

Minsky’s students’ progress at MIT… SIR program

Every boy is a person.
I UNDERSTAND.
A finger is part of a hand.
I UNDERSTAND.
Each person has two hands.
THE ABOVE SENTENCE IS AMBIGUOUS—PLEASE REPHRASE IT.
There are two hands on each person.
I UNDERSTAND.

Page 4

Minsky’s students’ progress at MIT…

How many fingers does John have?
THE ABOVE SENTENCE IS AMBIGUOUS—BUT I ASSUME “HAS” MEANS “HAS AS PARTS”.
I DON’T KNOW WHETHER FINGER IS PART OF JOHN.

John is a boy.
I UNDERSTAND.
How many fingers does John have?
HOW MANY FINGERS PER HAND?
Every hand has 5 fingers.
I UNDERSTAND.
How many fingers does John have?
THE ANSWER IS 10.
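The reasoning behind "THE ANSWER IS 10" can be sketched like this. It is a toy reconstruction, not SIR's actual implementation: counts are multiplied along the HAS-AS-PARTS chain, and ISA links let facts about "person" apply to John:

```python
# Toy sketch (assumed representation, not SIR's real one) of part-of
# reasoning: John ISA boy ISA person; a person HAS 2 hands; a hand HAS
# 5 fingers; so John has 2 * 5 = 10 fingers.
isa = {"john": "boy", "boy": "person"}
has_parts = {"person": ("hand", 2), "hand": ("finger", 5)}

def count_parts(thing, part):
    # Follow ISA links until a HAS-AS-PARTS fact applies.
    while thing not in has_parts and thing in isa:
        thing = isa[thing]
    if thing not in has_parts:
        return None                      # "I DON'T KNOW"
    subpart, n = has_parts[thing]
    if subpart == part:
        return n
    deeper = count_parts(subpart, part)  # multiply counts down the chain
    return None if deeper is None else n * deeper

print(count_parts("john", "finger"))  # -> 10
```

Before "Every hand has 5 fingers" is asserted, the chain breaks at "hand" and the sketch returns None, mirroring SIR's "HOW MANY FINGERS PER HAND?" follow-up.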

Page 5

Minsky’s students’ progress at MIT… SIR program

Worked in a very constrained domain
E.g. “x times y” vs. “the number of times I went to the movies”

Page 6

Minsky’s students’ progress at MIT… Computer vision problem

No one knew how hard it was
Minsky hired Gerry Sussman, a first-year undergraduate student
Told him to build a computer vision system as a summer project!

Eventually hired a team to work on vision
Invented the “Blocks Micro-World”: regular square and rectangular blocks and pyramids
Successful in recognising objects based on edges

Robotics
Built a robot hand to grab and manipulate blocks
Took a lot of work; they felt it was similar in complexity to NASA’s Apollo moon-landing program

Page 7

SRI’s Shakey (Stanford Research Institute)
Moved around specially built rooms
Moved boxes around
Used the STRIPS planning language, e.g. for a pick-up action:

Preconditions: ONTABLE(X), HANDEMPTY, CLEAR(X)
Delete list: ONTABLE(X), HANDEMPTY
Add list: HOLDING(X)
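Applying a STRIPS operator like the one above can be sketched in a few lines (the function and predicate names here are illustrative, not from the original STRIPS system): check that the preconditions hold in the current state, then remove the delete list and add the add list:

```python
# Minimal sketch of STRIPS operator application: a state is a set of
# ground facts; an operator fires only if its preconditions all hold.
def apply(state, preconds, delete, add):
    if not preconds <= state:            # preconditions not satisfied
        return None
    return (state - delete) | add        # remove delete list, add add list

state = {"ONTABLE(A)", "HANDEMPTY", "CLEAR(A)"}
new_state = apply(state,
                  preconds={"ONTABLE(A)", "HANDEMPTY", "CLEAR(A)"},
                  delete={"ONTABLE(A)", "HANDEMPTY"},
                  add={"HOLDING(A)"})
print(sorted(new_state))  # -> ['CLEAR(A)', 'HOLDING(A)']
```

A planner then searches for a sequence of such operator applications that turns the initial state into one satisfying the goal.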

Life Magazine called it “the first electronic person” that “could travel around the moon for months at a time without a single beep of direction from the earth”

Page 8

MIT’s SHRDLU

Pick up a red block.
OK.
Find a block which is taller than the one you are holding and put it into the box.
By "it," I assume that you mean the block which is taller than the one I am holding. OK.
What does the box contain?
The blue pyramid and the blue box.
What is the pyramid supported by?
The box.
How many blocks are not in the box?
Four of them.
Is at least one of them narrower than the one which I told you to pick up?
Yes, the red cube.
Is it supported?
Yes, by the table.


Page 10

Herbert Simon, 1957

“A digital computer will be the world's chess champion unless the rules bar it from competition;
A digital computer will discover and prove an important new mathematical theorem;
A digital computer will write music that will be accepted by critics as possessing considerable aesthetic value;
Most theories in psychology will take the form of computer programs, or of qualitative statements about the characteristics of computer programs.”

Page 11

Herbert Simon, 1958

“… only moderate extrapolation is required from the capacities of current programs already in existence to achieve the additional problem-solving power needed for such simulation.”

Page 12

Herbert Simon, 1965

“Machines will be capable, within twenty years, of doing any work that a man can do.”

Page 13

Herb Simon’s Predictions…

Prediction | Simon said | Reality
Computer chess champ | 1967 | Probably 1997–2007; still close contests
Discover important maths theorem | 1967 | Not really, but HR is probably pretty close (by Simon Colton, c. 2000)
Compose music | 1967 | Possibly in the 80’s (depends on how aesthetic it needs to be)
Most psychology theories are programs | 1967 | Probably not yet

Page 14

Summarising AI Developments up to 1970

Success in micro-worlds
The domains used were constrained
Also, human intelligence was used to abstract the problems
  Monkey – chair – banana
  Textbook problems for the STUDENT program

Seemed a reasonable way to do science… like physicists’ carefully controlled experiments

They thought it would be possible to expand the domains
The SHRDLU author thought the approach could easily be extended
… this proved not to be the case
Discovered that a vast amount of commonsense knowledge is needed for the most basic tasks in the real world
Note again the first law
E.g. STUDENT, or the robot arm to move blocks

Page 15

Patrick H. Winston, 1976

“Artificial intelligence has done well in tightly constrained domains. Winograd [SHRDLU], for example, astonished everyone with the expertise of his blocks-world natural language.
Extending this kind of ability to larger worlds has not proved straightforward, however…
The time has come to treat the problems involved as central issues.”

Page 16

What about all that military money?

The US National Research Council and the military had put millions into Machine Translation
Wanted to translate many Russian texts (Cold War)
Thought it should be easy – like how computers cracked WWII codes

1966: “Any task that requires real understanding of natural language is too difficult for a computer” – Bar-Hillel

1966: Funding ended
This didn’t affect Stanford, MIT, CMU – they weren’t doing translation

Page 17

What about all that military money? Meanwhile at Stanford, MIT, CMU…
DARPA started looking for results…
Had funded a speech understanding system at Carnegie Mellon
The system worked by constraining the grammar, with a 1000-word vocabulary

Academics were pleased with their progress… but it was not very useful
The user had to keep guessing what way to say something for it to be accepted
More troublesome for military personnel to use than a menu system

MIT’s SHRDLU did not extend to larger domains
Hit the commonsense knowledge problem

SRI’s Shakey was not fooling DARPA people
Had a high probability of failure on any action
Could not reliably do a sequence of actions

1974: Stanford, MIT, CMU all got their funding cut to almost nothing

Page 18

Example of the Commonsense Knowledge Problem
Tried to build a system to understand children’s stories:

Fred was going to the store. Today was Jack’s birthday and Fred was going to get a present.

Can a system answer questions on the story?
  Why is Fred going to the store?
  Who is the present for?
  Why is Fred buying a present?

It cannot answer the questions without extra (commonsense) knowledge:
  Objects got at stores are usually bought.
  Presents are often bought at stores.
  If a person is having a birthday, he is likely to get presents.
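How commonsense facts bridge the gap between the story and the questions can be sketched with a tiny forward-chaining rule engine. The rules and fact names below are invented for illustration; no actual story-understanding system is being reproduced:

```python
# Toy sketch: facts extracted from the story, plus commonsense rules
# that fire whenever all of their premises are already known.
story = {"going(fred, store)", "birthday(jack)", "will_get(fred, present)"}

rules = [
    # (premises, conclusion)
    ({"birthday(jack)"}, "likely_gets_presents(jack)"),
    ({"will_get(fred, present)", "going(fred, store)"}, "buying(fred, present)"),
    ({"buying(fred, present)", "likely_gets_presents(jack)"}, "present_for(jack)"),
]

def forward_chain(facts, rules):
    # Keep firing rules until no new conclusion can be added.
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = forward_chain(set(story), rules)
print("present_for(jack)" in facts)  # -> True: the present is for Jack
```

The point of the slide survives in the sketch: without the three commonsense rules, nothing connects "going to the store" to "who is the present for".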

Page 19

Meanwhile in the UK…
1973: the Lighthill report, commissioned by the Research Council…

“in no part of the field have discoveries made so far produced the major impact that was then promised”

Category A: advanced automation or applications
  Approves of it in principle
Category C: studies of the central nervous system; computer modelling for neurophysiology and psychology
  Approves of it in principle
Category B: “building robots” and the “bridge” between A & C
  Does not approve

No place for exploring intelligent information processing for its own sake
No money for AI

Page 20

Chatterbots: ELIZA
Criticism of AI from within…
1966: written by Weizenbaum (MIT)
Simple parsing + substitution of key words into canned phrases
  E.g. for family terms: “Tell me more about your …”
Chose a psychiatrist because he could get away with a lot
  “Tell me more about streetlights”

Weizenbaum wanted to show that an AI program could easily appear intelligent
… but there was not much going on inside

Weizenbaum said AI takes the rational-logic view because “the light is better there”
He believed that there was more to human intelligence
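The "simple parsing + substitution" trick can be sketched in a few lines. These two rules are invented for illustration and are far cruder than Weizenbaum's actual ELIZA script, but they show the mechanism: match a keyword, splice the matched text into a canned phrase, and fall back to a stock reply:

```python
import re

# Toy ELIZA-style sketch (invented rules, not Weizenbaum's script).
rules = [
    (re.compile(r"\bmy (mother|father|sister|brother)\b", re.I),
     "Tell me more about your {0}."),
    (re.compile(r"\bi feel (.+)", re.I),
     "Why do you feel {0}?"),
]

def respond(sentence):
    for pattern, template in rules:
        m = pattern.search(sentence)
        if m:
            # Substitute the matched key words into the canned phrase.
            return template.format(m.group(1).rstrip("."))
    return "Please go on."  # default canned reply when nothing matches

print(respond("I had an argument with my mother"))
# -> Tell me more about your mother.
```

The same blind substitution is what produces absurdities like "Tell me more about streetlights": the program has no idea what a mother, or a streetlight, is.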

Page 21

Minsky and Papert’s “Perceptrons”
More criticism of AI from within…
There had been great excitement about neural networks
1958: Rosenblatt had an article in Science magazine, titled “Human Brains Replaced?”
Rosenblatt said the perceptron could “tell the difference between a dog and a cat”
1969: Minsky and Papert published the book “Perceptrons”
Mathematically proved that there were some simple things that perceptrons could never learn
  For example: XOR
Killed off neural-network work for 10 years (funding stopped)
Rosenblatt died in a boating accident in 1971
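The XOR limitation is easy to demonstrate. The sketch below uses the standard perceptron learning rule (the code itself is mine, not Minsky and Papert's): a single perceptron learns AND without trouble, but can never get all four XOR cases right, because no straight line separates XOR's positive and negative examples:

```python
# Train a single perceptron (threshold unit) with the classic learning rule.
def train_perceptron(samples, epochs=25):
    w0 = w1 = b = 0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out           # perceptron learning rule
            w0 += err * x0
            w1 += err * x1
            b += err
    return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = [(x, x[0] & x[1]) for x in inputs]
XOR = [(x, x[0] ^ x[1]) for x in inputs]

f_and = train_perceptron(AND)   # linearly separable: converges
f_xor = train_perceptron(XOR)   # not linearly separable: cannot converge
print([f_and(*x) for x in inputs])  # -> [0, 0, 0, 1]: AND learned
print([f_xor(*x) for x in inputs])  # never equals [0, 1, 1, 0]
```

Adding a hidden layer fixes this, but training multi-layer networks had to wait for backpropagation, which is part of why the field stalled for a decade.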

Page 22

Expert Systems (1970s)
Researchers realised “knowledge” is the key…
Computer memory was starting to get larger

DENDRAL was the first expert system
  To help organic chemists identify unknown organic molecules
  Embodied expert chemists’ knowledge as IF-THEN rules
  Searched a tree of possibilities
  The system was successful

MYCIN system
  Diagnosed infectious blood diseases
  Recommended antibiotics and dosage
  Performed well: 65% correct – better than non-specialist doctors
  Not used (ethical concerns)
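The flavour of MYCIN-style IF-THEN rules can be sketched as below. The rule contents and certainty values are invented for illustration (MYCIN's real knowledge base held hundreds of rules and combined certainty factors in more elaborate ways):

```python
# Toy sketch of expert-system IF-THEN rules with certainty factors.
rules = [
    # (premises that must all hold, conclusion, certainty factor)
    ({"gram_negative", "rod_shaped", "anaerobic"}, "bacteroides", 0.6),
    ({"gram_positive", "clusters"}, "staphylococcus", 0.7),
]

def diagnose(findings):
    # Fire every rule whose premises all hold; report conclusion + CF.
    return {concl: cf for prem, concl, cf in rules if prem <= findings}

print(diagnose({"gram_negative", "rod_shaped", "anaerobic"}))
# -> {'bacteroides': 0.6}
```

The key idea the slide describes survives even in this sketch: the chemist's or physician's expertise lives entirely in the rule set, while the engine that applies the rules is trivially small.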

Page 23: print version

Commercial Expert Systems 1978 XCON by John McDermott of CMU

Assist in ordering DEC's VAX computer systems Automatically selecting components based on customer requirements

Went into use in 1980 in DEC's plant By 1986, had processed 80,000 orders

Achieved 95-98% accuracy Saved DEC $25M a year No need to give customers free components when technicians made

errors Speeded assembly process Increased customer satisfaction

Page 24

Micro-worlds had problems scaling up to the real world and commonsense knowledge

Expert systems didn’t have that problem…

Expertise is a sort of micro-world

Page 25

More Developments in the 1970s
The emergence of Neats and Scruffies
Roger Schank introduced scripts to help story understanding
His programs had to act like humans… sometimes making illogical assumptions
He called his approach “scruffy” to distinguish it from McCarthy’s

Neats require everything to be logical
  Consistent, provably correct, clear cut, guaranteed stability (learning)

Scruffies want something that works, and don’t care how!
  Will use any technique, even some logic
  Procedural, without mathematical proof

Aaron Sloman’s view: scruffiness is inevitable
http://www.cs.bham.ac.uk/research/projects/cogaff/sloman.scruffy.ai.pdf

Page 26

More Developments in the 1970s
The Neats vs. The Scruffies

Neats complain about Scruffies:
  programs are complex and ill-structured
  not possible to explain or predict their behaviour
  not possible to prove that they do what they are intended to do

Scruffies complain about Neats:
  Neats’ programs only work on toy domains
  they get bogged down with real-world complexity
  they use fancy mathematical techniques just to show how smart they are
  … but no increase in understanding of any interesting phenomenon

Minsky also moved away from logic
Came up with “Frames” for knowledge
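The core of the frames idea can be sketched with a small data structure. The frame and slot names here are invented examples, not Minsky's: a frame is a bundle of slots with default values, and a slot not filled in a specific frame is inherited from a more general one:

```python
# Toy sketch of Minsky-style frames: slots with defaults, inherited
# through "isa" links from more general frames.
frames = {
    "room":    {"isa": None, "walls": 4, "has_door": True},
    "kitchen": {"isa": "room", "has_stove": True},
}

def get_slot(frame, slot):
    # Look in the frame itself, then follow "isa" links for defaults.
    while frame is not None:
        if slot in frames[frame] and slot != "isa":
            return frames[frame][slot]
        frame = frames[frame]["isa"]
    return None

print(get_slot("kitchen", "walls"))  # -> 4, inherited from "room"
```

Defaults are the scruffy part: a kitchen is assumed to have four walls unless something explicitly says otherwise, which is useful but not logically sound.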

Meanwhile in the Neat camp…
1972: Prolog developed (Edinburgh and Marseille)
Prolog gained popularity outside the US; LISP remained dominant in the US

Page 27

Developments in the 1980s
Great commercial success for expert systems
Neural networks back on the scene
  1986: “Parallel Distributed Processing” by Rumelhart and psychologist McClelland
  Neural networks became commercially successful in the 1990s…
    optical character recognition
    speech recognition

Another “school of thought” division: Symbolic vs. Sub-symbolic (aka non-symbolic)

Symbolic means
  Typically logic; symbols are high-level concepts (monkey-chair-banana)
  Also called GOFAI (Good Old-Fashioned AI)

Sub-symbolic means (still has symbols, but lower-level ones)
  Neural networks (aka connectionist)
  Fuzzy systems
  Evolutionary computation

Page 28

Developments in the 1980s
1984: Doug Lenat – Cyc
  Attacked the commonsense knowledge problem directly
  A massive knowledge base of all the facts that the average person knows
  Expected to take only two person-centuries
  Nowhere near success in 2007

Page 29

Developments in the 1980s
Autonomous Land Vehicle (ALV)
  Military project started in 1983, with $25 million funding per year
  A robot wanderer, to travel over rugged terrain
  By 1987 it was clear that the goals were far away
    Longest-ever off-road outing: 600 yards at 2 mph
  1989: the Pentagon ended the project (although 20 years later the technology is capable)

In general, in the 1980s:
  Funding returned… but the boom-bust cycle repeated
  AI did not meet expectations again
  Mini-winter 1987–1993

1991: First Gulf War
  AI made a substantial contribution… but not exactly where expected
  E.g. in scheduling and arranging supplies

Page 30

More Modern Trends…
1987: Rodney Brooks, “Intelligence Without Representation”
The “Embodied Intelligence” idea
  Belief that intelligence must have a real body interacting with the real world
  Needs to perceive, move and survive in the world
Rejects work on simulations as oversimplifying the problem
Often tries to build animal-level intelligences first

Page 31

More Modern Trends…
Developmental Robotics (also called Epigenetic Robotics)
Follows Turing’s idea:

“Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education, one would obtain the adult brain.”

http://www.youtube.com/watch?v=FPw9z2aaTa8

Social Interaction and Imitation approaches
  Believe that human infants learn a lot by imitating
  Human infants also learn what is important:
    Mother points to interesting objects
    Infant looks to mother’s face to see if she approves
    Infant responds to emotion in mother’s voice

Page 32

The Strong / Weak Division

Strong AI
  Trying to build a system that is equal to or better than a human on general tasks
  e.g. Weizenbaum’s view (MIT): the goal of strong AI is

“nothing less than to build a machine on the model of man, a robot that is to have its childhood, to learn language as a child does, to gain its knowledge of the world by sensing the world through its own organs, and ultimately to contemplate the whole domain of human thought.”

Weak AI (also called “applied AI”)
  Building useful applications, usually restricted to a particular domain and specific tasks
  e.g. an autonomous vehicle, or a speech recognition system

Most people work on weak AI
Many people wonder why more people don’t work on strong AI
Perhaps the history of AI answers this question…
  Because people working on strong AI don’t get anywhere!

Page 33

Alan Perlis

“A year spent in artificial intelligence is enough to make one believe in God.”

Page 34

Up to present day… what happened?
AI fragmented into sub-disciplines…

(From the AAAI 2008 conference)

Agents (includes Multi-Agent Systems)
Cognitive modeling and human interaction
Commonsense reasoning
Constraint satisfaction
Evolutionary computation
Game playing and interactive entertainment
Information integration and extraction
Knowledge acquisition and ontologies
Knowledge representation and reasoning
Machine learning and data mining

Page 35

Up to present day… what happened? (continued)

Machine learning and data mining
Model-based systems
Natural language processing
Planning and scheduling
Probabilistic reasoning
Robotics
Search
Semantic web
Vision and perception

Page 36

Up to present day… what happened?
2006: Minsky complained:
  Central problems, like commonsense reasoning, were neglected
  The majority of researchers pursued commercial applications
  e.g. commercial applications of neural nets or genetic algorithms

“So the question is why we didn't get HAL in 2001? I think the answer is I believe we could have.”