
Introduction to AI
4th Lecture - The 1970s: The first AI winter

Wouter Beek, me@wouterbeek.com

29 September 2010

The cyclic process of pessimism

Cycle:
1. Pessimism in the research community
2. Pessimism in the press
3. Cutback in funding
4. End of serious research, goto (1)

Common in emerging technologies:
- Railway mania
- Dot-com bubble

1966, NRC/ALPAC report on MT

- NRC: National Research Council
- ALPAC: Automatic Language Processing Advisory Committee
- 1966: a very negative report, Language and Machines: Computers in Translation and Linguistics
- Compared the cost and effectiveness of human and computer translators
- Human translation is cheaper and more accurate than machine translation
- Funding cancelled

1969, Mansfield Amendment

- 1960s: DARPA funded research without a clear application
  - Director Licklider: “We fund people, not projects.”
- The 1969 amendment: DARPA should fund “mission-oriented research, rather than basic undirected research”
- American Study Group: AI research is unlikely to produce military applications in the foreseeable future
- Funding cancelled

1973, Lighthill report

- The UK Parliament asked Professor Lighthill to evaluate the state of AI research.
- He identified the following problems:
  - Combinatorial explosion
  - Intractability
  - Limited applicability, termed ‘toy problems’
- Research funding cancelled across Europe.

1974, SUR program

- SUR: Speech Understanding Research
- At Carnegie Mellon University
- A system that responds to a pilot’s voice commands
- It worked! If the words were spoken in a particular order…
- Funding cancelled by DARPA
- Incredibly useful research nonetheless: hidden Markov models for speech recognition (see the sketch below)
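The slides do not spell out how those models work, but the core idea fits in a few lines. Here is a toy Python sketch of the HMM forward algorithm; the states, symbols, and probabilities are invented for illustration and have nothing to do with the actual SUR systems.

```python
# Toy HMM forward algorithm: scores an observation sequence by summing over
# all hidden state paths in O(n * |states|^2) time instead of enumerating
# the exponentially many paths. All numbers below are made up.

states = ["vowel", "consonant"]                 # hypothetical phone classes
start = {"vowel": 0.5, "consonant": 0.5}        # P(initial state)
trans = {"vowel": {"vowel": 0.3, "consonant": 0.7},
         "consonant": {"vowel": 0.6, "consonant": 0.4}}
emit = {"vowel": {"a": 0.8, "t": 0.2},          # P(acoustic symbol | state)
        "consonant": {"a": 0.1, "t": 0.9}}

def forward(observations):
    """P(observations) under the HMM, via dynamic programming."""
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][obs]
                 for s in states}
    return sum(alpha.values())

print(forward(["t", "a", "t"]))  # likelihood of hearing the sequence t-a-t
```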

Unrealistic predictions

“Many researchers were caught up in a web of increasing exaggeration. Their initial promises to DARPA had been much too optimistic. Of course, what they delivered stopped considerably short of that. But they felt they couldn’t in their next proposal promise less than in the first one, so they promised more.”
[Hans Moravec]

“At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.”
[John Markoff]

Qualitative/quantitative distinction

- Researchers of the 1960s saw only quantitative hurdles.
  - “The criticism that a machine cannot have much diversity of behaviour is just a way of saying that it cannot have much storage capacity.” [Turing 1950]
- The real hurdles of AI are qualitative:
  - Commonsense knowledge problem
  - Intractability / combinatorial explosion
  - Moravec’s paradox
  - Qualification problem
  - Frame problem
  - Various objections (disability, informality, mathematical)
  - Philosophical underpinnings

Commonsense knowledge problem

- Disambiguation: in order to translate a sentence, a machine must have some idea of what the sentence is about.
  - “The spirit is willing but the flesh is weak.” came back as “The vodka is good but the meat is rotten.”
  - “Out of sight, out of mind.” came back as “Blind idiot!”
- Partially due to the micro-world approach of the 1960s.
  - This relates to what Lighthill called ‘toy problems’!

Intractability, combinatorial explosion

- Richard Karp showed that many important problems are NP-complete; the best known algorithms for them take exponential time (e.g. 2^n steps for inputs of size n).
- This means that only toy problems can be handled (see the sketch below).
- Intractable problem: a problem that cannot be solved fast enough for the solution to be useful.
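A minimal Python sketch of what 2^n growth means in practice: brute-force subset sum, one of Karp's NP-complete problems, tries every subset of the input.

```python
# Brute-force subset sum: is there a subset of `numbers` summing to `target`?
# There are 2^n subsets of n numbers, so each extra element doubles the
# worst-case work: n = 20 -> ~1e6 subsets, n = 40 -> ~1e12, n = 60 -> ~1e18.
from itertools import combinations

def subset_sum(numbers, target):
    """Return a subset of `numbers` summing to `target`, or None."""
    n = len(numbers)
    for size in range(n + 1):
        for subset in combinations(numbers, size):  # enumerates all 2^n subsets
            if sum(subset) == target:
                return subset
    return None

print(subset_sum([3, 9, 8, 4, 5, 7], 15))  # (8, 7)
```

Beyond toy sizes the enumeration is hopeless, which is exactly the sense in which only toy problems can be handled.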

Moravec’s paradox

- Optimism, because machines could do things that are difficult for humans:
  - Solve geometrical problems
  - Give logical proofs
  - Play a game of chess
- But things that are easy for humans are often difficult for machines:
  - Taking the garbage out
  - Recognizing that the man walking across the street is Joe
- Sensorimotor skills and instincts are (arguably) necessary for intelligent behavior, but pose enormous problems for machines.

Qualification problem

- In order to fully specify the conditions under which a rule applies, one has to provide an impractical number of qualifications.
- I want to cross a river in a rowboat (see the sketch below):
  - The rowboat must have two oars.
  - The two oars must have approximately the same length.
    - Further specify what ‘approximately’ means here!
  - The water must not be too cold.
  - The oars must not be made of cardboard.
  - The oars must not be too heavy (e.g. lead).
  - The rowboat must not have a hole in it.
  - Etc.
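To make the slide's point concrete, here is a small Python sketch of my own; the class, attributes, and thresholds are all invented. Every vague qualification forces an arbitrary cutoff, and the trail of further qualifications never ends.

```python
from dataclasses import dataclass

@dataclass
class Rowboat:
    oar_lengths: tuple = (2.0, 2.0)            # metres
    oar_materials: tuple = ("wood", "wood")
    oar_weights_kg: tuple = (3.0, 3.0)
    water_temp_celsius: float = 15.0
    has_hole: bool = False

def can_cross_river(boat: Rowboat) -> bool:
    """Each condition below hides an unanswered qualification of its own."""
    return (
        len(boat.oar_lengths) == 2
        and abs(boat.oar_lengths[0] - boat.oar_lengths[1]) < 0.1  # what is 'approximately the same'?
        and boat.water_temp_celsius > 5                           # how cold is 'too cold'?
        and all(m != "cardboard" for m in boat.oar_materials)
        and max(boat.oar_weights_kg) < 10                         # how heavy is 'too heavy'?
        and not boat.has_hole
        # ...and the oars are attached, the river has no waterfall,
        # the rower is awake, and so on, without end.
    )

print(can_cross_river(Rowboat()))  # True, until the next unforeseen qualification
```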

Frame problem (1/2)

- Fluent: a condition whose truth value changes over time.
- Fred the turkey and a gun:
  - t=1: load the gun
  - t=2: wait for a bit
  - t=3: shoot the gun, killing poor Fred
- alive(0), ~loaded(0), true ⇒ loaded(1), loaded(2) ⇒ ~alive(3)
- Frame problem: this is consistent with ~alive(1), but Fred didn’t die at t=1!
- The problem is that we only describe what changes, but not what stays the same.
- We need many additional propositions stating what does not change!

Frame problem (2/2)

- alive(0), ~loaded(0), true ⇒ loaded(1), loaded(2) ⇒ ~alive(3)
- Solution: minimize the changes, restricting them to those due to the actions.
- One possible minimization: alive(0,1,2), ~alive(3), ~loaded(0), loaded(1,2,3)
- Another possible minimization: alive(0,1,2,3), ~loaded(0,2,3), loaded(1)
- More advanced solutions are needed… (see the sketch below)
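The competing minimizations can be reproduced mechanically. The following Python sketch (my own illustration, not from the lecture) enumerates all truth assignments for the two fluents over t = 0..3, keeps those consistent with the story, and prints the ones with the fewest truth-value changes.

```python
# Enumerate models of the Fred-the-turkey story and minimize fluent changes.
from itertools import product

def n_changes(history):
    """How often a fluent flips its truth value over time."""
    return sum(a != b for a, b in zip(history, history[1:]))

minimal, best = [], None
for alive in product([True, False], repeat=4):
    for loaded in product([True, False], repeat=4):
        # Story: alive(0), ~loaded(0), true => loaded(1), loaded(2) => ~alive(3)
        if not (alive[0] and not loaded[0] and loaded[1]):
            continue
        if loaded[2] and alive[3]:
            continue
        cost = n_changes(alive) + n_changes(loaded)
        if best is None or cost < best:
            minimal, best = [(alive, loaded)], cost
        elif cost == best:
            minimal.append((alive, loaded))

for alive, loaded in minimal:
    print("alive:", alive, "loaded:", loaded)
```

Several minimal models survive: the intended one (Fred dies at t=3), the anomalous one (the gun mysteriously unloads and Fred lives), and even ones where Fred dies before the shot. Naive minimization does not single out the intended model, which is why the slide concludes that more advanced solutions are needed.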

Arguments from disability

- Arguments that take the form: “A machine can never do X.”
- Introduced by Turing (1950).
  - Remember his refutation of this objection in quantitative terms!
- Instances of tasks filling in for X:
  - Moravec’s paradox: simple/instinctive tasks fill in for X
  - Creative tasks fill in for X (art, humor, taste)
  - Emotional tasks fill in for X (empathy, love)

Arguments from informality

- Machines can only follow rules as supplied by humans, and that is not sufficient for intelligence.
- Originated with Ada Lovelace:
  - “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”

Mathematical objections

- Undecidability: remember last week’s Halting problem.
- Gödel’s first incompleteness theorem:
  - Any formal theory capable of expressing elementary arithmetic cannot be both consistent and complete.
  - In other words: in every interesting consistent formal theory there is a statement that is true but not provable in that theory.
- Gödel sentences: sentences that are true but unprovable.
- Gödel’s second incompleteness theorem:
  - For any formal theory T that includes basic arithmetic and the notion of formal provability: T proves a statement of its own consistency if and only if T is inconsistent.
  - A consequence of formulating the first incompleteness theorem within the theory itself.
- Both theorems are stated compactly below.
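For reference, a compact formal rendering of both theorems (standard notation, my own wording rather than the slides’), for a consistent, effectively axiomatized theory T containing basic arithmetic:

```latex
% First incompleteness theorem: there is a Gödel sentence G_T that T can
% neither prove nor refute.
\[
  T \nvdash G_T \qquad\text{and}\qquad T \nvdash \neg G_T
\]

% Second incompleteness theorem: with Con(T) formalizing ``T is consistent'',
% T proves its own consistency only if it is inconsistent.
\[
  T \vdash \mathrm{Con}(T) \iff T \text{ is inconsistent}
\]
```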

Philosophical positions

- Behaviorism: a mental state is attributed when certain external observations regarding an entity have been made.
- Functionalism: a mental state is determined by the causal connections between input and output.
  - A chip with the same connections as a brain is intelligent.
- Biological naturalism: the existence of a mental state crucially depends on it being present in a neurological substrate.
  - If a chip has the same connections as a brain, then this can at most be a simulation of intelligence.
  - The Turing Test does not suffice for establishing intelligence.

Weak AI vs. Strong AI

- Weak AI: machines simulate intelligence / behave as if they were intelligent.
  - Biological naturalism
- Strong AI: machines are intelligent.
  - Behaviorism, functionalism
