Page 1: AI Philosophy.

AI Philosophy: Computers and Their Limits

G51IAI – Introduction to AI
Andrew Parkes

http://www.cs.nott.ac.uk/~ajp/

Page 2: AI Philosophy.

Natural Questions

• Can a computer have only a limited intelligence? Or maybe none at all?

• Are there any limits to what computers can do?

• What is a “computer” anyway?

Page 3: AI Philosophy.

Turing Test

• The test is conducted with two people and a machine.
• One person plays the role of an interrogator and is in a separate room from the machine and the other person.
• The interrogator knows the person and the machine only as A and B; the interrogator does not know which is the person and which is the machine.
• Using a teletype, the interrogator can ask A and B any question he/she wishes. The aim of the interrogator is to determine which is the person and which is the machine.
• The aim of the machine is to fool the interrogator into thinking that it is a person.
• If the machine succeeds then we can conclude that machines can think.

Page 4: AI Philosophy.

Turing Test: Modern

• You’re on the internet and open a chat line (modern teletype) to two others, “A” and “B”

• Out of A and B:
– one is a person
– one is a machine trying to imitate a person (e.g. capable of discussing the X-factor?)

• If you can’t tell the difference then the machine must be intelligent

• Or at least act intelligent?

Page 5: AI Philosophy.

Turing Test

• We often “forget” the second person

• Informally, the test is whether the “machine” behaves like it is intelligent

• This is a test of behaviour
• It does not ask “does the machine really think?”

Page 6: AI Philosophy.

Turing Test Objections

• Is it too culturally specific?
– If B had never heard of “The X-Factor”, does that preclude intelligence?
– What if B only speaks Romanian?
– Think about this issue!

• It tests only behaviour not real intelligence?

Page 7: AI Philosophy.

Chinese Room

• The system comprises:
– a human, who understands only English
– a rule book, written in English
– two stacks of paper.

• One stack of paper is blank.
• The other has indecipherable symbols on it.

• In computing terms:
– the human is the CPU
– the rule book is the program
– the two stacks of paper are storage devices.

• The system is housed in a room that is totally sealed with the exception of a small opening.

Page 8: AI Philosophy.

Chinese Room: Process

• The human sits inside the room waiting for pieces of paper to be pushed through the opening.
• The pieces of paper have indecipherable symbols written upon them.
• The human has the task of matching the symbols from the “outside” with the rule book.
• Once the symbol has been found, the instructions in the rule book are followed. This:
– may involve writing new symbols on blank pieces of paper,
– or looking up symbols in the stack of supplied symbols.
• Eventually, the human will write some symbols onto one of the blank pieces of paper and pass these out through the opening.
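The process above is pure rule-driven symbol lookup, which can be sketched in a few lines. The particular rule book and symbols below are invented for illustration; Searle's thought experiment specifies no particular rules:

```python
# Minimal sketch of the Chinese Room as a rule-lookup system.
# The "rule book" maps incoming symbol strings to outgoing ones;
# the operator needs no understanding of what the symbols mean.
# (These rules and symbols are invented purely for illustration.)

RULE_BOOK = {
    "你好吗": "我很好",   # the operator never learns what
    "你是谁": "我是人",   # these symbols mean
}

def chinese_room(slip: str) -> str:
    """Mechanically match the slip against the rule book and return the reply."""
    # Unmatched symbols are simply copied onto a blank slip unchanged.
    return RULE_BOOK.get(slip, slip)

print(chinese_room("你好吗"))  # the room replies without understanding
```

The point of the sketch is that `chinese_room` produces sensible-looking replies while containing nothing that could be called understanding.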

Page 9: AI Philosophy.

Chinese Room: Summary

• A simple rule-processing system, but one in which the “rule processor” happens to be intelligent yet has no understanding of the rules

• The set of rules might be very large

• But this is philosophy, so we ignore the practical issues

Page 10: AI Philosophy.

Searle’s Claim

• We have a system that is capable of passing the Turing Test and is therefore intelligent according to Turing.

• But the system does not understand Chinese as it just comprises a rule book and stacks of paper which do not understand Chinese.

• Therefore, running the right program does not necessarily generate understanding.

Page 11: AI Philosophy.

Replies to Searle

• The Systems Reply • The Robot Reply• The Brain Simulator Reply

Page 12: AI Philosophy.

Blame the System!

• The Systems Reply states that the system as a whole understands.

• Searle responds that the system could be internalised into the person’s brain and yet the person would still claim not to understand Chinese

Page 13: AI Philosophy.

“Make Data”?

• The Robot Reply argues we could internalise everything inside a robot (android) so that it appears like a human.

• Searle argues that nothing has been achieved by adding motors and perceptual capabilities.

Page 14: AI Philosophy.

Brain-in-a-Vat

• The Brain Simulator Reply argues we could write a program that simulates the brain (neurons firing etc.)

• Searle argues we could emulate the brain using a series of water pipes and valves. Can we now argue that the water pipes understand? He claims not.

Page 15: AI Philosophy.

AI Terminology

• “Weak AI”
– machines can possibly act intelligently

• “Strong AI”
– machines can actually think intelligently

• AIMA: “Most AI researchers take the weak hypothesis for granted, and don’t care about the strong AI hypothesis” (Chap. 26. p. 947)

• What is your opinion?

Page 16: AI Philosophy.

What is a computer?

• In discussions of “Can a computer be intelligent?”

• Do we need to specify the “type” of the computer?
– Does the architecture matter?

• Matters in practice: need a fast machine, lots of memory, etc

• But does it matter “in theory”?

Page 17: AI Philosophy.

Turing Machine

• A very simple computing device
– storage: a tape on which one can read/write symbols from a list
– processing: a “finite state automaton”

Page 18: AI Philosophy.

Turing Machine: Storage

• Storage: a tape on which one can read/write symbols from some fixed alphabet
– the tape is of unbounded length
• you never run out of tape
– one has the options to
• move to the next “cell” of the tape
• read/write a symbol

Page 19: AI Philosophy.

Turing Machine: Processing

• “finite state automaton”
– the processor has a fixed, finite number of internal states
– there are “transition rules” that take the current state and the current symbol from the tape and tell it
• what to write
• whether to move the head left or right
• which state to go to next
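The pieces above map directly into code: an unbounded tape (here a dict indexed by position), a head, and a finite table of transition rules. The particular machine below, which just inverts a binary string, is an invented example for illustration:

```python
# A minimal Turing machine simulator: unbounded tape, a head position,
# and a finite table of transition rules.
# Rules map (state, symbol) -> (symbol to write, head move, next state).
# The example machine inverts every bit, then halts on reaching a blank.

BLANK = " "

RULES = {
    ("scan", "0"):   ("1", +1, "scan"),
    ("scan", "1"):   ("0", +1, "scan"),
    ("scan", BLANK): (BLANK, +1, "halt"),
}

def run_tm(rules, tape_str, start="scan", halt="halt"):
    tape = dict(enumerate(tape_str))   # sparse dict = unbounded tape
    head, state = 0, start
    while state != halt:
        symbol = tape.get(head, BLANK)            # blank off the written region
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    # Read back the used portion of the tape.
    return "".join(tape[i] for i in sorted(tape)).strip()

print(run_tm(RULES, "0110"))  # -> 1001
```

Note how the dict-based tape captures "you never run out of tape": any unwritten cell reads back as blank.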

Page 20: AI Philosophy.

Turing Machine Equivalences

• The set of tape symbols does not matter!

• If you have a Turing machine that uses one alphabet, then you can convert it to use another alphabet by changing the FSA properly

• Might as well just use binary 0,1 for the tape alphabet

Page 21: AI Philosophy.

Universal Turing Machine

• This is a fixed machine that can simulate any other Turing machine
– the “program” for the other TM is written on the tape
– the UTM then reads the program and executes it

• C.f. on any computer we can write a “DOS emulator” and so run a program from a “.exe” file

Page 22: AI Philosophy.

Church-Turing Hypothesis

“All methods of computing can be performed on a Universal Turing Machine (UTM)”

• Many “computers” are equivalent to a UTM and hence all equivalent to each other

• Based on the observation that
– when someone comes up with a new method of computing,
– it has always turned out that a UTM can simulate it,
– and so it is no more powerful than a UTM

Page 23: AI Philosophy.

Church-Turing Hypothesis

• If you run an algorithm on one computer then you can get it to work on any other
– as long as they have enough time and space, computers can all emulate each other
– an operating system of 2070 will still be able to run a 1980s .exe file

• Implies that abstract philosophical discussions of AI can ignore the actual hardware?– or maybe not? (see the Penrose argument later!)

Page 24: AI Philosophy.

Does a Computer have any known limits?

• We would like to answer:
“Does a computer have any limit on intelligence?”

• It is simpler to answer:
“Does a computer have any limits on what it can compute?”
– e.g. ask whether certain classes of program can exist in principle
– the best-known example uses program termination:

Page 25: AI Philosophy.

Program Termination

• Prog 1: i=2 ; while ( i >= 0 ) { i++; }

• Prog 2: i=2 ; while ( i <= 10 ) { i++; }

• Prog 1 never halts(?)
• Prog 2 halts

Page 26: AI Philosophy.

Program Termination

• Determining program termination

• Decide whether or not a program – with some given input – will eventually stop
– would seem to need intelligence?
– would exhibit intelligence?

Page 27: AI Philosophy.

Halting Problem

• SPECIFICATION: HALT-CHECKER
• INPUT:
1) the code for a program P
2) an input I
• OUTPUT: determine whether or not P eventually halts when given input I
– return true if “P halts on I”, false if it never halts

• HALT-CHECKER itself must always halt eventually
– i.e. it must always be able to answer true/false to “P halts on I”

Page 28: AI Philosophy.

Halting Problem

• SPECIFICATION: HALT-CHECKER
• INPUT: the code for a program P, and an input I
• OUTPUT: true if “P halts on I”, false otherwise

• HALT-CHECKER could merely “run” P on I?
• If “P halts on I” then eventually it will return true; but what if “P loops on I”?
• BUT it cannot wait forever before saying that P fails to halt!
• Maybe we can detect all the loop states?
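The “merely run it” idea can be sketched with a step budget (the budget and the step-counting scheme here are invented for illustration). Running can confirm halting, but exceeding the budget proves nothing: the program might halt on step budget + 1.

```python
# Sketch of the naive "just run P on I" approach, with a step budget.
# It can answer "halts" definitively, but it can never answer
# "loops forever": past the budget, all we can say is "don't know".
# (The generator-based step counting is an invented convenience.)

def run_with_budget(program, inp, budget):
    """program is a generator function; each yield counts as one step."""
    steps = program(inp)
    for _ in range(budget):
        try:
            next(steps)
        except StopIteration:
            return "halts"
    return "don't know"       # cannot conclude "never halts"!

def prog1(i):                 # i=2; while (i >= 0) { i++; }  -- never halts
    while i >= 0:
        i += 1
        yield

def prog2(i):                 # i=2; while (i <= 10) { i++; } -- halts
    while i <= 10:
        i += 1
        yield

print(run_with_budget(prog2, 2, 1000))   # halts
print(run_with_budget(prog1, 2, 1000))   # don't know
```

These are the two programs from the Program Termination slide; Prog 1 genuinely never halts here because Python integers do not overflow.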

Page 29: AI Philosophy.

Halting Problem

• TURING RESULT: HALT-CHECKER (HC) cannot be programmed on a standard computer (Turing Machine)
– it is “noncomputable”

• Proof: Create a program by “feeding HALT-CHECKER to itself” and deriving a contradiction (you do not need to know the proof)

• IMPACT: A solid mathematical result that a certain kind of program cannot exist

Page 30: AI Philosophy.

Other Limits?

• The “Physical Symbol System Hypothesis” is basically:
– “a symbol-pushing system can be intelligent”

• For the “symbol manipulation” let’s consider a “formal system”:

Page 31: AI Philosophy.

“Formal System”

Consists of:
• Axioms
– statements taken as true within the system
• Inference rules
– rules used to derive new statements from the axioms and from other derived statements

Classic Example:
• Axioms:
– All men are mortal
– Socrates is a man
• Inference Rule: “if something holds ‘for all X’ then it holds for any one X”
• Derive:
– Socrates is mortal

Page 32: AI Philosophy.

Limits of Formal Systems

• Systems can do logic
• They have the potential to act (be?) intelligent

• What can we do with “formal systems”?

Page 33: AI Philosophy.

“Theorem Proving”

• Bertrand Russell & Alfred North Whitehead
• Principia Mathematica, 1910–13
• Attempts to derive all mathematical truths from axioms and inference rules
• The presumption was that
– all mathematics is just:
• set up the reasoning
• then “turn the handle”
• The presumption was destroyed by Gödel:

Page 34: AI Philosophy.

Kurt Gödel

• Logician, 1906-1978
• 1931: incompleteness results
• 1940s: “invented time travel”
– demonstrated the existence of “rotating universes”, solutions to Einstein’s general relativity with paths for which ..
– “on doing the loop you arrive back before you left”

• Died of malnutrition

Page 35: AI Philosophy.

Gödel's Theorem (1931)

Applies to systems that are:
• formal:
– proof is by means of axioms and inference rules following some mechanical set of rules
– no external “magic”
• “consistent”:
– there is no statement X for which we can prove both X and “not X”
• powerful enough to at least do arithmetic:
– the system has to be able to reason about the natural numbers 0, 1, 2, …

Page 36: AI Philosophy.

Gödel's Theorem (1931)

In any consistent formal system that includes arithmetic:

“There are statements that are true but the formal system cannot prove”

• Note: it states the proof does not exist, not merely that we cannot find one

• Very few people understand this theorem properly
– I’m not one of them
– I don’t expect you to understand it either! …
– just be aware of its existence as a known limit of what one can do with one kind of “symbol manipulation”

Page 37: AI Philosophy.

Lucas/Penrose Claims

• Book: “The Emperor’s New Mind”, 1989
– Roger Penrose, Oxford Professor of Mathematics

• (Similar arguments also came from Lucas, 1960s)

• Inspired by Gödel’s Theorem:
– one can create a statement that a human can see is true of a system, but that cannot be shown to be true within the system

• Claim: we are able to show something that is true but that a Turing Machine would not be able to show

• Claim: this demonstrates that the human is doing something a computer can never do

• Generated a lot of controversy!!

Page 38: AI Philosophy.

Penrose Argument

• Based on the logic of Gödel’s Theorem
• That there are things humans do that a computer cannot do
• That humans do this because of physical processes within the brain that are noncomputable, i.e. that cannot be simulated by a computer
– compare to “brain in a vat”!?

• Hypothesis: quantum mechanical processes are responsible for the intelligence

• Many (most?) believe that this argument is wrong

Page 39: AI Philosophy.

Penrose Argument

• Some physical processes within the brain are noncomputable, i.e. cannot be simulated by a computer (UTM)

• These processes contribute to our intelligence

• Hypothesis: quantum mechanical and quantum gravity processes are responsible for the intelligence (!!)

• (Many believe that this argument is wrong)

Page 40: AI Philosophy.

One Reply to Penrose

• Humans are not consistent and so Gödel's theorem does not apply

• Penrose Response:
– In the end, people are consistent
– E.g. one mathematician might make mistakes, but in the end the mathematical community is consistent and so the theorem applies

Page 41: AI Philosophy.

Summary

• Church-Turing Hypothesis
– all known computers are equivalent in power
– a simple Turing Machine can run anything we can program

• Physical Symbol System Hypothesis
– intelligence is just symbol pushing

• There are known limits on “symbol-pushing” computers
– halting problem, Gödel’s theorem

• Penrose-Lucas: we can do things symbol-pushing computers can’t
– Some “Turing Tests” will be failed by a computer
– Some tasks cannot be performed by a “Chinese room”
– but the argument is generally held to be in error

Page 42: AI Philosophy.

Questions?