Turing’s Legacy Minds & Machines October 14, 2004
Transcript
Page 1

Turing’s Legacy

Minds & Machines

October 14, 2004

Page 2

Turing’s Legacy

• Turing’s legacy consists of 2 parts:
  – Turing Machines
  – Turing Test

Page 3

Formal Logic

The housemaid or the butler did it
If the housemaid did it, the alarm would have gone off
The alarm did not go off
… therefore …
The butler did it!

1. H ∨ B        A.
2. H → A        A.
3. ~A           A.
4. ~H           2, 3 MT
5. B            1, 4 DS

(A. = assumption; MT = Modus Tollens; DS = Disjunctive Syllogism)
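The proof above derives the conclusion syntactically, by applying Modus Tollens and Disjunctive Syllogism to the premises. As an illustration that is not in the original slides, the short Python sketch below checks the same argument semantically, by brute force over all truth-value assignments; the variable names H, B, A are of course just my labels for the three propositions.

```python
from itertools import product

# The argument is valid just in case every assignment of truth values to
# H (housemaid), B (butler), and A (alarm) that makes all premises true
# also makes the conclusion true.
premises = [
    lambda H, B, A: H or B,        # 1. The housemaid or the butler did it
    lambda H, B, A: (not H) or A,  # 2. If the housemaid did it, the alarm went off
    lambda H, B, A: not A,         # 3. The alarm did not go off
]
conclusion = lambda H, B, A: B     # 5. The butler did it

valid = all(conclusion(H, B, A)
            for H, B, A in product([True, False], repeat=3)
            if all(p(H, B, A) for p in premises))
print(valid)  # True: the argument is valid
```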

Page 4

Algorithms

• An algorithm is a systematic, step-by-step procedure:
  – Steps: Algorithms take discrete steps
  – Precision: Each step is precisely defined
  – Systematicity: After each step it is clear which step to take next

• Examples:
  – Cookbook recipe
  – Filling out tax forms (ok, maybe not)
  – Long division

Page 5

Computations

• Computations are where the ideas of formal logic and algorithms come together.

• A computation is a symbol-manipulation algorithm.
  – Example: long division

• Not every algorithm is a computation.
  – Example: furniture assembly instructions
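To make the long-division example concrete, here is a small sketch (mine, not from the slides) that carries out the schoolbook procedure digit by digit on a string of digit symbols; for brevity the single-digit divisor is handled with ordinary arithmetic rather than with explicit symbol-rewriting rules.

```python
def long_division(dividend: str, divisor: int):
    """Divide a decimal numeral, given as a string of digit symbols, by a
    single-digit divisor, one digit at a time (the schoolbook procedure).
    Returns the quotient as a digit string and the final remainder."""
    quotient, remainder = "", 0
    for digit in dividend:                       # read the next symbol
        remainder = remainder * 10 + int(digit)  # "bring the digit down"
        quotient += str(remainder // divisor)    # write one quotient symbol
        remainder = remainder % divisor          # carry the remainder along
    return quotient.lstrip("0") or "0", remainder

print(long_division("1534", 7))  # ('219', 1): 1534 = 7 * 219 + 1
```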

Page 6

Computers

• A ‘computer’ is something that computes, i.e. something that performs a computation, i.e. something that follows a systematic procedure to transform input symbol strings into output symbol strings.

• As long as the procedure is effective, humans can take the role of a computer by following that procedure. Indeed, some 60 years ago, a ‘computer’ was understood to be a human being!

• By mechanizing this process, we obtain computers as we now know them.

Page 7

The Scope and Limits of Effective Computation I

• An algorithm or procedure that we humans are able to follow or execute is called ‘effective’.

• In 1936, Turing wrote a paper in which he explored the scope and limits of effective computation.

• Turing tried to find the basic elements (the atomic components) of such a process.

Page 8

The Scope and Limits of Effective Computation II

• Take the example of multiplication: we make marks at any place on the paper, depending on what other marks are already there and on what ‘stage’ of the algorithm we are in (we may be in the process of multiplying two digits, adding a bunch of digits, or carrying over).

• So, when going through an algorithm we go through a series of stages or states that indicate what we should do next (we should multiply two digits, we should write a digit, we should carry over a digit, we should add digits, etc).

Page 9

The Scope and Limits of Effective Computation III

• The stages we are in vary widely between the different algorithms we use to solve different problems.

• However, no matter how we characterize these states, what they ultimately come down to is that they indicate what symbols to write based on what symbols there are.

• Hence, all we really need is the ability to discriminate between different states; what we call them is completely irrelevant.

• Moreover, although an algorithm can have any number of stages defined, since we want an answer after a finite number of steps, there can only be a finite number of such states.

• One could also try and argue that we are cognitively only able to discriminate between, or even simply define, a finite number of states since our memory is limited. Thus, again, there can only be a finite number of states.

Page 10

The Scope and Limits of Effective Computation IV

• Next, Turing reasoned that while one can write as many symbols as one wants at any location on the paper, one can only write one symbol at a time, and symbols have a discrete location on the paper. Therefore, at any point in time the number of symbols on the paper is finite. Hence we can number them, and we should be able to do whatever we did before by writing the symbols in one big long string of symbols, possibly using other symbols to indicate relationships between the original symbols, and adding symbols to the left or right as needed.

Page 11

The Scope and Limits of Effective Computation V

• Moreover, to get to some location (whether to read or write a symbol), we just need to be able to go back and forth, one symbol at a time, along this one big symbol string. We can add a few states to indicate that we are in the process of doing so, so this should pose no restrictions on what we would be able to do.

• Finally, while the marks can be arbitrary, they can only have a finite size, and hence there can only be finitely many symbols, or else there would have to be two symbols that are so much alike that we can no longer perceptually discriminate between them.

Page 12

The Scope and Limits of Effective Computation VI

• Turing thus obtained the following basic components of effective computation:
  – A finite set of states
  – A finite set of symbols
  – One big symbol string that can be added to on either end
  – An ability to move along this symbol string (to go left or right)
  – An ability to read a symbol
  – An ability to write a symbol
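These components can be assembled into a runnable simulator. The sketch below is my own minimal Python rendering of the idea; the state names, the blank symbol ‘_’, and the example machine are illustrative choices, not the machine used in the lecture demo.

```python
def run(program, tape, state="start", blank="_", max_steps=10_000):
    """program maps (state, symbol) -> (new_symbol, move, new_state),
    with move = -1 (left) or +1 (right). Returns the final tape contents."""
    cells = dict(enumerate(tape))   # the one big symbol string
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)               # read a symbol
        new_symbol, move, state = program[(state, symbol)]
        cells[head] = new_symbol                      # write a symbol
        head += move                                  # go left or right
    span = range(min(cells), max(cells) + 1)
    return "".join(cells[i] for i in span).strip(blank)

# Example machine: add one to a unary numeral (four = '1111').
add_one = {
    ("start", "1"): ("1", +1, "start"),  # skip over the existing marks
    ("start", "_"): ("1", +1, "halt"),   # write one more mark and halt
}
print(run(add_one, "1111"))  # '11111'
```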

Page 13

Turing Machines Demo

Page 14

Representations, Efficiency, and Computational Power

• In our example, we used a ‘unary’ representation of numbers (i.e. the number four was represented as ‘1111’), but we could also have used some other representation, such as ‘4’ or ‘IV’.

• The choice of representation obviously has an effect on the nature of the program that is needed to do the ‘right’ thing (now you need rules for encountering a ‘4’ instead of a ‘I’).

• Depending on the problem, some representations lead to computations that are simpler (I always wondered how the Romans did long division!) and more efficient than others.

• However, if one is able to eventually get the answer using some representational scheme then, no matter how inefficient that representational scheme is, the computational ‘power’ of that scheme is equal to that of some other scheme that gets the same answer.
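As a small illustration (my own, not from the slides): the same function, doubling a number, written over two different representations. The unary version is trivial while the decimal version needs carry rules, but both compute the same thing; what changes is the program, not what can be computed.

```python
def double_unary(marks: str) -> str:
    # Unary: four is '1111'; doubling just repeats the marks.
    return marks + marks

def double_decimal(digits: str) -> str:
    # Decimal: the program needs explicit carry handling.
    result, carry = "", 0
    for d in reversed(digits):
        total = 2 * int(d) + carry
        result = str(total % 10) + result
        carry = total // 10
    return (str(carry) if carry else "") + result

print(double_unary("1111"))   # '11111111' (eight, in unary)
print(double_decimal("57"))   # '114'
```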

Page 15

0’s and 1’s

• As it turns out, we can always use a string of bits, i.e. a string of 0’s and 1’s to represent objects, without losing any computational power.

• This is indeed how the modern ‘digital computer’ does things. That is, at the machine level, it’s all 0’s and 1’s.

• Of course, the 0’s and 1’s are just abstractions here; they refer to some kind of physical dichotomy, e.g. hole in punch card or not, voltage high or low, quantum spin up or down, penny on piece of toilet paper or not, etc.

• The fact that one can decide to use anything to represent anything is exactly why there can be mechanical computers, electronic computers, bio-chemical computers, DNA computers, optical computers, quantum computers and, as demonstrated by the Turing machine, computers made of one (very) big roll of toilet paper and (a lot of) pennies!
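Here is a minimal sketch of that point (the alphabet and code width are my own illustrative choices): any finite symbol alphabet can be mapped onto fixed-width bit strings and back, so working over 0’s and 1’s loses no computational power.

```python
ALPHABET = ["_", "a", "b", "c"]      # any finite set of tape symbols
WIDTH = 2                            # 2 bits are enough for 4 symbols
ENCODE = {s: format(i, f"0{WIDTH}b") for i, s in enumerate(ALPHABET)}
DECODE = {bits: s for s, bits in ENCODE.items()}

def to_bits(symbols: str) -> str:
    return "".join(ENCODE[s] for s in symbols)

def from_bits(bits: str) -> str:
    return "".join(DECODE[bits[i:i + WIDTH]] for i in range(0, len(bits), WIDTH))

print(to_bits("abc_"))        # '01101100'
print(from_bits("01101100"))  # 'abc_'
```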

Page 16

The Church-Turing Thesis

• Many definitions other than Turing Machines have been proposed to capture the notion of an ‘effective computation’.

• It turns out that all proposed definitions up to this date are equivalent in the sense that whatever one computational method is able to compute, any other method can compute as well.

• The Church-Turing thesis states that Turing-machines capture the notion of effective computation: whatever is effectively computable, Turing-machines can compute.

• The Church-Turing thesis shows the amazing computational power of Turing machines: Turing machines can compute what this very laptop computes, and they can do so using only 0’s and 1’s!

Page 17

The Turing-Limit and Hyper-Computation

• Still, there are certain problems that are not Turing-computable, and therefore (by Church’s Thesis) probably not effectively computable either.
  – Example: The Halting Problem: deciding whether some Turing machine will or will not halt for some input.

• This means that there is a non-trivial limit (called the Turing-limit) to what can be effectively computed.

• There have been mathematical models of computation proposed that go beyond the Turing-limit. This kind of computation is called hyper-computation.

• Hyper-computations are not effective computations as they appeal to infinity in some way or other: infinitary precision, infinitary time, etc.

• An interesting question is whether hyper-computation can be physically implemented. Maybe certain aspects of human cognition rely on hyper-computation?
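The standard argument for why the Halting Problem is not Turing-computable can be phrased as a short sketch (not part of the original slides). The function names below are mine, and `halts` stands for the hypothetical decider that the argument shows cannot exist.

```python
def halts(f, x):
    """Hypothetical oracle: returns True iff f(x) would eventually halt.
    The point of the argument is that no such total, correct function
    can actually be written."""
    raise NotImplementedError

def paradox(f):
    if halts(f, f):      # if the oracle says f(f) halts ...
        while True:      # ... then loop forever
            pass
    return               # ... otherwise halt immediately

# Now ask: does paradox(paradox) halt?  If halts says True, paradox(paradox)
# loops forever; if it says False, paradox(paradox) halts immediately.
# Either way the oracle is wrong about some input, so no such oracle exists.
```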

Page 18

Universal Turing Machines

• One of Turing’s great achievements was his finding that one can make a Universal Turing Machine: a Turing Machine U that can simulate the behavior of any Turing Machine M, provided U is given a description of M together with the input I that M would work on.

• This led to the notion of stored programs (programs as part of the data), and thus to programmable, all-purpose, computers.

• Thus, what was used as part of a proof of a rather abstract mathematical result turned out to revolutionize our lives!
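Reusing the `run` simulator sketched earlier, the universal-machine idea can be shown in miniature (my own toy illustration): the rule table of a machine M is itself just data handed to one fixed procedure U. A real universal Turing machine would encode M's description on U's own tape, which this sketch glosses over.

```python
def universal(description, tape):
    """U: given a description of some machine M and M's input,
    behave exactly as M would on that input."""
    return run(description, tape)   # `run` is the simulator sketched above

# The 'add one' machine from before, now passed around as ordinary data:
add_one = {
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("1", +1, "halt"),
}
print(universal(add_one, "111"))  # '1111'
```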

Page 19

Computationalism

• Computationalism is the view that cognition is computation (i.e. computers can think, have beliefs, be intelligent, self-conscious, etc.)

• The idea behind Computationalism is that the brain is, like a computer, an information-processing device: it takes information from the environment (perception), it stores that information (memory/knowledge), infers new information from existing information (reasoning, anticipation), and makes decisions based on this information (action).

• Computationalism fits well with the finding that one can obtain powerful information-processing capacities using very simple resources: early views on the brain supposed that neurons firing or not would constitute 0’s and 1’s.

Page 20

Minimal Computationalism

• Notice that computationalism does not make any claims regarding the computational architecture underlying cognition.

• Thus, computationalism is compatible with:
  – Logicism: the view that cognition is best studied through formal logic-based representations and manipulations
  – Computerism: the view that our cognitive architecture is like the Von Neumann architecture of a modern computer
  – Connectionism: the view that cognition is best studied through neural networks
  – Hyper-computation: computations that go beyond the Turing-limit

Page 21

Computing Things

• The symbols that computations manipulate are representations of things.

• By manipulating those representations we come to know something about the things that those representations represent. Thus, things become computable: ‘I can compute the ratio of 2 numbers’.

• I can use a computer to help me compute things.

• But now we have 2 notions of computation:
  – Syntactic Computation: a computer going through a process of symbol manipulation
  – Semantic Computation: representing something by some symbol string, manipulating that symbol string into a new symbol string, and interpreting that new symbol string into something meaningful
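Here is a small sketch of the two notions side by side (my own toy example, echoing the figure on the next slide): the syntactic computation only rewrites bit strings, while the semantic computation is the whole loop of encoding numbers into bits, manipulating the bits, and interpreting the result.

```python
def encode(n: int) -> str:             # thing -> symbol string
    return format(n, "b")

def decode(bits: str) -> int:          # symbol string -> thing
    return int(bits, 2)

def add_bits(a: str, b: str) -> str:   # syntactic computation: symbols -> symbols
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    result, carry = "", 0
    for x, y in zip(reversed(a), reversed(b)):
        total = int(x) + int(y) + carry
        result = str(total % 2) + result
        carry = total // 2
    return (str(carry) if carry else "") + result

# Semantic computation: '2 plus 2' becomes bit manipulation, then meaning again.
print(decode(add_bits(encode(2), encode(2))))  # 4
```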

Page 22

Syntactic and Semantic Computation

[Figure: the pair of inputs ‘2, 2’ is mapped to the output ‘4’. A machine M performs the syntactic computation, rewriting one bit string into another; the semantic computation is the whole loop of encoding the numbers as bit strings, letting M manipulate them, and interpreting the resulting bit string as the answer.]

Page 23

Meaning, Intentionality, and The Chinese Room

• Semantic computation involves syntactic computation and an interpreter to give some meaning to the syntactic computation. But, how can syntactic computations by themselves introduce any meaning?

• This relates to a distinction discussed by Dennett between original (intrinsic) and derived intentionality: it seems that the syntactic computation can only have derived intentionality thanks to the interpreter (who has intrinsic intentionality).

• John Searle uses exactly this line of reasoning in his Chinese Room thought experiment to argue that computations alone cannot lead to any intentionality, and hence that computationalism is false.

• In other words, computations can lead to all kinds of interesting functionality and behavior, but that isn’t intelligence!

Page 24

The Turing Test

[Figure: the interrogator communicates with both a machine and a human, without knowing which is which.]

Page 25

“I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after 5 minutes of questioning”

-Alan Turing (1950)

Page 26

Turing’s Argument for AI

Premise 1: Machines can pass the Turing Test

Premise 2: Anything that passes the Turing Test is intelligent

Conclusion: Machines can be intelligent

Page 27

Can Machines pass the Turing Test?

• Turing thinks that machines will at some point in the future be able to pass the Turing Test, because he thinks that passing the test requires nothing more than some kind of information-processing ability, and processing information is exactly what computers do.

• In fact, Turing points to his Universal Machine to demonstrate that a single, fairly simple, machine can do all kinds of information-processing.

Page 28

A Puzzle

• But wait, can’t we then just argue as follows:
  – Intelligence requires nothing more than some kind of information processing ability,
  – Computers can have this information processing ability,
  – Therefore, computers can be intelligent.

• Indeed, this is exactly how proponents of AI make the argument today.

• So why didn’t Turing make this very argument? Why bring in the game?

Page 29

A Second Puzzle

• Also, why the strange set-up of the Turing-Test? Why did Turing ‘pit’ a machine against a human in some kind of contest? Why not have the interrogator simply interact with a machine and judge whether or not the machine is intelligent based on those interactions?

Page 30

The Super-Simplified Turing Test

[Figure: the interrogator interacts directly with a machine, knowing that it is a machine.]

Page 31

Answer: Bias

• The mere knowledge that we are dealing with a machine will bias our judgment as to whether that machine can think or not, as we may bring certain preconceptions about machines to the table.

• Moreover, knowing that we are dealing with a machine will most likely lead us to raise the bar for intelligence: it can’t write a sonnet? Ha, I knew it!

• By shielding the interrogator from the interrogated, such bias and bar-raising are eliminated in the Turing Test.

Page 32

The Simplified Turing Test

[Figure: the interrogator interacts with a single entity that is either a machine or a human, without knowing which.]

Page 33

Level the Playing Field

• Since we know we might be dealing with a machine, we may still raise the bar for counting the entity on the other side as intelligent.

• Through his set-up of the test, Turing made sure that the bar for being intelligent isn’t raised any higher for machines than it is for fellow humans.

Page 34

Is the Imitation Game a Test?

• Still, we are left with the first puzzle: if Turing wanted to argue that machines can be intelligent, why bring up any test at all?

• I believe that Turing never intended the test as part of any argument for machine intelligence.

• Instead, Turing used the test to make us aware of the point that we should level the playing field as far as attributing intelligence goes: if we attribute intelligence to humans based on their behavior then, just to be fair, we should do it for machines as well.

• Thus, the convoluted set-up wasn’t merely a practical consideration to eliminate bias: it was the whole point!

• In fact, Turing hardly uses the word ‘test’ in his original article, and instead talks about it as the Imitation Game.

• Thus, Turing doesn’t make any claims about intelligence itself, but rather about how we use and apply the word.

Page 35

In Turing’s Words

The original question, “Can machines think?”, I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

-Alan Turing (1950)