
2. Critique of the New Problem

As well as asking, ‘What is the answer to this new form of the question’, one may
ask, ‘Is this new question a worthy one to investigate?’ This latter question we
investigate without further ado, thereby cutting short an infinite regress.

The new problem has the advantage of drawing a fairly sharp line between the
physical and the intellectual capacities of a man. No engineer or chemist claims
to be able to produce a material which is indistinguishable from the human skin.
It is possible that at some time this might be done, but even supposing this
invention available we should feel there was little point in trying to make a
‘thinking machine’ more human by dressing it up in such artificial flesh. The
form in which we have set the problem reflects this fact in the condition which
prevents the interrogator from seeing or touching the other competitors, or
hearing their voices. Some other advantages of the proposed criterion may be
shown up by specimen questions and answers. Thus:

Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1.

It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.

The question and answer method seems to be suitable for introducing almost
any one of the fields of human endeavour that we wish to include. We do not
wish to penalise the machine for its inability to shine in beauty competitions, nor
to penalise a man for losing in a race against an aeroplane. The conditions of our
game make these disabilities irrelevant. The ‘witnesses’ can brag, if they consider
it advisable, as much as they please about their charms, strength or heroism, but
the interrogator cannot demand practical demonstrations.

The game may perhaps be criticised on the ground that the odds are weighted
too heavily against the machine. If the man were to try and pretend to be the
machine he would clearly make a very poor showing. He would be given away at
once by slowness and inaccuracy in arithmetic. May not machines carry out
something which ought to be described as thinking but which is very different
from what a man does? This objection is a very strong one, but at least we can say
that if, nevertheless, a machine can be constructed to play the imitation game
satisfactorily, we need not be troubled by this objection.

It might be urged that when playing the ‘imitation game’ the best strategy for
the machine may possibly be something other than imitation of the behaviour of
a man. This may be, but I think it is unlikely that there is any great effect of this
kind. In any case there is no intention to investigate here the theory of the game,
and it will be assumed that the best strategy is to try to provide answers that
would naturally be given by a man.

3. The Machines concerned in the Game

The question which we put in §1 will not be quite definite until we have specified
what we mean by the word ‘machine’. It is natural that we should wish to permit
every kind of engineering technique to be used in our machines. We also wish to
allow the possibility that an engineer or team of engineers may construct a
machine which works, but whose manner of operation cannot be satisfactorily
described by its constructors because they have applied a method which is largely
experimental. Finally, we wish to exclude from the machines men born in the
usual manner. It is difficult to frame the definitions so as to satisfy these three
conditions. One might for instance insist that the team of engineers should be all
of one sex, but this would not really be satisfactory, for it is probably possible to
rear a complete individual from a single cell of the skin (say) of a man. To do so
would be a feat of biological technique deserving of the very highest praise, but
we would not be inclined to regard it as a case of ‘constructing a thinking
machine’. This prompts us to abandon the requirement that every kind of
technique should be permitted. We are the more ready to do so in view of the
fact that the present interest in ‘thinking machines’ has been aroused by a
particular kind of machine, usually called an ‘electronic computer’ or ‘digital
computer’. Following this suggestion we only permit digital computers to take
part in our game.

This restriction appears at first sight to be a very drastic one. I shall attempt to
show that it is not so in reality. To do this necessitates a short account of the
nature and properties of these computers.

It may also be said that this identification of machines with digital computers,
like our criterion for ‘thinking’, will only be unsatisfactory if (contrary to my
belief), it turns out that digital computers are unable to give a good showing in
the game.

There are already a number of digital computers in working order, and it
may be asked, ‘Why not try the experiment straight away? It would be easy
to satisfy the conditions of the game. A number of interrogators could be used,
and statistics compiled to show how often the right identification was given.’
The short answer is that we are not asking whether all digital computers
would do well in the game nor whether the computers at present available
would do well, but whether there are imaginable computers which would do
well. But this is only the short answer. We shall see this question in a different
light later.


4. Digital Computers

The idea behind digital computers may be explained by saying that these
machines are intended to carry out any operations which could be done by a
human computer. The human computer is supposed to be following fixed rules;
he has no authority to deviate from them in any detail. We may suppose that
these rules are supplied in a book, which is altered whenever he is put on to a
new job. He has also an unlimited supply of paper on which he does his
calculations. He may also do his multiplications and additions on a ‘desk
machine’, but this is not important.

If we use the above explanation as a definition we shall be in danger of
circularity of argument. We avoid this by giving an outline of the means by
which the desired effect is achieved. A digital computer can usually be regarded
as consisting of three parts:

(i) Store.
(ii) Executive unit.
(iii) Control.

The store is a store of information, and corresponds to the human computer’s
paper, whether this is the paper on which he does his calculations or that on
which his book of rules is printed. In so far as the human computer does
calculations in his head a part of the store will correspond to his memory.

The executive unit is the part which carries out the various individual
operations involved in a calculation. What these individual operations are will vary
from machine to machine. Usually fairly lengthy operations can be done such as
‘Multiply 3540675445 by 7076345687’ but in some machines only very simple
ones such as ‘Write down 0’ are possible.

We have mentioned that the ‘book of rules’ supplied to the computer is replaced
in the machine by a part of the store. It is then called the ‘table of instructions’. It is
the duty of the control to see that these instructions are obeyed correctly and in the
right order. The control is so constructed that this necessarily happens.

The information in the store is usually broken up into packets of moderately
small size. In one machine, for instance, a packet might consist of ten decimal
digits. Numbers are assigned to the parts of the store in which the various packets
of information are stored, in some systematic manner. A typical instruction
might say—

‘Add the number stored in position 6809 to that in 4302 and put the result
back into the latter storage position’.

Needless to say it would not occur in the machine expressed in English. It
would more likely be coded in a form such as 6809430217. Here 17 says which of
various possible operations is to be performed on the two numbers. In this case
the operation is that described above, viz. ‘Add the number . . .’ It will be noticed
that the instruction takes up 10 digits and so forms one packet of information,
very conveniently. The control will normally take the instructions to be obeyed in
the order of the positions in which they are stored, but occasionally an
instruction such as

‘Now obey the instruction stored in position 5606, and continue from there’
may be encountered, or again

‘If position 4505 contains 0 obey next the instruction stored in 6707, otherwise
continue straight on.’
Instructions of these latter types are very important because they make it possible
for a sequence of operations to be repeated over and over again until some
condition is fulfilled, but in doing so to obey, not fresh instructions on
each repetition, but the same ones over and over again. To take a domestic
analogy. Suppose Mother wants Tommy to call at the cobbler’s every morning
on his way to school to see if her shoes are done, she can ask him afresh
every morning. Alternatively she can stick up a notice once and for all in the
hall which he will see when he leaves for school and which tells him to call for
the shoes, and also to destroy the notice when he comes back if he has the shoes
with him.
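
Editor’s sketch. In modern code the scheme just described might look as follows. The ten-digit packet splits into two four-digit addresses and a two-digit operation number, with 17 as the ‘add’ operation of the example above; the jump codes 20 and 21 and the halt code 99 are invented here purely for illustration and belong to no historical machine.

    def run(store, position=0):
        """Obey ten-digit instruction packets held in `store` (a dictionary
        from storage positions to numbers), starting at `position`."""
        while position in store:
            packet = store[position]
            a = packet // 10**6 % 10**4         # digits 1-4: first address
            b = packet // 10**2 % 10**4         # digits 5-8: second address
            op = packet % 100                   # digits 9-10: operation number
            if op == 17:                        # add the number at a to that at b
                store[b] = store[a] + store[b]  # and put the result back in b
            elif op == 20:                      # 'obey the instruction in a ...'
                position = a
                continue
            elif op == 21:                      # 'if a contains 0 obey next the
                if store.get(a, 0) == 0:        #  instruction stored in b'
                    position = b
                    continue
            elif op == 99:                      # halt (our own convention)
                return
            position += 1                       # otherwise continue straight on

    store = {6809: 5, 4302: 7, 0: 6809430217, 1: 99}
    run(store)
    print(store[4302])                          # 12: packet 6809430217 decoded

Note that instructions and numbers share the one store, exactly as in the machine described: position 0 holds an instruction, positions 6809 and 4302 hold data.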

The reader must accept it as a fact that digital computers can be constructed,
and indeed have been constructed, according to the principles we have described,
and that they can in fact mimic the actions of a human computer very closely.

The book of rules which we have described our human computer as using is of
course a convenient fiction. Actual human computers really remember what they
have got to do. If one wants to make a machine mimic the behaviour of the
human computer in some complex operation one has to ask him how it is done,
and then translate the answer into the form of an instruction table. Constructing
instruction tables is usually described as ‘programming’. To ‘programme a
machine to carry out the operation A’ means to put the appropriate instruction
table into the machine so that it will do A.

An interesting variant on the idea of a digital computer is a ‘digital computer
with a random element’. These have instructions involving the throwing of a die
or some equivalent electronic process; one such instruction might for instance
be, ‘Throw the die and put the resulting number into store 1000’. Sometimes
such a machine is described as having free will (though I would not use this
phrase myself). It is not normally possible to determine from observing
a machine whether it has a random element, for a similar eVect can be
produced by such devices as making the choices depend on the digits of the
decimal for π.
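
Editor’s sketch. The two kinds of ‘die-throwing’ instruction can be put side by side; an observer of the machine’s behaviour has no test to tell them apart. The digit string is the beginning of the decimal for π; the rest of the framing is ours.

    import random

    # The first fifty decimal digits of pi, for the determinate variant.
    PI_DIGITS = "31415926535897932384626433832795028841971693993751"

    def throw_die():
        """A genuine random element: throw the die, report the result."""
        return random.randint(1, 6)

    def pi_die(step):
        """A determinate substitute: derive the 'throw' from the digits of pi.
        (A digit taken mod 6 is slightly non-uniform; this is only a sketch.)"""
        return int(PI_DIGITS[step % len(PI_DIGITS)]) % 6 + 1

    print([throw_die() for _ in range(5)])      # unpredictable
    print([pi_die(s) for s in range(5)])        # fixed for ever: [4, 2, 5, 2, 6]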

Most actual digital computers have only a finite store. There is no theoretical
difficulty in the idea of a computer with an unlimited store. Of course only a
finite part can have been used at any one time. Likewise only a finite amount can
have been constructed, but we can imagine more and more being added as
required. Such computers have special theoretical interest and will be called
infinitive¹ capacity computers.

¹ Editor’s note. Perhaps ‘infinitive’ is a mis-printing in Mind of ‘infinite’.

The idea of a digital computer is an old one. Charles Babbage, Lucasian
Professor of Mathematics at Cambridge from 1828 to 1839, planned such a
machine, called the Analytical Engine, but it was never completed. Although
Babbage had all the essential ideas, his machine was not at that time such a very
attractive prospect. The speed which would have been available would be
definitely faster than a human computer but something like 100 times slower than
the Manchester machine, itself one of the slower of the modern machines. The
storage was to be purely mechanical, using wheels and cards.

The fact that Babbage’s Analytical Engine was to be entirely mechanical will
help us to rid ourselves of a superstition. Importance is often attached to the fact
that modern digital computers are electrical, and that the nervous system also is
electrical. Since Babbage’s machine was not electrical, and since all digital
computers are in a sense equivalent, we see that this use of electricity cannot
be of theoretical importance. Of course electricity usually comes in where fast
signalling is concerned, so that it is not surprising that we find it in both these
connections. In the nervous system chemical phenomena are at least as important
as electrical. In certain computers the storage system is mainly acoustic. The
feature of using electricity is thus seen to be only a very superficial similarity. If
we wish to find such similarities we should look rather for mathematical
analogies of function.

5. Universality of Digital Computers

The digital computers considered in the last section may be classified amongst
the ‘discrete state machines’. These are the machines which move by sudden
jumps or clicks from one quite definite state to another. These states are
sufficiently different for the possibility of confusion between them to be ignored.
Strictly speaking there are no such machines. Everything really moves
continuously. But there are many kinds of machine which can profitably be thought of
as being discrete state machines. For instance in considering the switches for a
lighting system it is a convenient fiction that each switch must be definitely on or
definitely off. There must be intermediate positions, but for most purposes we
can forget about them. As an example of a discrete state machine we might
consider a wheel which clicks round through 120° once a second, but may be
stopped by a lever which can be operated from outside; in addition a lamp is to
light in one of the positions of the wheel. This machine could be described
abstractly as follows. The internal state of the machine (which is described by the
position of the wheel) may be q1, q2 or q3. There is an input signal i0 or i1
(position of lever). The internal state at any moment is determined by the last
state and input signal according to the table

                 Last state
                 q1    q2    q3
    Input   i0   q2    q3    q1
            i1   q1    q2    q3

The output signals, the only externally visible indication of the internal state (the
light) are described by the table

    State     q1    q2    q3
    Output    o0    o0    o1

This example is typical of discrete state machines. They can be described by such
tables provided they have only a finite number of possible states.
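
Editor’s sketch. The two tables translate directly into a table-driven loop, and the same few lines would simulate any machine with a finite number of states; this is the observation on which the universality argument below rests.

    TRANSITION = {                     # (input signal, last state) -> new state
        ('i0', 'q1'): 'q2', ('i0', 'q2'): 'q3', ('i0', 'q3'): 'q1',
        ('i1', 'q1'): 'q1', ('i1', 'q2'): 'q2', ('i1', 'q3'): 'q3',
    }
    OUTPUT = {'q1': 'o0', 'q2': 'o0', 'q3': 'o1'}   # o1: the lamp is lit

    def run_wheel(inputs, state='q1'):
        """Feed a sequence of lever positions to the wheel machine and
        return the externally visible lamp signals."""
        signals = []
        for i in inputs:
            state = TRANSITION[(i, state)]
            signals.append(OUTPUT[state])
        return signals

    print(run_wheel(['i0', 'i0', 'i1', 'i0']))      # ['o0', 'o1', 'o1', 'o0']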

It will seem that given the initial state of the machine and the input signals it is
always possible to predict all future states. This is reminiscent of Laplace’s view
that from the complete state of the universe at one moment of time, as described
by the positions and velocities of all particles, it should be possible to predict all
future states. The prediction which we are considering is, however, rather nearer
to practicability than that considered by Laplace. The system of the ‘universe as a
whole’ is such that quite small errors in the initial conditions can have an
overwhelming effect at a later time. The displacement of a single electron by a
billionth of a centimetre at one moment might make the difference between a
man being killed by an avalanche a year later, or escaping. It is an essential
property of the mechanical systems which we have called ‘discrete state
machines’ that this phenomenon does not occur. Even when we consider the actual
physical machines instead of the idealised machines, reasonably accurate
knowledge of the state at one moment yields reasonably accurate knowledge any
number of steps later.

As we have mentioned, digital computers fall within the class of discrete state
machines. But the number of states of which such a machine is capable is usually
enormously large. For instance, the number for the machine now working at
Manchester is about 2^165,000, i.e. about 10^50,000. Compare this with our example
of the clicking wheel described above, which had three states. It is not difficult to
see why the number of states should be so immense. The computer includes a
store corresponding to the paper used by a human computer. It must be possible
to write into the store any one of the combinations of symbols which might have
been written on the paper. For simplicity suppose that only digits from 0 to 9 are
used as symbols. Variations in handwriting are ignored. Suppose the computer is
allowed 100 sheets of paper each containing 50 lines each with room for 30 digits.
Then the number of states is 10^(100×50×30), i.e. 10^150,000. This is about the number
of states of three Manchester machines put together. The logarithm to the base
two of the number of states is usually called the ‘storage capacity’ of the machine.
Thus the Manchester machine has a storage capacity of about 165,000 and the
wheel machine of our example about 1.6. If two machines are put together their
capacities must be added to obtain the capacity of the resultant machine. This
leads to the possibility of statements such as ‘The Manchester machine contains
64 magnetic tracks each with a capacity of 2560, eight electronic tubes with a
capacity of 1280. Miscellaneous storage amounts to about 300 making a total of
174,380.’
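
Editor’s sketch. The arithmetic of this passage is easily checked; the storage capacity is the logarithm to the base two of the number of states.

    import math

    wheel = math.log2(3)                      # the three-state wheel machine
    paper = math.log2(10) * 100 * 50 * 30     # the 10^150,000 paper states
    manchester = 64 * 2560 + 8 * 1280 + 300   # the component figures quoted

    print(round(wheel, 1))    # 1.6
    print(round(paper))       # 498289: roughly three Manchester machines
    print(manchester)         # 174380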

Given the table corresponding to a discrete state machine it is possible to
predict what it will do. There is no reason why this calculation should not be
carried out by means of a digital computer. Provided it could be carried out
sufficiently quickly the digital computer could mimic the behaviour of any
discrete state machine. The imitation game could then be played with the
machine in question (as B) and the mimicking digital computer (as A) and the
interrogator would be unable to distinguish them. Of course the digital
computer must have an adequate storage capacity as well as working sufficiently fast.
Moreover, it must be programmed afresh for each new machine which it is
desired to mimic.

This special property of digital computers, that they can mimic any discrete
state machine, is described by saying that they are universal machines. The
existence of machines with this property has the important consequence that,
considerations of speed apart, it is unnecessary to design various new machines
to do various computing processes. They can all be done with one digital
computer, suitably programmed for each case. It will be seen that as a
consequence of this all digital computers are in a sense equivalent.

We may now consider again the point raised at the end of §3. It was suggested
tentatively that the question, ‘Can machines think?’ should be replaced by ‘Are
there imaginable digital computers which would do well in the imitation game?’
If we wish we can make this superficially more general and ask ‘Are there discrete
state machines which would do well?’ But in view of the universality property we
see that either of these questions is equivalent to this, ‘Let us fix our attention on
one particular digital computer C. Is it true that by modifying this computer to
have an adequate storage, suitably increasing its speed of action, and providing it
with an appropriate programme, C can be made to play satisfactorily the part of
A in the imitation game, the part of B being taken by a man?’

6. Contrary Views on the Main Question

We may now consider the ground to have been cleared and we are ready to
proceed to the debate on our question, ‘Can machines think?’ and the variant of
it quoted at the end of the last section. We cannot altogether abandon the
original form of the problem, for opinions will differ as to the appropriateness of
the substitution and we must at least listen to what has to be said in this
connexion.

It will simplify matters for the reader if I explain first my own beliefs in the
matter. Consider first the more accurate form of the question. I believe that in
about fifty years’ time it will be possible to programme computers, with a storage
capacity of about 10^9, to make them play the imitation game so well that an
average interrogator will not have more than 70 per cent. chance of making the
right identification after five minutes of questioning. The original question, ‘Can
machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless
I believe that at the end of the century the use of words and general educated
opinion will have altered so much that one will be able to speak of machines
thinking without expecting to be contradicted. I believe further that no useful
purpose is served by concealing these beliefs. The popular view that scientists
proceed inexorably from well-established fact to well-established fact, never
being influenced by any unproved conjecture, is quite mistaken. Provided it is
made clear which are proved facts and which are conjectures, no harm can result.
Conjectures are of great importance since they suggest useful lines of research.

I now proceed to consider opinions opposed to my own.
(1) The Theological Objection. Thinking is a function of man’s immortal
soul. God has given an immortal soul to every man and woman, but not to
any other animal or to machines. Hence no animal or machine can think.
I am unable to accept any part of this, but will attempt to reply in theological
terms. I should find the argument more convincing if animals were classed with
men, for there is a greater difference, to my mind, between the typical animate
and the inanimate than there is between man and the other animals. The
arbitrary character of the orthodox view becomes clearer if we consider how it
might appear to a member of some other religious community. How do
Christians regard the Moslem view that women have no souls? But let us leave this
point aside and return to the main argument. It appears to me that the argument
quoted above implies a serious restriction of the omnipotence of the Almighty. It
is admitted that there are certain things that He cannot do such as making one
equal to two, but should we not believe that He has freedom to confer a soul on
an elephant if He sees fit? We might expect that He would only exercise this
power in conjunction with a mutation which provided the elephant with an
appropriately improved brain to minister to the needs of this soul. An argument
of exactly similar form may be made for the case of machines. It may seem
different because it is more difficult to ‘‘swallow’’. But this really only means that
we think it would be less likely that He would consider the circumstances suitable
for conferring a soul. The circumstances in question are discussed in the rest of
this paper. In attempting to construct such machines we should not be irreverently
usurping His power of creating souls, any more than we are in the
procreation of children: rather we are, in either case, instruments of His will
providing mansions for the souls that He creates.

² Possibly this view is heretical. St Thomas Aquinas (Summa Theologica, quoted by Bertrand Russell,
p. 480) states that God cannot make a man to have no soul. But this may not be a real restriction on His
powers, but only a result of the fact that men’s souls are immortal, and therefore indestructible. (Editor’s
note: the text in Mind contains no reference-marker for this footnote.)

However, this is mere speculation. I am not very impressed with theological
arguments whatever they may be used to support. Such arguments have often
been found unsatisfactory in the past. In the time of Galileo it was argued that
the texts, ‘‘And the sun stood still . . . and hasted not to go down about a whole
day’’ (Joshua x. 13) and ‘‘He laid the foundations of the earth, that it should not
move at any time’’ (Psalm cv. 5) were an adequate refutation of the Copernican
theory. With our present knowledge such an argument appears futile. When that
knowledge was not available it made a quite different impression.

(2) The ‘Heads in the Sand’ Objection. ‘‘The consequences of machines
thinking would be too dreadful. Let us hope and believe that they cannot do so.’’

This argument is seldom expressed quite so openly as in the form above. But it
affects most of us who think about it at all. We like to believe that Man is in some
subtle way superior to the rest of creation. It is best if he can be shown to be
necessarily superior, for then there is no danger of him losing his commanding
position. The popularity of the theological argument is clearly connected with
this feeling. It is likely to be quite strong in intellectual people, since they value
the power of thinking more highly than others, and are more inclined to base
their belief in the superiority of Man on this power.

I do not think that this argument is sufficiently substantial to require
refutation. Consolation would be more appropriate: perhaps this should be sought in
the transmigration of souls.

(3) The Mathematical Objection. There are a number of results of
mathematical logic which can be used to show that there are limitations to the powers
of discrete-state machines. The best known of these results is known as Gödel’s
theorem,³ and shows that in any sufficiently powerful logical system statements
can be formulated which can neither be proved nor disproved within the system,
unless possibly the system itself is inconsistent. There are other, in some respects
similar, results due to Church, Kleene, Rosser, and Turing. The latter result is the
most convenient to consider, since it refers directly to machines, whereas the
others can only be used in a comparatively indirect argument: for instance if
Gödel’s theorem is to be used we need in addition to have some means of
describing logical systems in terms of machines, and machines in terms of logical
systems. The result in question refers to a type of machine which is essentially a
digital computer with an infinite capacity. It states that there are certain things
that such a machine cannot do. If it is rigged up to give answers to questions as in
the imitation game, there will be some questions to which it will either give a
wrong answer, or fail to give an answer at all however much time is allowed for a
reply. There may, of course, be many such questions, and questions which cannot
be answered by one machine may be satisfactorily answered by another. We are of
course supposing for the present that the questions are of the kind to which an
answer ‘Yes’ or ‘No’ is appropriate, rather than questions such as ‘What do you
think of Picasso?’ The questions that we know the machines must fail on are of
this type, ‘‘Consider the machine specified as follows . . . Will this machine ever
answer ‘Yes’ to any question?’’ The dots are to be replaced by a description of
some machine in a standard form, which could be something like that used in §5.
When the machine described bears a certain comparatively simple relation to the
machine which is under interrogation, it can be shown that the answer is either
wrong or not forthcoming. This is the mathematical result: it is argued that it
proves a disability of machines to which the human intellect is not subject.
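
Editor’s sketch. In modern terms the construction behind this result runs as follows; the names are ours, and the fragment shows only the shape of the argument.

    def answers_yes(description: str) -> bool:
        """Supposed decider for the critical question: will the machine
        described ever answer 'Yes' to any question? The construction below
        shows no correct, always-answering programme of this kind can exist."""
        raise NotImplementedError

    def contrary(description: str) -> str:
        """A machine built from the supposed decider and arranged to falsify
        whatever the decider predicts about the machine described."""
        if answers_yes(description):
            return 'No'      # predicted to answer Yes: never do so
        return 'Yes'         # predicted never to answer Yes: do so at once

    # Asking the decider about `contrary` applied to its own description (the
    # 'comparatively simple relation' of the text) forces its answer to be
    # either wrong or not forthcoming.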

The short answer to this argument is that although it is established that there
are limitations to the powers of any particular machine, it has only been stated,
without any sort of proof, that no such limitations apply to the human intellect.
But I do not think this view can be dismissed quite so lightly. Whenever one of
these machines is asked the appropriate critical question, and gives a definite
answer, we know that this answer must be wrong, and this gives us a certain
feeling of superiority. Is this feeling illusory? It is no doubt quite genuine, but
I do not think too much importance should be attached to it. We too often give
wrong answers to questions ourselves to be justified in being very pleased at such
evidence of fallibility on the part of the machines. Further, our superiority can
only be felt on such an occasion in relation to the one machine over which we
have scored our petty triumph. There would be no question of triumphing
simultaneously over all machines. In short, then, there might be men cleverer
than any given machine, but then again there might be other machines cleverer
again, and so on.

³ Author’s names in italics refer to the Bibliography.

Those who hold to the mathematical argument would, I think, mostly
be willing to accept the imitation game as a basis for discussion. Those who
believe in the two previous objections would probably not be interested in any
criteria.

(4) The Argument from Consciousness. This argument is very well expressed
in Professor Jefferson’s Lister Oration for 1949, from which I quote. ‘‘Not until a
machine can write a sonnet or compose a concerto because of thoughts and
emotions felt, and not by the chance fall of symbols, could we agree that machine
equals brain—that is, not only write it but know that it had written it. No
mechanism could feel (and not merely artificially signal, an easy contrivance)
pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made
miserable by its mistakes, be charmed by sex, be angry or depressed when it
cannot get what it wants.’’


This argument appears to be a denial of the validity of our test. According to
the most extreme form of this view the only way by which one could be sure that
a machine thinks is to be the machine and to feel oneself thinking. One could
then describe these feelings to the world, but of course no one would be justified
in taking any notice. Likewise according to this view the only way to know that a
man thinks is to be that particular man. It is in fact the solipsist point of view. It
may be the most logical view to hold but it makes communication of ideas
difficult. A is liable to believe ‘A thinks but B does not’ whilst B believes ‘B thinks
but A does not’. Instead of arguing continually over this point it is usual to have
the polite convention that everyone thinks.

I am sure that Professor Jefferson does not wish to adopt the extreme and
solipsist point of view. Probably he would be quite willing to accept the imitation
game as a test. The game (with the player B omitted) is frequently used in
practice under the name of viva voce to discover whether some one really
understands something or has ‘learnt it parrot fashion’. Let us listen in to a
part of such a viva voce:

Interrogator: In the first line of your sonnet which reads ‘Shall I compare
thee to a summer’s day’, would not ‘a spring day’ do as well or better?

Witness: It wouldn’t scan.
Interrogator: How about ‘a winter’s day’. That would scan all right.
Witness: Yes, but nobody wants to be compared to a winter’s day.
Interrogator: Would you say Mr. Pickwick reminded you of Christmas?
Witness: In a way.
Interrogator: Yet Christmas is a winter’s day, and I do not think
Mr. Pickwick would mind the comparison.
Witness: I don’t think you’re serious. By a winter’s day one means a
typical winter’s day, rather than a special one like Christmas.

And so on. What would Professor Jefferson say if the sonnet-writing machine
was able to answer like this in the viva voce? I do not know whether he would
regard the machine as ‘merely artificially signalling’ these answers, but if
the answers were as satisfactory and sustained as in the above passage I do
not think he would describe it as ‘an easy contrivance’. This phrase is, I
think, intended to cover such devices as the inclusion in the machine of a record
of someone reading a sonnet, with appropriate switching to turn it on from time
to time.

In short then, I think that most of those who support the argument from
consciousness could be persuaded to abandon it rather than be forced into the
solipsist position. They will then probably be willing to accept our test.

I do not wish to give the impression that I think there is no mystery about
consciousness. There is, for instance, something of a paradox connected with any
attempt to localise it. But I do not think these mysteries necessarily need to be


solved before we can answer the question with which we are concerned in this
paper.

(5) Arguments from Various Disabilities. These arguments take the form, ‘‘I
grant you that you can make machines do all the things you have mentioned but
you will never be able to make one to do X’’. Numerous features X are suggested
in this connexion. I offer a selection:

Be kind, resourceful, beautiful, friendly (p. [454]), have initiative, have a sense of humour,
tell right from wrong, make mistakes (p. [454]), fall in love, enjoy strawberries and cream
(p. [453]), make some one fall in love with it, learn from experience (pp. [460]f.), use
words properly, be the subject of its own thought (pp. [454–5]), have as much diversity of
behaviour as a man, do something really new (pp. [455–6]). (Some of these disabilities are
given special consideration as indicated by the page numbers.)

No support is usually offered for these statements. I believe they are mostly
founded on the principle of scientific induction. A man has seen thousands of
machines in his lifetime. From what he sees of them he draws a number of
general conclusions. They are ugly, each is designed for a very limited purpose,
when required for a minutely different purpose they are useless, the variety of
behaviour of any one of them is very small, etc., etc. Naturally he concludes that
these are necessary properties of machines in general. Many of these limitations
are associated with the very small storage capacity of most machines. (I am
assuming that the idea of storage capacity is extended in some way to cover
machines other than discrete-state machines. The exact definition does not
matter as no mathematical accuracy is claimed in the present discussion.) A
few years ago, when very little had been heard of digital computers, it was
possible to elicit much incredulity concerning them, if one mentioned their
properties without describing their construction. That was presumably due to
a similar application of the principle of scientific induction. These applications
of the principle are of course largely unconscious. When a burnt child fears the
fire and shows that he fears it by avoiding it, I should say that he was applying
scientific induction. (I could of course also describe his behaviour in many other
ways.) The works and customs of mankind do not seem to be very suitable
material to which to apply scientific induction. A very large part of space-time
must be investigated, if reliable results are to be obtained. Otherwise we may (as
most English children do) decide that everybody speaks English, and that it is
silly to learn French.

There are, however, special remarks to be made about many of the disabilities
that have been mentioned. The inability to enjoy strawberries and cream may
have struck the reader as frivolous. Possibly a machine might be made to enjoy
this delicious dish, but any attempt to make one do so would be idiotic. What is
important about this disability is that it contributes to some of the other
disabilities, e.g. to the difficulty of the same kind of friendliness occurring
between man and machine as between white man and white man, or between
black man and black man.

The claim that ‘‘machines cannot make mistakes’’ seems a curious one. One is
tempted to retort, ‘‘Are they any the worse for that?’’ But let us adopt a more
sympathetic attitude, and try to see what is really meant. I think this criticism
can be explained in terms of the imitation game. It is claimed that the
interrogator could distinguish the machine from the man simply by setting them a
number of problems in arithmetic. The machine would be unmasked because of
its deadly accuracy. The reply to this is simple. The machine (programmed for
playing the game) would not attempt to give the right answers to the arithmetic
problems. It would deliberately introduce mistakes in a manner calculated to
confuse the interrogator. A mechanical fault would probably show itself through
an unsuitable decision as to what sort of a mistake to make in the arithmetic.
Even this interpretation of the criticism is not sufficiently sympathetic. But we
cannot afford the space to go into it much further. It seems to me that this
criticism depends on a confusion between two kinds of mistake. We may call
them ‘errors of functioning’ and ‘errors of conclusion’. Errors of functioning
are due to some mechanical or electrical fault which causes the machine to
behave otherwise than it was designed to do. In philosophical discussions one
likes to ignore the possibility of such errors; one is therefore discussing ‘abstract
machines’. These abstract machines are mathematical fictions rather than
physical objects. By definition they are incapable of errors of functioning. In this
sense we can truly say that ‘machines can never make mistakes’. Errors of
conclusion can only arise when some meaning is attached to the output signals
from the machine. The machine might, for instance, type out mathematical
equations, or sentences in English. When a false proposition is typed we say
that the machine has committed an error of conclusion. There is clearly no
reason at all for saying that a machine cannot make this kind of mistake. It
might do nothing but type out repeatedly ‘0 = 1’. To take a less perverse
example, it might have some method for drawing conclusions by scientific
induction. We must expect such a method to lead occasionally to erroneous
results.

The claim that a machine cannot be the subject of its own thought can
of course only be answered if it can be shown that the machine has some
thought with some subject matter. Nevertheless, ‘the subject matter of a
machine’s operations’ does seem to mean something, at least to the people who deal
with it. If, for instance, the machine was trying to find a solution of the equation
x^2 − 40x − 11 = 0 one would be tempted to describe this equation as part of the
machine’s subject matter at that moment. In this sort of sense a machine
undoubtedly can be its own subject matter. It may be used to help in making
up its own programmes, or to predict the effect of alterations in its own
structure. By observing the results of its own behaviour it can modify its own
programmes so as to achieve some purpose more effectively. These are
possibilities of the near future, rather than Utopian dreams.

The criticism that a machine cannot have much diversity of behaviour is just a
way of saying that it cannot have much storage capacity. Until fairly recently a
storage capacity of even a thousand digits was very rare.

The criticisms that we are considering here are often disguised forms of the
argument from consciousness. Usually if one maintains that a machine can do
one of these things, and describes the kind of method that the machine could
use, one will not make much of an impression. It is thought that the method
(whatever it may be, for it must be mechanical) is really rather base. Compare the
parenthesis in Jefferson’s statement quoted on p. [451].

(6) Lady Lovelace’s Objection. Our most detailed information of Babbage’s
Analytical Engine comes from a memoir by Lady Lovelace. In it she states, ‘‘The
Analytical Engine has no pretensions to originate anything. It can do whatever we
know how to order it to perform’’ (her italics). This statement is quoted by
Hartree (p. 70) who adds: ‘‘This does not imply that it may not be possible to
construct electronic equipment which will ‘think for itself’, or in which, in
biological terms, one could set up a conditioned reflex, which would serve as a
basis for ‘learning’. Whether this is possible in principle or not is a stimulating
and exciting question, suggested by some of these recent developments. But it
did not seem that the machines constructed or projected at the time had this
property.’’

I am in thorough agreement with Hartree over this. It will be noticed that he
does not assert that the machines in question had not got the property, but
rather that the evidence available to Lady Lovelace did not encourage her to
believe that they had it. It is quite possible that the machines in question had in a
sense got this property. For suppose that some discrete-state machine has the
property. The Analytical Engine was a universal digital computer, so that, if its
storage capacity and speed were adequate, it could by suitable programming be
made to mimic the machine in question. Probably this argument did not occur
to the Countess or to Babbage. In any case there was no obligation on them to
claim all that could be claimed.

This whole question will be considered again under the heading of learning
machines.

A variant of Lady Lovelace’s objection states that a machine can ‘never
do anything really new’. This may be parried for a moment with the saw,
‘There is nothing new under the sun’. Who can be certain that ‘original work’
that he has done was not simply the growth of the seed planted in him
by teaching, or the effect of following well-known general principles. A better
variant of the objection says that a machine can never ‘take us by surprise’.
This statement is a more direct challenge and can be met directly. Machines
take me by surprise with great frequency. This is largely because I do not do
sufficient calculation to decide what to expect them to do, or rather because,
although I do a calculation, I do it in a hurried, slipshod fashion, taking risks.
although I do a calculation, I do it in a hurried, slipshod fashion, taking risks.
Perhaps I say to myself, ‘I suppose the voltage here ought to be the same as there:
anyway let’s assume it is’. Naturally I am often wrong, and the result is a surprise
for me for by the time the experiment is done these assumptions have been
forgotten. These admissions lay me open to lectures on the subject of my vicious
ways, but do not throw any doubt on my credibility when I testify to the
surprises I experience.

I do not expect this reply to silence my critic. He will probably say that such
surprises are due to some creative mental act on my part, and reflect no credit on
the machine. This leads us back to the argument from consciousness, and far
from the idea of surprise. It is a line of argument we must consider closed, but it
is perhaps worth remarking that the appreciation of something as surprising
requires as much of a ‘creative mental act’ whether the surprising event
originates from a man, a book, a machine or anything else.

The view that machines cannot give rise to surprises is due, I believe, to a
fallacy to which philosophers and mathematicians are particularly subject. This
is the assumption that as soon as a fact is presented to a mind all consequences
of that fact spring into the mind simultaneously with it. It is a very useful
assumption under many circumstances, but one too easily forgets that it is
false. A natural consequence of doing so is that one then assumes that there is
no virtue in the mere working out of consequences from data and general
principles.

(7) Argument from Continuity in the Nervous System. The nervous system is
certainly not a discrete-state machine. A small error in the information about the
size of a nervous impulse impinging on a neuron, may make a large difference to
the size of the outgoing impulse. It may be argued that, this being so, one cannot
expect to be able to mimic the behaviour of the nervous system with a discrete-
state system.

It is true that a discrete-state machine must be different from a continuous
machine. But if we adhere to the conditions of the imitation game, the
interrogator will not be able to take any advantage of this difference. The situation can
be made clearer if we consider some other simpler continuous machine.
A differential analyser will do very well. (A differential analyser is a certain
kind of machine not of the discrete-state type used for some kinds of
calculation.) Some of these provide their answers in a typed form, and so are suitable
for taking part in the game. It would not be possible for a digital computer to
predict exactly what answers the differential analyser would give to a problem,
but it would be quite capable of giving the right sort of answer. For instance, if
asked to give the value of π (actually about 3.1416) it would be reasonable to
choose at random between the values 3.12, 3.13, 3.14, 3.15, 3.16 with the
probabilities of 0.05, 0.15, 0.55, 0.19, 0.06 (say). Under these circumstances it
would be very difficult for the interrogator to distinguish the differential analyser
from the digital computer.
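
Editor’s sketch. The reply policy just described is a single sampling step, using exactly the values and probabilities given in the text.

    import random

    VALUES = [3.12, 3.13, 3.14, 3.15, 3.16]
    WEIGHTS = [0.05, 0.15, 0.55, 0.19, 0.06]

    def imitate_differential_analyser():
        """Answer 'what is the value of pi?' with the right sort of spread,
        rather than with the machine's own deadly accurate digits."""
        return random.choices(VALUES, weights=WEIGHTS, k=1)[0]

    print(imitate_differential_analyser())      # most often 3.14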

(8) The Argument from Informality of Behaviour. It is not possible to
produce a set of rules purporting to describe what a man should do in every
conceivable set of circumstances. One might for instance have a rule that one
is to stop when one sees a red traffic light, and to go if one sees a green one, but
what if by some fault both appear together? One may perhaps decide that it is
safest to stop. But some further difficulty may well arise from this decision later.
To attempt to provide rules of conduct to cover every eventuality, even those
arising from traffic lights, appears to be impossible. With all this I agree.

From this it is argued that we cannot be machines. I shall try to reproduce the
argument, but I fear I shall hardly do it justice. It seems to run something like this.
‘If each man had a definite set of rules of conduct by which he regulated his life he
would be no better than a machine. But there are no such rules, so men cannot be
machines.’ The undistributed middle is glaring. I do not think the argument is
ever put quite like this, but I believe this is the argument used nevertheless. There
may however be a certain confusion between ‘rules of conduct’ and ‘laws of
behaviour’ to cloud the issue. By ‘rules of conduct’ I mean precepts such as ‘Stop
if you see red lights’, on which one can act, and of which one can be conscious. By
‘laws of behaviour’ I mean laws of nature as applied to a man’s body such as ‘if
you pinch him he will squeak’. If we substitute ‘laws of behaviour which regulate
his life’ for ‘laws of conduct by which he regulates his life’ in the argument quoted
the undistributed middle is no longer insuperable. For we believe that it is not
only true that being regulated by laws of behaviour implies being some sort of
machine (though not necessarily a discrete-state machine), but that conversely
being such a machine implies being regulated by such laws. However, we cannot
so easily convince ourselves of the absence of complete laws of behaviour as of
complete rules of conduct. The only way we know of for finding such laws is
scientific observation, and we certainly know of no circumstances under which
we could say, ‘We have searched enough. There are no such laws.’

We can demonstrate more forcibly that any such statement would be
unjustified. For suppose we could be sure of finding such laws if they existed. Then given
a discrete-state machine it should certainly be possible to discover by observation
sufficient about it to predict its future behaviour, and this within a reasonable
time, say a thousand years. But this does not seem to be the case. I have set up on
the Manchester computer a small programme using only 1000 units of storage,
whereby the machine supplied with one sixteen figure number replies with
another within two seconds. I would defy anyone to learn from these
replies sufficient about the programme to be able to predict any replies to untried
values.
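
Editor’s sketch. Turing does not say what his programme computed. The stand-in below is entirely our own, chosen only to be small, quick, and hard to reverse from a handful of tried values, in the spirit of the challenge.

    MOD = 10**16                      # sixteen-figure numbers

    def reply(n: int) -> int:
        """Given one sixteen-figure number, reply with another."""
        for _ in range(7):            # a few rounds of multiply-and-fold
            n = (n * n + 123456789) % MOD
        return n

    print(reply(3141592653589793))    # untried values give unobvious replies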

(9) The Argument from Extra-Sensory Perception. I assume that the reader is
familiar with the idea of extra-sensory perception, and the meaning of the four
items of it, viz. telepathy, clairvoyance, precognition and psycho-kinesis. These
disturbing phenomena seem to deny all our usual scientific ideas. How we
should like to discredit them! Unfortunately the statistical evidence, at least for
telepathy, is overwhelming. It is very difficult to rearrange one’s ideas so as to fit
these new facts in. Once one has accepted them it does not seem a very big step to
believe in ghosts and bogies. The idea that our bodies move simply according to
the known laws of physics, together with some others not yet discovered but
somewhat similar, would be one of the first to go.

This argument is to my mind quite a strong one. One can say in reply that
many scientific theories seem to remain workable in practice, in spite of clashing
with E.S.P.; that in fact one can get along very nicely if one forgets about it. This
is rather cold comfort, and one fears that thinking is just the kind of
phenomenon where E.S.P. may be especially relevant.

A more specific argument based on E.S.P. might run as follows: ‘‘Let us play the
imitation game, using as witnesses a man who is good as a telepathic receiver, and a
digital computer. The interrogator can ask such questions as ‘What suit does the
card in my right hand belong to?’ The man by telepathy or clairvoyance gives the
right answer 130 times out of 400 cards. The machine can only guess at random,
and perhaps gets 104 right, so the interrogator makes the right identification.’’
There is an interesting possibility which opens here. Suppose the digital computer
contains a random number generator. Then it will be natural to use this to decide
what answer to give. But then the random number generator will be subject to the
psycho-kinetic powers of the interrogator. Perhaps this psycho-kinesis might
cause the machine to guess right more often than would be expected on a
probability calculation, so that the interrogator might still be unable to make
the right identification. On the other hand, he might be able to guess right without
any questioning, by clairvoyance. With E.S.P. anything may happen.
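
Editor’s sketch. The figures in this hypothetical are not arbitrary: guessing one of four suits at random gives 100 expected hits in 400 trials, so 130 hits lies far outside chance while 104 does not.

    import math

    n, p = 400, 0.25
    mean = n * p                      # 100 expected hits by pure guessing
    sd = math.sqrt(n * p * (1 - p))   # about 8.66

    print((130 - mean) / sd)          # about 3.46 standard deviations out
    print((104 - mean) / sd)          # about 0.46: ordinary luck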

If telepathy is admitted it will be necessary to tighten our test up. The situation
could be regarded as analogous to that which would occur if the interrogator
were talking to himself and one of the competitors was listening with his ear to
the wall. To put the competitors into a ‘telepathy-proof room’ would satisfy all
requirements.

7. Learning Machines

The reader will have anticipated that I have no very convincing arguments of a
positive nature to support my views. If I had I should not have taken such
pains to point out the fallacies in contrary views. Such evidence as I have I shall
now give.

Let us return for a moment to Lady Lovelace’s objection, which stated that the
machine can only do what we tell it to do. One could say that a man can ‘inject’
an idea into the machine, and that it will respond to a certain extent and then
drop into quiescence, like a piano string struck by a hammer. Another simile
would be an atomic pile of less than critical size: an injected idea is to correspond
to a neutron entering the pile from without. Each such neutron will cause a
certain disturbance which eventually dies away. If, however, the size of the pile is
sufficiently increased, the disturbance caused by such an incoming neutron will
very likely go on and on increasing until the whole pile is destroyed. Is there a
corresponding phenomenon for minds, and is there one for machines? There
does seem to be one for the human mind. The majority of them seem to be
‘sub-critical’, i.e. to correspond in this analogy to piles of sub-critical size. An idea
presented to such a mind will on average give rise to less than one idea in reply.
A smallish proportion are super-critical. An idea presented to such a mind may
give rise to a whole ‘theory’ consisting of secondary, tertiary and more remote
ideas. Animal minds seem to be very definitely sub-critical. Adhering to this
analogy we ask, ‘Can a machine be made to be super-critical?’

The ‘skin of an onion’ analogy is also helpful. In considering the functions of
the mind or the brain we find certain operations which we can explain in purely
mechanical terms. This we say does not correspond to the real mind: it is a sort
of skin which we must strip off if we are to find the real mind. But then in what
remains we find a further skin to be stripped off, and so on. Proceeding in this
way do we ever come to the ‘real’ mind, or do we eventually come to the skin
which has nothing in it? In the latter case the whole mind is mechanical. (It
would not be a discrete-state machine however. We have discussed this.)

These last two paragraphs do not claim to be convincing arguments. They
should rather be described as ‘recitations tending to produce belief’.

The only really satisfactory support that can be given for the view expressed
at the beginning of §6, will be that provided by waiting for the end of the
century and then doing the experiment described. But what can we say in the
meantime? What steps should be taken now if the experiment is to be successful?

As I have explained, the problem is mainly one of programming. Advances in
engineering will have to be made too, but it seems unlikely that these will not be
adequate for the requirements. Estimates of the storage capacity of the brain vary
from 10^10 to 10^15 binary digits. I incline to the lower values and believe that only
a very small fraction is used for the higher types of thinking. Most of it is
probably used for the retention of visual impressions. I should be surprised if
more than 10^9 was required for satisfactory playing of the imitation game, at any
rate against a blind man. (Note—The capacity of the Encyclopaedia Britannica,
11th edition, is 2 × 10^9.) A storage capacity of 10^7 would be a very practicable
possibility even by present techniques. It is probably not necessary to increase the
speed of operations of the machines at all. Parts of modern machines which can
be regarded as analogues of nerve cells work about a thousand times faster than
the latter. This should provide a ‘margin of safety’ which could cover losses of
speed arising in many ways. Our problem then is to find out how to programme
these machines to play the game. At my present rate of working I produce about
a thousand digits of programme a day, so that about sixty workers, working
steadily through the fifty years might accomplish the job, if nothing went into the
waste-paper basket. Some more expeditious method seems desirable.
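
Editor’s sketch. The estimate checks out against the 10^9 storage figure suggested earlier:

    digits = 60 * 1000 * 365 * 50    # workers x digits per day x days in 50 years
    print(digits)                    # 1095000000: just over 10^9
    print(digits >= 10**9)           # True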

In the process of trying to imitate an adult human mind we are bound to think
a good deal about the process which has brought it to the state that it is in. We
may notice three components,

(a) The initial state of the mind, say at birth,
(b) The education to which it has been subjected,
(c) Other experience, not to be described as education, to which it has been
subjected.

Instead of trying to produce a programme to simulate the adult mind, why not
rather try to produce one which simulates the child’s? If this were then subjected
to an appropriate course of education one would obtain the adult brain.
Presumably the child-brain is something like a note-book as one buys it from the
stationers. Rather little mechanism, and lots of blank sheets. (Mechanism and
writing are from our point of view almost synonymous.) Our hope is that there
is so little mechanism in the child-brain that something like it can be easily
programmed. The amount of work in the education we can assume, as a first
approximation, to be much the same as for the human child.

We have thus divided our problem into two parts. The child-programme and
the education process. These two remain very closely connected. We cannot
expect to find a good child-machine at the first attempt. One must experiment
with teaching one such machine and see how well it learns. One can then try
another and see if it is better or worse. There is an obvious connection between
this process and evolution, by the identiWcations

Structure of the child machine = Hereditary material

Changes of the child machine = Mutations

Natural selection = Judgment of the experimenter

One may hope, however, that this process will be more expeditious than evolu-
tion. The survival of the fittest is a slow method for measuring advantages. The
experimenter, by the exercise of intelligence, should be able to speed it up.
Equally important is the fact that he is not restricted to random mutations. If
he can trace a cause for some weakness he can probably think of the kind of
mutation which will improve it.
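
A toy illustration of these identifications may help (entirely illustrative: the eight-bit 'structure' and the fitness function below are placeholders, not anything Turing specifies). The point of the sketch is the speed-up: the experimenter keeps an improvement at once, rather than waiting on survival statistics.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]     # stands in for 'behaviour that satisfies the judge'

def judge(structure):
    """The experimenter's judgment, playing the role of natural selection."""
    return sum(bit == goal for bit, goal in zip(structure, TARGET))

def mutate(structure):
    """A change of the child machine: a mutation of the hereditary material."""
    child = list(structure)
    child[random.randrange(len(child))] ^= 1
    return child

machine = [0] * len(TARGET)            # structure of the initial child machine
for _ in range(200):
    candidate = mutate(machine)
    if judge(candidate) >= judge(machine):   # keep improvements immediately
        machine = candidate

print(machine, judge(machine))         # usually reaches TARGET well within 200 trials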

It will not be possible to apply exactly the same teaching process to the
machine as to a normal child. It will not, for instance, be provided with legs,
so that it could not be asked to go out and fill the coal scuttle. Possibly it might
not have eyes. But however well these deficiencies might be overcome by clever
engineering, one could not send the creature to school without the other
children making excessive fun of it. It must be given some tuition. We need not
be too concerned about the legs, eyes, etc. The example of Miss Helen Keller
shows that education can take place provided that communication in both
directions between teacher and pupil can take place by some means or other.

We normally associate punishments and rewards with the teaching process.
Some simple child-machines can be constructed or programmed on this sort of
principle. The machine has to be so constructed that events which shortly
preceded the occurrence of a punishment-signal are unlikely to be repeated,
whereas a reward-signal increases the probability of repetition of the events
which led up to it. These definitions do not presuppose any feelings on the
part of the machine. I have done some experiments with one such child-
machine, and succeeded in teaching it a few things, but the teaching method
was too unorthodox for the experiment to be considered really successful.
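
Read as an update rule, the principle just described is easy to sketch (the action names and the doubling/halving factors below are illustrative choices, not Turing's construction):

import random

class ChildMachine:
    """Events preceding a punishment-signal become less likely to be repeated;
    a reward-signal makes them more likely. No feelings are presupposed."""

    def __init__(self, actions):
        self.weights = {a: 1.0 for a in actions}
        self.last = None

    def act(self):
        actions = list(self.weights)
        self.last = random.choices(actions,
                                   weights=[self.weights[a] for a in actions])[0]
        return self.last

    def reward(self):                  # the teacher's 'pleasure' signal
        self.weights[self.last] *= 2.0

    def punish(self):                  # the teacher's 'pain' signal
        self.weights[self.last] *= 0.5

machine = ChildMachine(["left", "right"])
for _ in range(50):                    # teach it to prefer 'right'
    machine.reward() if machine.act() == "right" else machine.punish()
print(machine.weights)                 # 'right' now greatly outweighs 'left'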

The use of punishments and rewards can at best be a part of the teaching
process. Roughly speaking, if the teacher has no other means of communicating
to the pupil, the amount of information which can reach him does not exceed
the total number of rewards and punishments applied. By the time a child has
learnt to repeat ‘Casabianca’ he would probably feel very sore indeed, if the text
could only be discovered by a 'Twenty Questions' technique, every 'no' taking
the form of a blow. It is necessary therefore to have some other ‘unemotional’
channels of communication. If these are available it is possible to teach a
machine by punishments and rewards to obey orders given in some language,
e.g. a symbolic language. These orders are to be transmitted through the ‘un-
emotional’ channels. The use of this language will diminish greatly the number
of punishments and rewards required.
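
The bound stated here can be made precise (the following is a gloss in modern notation, not Turing's own). If each reward or punishment conveys at most one binary distinction, then N signals convey at most N bits, while a text of n symbols from an alphabet of k characters carries about n log₂ k bits:

\[ I \le N \ \text{bits}, \qquad N \ge n \log_2 k . \]

For 'Casabianca', very roughly a thousand characters over a 27-symbol alphabet, that is upwards of four thousand signals; hence the soreness.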

Opinions may vary as to the complexity which is suitable in the child machine.
One might try to make it as simple as possible consistently with the general
principles. Alternatively one might have a complete system of logical inference
‘built in’.4 In the latter case the store would be largely occupied with deWnitions
and propositions. The propositions would have various kinds of status, e.g. well-
established facts, conjectures, mathematically proved theorems, statements given
by an authority, expressions having the logical form of proposition but not
belief-value. Certain propositions may be described as ‘imperatives.’ The ma-
chine should be so constructed that as soon as an imperative is classed as ‘well-
established’ the appropriate action automatically takes place. To illustrate this,
suppose the teacher says to the machine, ‘Do your homework now’. This may
cause ‘‘Teacher says ‘Do your homework now’ ’’ to be included amongst the well-
established facts. Another such fact might be, ‘‘Everything that teacher says is
true’’. Combining these may eventually lead to the imperative, ‘Do your home-
work now’, being included amongst the well-established facts, and this, by the

4 Or rather ‘programmed in’ for our child-machine will be programmed in a digital computer. But the
logical system will not have to be learnt.
construction of the machine, will mean that the homework actually gets started,
but the effect is very satisfactory. The processes of inference used by the machine
need not be such as would satisfy the most exacting logicians. There might
for instance be no hierarchy of types. But this need not mean that type fallacies
will occur, any more than we are bound to fall over unfenced cliffs. Suitable
imperatives (expressed within the systems, not forming part of the rules of
the system) such as 'Do not use a class unless it is a subclass of one which
has been mentioned by teacher' can have a similar effect to 'Do not go too near
the edge’.

The imperatives that can be obeyed by a machine that has no limbs are bound
to be of a rather intellectual character, as in the example (doing homework)
given above. Important amongst such imperatives will be ones which regulate the
order in which the rules of the logical system concerned are to be applied. For at
each stage when one is using a logical system, there is a very large number of
alternative steps, any of which one is permitted to apply, so far as obedience to
the rules of the logical system is concerned. These choices make the difference
between a brilliant and a footling reasoner, not the difference between a sound
and a fallacious one. Propositions leading to imperatives of this kind might be
"When Socrates is mentioned, use the syllogism in Barbara" or "If one method
has been proved to be quicker than another, do not use the slower method".
Some of these may be 'given by authority', but others may be produced by the
machine itself, e.g. by scientific induction.

The idea of a learning machine may appear paradoxical to some readers. How
can the rules of operation of the machine change? They should describe com-
pletely how the machine will react whatever its history might be, whatever changes
it might undergo. The rules are thus quite time-invariant. This is quite true. The
explanation of the paradox is that the rules which get changed in the learning
process are of a rather less pretentious kind, claiming only an ephemeral validity.
The reader may draw a parallel with the Constitution of the United States.

An important feature of a learning machine is that its teacher will often be
very largely ignorant of quite what is going on inside, although he may still be
able to some extent to predict his pupil’s behaviour. This should apply most
strongly to the later education of a machine arising from a child-machine of
well-tried design (or programme). This is in clear contrast with normal proced-
ure when using a machine to do computations: one’s object is then to have a
clear mental picture of the state of the machine at each moment in the compu-
tation. This object can only be achieved with a struggle. The view that ‘the
machine can only do what we know how to order it to do’5 appears strange in
face of this. Most of the programmes which we can put into the machine will
result in its doing something that we cannot make sense of at all, or which we

5 Compare Lady Lovelace’s statement (p. [455]), which does not contain the word ‘only’.
regard as completely random behaviour. Intelligent behaviour presumably con-
sists in a departure from the completely disciplined behaviour involved in
computation, but a rather slight one, which does not give rise to random
behaviour, or to pointless repetitive loops. Another important result of preparing
our machine for its part in the imitation game by a process of teaching and
learning is that ‘human fallibility’ is likely to be omitted6 in a rather natural way,
i.e. without special ‘coaching’. (The reader should reconcile this with the point of
view on p. [454].)7 Processes that are learnt do not produce a hundred per cent.
certainty of result; if they did they could not be unlearnt.

It is probably wise to include a random element in a learning machine (see
p. [445]). A random element is rather useful when we are searching for a solution
of some problem. Suppose for instance we wanted to find a number between 50
and 200 which was equal to the square of the sum of its digits, we might start at
51 then try 52 and go on until we got a number that worked. Alternatively we
might choose numbers at random until we got a good one. This method has the
advantage that it is unnecessary to keep track of the values that have been tried,
but the disadvantage that one may try the same one twice; this is not very
important, however, if there are several solutions. The systematic method has the
disadvantage that there may be an enormous block without any solutions in the
region which has to be investigated first. Now the learning process may be
regarded as a search for a form of behaviour which will satisfy the teacher (or
some other criterion). Since there is probably a very large number of satisfactory
solutions the random method seems to be better than the systematic. It should be
noticed that it is used in the analogous process of evolution. But there the
systematic method is not possible. How could one keep track of the different
genetical combinations that had been tried, so as to avoid trying them again?
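
Turing's search example runs directly. In the range given there is in fact exactly one solution, 81 (8 + 1 = 9, and 9² = 81), so here the random method's repeats cost a little; with several solutions they would matter less. A sketch of both methods:

import random

def is_solution(n):
    """n equals the square of the sum of its digits."""
    return n == sum(int(d) for d in str(n)) ** 2

# Systematic method: start at 51 and work upwards.
systematic = next(n for n in range(51, 200) if is_solution(n))

# Random method: no record of values tried, at the price of repeats.
guess = random.randint(51, 199)
while not is_solution(guess):
    guess = random.randint(51, 199)

print(systematic, guess)   # both give 81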

We may hope that machines will eventually compete with men in all purely
intellectual fields. But which are the best ones to start with? Even this is a difficult
decision. Many people think that a very abstract activity, like the playing of chess,
would be best. It can also be maintained that it is best to provide the machine
with the best sense organs that money can buy, and then teach it to understand
and speak English. This process could follow the normal teaching of a child.
Things would be pointed out and named, etc. Again I do not know what the
right answer is, but I think both approaches should be tried.

We can only see a short distance ahead, but we can see plenty there that needs
to be done.

6 Editor’s note. Presumably ‘omitted’ is a typographical error in Mind.
7 Editor’s note. The cross-reference in Mind is to ‘pp. 24, 25’. These are presumably pages of Turing’s
original typescript. The approximate position of the material is indicated by the fact that another un-
corrected cross-reference in Mind places Turing's quotation from Jefferson on p. 21 of the original
typescript.

Bibliography

Samuel Butler, Erewhon, London, 1865. Chapters 23, 24, 25, The Book of the Machines.
Alonzo Church, "An Unsolvable Problem of Elementary Number Theory", American Journal of Mathematics, 58 (1936), 345–363.
K. Gödel, "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I", Monatshefte für Mathematik und Physik (1931), 173–189.
D. R. Hartree, Calculating Instruments and Machines, New York, 1949.
S. C. Kleene, "General Recursive Functions of Natural Numbers", American Journal of Mathematics, 57 (1935), 153–173 and 219–244.
G. Jefferson, "The Mind of Mechanical Man". Lister Oration for 1949. British Medical Journal, vol. i (1949), 1105–1121.
Countess of Lovelace, "Translator's notes to an article on Babbage's Analytical Engine", Scientific Memoirs (ed. by R. Taylor), vol. 3 (1842), 691–731.
Bertrand Russell, History of Western Philosophy, London, 1940.
A. M. Turing, "On Computable Numbers, with an Application to the Entscheidungsproblem" [Chapter 1].

CHAPTER 12

Intelligent Machinery, A Heretical
Theory (c.1951)

Alan Turing

Introduction

Jack Copeland

The ’51 Society

Turing gave the presentation ‘Intelligent Machinery, A Heretical Theory’ on a
radio discussion programme called The ’51 Society. Named after the year in which
the programme first went to air, The '51 Society was produced by the BBC Home
Service at their Manchester studio and ran for several years.1 A presentation by the
week’s guest would be followed by a panel discussion. Regulars on the panel
included Max Newman, Professor of Mathematics at Manchester, the philosopher
Michael Polanyi, then Professor of Social Studies at Manchester, and the math-
ematician Peter Hilton, a younger member of Newman’s department at Manches-
ter who had worked with Turing and Newman at Bletchley Park.

Machine Learning

Turing’s target in ‘Intelligent Machinery, A Heretical Theory’ is the claim that
‘You cannot make a machine to think for you’ (p. 472). A common theme in his
writing is that if a machine is to be intelligent, then it will need to ‘learn by
experience’ (probably with some pre-selection, by an external educator, of the
experiences to which the machine will be subjected). The present article con-
tinues the discussion of machine learning begun in Chapters 10 and 11. Turing
remarks that the ‘human analogy alone’ suggests that a process of education
‘would in practice be an essential to the production of a reasonably intelligent
machine within a reasonably short space of time’ (p. 473). He emphasizes the

1 Peter Hilton in interview with Copeland (June 2001).
point, also made in Chapter 11, that one might ‘start from a comparatively
simple machine, and, by subjecting it to a suitable range of "experience" trans-
form it into one which was more elaborate, and was able to deal with a far greater
range of contingencies’ (p. 473).

Turing goes on to give some indication of how learning might be accom-
plished, introducing the idea of a machine’s building up what he calls ‘indexes of
experiences’ (p. 474). (This idea is not mentioned elsewhere in his writings.) An
example of an index of experiences is a list (ordered in some way) of situations in
which the machine has found itself, coupled with the action that was taken, and
the outcome, good or bad. The situations are described in terms of features.
Faced with a choice as to what to do next, the machine looks up features of its
present situation in whatever indexes it has. If this procedure affords more than
one candidate action, the machine selects between them by means of some rule,
possibly itself learned through experience. Turing very reasonably grounds his
belief that comparatively crude selection-rules will lead to satisfactory behaviour
in the fact that engineering problems are regularly solved by ‘the crudest rule of
thumb procedure . . . e.g. whether a function increases or decreases with one of its
variables’ (p. 474).

In response to the problem of how the educator is to indicate to the machine
whether a situation or outcome is a ‘favourable’ one or not, Turing returns to the
possibility of incorporating two ‘keys’ in the machine, which can be manipulated
by the educator, and which represent ‘pleasure’ and ‘pain’ (p. 474). This is an
idea that Turing discusses more fully in Chapter 10, where he considers adding
two input lines to a (modified) Turing machine, the pleasure (or reward) line
and the pain (or punishment) line. He calls the result a ‘P-type machine’
(‘P’ standing for ‘pleasure–pain’).2

Random Elements

Turing ends his discussion of machine learning with the suggestion that a
‘random element’ be incorporated in the machine (p. 475). This would, as he
says, result in the behaviour of the machine being by no means completely
determined by the experiences to which it was subjected (p. 475). The idea
that a random element be included in a learning machine appears elsewhere in
Turing’s discussions of machine intelligence. In Chapter 11 he says: ‘A random
element is rather useful when . . . searching for a solution of some problem’
(p. 463). He gives this example:

2 A detailed description of Turing’s P-type machines is given in B. J. Copeland and D. Proudfoot, ‘On
Alan Turing’s Anticipation of Connectionism’, Synthese, 108 (1996), 361–77 (reprinted in R. Chrisley (ed.),
Artificial Intelligence: Critical Concepts in Cognitive Science, ii: Symbolic AI (London: Routledge, 2000)).
Suppose for instance we wanted to find a number between 50 and 200 which was equal to
the square of the sum of its digits, we might start at 51 then try 52 and so on until we got
a number that worked. Alternatively we might choose numbers at random until we got a
good one.

Turing continues (p. 463):

The systematic method has the disadvantage that there may be an enormous block
without any solutions in the region which has to be investigated Wrst. Now the learning
process may be regarded as a search for a form of behaviour which will satisfy the teacher
(or some other criterion). Since there is probably a very large number of satisfactory
solutions the random method seems to be better than the systematic. It should be noticed
that it is used in the analogous process of evolution.

Turing’s discussion of ‘pleasure–pain systems’ in Chapter 10 also mentions
randomness (p. 425):

I will use this term [‘pleasure–pain’ system] to mean an unorganised machine of the
following general character: The conWgurations of the machine are described by two
expressions, which we may call the character-expression and the situation-expression.
The character and situation at any moment, together with the input signals, determine the
character and situation at the next moment. The character may be subject to some
random variation. Pleasure interference has a tendency to Wx the character i.e. towards
preventing it changing, whereas pain stimuli tend to disrupt the character, causing
features which had become Wxed to change, or to become again subject to random
variation.

The Mathematical Objection

In what are some of the most interesting remarks in ‘Intelligent Machinery, A
Heretical Theory’, Turing sketches and rebuts an argument against the possibility
of computing machines emulating the full intelligence of human beings. The
objection is stated as follows in Chapter 10 (pp. 410–11):

Recently the theorem of Gödel and related results . . . have shown that if one tries to use
machines for such purposes as determining the truth or falsity of mathematical theorems
and one is not willing to tolerate an occasional wrong result, then any given machine will
in some cases be unable to give an answer at all. On the other hand the human intelligence
seems to be able to find methods of ever-increasing power for dealing with such problems
‘transcending’ the methods available to machines.

In Chapter 11 he terms this the ‘Mathematical Objection’ (p. 450).
As Turing notes, the ‘related results’ include what he himself proved in ‘On

Computable Numbers’. The import of the satisfactoriness problem (explained in
‘Computable Numbers: A Guide’) is that no Turing machine can correctly
determine the truth or falsity of each statement of the form ‘such-and-such
Turing machine is circle-free’. Whichever Turing machine one chooses to ask,
there will be statements of this form for which the chosen machine either gives
no answer or gives the wrong answer (compare Chapter 11, pp. 450–1). (In
Chapter 3, Turing extends this result to his oracle machines: no oracle machine
can correctly determine the truth or falsity of each statement of the form ‘such-
and-such oracle machine is circle-free’ (pp. 156–7).)

Post formulated a version of the Mathematical Objection as early as 1921.3
However, the objection has become known over the years as the 'Gödel argu-
ment'. In 1961, in a famous article, the philosopher John Lucas claimed the Gödel
argument establishes that 'mechanism'—which Lucas characterizes as the view
that 'minds [can] be explained as machines'—is false.4 More recently, the
mathematical physicist Roger Penrose has endorsed a version of the Gödel
argument.5

Lucas was happy to assert, on the basis of the Mathematical Objection, that
'no scientific enquiry can ever exhaust the . . . human mind'.6 Not many who
admire the explanatory power of science would be happy to endorse this
conclusion. Penrose himself appears to hold that the mind can be explained in
ultimately physical terms. However, it is difficult to say what scientific conception
of the mind could be available to someone who endorses the Mathematical
Objection. This is because the objection, if sound, could be used equally well to
support the conclusion, not only that the mind is not a Turing machine, but also
that it is not any one of a very broad range of machines (which includes the
oracle machines). Given the enormous diversity of types of machine in this
range, it is an open question whether there is any scientific conception of the
mind that the Mathematical Objection (if sound) would not rule out.7

Penrose acknowledges that the objection applies not only to the view that the
mind is equivalent to a Turing machine but ‘much more generally’, saying: ‘No
doubt there are readers who believe that the last vestige of credibility of my
[version of the Gödel] argument has disappeared at this stage! I certainly should
not blame any reader for feeling this way.’8

So far, however, Penrose has not made it clear what scientific conception of the
mind can remain for one who endorses the argument, remarking only that, since

3 E. L. Post, ‘Absolutely Unsolvable Problems and Relatively Undecidable Propositions: Account of an
Anticipation’, in M. Davis (ed.), The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable
Problems and Computable Functions (New York: Raven, 1965), 417; see also 423.

4 J. R. Lucas, 'Minds, Machines and Gödel', Philosophy, 36 (1961), 112–27 (112).
5 See his The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics (Oxford:
Oxford University Press, 1989); 'Précis of The Emperor's New Mind: Concerning Computers, Minds, and the
Laws of Physics', Behavioral and Brain Sciences, 13 (1990), 643–55 and 692–705; Shadows of the Mind: A
Search for the Missing Science of Consciousness (Oxford: Oxford University Press, 1994); 'Beyond the
Doubting of a Shadow', Psyche, 2/23 (1996).
6 Lucas, 'Minds, Machines and Gödel', 127.
7 See B. J. Copeland, ‘Turing’s O-machines, Penrose, Searle, and the Brain’, Analysis, 58 (1998), 128–38.
8 Penrose, ‘Beyond the Doubting of a Shadow’, section 3.10, and Shadows of the Mind, 381.
the argument ‘can be applied in very general circumstances indeed’, the mind is
‘something very mysterious’.9

Turing’s Answer to the Mathematical Objection

In Chapter 11 Turing says (p. 451): ‘The short answer to this argument is that
although it is established that there are limitations to the powers of any particu-
lar machine, it has only been stated, without any sort of proof, that no such
limitations apply to the human intellect.’ This remark might appear to cut to the
heart of the matter. However, Turing expresses dissatisfaction with it, saying that
the Mathematical Objection cannot ‘be dismissed so lightly’. He goes on to
broach a further line of attack on the argument, pointing out that humans
‘often give wrong answers to questions’, and it is this line of attack that he
pursues in ‘Intelligent Machinery, A Heretical Theory’.

In the quotation from Chapter 10 given above, Turing notes that the Math-
ematical Objection rests on a proviso that the machine is not allowed to make
mistakes, and as he goes on to point out, ‘the condition that the machine must
not make mistakes . . . is not a requirement for intelligence’ (p. 411). In ‘Intelli-
gent Machinery, A Heretical Theory’ he suggests that the ‘danger of the math-
ematician making mistakes is an unavoidable corollary of his power of
sometimes hitting upon an entirely new method’ (p. 472). Turing envisages
machines also able to hit upon new methods: ‘My contention is that machines
can be constructed which will simulate the behaviour of the human mind very
closely. They will make mistakes at times, and at times they may make new and
very interesting statements.’

Turing makes a similar point in Chapter 9 (pp. 393–4):

[I]f a mathematician is confronted with such a problem [e.g. determining the truth or
falsity of statements of the form ‘p is provable in such-and-such system’—Ed.] he would
search around and find new methods of proof, so that he ought eventually to be able to
reach a decision about any given formula. . . . I would say that fair play must be given
to the machine. Instead of it sometimes giving no answer we could arrange that it
gives occasional wrong answers. But the human mathematician would likewise make
blunders when trying out new techniques. It is easy for us to regard these blunders as
not counting and give him another chance, but the machine would probably be allowed
no mercy.

The use of heuristic search carries with it the risk of the computer producing a
proportion of incorrect answers (see 'Artificial Intelligence'). This fact would
have been very familiar to Turing from his experience with the bombe. Probably
Turing was thinking of heuristic search when he wrote this, the earliest surviving

9 ‘Beyond the Doubting of a Shadow’, section 13.2.
statement of his views concerning machine intelligence, in ‘Proposed Electronic
Calculator’: ‘There are indications however that it is possible to make the
machine display intelligence at the risk of its making occasional serious mistakes.
By following up this aspect the machine could probably be made to play very
good chess.’10

In ‘Intelligent Machinery, A Heretical Theory’ Turing passes immediately from
his remarks on the Mathematical Objection to a discussion of machine learning.
This juxtaposition perhaps indicates that Turing’s view was this: it is the possi-
bility of a machine’s learning new methods and techniques that ultimately defeats
the Mathematical Objection. In the simplest possible case, the machine’s tutor—
a human mathematician—can just present the machine with a better method
whenever the machine produces an incorrect answer to a problem. This new
input in effect alters the machine's standard description, transforming it into a
different Turing machine (see 'Computable Numbers: A Guide'). Alternatively a
machine may itself be able to search around (albeit fallibly) for better methods.
The search might involve the use of a random element. As in the preceding case,
the standard description of the machine alters in consequence of the learning
process, as the machine overwrites its previous algorithm with a successor. (As
Turing says in Chapter 9: ‘What we want is a machine that can learn from
experience. The possibility of letting the machine alter its own instructions
provides the mechanism for this.’) Thus the learning machine may traverse the
space of what in one of his letters to Newman (Chapter 4, p. 215) Turing calls
'proof finding' machines. In the same letter Turing says:

One imagines different machines allowing different sets of proofs, and by choosing a
suitable machine one can approximate 'truth' by 'provability' better than with a less
suitable machine, and can in a sense approximate it as well as you please.

The learning machine successively mutates from one proof-finding Turing
machine into another, becoming capable of wider sets of proofs as new, more
powerful methods of proof are acquired.

The Future

Turing ends ‘Intelligent Machinery, A Heretical Theory’ with a vision of the
future, now hackneyed, in which intelligent computers ‘outstrip our feeble
powers’ and ‘take control’. There is more of the same in Chapter 13. No doubt
this is comic-strip stuV. Nevertheless, these images of Turing’s reveal his pro-
found grasp of the potential of the universal Turing machine at a time when the

10 ‘Proposed Electronic Calculator’, National Physical Laboratory, 1945, 16 (National Physical Laboratory
library; a digital facsimile of the original typescript is in The Turing Archive for the History of Computing
<www.AlanTuring.net/proposed_electronic_calculator> (page reference is to the original typescript)).
only computers in existence were minuscule, and none but the most straightfor-
ward of tasks had been successfully programmed.11

Further reading

Benacerraf, P., 'God, the Devil, and Gödel', Monist, 51 (1967), 9–32.
Copeland, B. J., 'Turing's O-machines, Penrose, Searle, and the Brain', Analysis, 58 (1998), 128–38.
Gandy, R., 'Human versus Mechanical Intelligence', in P. Millican and A. Clark (eds.), Machines and Thought: The Legacy of Alan Turing (Oxford: Clarendon Press, 1996).
Lucas, J. R., 'Minds, Machines and Gödel', Philosophy, 36 (1961), 112–27.
—— 'Minds, Machines and Gödel: A Retrospect', in P. Millican and A. Clark (eds.), Machines and Thought: The Legacy of Alan Turing (Oxford: Clarendon Press, 1996).
Penrose, R., Shadows of the Mind: A Search for the Missing Science of Consciousness (Oxford: Oxford University Press, 1994).
Piccinini, G., 'Alan Turing and the Mathematical Objection', Minds and Machines, 13 (2003), 23–48.

Provenance

The text that follows is from a typescript entitled ‘Intelligent Machinery, A
Heretical Theory’ and marked ‘Typist’s Typescript’.12

11 In this chapter Turing speaks of the ‘mechanic who has constructed the machine’. This is perhaps a
glimpse of Turing’s attitude toward Kilburn, Williams, and the other engineers who built the Manchester
computer. Kilburn himself was hardly less dismissive of the logicians’ contributions (for example in an
interview with Christopher Evans in 1976, ‘The Pioneers of Computing: An Oral History of Computing’
(London: Science Museum)).

12 The typescript is among the Turing Papers in the Modern Archive Centre, King’s College, Cambridge
(catalogue reference B 4). Turing’s mother Sara included the text of ‘Intelligent Machinery, A Heretical
Theory’ in her biography Alan M. Turing but unfortunately incorporated some errors (S. Turing, Alan M.
Turing (Cambridge: Heffer, 1959), 128–34.) The present edition first appeared in B. J. Copeland (ed.),
‘A Lecture and Two Radio Broadcasts on Machine Intelligence by Alan Turing’, in K. Furukawa, D. Michie,
and S. Muggleton (eds.), Machine Intelligence 15 (Oxford: Oxford University Press, 1999).

Intelligent Machinery, A Heretical Theory

‘You cannot make a machine to think for you.’ This is a commonplace that is
usually accepted without question. It will be the purpose of this paper to
question it.

Most machinery developed for commercial purposes is intended to carry out
some very specific job, and to carry it out with certainty and considerable speed.
Very often it does the same series of operations over and over again without any
variety. This fact about the actual machinery available is a powerful argument to
many in favour of the slogan quoted above. To a mathematical logician this
argument is not available, for it has been shown that there are machines
theoretically possible which will do something very close to thinking. They
will, for instance, test the validity of a formal proof in the system of Principia
Mathematica, or even tell of a formula of that system whether it is provable or
disprovable. In the case that the formula is neither provable nor disprovable such
a machine certainly does not behave in a very satisfactory manner, for it
continues to work indefinitely without producing any result at all, but this
cannot be regarded as very different from the reaction of the mathematicians,
who have for instance worked for hundreds of years on the question as to
whether Fermat's last theorem is true or not. For the case of machines of this
kind a more subtle argument is necessary. By Gödel's famous theorem, or some
similar argument, one can show that however the machine is constructed there
are bound to be cases where the machine fails to give an answer, but a
mathematician would be able to. On the other hand, the machine has certain
advantages over the mathematician. Whatever it does can be relied upon, assuming
no mechanical 'breakdown', whereas the mathematician makes a certain proportion
of mistakes. I believe that this danger of the mathematician making mistakes is
an unavoidable corollary of his power of sometimes hitting upon an entirely new
method. This seems to be confirmed by the well known fact that the most reliable
people will not usually hit upon really new methods.

My contention is that machines can be constructed which will simulate the
behaviour of the human mind very closely. They will make mistakes at times, and
at times they may make new and very interesting statements, and on the whole
the output of them will be worth attention to the same sort of extent as the
output of a human mind. The content of this statement lies in the greater
frequency expected for the true statements, and it cannot, I think, be given an
exact statement. It would not, for instance, be sufficient to say simply that the
machine will make any true statement sooner or later, for an example of such a
machine would be one which makes all possible statements sooner or later. We

Printed with the permission of the BBC and the Estate of Alan Turing.
know how to construct these, and as they would (probably) produce true and
false statements about equally frequently, their verdicts would be quite worthless.
It would be the actual reaction of the machine to circumstances that would prove
my contention, if indeed it can be proved at all.

Let us go rather more carefully into the nature of this 'proof'. It is clearly
possible to produce a machine which would give a very good account of itself for
any range of tests, if the machine were made sufficiently elaborate. However, this
again would hardly be considered an adequate proof. Such a machine would give
itself away by making the same sort of mistake over and over again, and being
quite unable to correct itself, or to be corrected by argument from outside. If the
machine were able in some way to ‘learn by experience’ it would be much more
impressive. If this were the case there seems to be no real reason why one should
not start from a comparatively simple machine, and, by subjecting it to a suitable
range of ‘experience’ transform it into one which was more elaborate, and was
able to deal with a far greater range of contingencies. This process could probably
be hastened by a suitable selection of the experiences to which it was subjected.
This might be called ‘education’. But here we have to be careful. It would be quite
easy to arrange the experiences in such a way that they automatically caused the
structure of the machine to build up into a previously intended form, and this
would obviously be a gross form of cheating, almost on a par with having a man
inside the machine. Here again the criterion as to what would be considered
reasonable in the way of ‘education’ cannot be put into mathematical terms, but
I suggest that the following would be adequate in practice. Let us suppose that it
is intended that the machine shall understand English, and that owing to its
having no hands or feet, and not needing to eat, nor desiring to smoke, it will
occupy its time mostly in playing games such as Chess and GO, and possibly
Bridge. The machine is provided with a typewriter keyboard on which any
remarks to it are typed, and it also types out any remarks that it wishes to
make. I suggest that the education of the machine should be entrusted to some
highly competent schoolmaster who is interested in the project but who is
forbidden any detailed knowledge of the inner workings of the machine. The
mechanic who has constructed the machine, however, is permitted to keep the
machine in running order, and if he suspects that the machine has been operat-
ing incorrectly may put it back to one of its previous positions and ask the
schoolmaster to repeat his lessons from that point on, but he may not take any
part in the teaching. Since this procedure would only serve to test the bona fides
of the mechanic, I need hardly say that it would not be adopted in the experi-
mental stages. As I see it, this education process would in practice be an essential
to the production of a reasonably intelligent machine within a reasonably short
space of time. The human analogy alone suggests this.

I may now give some indication of the way in which such a machine might be
expected to function. The machine would incorporate a memory. This does not
need very much explanation. It would simply be a list of all the statements that
had been made to it or by it, and all the moves it had made and the cards it had
played in its games. This would be listed in chronological order. Besides this
straightforward memory there would be a number of ‘indexes of experiences’. To
explain this idea I will suggest the form which one such index might possibly
take. It might be an alphabetical index of the words that had been used giving the
‘times’ at which they had been used, so that they could be looked up in the
memory. Another such index might contain patterns of men on parts of a GO
board that had occurred. At comparatively late stages of education the memory
might be extended to include important parts of the configuration of the
machine at each moment, or in other words it would begin to remember what
its thoughts had been. This would give rise to fruitful new forms of indexing.
New forms of index might be introduced on account of special features observed
in the indexes already used. The indexes would be used in this sort of way.
Whenever a choice has to be made as to what to do next, features of the present
situation are looked up in the indexes available, and the previous choice in the
similar situations, and the outcome, good or bad, is discovered. The new choice
is made accordingly. This raises a number of problems. If some of the indications
are favourable and some are unfavourable what is one to do? The answer to this
will probably diVer from machine to machine and will also vary with its degree of
education. At first probably some quite crude rule will suffice, e.g. to do
whichever has the greatest number of votes in its favour. At a very late stage of
education the whole question of procedure in such cases will probably have been
investigated by the machine itself, by means of some kind of index, and this may
result in some highly sophisticated, and, one hopes, highly satisfactory, form of
rule. It seems probable however that the comparatively crude forms of rule will
themselves be reasonably satisfactory, so that progress can on the whole be made
in spite of the crudeness of the choice [of] rules.1 This seems to be verified by the
fact that engineering problems are sometimes solved by the crudest rule of
thumb procedure which only deals with the most superficial aspects of the
problem, e.g. whether a function increases or decreases with one of its variables.
Another problem raised by this picture of the way behaviour is determined is the
idea of ‘favourable outcome’. Without some such idea, corresponding to the
‘pleasure principle’ of the psychologists, it is very diYcult to see how to proceed.
Certainly it would be most natural to introduce some such thing into the
machine. I suggest that there should be two keys which can be manipulated by
the schoolmaster, and which represent the ideas of pleasure and pain. At later
stages in education the machine would recognise certain other conditions as
desirable owing to their having been constantly associated in the past with
pleasure, and likewise certain others as undesirable. Certain expressions of

1 Editor’s note. Words enclosed in square brackets do not appear in the typescript.
anger on the part of the schoolmaster might, for instance, be recognised as so
ominous that they could never be overlooked, so that the schoolmaster would
find that it became unnecessary to 'apply the cane' any more.
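
The index-and-vote procedure described above can be sketched directly (features, choices, and outcome labels are placeholders; the choice rule is the 'crude rule' of doing whichever has the greatest number of votes in its favour):

from collections import defaultdict

index = defaultdict(list)      # feature of a situation -> [(choice, outcome), ...]

def record(features, choice, outcome):
    """File an experience under every feature of the situation."""
    for f in features:
        index[f].append((choice, outcome))

def choose(features, options):
    """Look up the present situation's features and vote on the options."""
    votes = dict.fromkeys(options, 0)
    for f in features:
        for choice, outcome in index[f]:
            if choice in votes:
                votes[choice] += 1 if outcome == "pleasure" else -1
    return max(options, key=votes.get)

record({"corner", "opponent near"}, "extend", "pain")
record({"corner"}, "defend", "pleasure")
print(choose({"corner", "opponent near"}, ["extend", "defend"]))   # -> defend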

To make further suggestions along these lines would perhaps be unfruitful at
this stage, as they are likely to consist of nothing more than an analysis of actual
methods of education applied to human children. There is, however, one feature
that I would like to suggest should be incorporated in the machines, and that is
a ‘random element’. Each machine should be supplied with a tape bearing a
random series of Wgures, e.g. 0 and 1 in equal quantities, and this series of Wgures
should be used in the choices made by the machine. This would result in the
behaviour of the machine not being by any means completely determined by the
experiences to which it was subjected, and would have some valuable uses when
one was experimenting with it. By faking the choices made one would be able to
control the development of the machine to some extent. One might, for instance,
insist on the choice made being a particular one at, say, 10 particular places, and
this would mean that about one machine in 1024 or more would develop to as
high a degree as the one which had been faked. This cannot very well be given an
accurate statement because of the subjective nature of the idea of ‘degree of
development’ to say nothing of the fact that the machine that had been faked
might have been also fortunate in its unfaked choices.
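
The figure 'one machine in 1024' is simply 2¹⁰: forcing ten binary choices singles out one course of development among 2¹⁰ equally likely ones. A quick empirical check (a sketch, with Python's pseudo-random generator standing in for the tape):

import random

def random_tape(n):
    """A tape bearing a random series of figures, 0 and 1 in equal quantities."""
    return [random.choice([0, 1]) for _ in range(n)]

forced = [1] * 10              # the ten faked choices
trials = 100_000
hits = sum(random_tape(10) == forced for _ in range(trials))
print(hits, "of", trials, "unfaked tapes; expected about", trials // 2**10)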

Let us now assume, for the sake of argument, that these machines are a
genuine possibility, and look at the consequences of constructing them. To do
so would of course meet with great opposition, unless we have advanced greatly
in religious toleration from the days of Galileo. There would be great opposition
from the intellectuals who were afraid of being put out of a job. It is probable
though that the intellectuals would be mistaken about this. There would be
plenty to do, [trying to understand what the machines were trying to say,]2 i.e.
in trying to keep one’s intelligence up to the standard set by the machines, for it
seems probable that once the machine thinking method had started, it would not
take long to outstrip our feeble powers. There would be no question of the
machines dying, and they would be able to converse with each other to sharpen
their wits. At some stage therefore we should have to expect the machines to take
control, in the way that is mentioned in Samuel Butler’s ‘Erewhon’.

2 Editor’s note. The words ‘trying to understand what the machines were trying to say,’ are handwritten
and are marked in the margin ‘Inserted from Turing’s Typescript’.

CHAPTER 13

Can Digital Computers Think? (1951)

Alan Turing

Introduction

Jack Copeland

The lecture ‘Can Digital Computers Think?’ was broadcast on BBC Radio on 15
May 1951, and was repeated on 3 July of that year.1 (Sara Turing relates that
Turing did not listen to the Wrst broadcast but did ‘pluck up courage’ to listen to
the repeat.2) Turing’s was the second lecture in a series with the general title
‘Automatic Calculating Machines’. Other speakers in the series included
Newman, D. R. Hartree, M. V. Wilkes, and F. C. Williams.3

Imitating the Brain

Turing’s principal aim in this lecture is to defend his view that ‘it is not altogether
unreasonable to describe digital computers as brains’, and he argues for the
proposition that ‘If any machine can appropriately be described as a brain,
then any digital computer can be so described’.

The lecture casts light upon Turing’s attitude towards talk of machines think-
ing. In Chapter 11 he says that in his view the question ‘Can machines think?’ is
‘too meaningless to deserve discussion’ (p. 449). However, in the present chapter
he makes liberal use of such phrases as ‘programm[ing] a machine . . . to think’
and ‘the attempt to make a thinking machine’. In one passage, Turing says
(p. 485): ‘our main problem [is] how to programme a machine to imitate a
brain, or as we might say more brieXy, if less accurately, to think.’ He shows the
same willingness to discuss the question ‘Can machines think?’ in Chapter 14.

Turing’s view is that a machine which imitates the intellectual behaviour of a
human brain can itself appropriately be described as a brain or as thinking. In

1 Alan M. Turing (Cambridge: Heffer, 1959), 102.
2 Ibid.
3 Letter from Maurice Wilkes to Copeland (9 July 1997).
Chapter 14, Turing emphasizes that it is only the intellectual behaviour of the
brain that need be considered (pp. 494–5): ‘To take an extreme case, we are not
interested in the fact that the brain has the consistency of cold porridge. We don’t
want to say ‘‘This machine’s quite hard, so it isn’t a brain, and so it can’t think.’’ ’

It is, of course, the ability of the machine to imitate the intellectual behaviour
of a human brain that is examined in the Turing test (Chapter 11). Thus: any
machine that plays the imitation game successfully can appropriately be de-
scribed as a brain or as thinking.

Freedom of the Will

This chapter contains one of the two discussions of free will occurring in Turing’s
mature writings, both of which are tantalizingly brief. (The early essay entitled
'Nature of Spirit', which possibly dates from Turing's undergraduate days, also
contains a discussion of free will. There Turing wrote: ‘the theory which held that
as eclipses etc. are predestined so were all our actions breaks down . . . We have a
will which is able to determine the action of the atoms probably in a small
portion of the brain, or possibly all over it.’4) The other discussion occurs in
Chapter 11, where Turing says (p. 445):

An interesting variant on the idea of a digital computer is a ‘digital computer with a
random element’. These have instructions involving the throwing of a die or some
equivalent electronic process . . . Sometimes such a machine is described as having free
will (though I would not use this phrase myself). It is not normally possible to determine
from observing a machine whether it has a random element, for a similar effect can be
produced by such devices as making the choices depend on the digits of the decimal for π.

Unfortunately, Turing does not expand on the remark ‘I would not use this
phrase myself.’ Possibly he means simply that the addition of a random element
to a computer is not in itself sufficient to warrant the attribution of free will.
Presumably one would at least need to add cognition and initiative as well before
the machine could reasonably be described as having free will (compare Chapter
10, pp. 424, 429–30). Alternatively, it is possible that Turing is objecting to the
term ‘free will’ itself, much as he objects elsewhere in Chapter 11 to the word
‘think’ (‘too meaningless to deserve discussion’).

Turing introduced this idea of a ‘digital computer with a random element’
more fully in Chapter 10 (p. 416):

It is possible to modify the above described types of discrete machines by allowing several
alternative operations to be applied at some points, the alternatives to be chosen by a
random process. Such a machine will be described as ‘partially random’. If we wish to say

4 A copy of ‘Nature of Spirit’ is among the Turing Papers in the Modern Archive Centre, King’s College,
Cambridge.
definitely that a machine is not of this kind we will describe it as 'determined'. Sometimes a
machine may be strictly speaking determined but appear superficially as if it were partially
random. This would occur if for instance the digits of the number π were used to determine
the choices of a partially random machine where previously a dice thrower or electronic
equivalent had been used. These machines are known as apparently partially random.
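
The distinction drawn in this passage is easily illustrated (a sketch; the use of Python's random module as the 'dice thrower' and of stored digits of π is purely illustrative):

import random

PI_DIGITS = "14159265358979323846264338327950288419716939937510"

def partially_random(options):
    """A genuine chance device decides: a 'dice thrower or electronic equivalent'."""
    return random.choice(options)

def apparently_partially_random(options, step):
    """Strictly determined -- the same step always yields the same choice --
    yet superficially random, the choices being fixed by the digits of pi."""
    return options[int(PI_DIGITS[step % len(PI_DIGITS)]) % len(options)]

for step in range(8):
    print(partially_random("AB"), apparently_partially_random("AB", step))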

Turing discusses partially random machines further in Chapter 12 and in
Chapter 9, where he mentions the possibility of including in the ACE a ‘random
element, some electronic roulette wheel’ (p. 391).

In ‘Can Digital Computers Think?’ Turing raises both the possibility that ‘the
feeling of free will which we all have is an illusion’ and the possibility that ‘we
really have got free will but yet there is no way of telling from our behaviour that
this is so’. In parallel, he raises the question whether the behaviour of the brain ‘is
in principle predictable by calculation’ (i.e. by Turing machine). In discussing
this possibility, he observes that we ‘certainly do not know how any such
calculation should be done’. He points out that, furthermore, some physicists
argue that no such prediction is even theoretically possible, ‘on account of the
indeterminacy principle in quantum mechanics’. Turing does not state his own
position on these issues in the course of the lecture. However, in an interview
given long after Turing’s death, Max Newman stated that Turing ‘had a deep-
seated conviction that the real brain has a "roulette wheel" somewhere in it.'5
This seems to indicate that Turing’s view was that the brain is a partially random
machine. Whether or not Turing would have asserted, on that basis, that we
‘really have got free will’ is not known.

Can Computers Think?

Turing’s overarching aim in the lecture is to answer the question posed by his
title. His strategy is to argue for the proposition mentioned above:

If any machine can appropriately be described as a brain, then any digital
computer can be so described.

His initial bald statement of his argument is (p. 483):

If now some particular machine can be described as a brain we have only to programme
our digital computer to imitate it and it will also be a brain. If it is accepted that real
brains, as found in animals, and in particular in men, are a sort of machine it will follow
that our digital computer suitably programmed, will behave like a brain.

Turing goes on to flesh out his argument in various ways, turning eventually to
the problem of free will (p. 484): 'There are still some difficulties. To behave like a

5 Newman in interview with Christopher Evans (‘The Pioneers of Computing: An Oral History of
Computing’ (London: Science Museum)).
brain seems to involve free will, but the behaviour of a digital computer, when it
has been programmed, is completely determined.’ Turing argues resourcefully
that, even if it is true that brains have free will, this in fact presents no difficulty
for his claim that a suitably programmed computer can imitate the brain
(pp. 484–5):

a machine which is to imitate a brain must appear to behave as if it had free will, and it
may well be asked how this is to be achieved. One possibility is to make its behaviour
depend on something like a roulette wheel or a supply of radium. . . . It is, however, not
really even necessary to do this. It is not difficult to design machines whose behaviour
appears quite random to anyone who does not know the details of their construction.

Such machines are ‘apparently partially random’ (p. 416). Examples of appar-
ently partially random machines are the German Enigma machine and the
Lorenz SZ 40 cipher machine (‘Tunny’). Since both these machines can be
simulated by a digital computer, an appropriately programmed digital computer
is apparently partially random. (In Chapter 11 Turing mentions having written a
programme for the Manchester computer that produced apparently partially
random behaviour. When given a number, the programme would reply with
a number. Turing said 'I would defy anyone to learn from these replies sufficient
about the programme to be able to predict any replies to untried values’
(p. 457).)

Apparently partially random machines imitate partially random machines. If
the brain is a partially random machine, an appropriately programmed digital
computer may nevertheless give a convincing imitation of a brain. The appear-
ance that this deterministic machine gives of possessing free will may be said
to be mere sham, but this will not aVect the machine’s ability to play the
imitation game successfully. And by Turing’s principle, above, any machine that
plays the imitation game successfully can appropriately be described as a brain.

The Church–Turing Thesis and Calculating Machines

In ‘Can Digital Computers Think?’ Turing puts foward a thesis that, while not
the same as the Church–Turing thesis (see Chapter 1 and ‘Computable Numbers:
A Guide’), is in eVect the result of replacing the term ‘human computer’ in the
Church–Turing thesis by ‘calculating machine’, and replacing ‘universal Turing
machine’ by ‘digital computer of suYcient speed and storage capacity’.

The Church–Turing thesis states that any work that can be carried out by a
human computer (i.e. by an obedient clerk working with pencil on paper in
accordance with an eVective procedure) can equally well be carried out by the
universal Turing machine. The present thesis (pp. 482–3) states that any work
that can be carried out by any calculating machine can equally well be carried out
by a digital computer of suYcient speed and storage capacity:

A digital computer is a universal machine in the sense that it can be made to replace any
machine of a certain very wide class. It will not replace a bulldozer or a steam-engine or a
telescope, but it will replace any rival design of calculating machine.

Newman wrote in the same vein a few years previously:

A universal machine is a single machine which, when provided with suitable instructions,
will perform any calculation that could be done by a specially constructed machine. No
real machine can be truly universal because its size is limited . . . but subject to this
limitation of size, the machines now being made in America and in this country will be
‘universal’—if they work at all; that is, they will do every kind of job that can be done by
special machines.6

If Turing were requested to clarify the notion of a 'calculating machine', he
would perhaps offer paradigm examples such as the Brunsviga (a popular desk
calculating machine), a differential analyser (an analogue computing device),
special-purpose electronic machines like Colossus and ENIAC, and so on (com-
pare Chapter 10, pp. 412–13). Or perhaps he would say, with greater generality,
that a calculating machine is any machine that duplicates the abilities of a human
mathematician working mechanically with paper and pencil, i.e. in accordance
with an effective ('rule of thumb') procedure. It was in this manner that he
explained the idea of an electronic computing machine in the opening paragraph
of his Programmers' Handbook: 'Electronic computers are intended to carry out
any definite rule of thumb process which could have been done by a human
operator working in a disciplined but unintelligent manner.'7

Turing’s remarks in Chapter 17 on the status of the Church–Turing thesis are
also relevant here (and see also the section ‘Normal Forms and the Church-
Turing Thesis’ in the introduction to Chapter 17).

Other Notable Features

Other features of note in the lecture include the continuation of the discussion of
‘Lady Lovelace’s dictum’, begun in Chapter 11, and Turing’s glorious analogy
comparing trying to programme a computer to behave like a brain with trying to
write a treatise about family life on Mars—and moreover with insufficient paper.
(Newman once remarked on the ‘comical but brilliantly apt analogies with which
he [Turing] explained his ideas’.8)

6 M. H. A. Newman, ‘General Principles of the Design of All-Purpose Computing Machines’, Proceedings
of the Royal Society of London, Series A, 195 (1948), 271–4 (271–2).

7 A. M. Turing, Programmers’ Handbook for Manchester Electronic Computer, University of Manchester
Computing Laboratory (1950), 1. A digital facsimile of the Programmers’ Handbook is available in The
Turing Archive for the History of Computing <www.AlanTuring.net/programmers_handbook>.

8 Newman writing in the Manchester Guardian, 11 June 1954.

Further reading

Copeland, B. J., Artificial Intelligence: A Philosophical Introduction (Oxford: Blackwell,
1993). Chapter 3: ‘Can a Machine Think?’; chapter 7: ‘Freedom’.

—— ‘Narrow versus Wide Mechanism: Including a Re-examination of Turing’s Views on
the Mind–Machine Issue’, Journal of Philosophy, 97 (2000), 5–32. Reprinted in
M. Scheutz, Computationalism: New Directions (Cambridge, Mass.: MIT Press, 2002).

Dennett, D. C., Elbow Room: The Varieties of Freewill Worth Wanting (Oxford: Clarendon
Press, 1984).

Simons, G., The Biology of Computer Life (Brighton: Harvester, 1985).

Provenance

The text that follows is from Turing’s typescript and incorporates corrections
made in his hand.9

9 The typescript is in the Modern Archive Centre, King’s College, Cambridge (catalogue reference B 5).
The present edition first appeared in B. J. Copeland (ed.), ‘A Lecture and Two Radio Broadcasts on Machine
Intelligence by Alan Turing’, in K. Furukawa, D. Michie and S. Muggleton (eds.), Machine Intelligence 15
(Oxford: Oxford University Press, 1999).

Can Digital Computers Think?

Printed with the permission of the BBC and the Estate of Alan Turing.

Digital computers have often been described as mechanical brains. Most scien-
tists probably regard this description as a mere newspaper stunt, but some do
not. One mathematician has expressed the opposite point of view to me rather
forcefully in the words ‘It is commonly said that these machines are not brains,
but you and I know that they are.’ In this talk I shall try to explain the ideas
behind the various possible points of view, though not altogether impartially.
I shall give most attention to the view which I hold myself, that it is not
altogether unreasonable to describe digital computers as brains. A different
point of view has already been put by Professor Hartree.

First we may consider the naive point of view of the man in the street. He
hears amazing accounts of what these machines can do: most of them apparently
involve intellectual feats of which he would be quite incapable. He can only
explain it by supposing that the machine is a sort of brain, though he may prefer
simply to disbelieve what he has heard.

The majority of scientists are contemptuous of this almost superstitious atti-
tude. They know something of the principles on which the machines are con-
structed and of the way in which they are used. Their outlook was well summed up
by Lady Lovelace over a hundred years ago, speaking of Babbage’s Analytical
Engine. She said, as Hartree has already quoted, ‘The Analytical Engine has no
pretensions whatever to originate anything. It can do whatever we know how to
order it to perform.’ This very well describes the way in which digital computers are
actually used at the present time, and in which they will probably mainly be used
for many years to come. For any one calculation the whole procedure that the
machine is to go through is planned out in advance by a mathematician. The less
doubt there is about what is going to happen the better the mathematician is
pleased. It is like planning a military operation. Under these circumstances it is fair
to say that the machine doesn’t originate anything.

There is however a third point of view, which I hold myself. I agree with Lady
Lovelace’s dictum as far as it goes, but I believe that its validity depends on
considering how digital computers are used rather than how they could be used.
In fact I believe that they could be used in such a manner that they could
appropriately be described as brains. I should also say that ‘If any machine can
appropriately be described as a brain, then any digital computer can be so
described.’

This last statement needs some explanation. It may appear rather startling, but
with some reservations it appears to be an inescapable fact. It can be shown to
follow from a characteristic property of digital computers, which I will call their
universality. A digital computer is a universal machine in the sense that it can be
made to replace any machine of a certain very wide class. It will not replace a
bulldozer or a steam-engine or a telescope, but it will replace any rival design of
calculating machine, that is to say any machine into which one can feed data and
which will later print out results. In order to arrange for our computer to imitate
a given machine it is only necessary to programme the computer to calculate
what the machine in question would do under given circumstances, and in
particular what answers it would print out. The computer can then be made to
print out the same answers.
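
An editorial aside may make this universality concrete. The sketch below is a
modern illustration, in no sense Turing’s own code: it treats a ‘calculating
machine’ as anything into which one can feed data and which will later print out
results, and a single routine then imitates any such machine by calculating what
it would print. The machine descriptions (‘doubler’, ‘running_total’) are invented
for the example.

# A hypothetical illustration of universality: one routine, 'imitate',
# reproduces the printed output of any machine in the class 'data in, results out'.
from typing import Callable, Sequence

Machine = Callable[[Sequence[int]], Sequence[int]]   # feed data in, get results out

def doubler(data: Sequence[int]) -> Sequence[int]:
    # One 'rival design' of calculating machine: prints each input doubled.
    return [2 * x for x in data]

def running_total(data: Sequence[int]) -> Sequence[int]:
    # Another rival design: prints the running total of its inputs.
    out, total = [], 0
    for x in data:
        total += x
        out.append(total)
    return out

def imitate(machine: Machine, data: Sequence[int]) -> None:
    # The universal step: calculate what the given machine would do under
    # the given circumstances, then print out the same answers.
    for answer in machine(data):
        print(answer)

imitate(doubler, [1, 2, 3])         # the one computer behaves as the first machine
imitate(running_total, [1, 2, 3])   # ...and now replaces the second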

If now some particular machine can be described as a brain we have only to
programme our digital computer to imitate it and it will also be a brain. If it is
accepted that real brains, as found in animals, and in particular in men, are a sort
of machine it will follow that our digital computer, suitably programmed, will
behave like a brain.

This argument involves several assumptions which can quite reasonably be
challenged. I have already explained that the machine to be imitated must be
more like a calculator than a bulldozer. This is merely a reflection of the fact that
we are speaking of mechanical analogues of brains, rather than of feet or jaws. It
was also necessary that this machine should be of the sort whose behaviour is in
principle predictable by calculation. We certainly do not know how any such
calculation should be done, and it was even argued by Sir Arthur Eddington that
on account of the indeterminacy principle in quantum mechanics no such
prediction is even theoretically possible.

Another assumption was that the storage capacity of the computer used
should be sufficient to carry out the prediction of the behaviour of the machine
to be imitated. It should also have sufficient speed. Our present computers
probably have not got the necessary storage capacity, though they may well
have the speed. This means in effect that if we wish to imitate anything so
complicated as the human brain we need a very much larger machine than any
of the computers at present available. We probably need something at least a
hundred times as large as the Manchester Computer. Alternatively of course a
machine of equal size or smaller would do if sufficient progress were made in the
technique of storing information.

It should be noticed that there is no need for there to be any increase in the
complexity of the computers used. If we try to imitate ever more complicated
machines or brains we must use larger and larger computers to do it. We do not
need to use successively more complicated ones. This may appear paradoxical,
but the explanation is not difficult. The imitation of a machine by a computer
requires not only that we should have made the computer, but that we should
have programmed it appropriately. The more complicated the machine to be
imitated the more complicated must the programme be.

This may perhaps be made clearer by an analogy. Suppose two men both
wanted to write their autobiographies, and that one had had an eventful life, but
very little had happened to the other. There would be two difficulties troubling
the man with the more eventful life more seriously than the other. He would have
to spend more on paper and he would have to take more trouble over thinking
what to say. The supply of paper would not be likely to be a serious difficulty,
unless for instance he were on a desert island, and in any case it could only be a
technical or a financial problem. The other difficulty would be more fundamen-
tal and would become more serious still if he were not writing his life but a work
on something he knew nothing about, let us say about family life on Mars. Our
problem of programming a computer to behave like a brain is something like
trying to write this treatise on a desert island. We cannot get the storage capacity
we need: in other words we cannot get enough paper to write the treatise on, and
in any case we don’t know what we should write down if we had it. This is a poor
state of affairs, but, to continue the analogy, it is something to know how to
write, and to appreciate the fact that most knowledge can be embodied in books.

In view of this it seems that the wisest ground on which to criticise the descrip-
tion of digital computers as ‘mechanical brains’ or ‘electronic brains’ is that,
although they might be programmed to behave like brains, we do not at present
know how this should be done. With this outlook I am in full agreement. It leaves
open the question as to whether we will or will not eventually succeed in finding
such a programme. I, personally, am inclined to believe that such a programme
will be found. I think it is probable for instance that at the end of the century it will
be possible to programme a machine to answer questions in such a way that it will
be extremely difficult to guess whether the answers are being given by a man or by
the machine. I am imagining something like a viva-voce examination, but with the
questions and answers all typewritten in order that we need not consider such
irrelevant matters as the faithfulness with which the human voice can be imitated.
This only represents my opinion; there is plenty of room for others.

There are still some difficulties. To behave like a brain seems to involve free
will, but the behaviour of a digital computer, when it has been programmed, is
completely determined. These two facts must somehow be reconciled, but to do
so seems to involve us in an age-old controversy, that of ‘free will and determin-
ism’. There are two ways out. It may be that the feeling of free will which we all
have is an illusion. Or it may be that we really have got free will, but yet there is
no way of telling from our behaviour that this is so. In the latter case, however
well a machine imitates a man’s behaviour it is to be regarded as a mere sham. I
do not know how we can ever decide between these alternatives but whichever is
the correct one it is certain that a machine which is to imitate a brain must
appear to behave as if it had free will, and it may well be asked how this is to be
achieved. One possibility is to make its behaviour depend on something like a
roulette wheel or a supply of radium. The behaviour of these may perhaps be
predictable, but if so, we do not know how to do the prediction.


It is, however, not really even necessary to do this. It is not difficult to design
machines whose behaviour appears quite random to anyone who does not know
the details of their construction. Naturally enough the inclusion of this random
element, whichever technique is used, does not solve our main problem, how to
programme a machine to imitate a brain, or as we might say more briefly, if less
accurately, to think. But it gives us some indication of what the process will be
like. We must not always expect to know what the computer is going to do. We
should be pleased when the machine surprises us, in rather the same way as one
is pleased when a pupil does something which he had not been explicitly taught
to do.
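
A brief editorial illustration, certainly not part of Turing’s text, shows how
simply such a machine can be arranged. The linear congruential generator below
is completely determined by its rule and its starting value, yet its output looks
patternless to anyone who does not know the details of its construction; the
constants are standard textbook ones, chosen here only for the example.

# A deterministic machine whose behaviour appears random to an observer
# who does not know the details of its construction.
def pseudo_random(seed: int):
    state = seed
    while True:
        state = (1103515245 * state + 12345) % 2**31   # fixed, knowable rule
        yield state % 100                              # emit a two-digit value

gen = pseudo_random(42)
print([next(gen) for _ in range(8)])   # looks patternless without the rule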

Let us now reconsider Lady Lovelace’s dictum. ‘The machine can do whatever
we know how to order it to perform.’ The sense of the rest of the passage is such
that one is tempted to say that the machine can only do what we know how to
order it to perform. But I think this would not be true. Certainly the machine can
only do what we do order it to perform, anything else would be a mechanical
fault. But there is no need to suppose that, when we give it its orders we know
what we are doing, what the consequences of these orders are going to be. One
does not need to be able to understand how these orders lead to the machine’s
subsequent behaviour, any more than one needs to understand the mechanism
of germination when one puts a seed in the ground. The plant comes up whether
one understands or not. If we give the machine a programme which results in its
doing something interesting which we had not anticipated I should be inclined
to say that the machine had originated something, rather than to claim that its
behaviour was implicit in the programme, and therefore that the originality lies
entirely with us.

I will not attempt to say much about how this process of ‘programming a
machine to think’ is to be done. The fact is that we know very little about it, and
very little research has yet been done. There are plentiful ideas, but we do not yet
know which of them are of importance. As in the detective stories, at the
beginning of the investigation any trifle may be of importance to the investigator.
When the problem has been solved, only the essential facts need to be told to the
jury. But at present we have nothing worth putting before a jury. I will only say
this, that I believe the process should bear a close relation to that of teaching.

I have tried to explain what are the main rational arguments for and against
the theory that machines could be made to think, but something should also be
said about the irrational arguments. Many people are extremely opposed to the
idea of a machine that thinks, but I do not believe that it is for any of the reasons
that I have given, or any other rational reason, but simply because they do not
like the idea. One can see many features which make it unpleasant. If a machine
can think, it might think more intelligently than we do, and then where should
we be? Even if we could keep the machines in a subservient position, for instance
by turning off the power at strategic moments, we should, as a species, feel
greatly humbled. A similar danger and humiliation threatens us from the possib-
ility that we might be superseded by the pig or the rat. This is a theoretical
possibility which is hardly controversial, but we have lived with pigs and rats for
so long without their intelligence much increasing, that we no longer trouble
ourselves about this possibility. We feel that if it is to happen at all it will not be
for several million years to come. But this new danger is much closer. If it comes
at all it will almost certainly be within the next millennium. It is remote but not
astronomically remote, and is certainly something which can give us anxiety.

It is customary, in a talk or article on this subject, to offer a grain of comfort,
in the form of a statement that some particularly human characteristic could
never be imitated by a machine. It might for instance be said that no machine
could write good English, or that it could not be influenced by sex-appeal or
smoke a pipe. I cannot offer any such comfort, for I believe that no such bounds
can be set. But I certainly hope and believe that no great efforts will be put into
making machines with the most distinctively human, but non-intellectual
characteristics such as the shape of the human body; it appears to me to be quite
futile to make such attempts and their results would have something like the
unpleasant quality of artificial flowers. Attempts to produce a thinking machine
seem to me to be in a different category. The whole thinking process is still rather
mysterious to us, but I believe that the attempt to make a thinking machine will
help us greatly in finding out how we think ourselves.

CHAPTER 14

Can Automatic Calculating Machines
Be Said To Think? (1952)

Alan Turing, Richard Braithwaite,
Geoffrey Jefferson, Max Newman

Introduction

Jack Copeland

This discussion between Turing, Newman, R. B. Braithwaite, and G. Jefferson was
recorded by the BBC on 10 January 1952 and broadcast on BBC Radio on the
14th, and again on the 23rd, of that month. This is the earliest known recorded
discussion of artificial intelligence.1

The Participants

The anchor man of the discussion is Richard Braithwaite (1900–90). Braithwaite
was at the time Sidgwick Lecturer in Moral Science at the University of Cam-
bridge, where the following year he was appointed Knightsbridge Professor of
Moral Philosophy. Like Turing, he was a Fellow of King’s College. Braithwaite’s
main work lay in the philosophy of science and in decision and games theory
(which he applied in moral philosophy).

Geoffrey Jefferson (1886–1961) retired from the Chair of Neurosurgery at
Manchester University in 1951. In his Lister Oration, delivered at the Royal
College of Surgeons of England on 9 June 1949, he had declared: ‘When we hear
it said that wireless valves think, we may despair of language.’2

1 The present material together with the dialogues recorded in Wittgenstein’s Lectures on the Foundations
of Mathematics (ed. C. Diamond, Ithaca, NY: Cornell University Press, 1976) are the only known transcriptions
of discussions involving Turing. (Some rather compressed notes of a discussion between Turing,
Newman, Young, Polanyi, and others, entitled ‘Rough Draft of the Discussion on the Mind and the
Computing Machine, held on Thursday, 27th October, 1949, in the Philosophy Seminar’ (anon., n.d.,
University of Manchester Philosophy Department), are in The Turing Archive for the History of Computing
<www.AlanTuring.net/philosophy_seminar_oct1949>. I am grateful to Wolfe Mays for making these notes
available.)

Turing gave a substantial discussion of Jefferson’s views in ‘Computing Machinery
and Intelligence’ (pp. 451–2), rebutting the ‘argument from consciousness’
that he found in the Lister Oration. In the present chapter, Jefferson takes
numerous pot shots at the notion of a machine thinking, which for the most part
Turing and Newman are easily able to turn aside.

Jefferson may have thought little of the idea of machine intelligence, but he
held Turing in considerable regard, saying after Turing’s death that he ‘had real
genius, it shone from him’.3

The Turing Test Revisited

From the point of view of Turing scholarship, the most important parts of ‘Can
Automatic Calculating Machines Be Said to Think’ are the passages containing
Turing’s exposition of the imitation game or Turing test. The description of the
test that Turing gave in ‘Computing Machinery and Intelligence’ is here modified
in a number of significant ways.

The lone interrogator of the original version is replaced by a ‘jury’ (p. 495).
Each jury must judge ‘quite a number of times’ and ‘sometimes they really are
dealing with a man and not a machine’. For a machine to pass the test, a
‘considerable proportion’ of the jury ‘must be taken in by the pretence’. The
members of the jury interrogate the contestants, but their contributions ‘don’t
really have to be questions, any more than questions in a law court are really
questions’; for example, ‘I put it to you that you are only pretending to be a man’
is ‘quite in order’.

In its original presentation (Chapter 11), the Turing test is a three-party game
involving the parallel interrogation by a human of a computer and a human foil.
According to the 1952 formulation, however, members of a jury interview a
series of contestants one at a time, some of the contestants being machines and
some humans. In the original form of the test, each interrogator knows that one
of each pair of interviewees is a human and one a machine, but in the single-
interviewee version of the test, this condition is necessarily absent. It appears that
the earlier formulation is in fact superior, since the single-interviewee version is
open to a biasing effect which disfavours the machine. Results of the Loebner
series of single-interviewee Turing tests reveal a strong propensity among jurors
to classify human respondents as machines. (New York businessman Hugh
Loebner started the annual Loebner Prize competition in 1991, offering the
sum of $100,000 to the programmer(s) of the first programme to pass the Turing
test. $2,000 is awarded each year for the best effort. So far the grand prize
remains unclaimed.) In the Loebner competition held at Dartmouth College,
New Hampshire, in January 2000, human respondents were mistaken for
computers on ten occasions, a computer for a human on none. The same effect was
present in a series of single-interviewee tests performed with Kenneth Colby’s
historic programme Parry.4 In a total of ten interviews, there were five
misidentifications; in four of these a human respondent was mistaken for a computer.
Presumably this phenomenon is the result of a determination on the part of the
jurors not to be fooled by a programme. This lengthening of the odds against the
machine cannot occur in the three-player form of the test.

2 G. Jefferson, ‘The Mind of Mechanical Man’, British Medical Journal, 25 June 1949, 1105–10 (1110).
3 Letter from Jefferson to Sara Turing, 18 Oct. 1954. The letter is among the Turing Papers in the Modern
Archive Centre, King’s College, Cambridge (catalogue reference A 16).

Turing’s Predictions

Turing is often misquoted as having predicted that, by the end of the twentieth
century, artificial intelligence indistinguishable from human intelligence would
be in existence. What he in fact wrote in 1950 was that (p. 449):

in about fifty years’ time it will be possible to programme computers . . . to make them
play the imitation game so well that an average interrogator will not have more than 70
per cent chance of making the right identification after five minutes of questioning.

Some commentators have reported this prediction the wrong way about, as the
claim that, by the end of the twentieth century, computers would succeed in
deceiving the interrogator 70 per cent of the time.5 Turing’s actual figure implies
the reverse balance: an interrogator with a 70 per cent chance of making the right
identification is deceived only about 30 per cent of the time.

In ‘Can Automatic Calculating Machines Be Said to Think’, Turing offered a
prediction that is interestingly different from the above, and which seems to
concern success of a more substantial nature (p. 495):

Newman: I should like to be there when your match between a man and a machine takes
place, and perhaps to try my hand at making up some of the questions. But that will be
a long time from now, if the machine is to stand any chance with no questions barred?

Turing: Oh yes, at least 100 years, I should say.

In Chapter 11, Turing describes a form of the imitation game in which the
interrogator must attempt to distinguish between a woman (the foil) and a man
pretending to be a woman. This form of the game serves as a reference point for
evaluating the computer’s performance in the computer-human imitation game
(p. 441):

4 J. F. Heiser, K. M. Colby, W. S. Faught and R. C. Parkison, ‘Can Psychiatrists Distinguish a Computer
Simulation of Paranoia from the Real Thing?’, Journal of Psychiatric Research, 15 (1980), 149–62.

5 See, for example, R. Brooks, ‘Intelligence without Reason’, in L. Steels and R. Brooks (eds.), The
Artificial Life Route to Artificial Intelligence (Mahwah, NJ: Erlbaum, 1995), n. 8, and P. Millican and A. Clark
(eds.), Machines and Thought: The Legacy of Alan Turing (Oxford: Oxford University Press, 1996), 61.


We now ask the question, ‘What will happen when a machine takes the part of A in this
[man-imitates-woman] game?’ Will the interrogator decide wrongly as often when the
game is played like this as he does when the game is played between a man and a woman?

Reformulating Turing’s 1952 prediction in these terms produces: It will be at
least 100 years (2052) before a computer is able to play the imitation game
sufficiently well so that jurors will decide wrongly as often in man-imitates-
woman imitation games as in computer-imitates-human imitation games, in
each case no questions being barred.

‘Fiendish Expert’ Objections to the Turing Test

‘Fiendish expert’ objections, of which there are many, are of the form: ‘An expert
could unmask the computer by asking it . . .’. All can be dealt with in a similar
fashion. (Some other objections to the test are discussed in the introduction to
Chapter 11.)

An interesting example is put forward by Robert French.6 French’s objection
involves the phenomenon of associative priming, observed in what psychologists
call the word/non-word recognition task. A subject seated in front of a screen is
presented for a brief time with what may or may not be a word (e.g. ‘dog’,
‘dok’). If the subject recognizes the letters as a word he or she must press a
button. The experimenter measures the time that the subject takes to respond. It
is found that, on average, subjects require less time to recognize a word if the
word is preceded by a brief presentation of an associated word (e.g. ‘fish’ may
facilitate the recognition of ‘chips’, ‘bread’ that of ‘butter’). French’s claim is that this
priming effect may be used to unmask the computer in the Turing test:

The Turing Test interrogator makes use of this phenomenon as follows. The day before the
Test, she selects a set of words (and non-words), runs the lexical decision task on the
interviewees and records average recognition times. She then comes to the Test armed
with the results . . . [and] identifies as the human being the candidate whose results more
closely resemble the average results produced by her sample population of interviewees.
The machine would invariably fail this type of test because there is no a priori way of
determining associative strengths . . . Virtually the only way a machine could determine,
even on average, all of the associative strengths between human concepts is to have
experienced the world as the human candidate and the interviewers had.7

Turing, however, was happy to rule out expert jurors. In ‘Can Automatic
Calculating Machines Be Said to Think’ he said that the interrogator ‘should
not be expert about machines’ (p. 495), and in Chapter 10, describing the chess-
player version of the test, he said that the discriminator should be a ‘rather poor’
chess player (p. 431). Turing did not mention other kinds of expert, but the
reasons for excluding experts about machines apply equally well to experts about
minds.

6 R. French, ‘Subcognition and the Limits of the Turing Test’, Mind, 99 (1990), 53–65.
7 Ibid. 17.

In any case, French’s proposal is illegitimate. The specifications of the Turing
test are clear: the interrogator is allowed only to put questions. There is no
provision for the use of the equipment necessary for administering the lexical
decision task and for measuring the contestants’ reaction times. One might as
well allow the interrogator to bring along equipment for measuring the contest-
ants’ magnetic fields or energy dissipation.8

Can Machines Think?

In Chapter 11, Turing said that the question ‘Can machines think?’ is ‘too
meaningless to deserve discussion’ (p. 449). He certainly did not allow this
view to prevent him from indulging rather often in such discussion. In the
present chapter, Turing records a considerably milder attitude to the question
(p. 495):

You might call it a test to see whether the machine thinks, but it would be better to avoid
begging the question, and say that the machines that pass are (let’s say) ‘Grade A’
machines. . . . [The question whether] machines really could pass the test [is] not the
same as ‘Do machines think?’, but it seems near enough for our present purpose, and
raises much the same difficulties.

In Chapter 10 Turing wrote (p. 431):

The extent to which we regard something as behaving in an intelligent manner is
determined as much by our own state of mind and training as by the properties of the
object under consideration. If we are able to explain or predict its behaviour . . . we have
little temptation to imagine intelligence. With the same object therefore it is possible that
one man would consider it as intelligent and another would not; the second man would
have found out the rules of its behaviour.

Turing develops this point in the present discussion (p. 500):

As soon as one can see the cause and effect working themselves out in the brain, one
regards it as not being thinking, but a sort of unimaginative donkey-work. From this point
of view one might be tempted to define thinking as consisting of ‘those mental processes
that we don’t understand’. If this is right then to make a thinking machine is to make one
which does interesting things without our really understanding quite how it is done.

Many years later Marvin Minsky put forward this same view, saying that
‘intelligence’ is simply our name for whichever problem-solving mental processes

8 There is additional discussion of French’s objections to the Turing test in my ‘The Turing Test’, Minds
and Machines, 10 (2000), 519–39 (reprinted in J. H. Moor (ed.), The Turing Test (Dordrecht: Kluwer, 2003)).

