should describe that country in a disinterested way. What a miracle–that’s
worse than a miracle at Lourdes! Advertising, for example, is an example of
a scientifically immoral description of the products. This immorality is so
extensive that one gets so used to it in ordinary life, that you do not
appreciate that it is a bad thing. And I think that one of the important
reasons to increase the contact of scientists with the rest of society is to
explain, and to kind of wake them up to this permanent attrition of
cleverness of the mind that comes from not having information, or [not]
having information always in a form which is interesting.

There are other things in which scientific methods would be of some
value; they are perfectly obvious but they get more and more difficult to
discuss–such things as making decisions. I do not mean that it should be
done scientifically, such as [the way] in the United States that the Rand
Company sits down and makes arithmetical calculations. That reminds me
of my sophomore days at college in which, in discussing women, we
discovered that by using electrical terminology–impedance, reluctance,
resistance–that we had a deeper understanding of the situation. The other
thing that gives a scientific man the creeps in the world today is the
methods of choosing leaders–in every nation. Today, for example, in the
United States, the two political parties have decided to employ public
relations men, that is, advertising men, who are trained in the necessary
methods of telling the truth and lying in order to develop a product. This
wasn’t the original idea. They are supposed to discuss situations and not
just make up slogans. It’s true, if you look in history, however, that
choosing political leaders in the United States has been on many different
occasions based on slogans. (I am sure that each party now has million-
dollar bank accounts and there are going to be some very clever slogans.)
But I can’t do a sum-up of all this stuff now.

I kept saying that the science was irrelevant. That sounds strange and I
would like to come back to it. Of course it is relevant, because of the fact
that it is relevant to astrology; because if we understand the world the way
we do, we cannot understand how the astrological phenomena can take
place. And so that is relevant. But for people who believe in astrology there
is no relevance, because the scientist never bothers to argue with them. The
people who believe in faith healing do not have to worry about science at all,
because nobody argues with them. You don’t have to learn science if you
don’t feel like it. So you can forget the whole business if it is too much
mental strain, which it usually is. Why can you forget the whole business?
Because we don’t do anything about it. I believe that we must attack these
things in which we do not believe. Not attack by the method of cutting off
the heads of the people, but attack in the sense of discuss. I believe that we
should demand that people try in their own minds to obtain for themselves a
more consistent picture of their own world; that they not permit themselves
the luxury of having their brain cut in four pieces or two pieces even, and
on one side they believe this and on the other side they believe that, but
never try to compare the two points of view. Because we have learned that,
by trying to put the points of view that we have in our head together and
comparing one to the other, we make some progress in understanding and in
appreciating where we are and what we are. And I believe that science has
remained irrelevant because we wait until somebody asks us questions or
until we are invited to give a speech on Einstein’s theory to people who
don’t understand Newtonian mechanics, but we never are invited to give an
attack on faith healing, or on astrology–on what is the scientific view of
astrology today.

I think that we must mainly write some articles. Now what would
happen? The person who believes in astrology will have to learn some
astronomy. The person who believes in faith healing might have to learn
some medicine, because of the arguments going back and forth; and some
biology. In other words, it will be necessary that science become relevant.
The remark which I read somewhere, that science is all right so long as it
doesn’t attack religion, was the clue that I needed to understand the
problem. As long as it doesn’t attack religion it need not be paid attention to
and nobody has to learn anything. So it can be cut off from modern society
except for its applications, and thus be isolated. And then we have this
terrible struggle to try to explain things to people who have no reason to
want to know. But if they want to defend their own point of view, they will
have to learn what yours is a little bit. So I suggest, maybe incorrectly and
perhaps wrongly, that we are too polite. There was in the past an era of
conversation on these matters. It was felt by the church that Galileo’s views
attacked the church. It is not felt by the church today that the scientific
views attack the church. Nobody is worrying about it. Nobody attacks; I
mean, nobody writes trying to explain the inconsistencies between the
theological views and the scientific views held by different people today–or
even the inconsistencies sometimes held by the same scientist between his
religious and scientific beliefs.

Now the next subject, and the last main subject that I want to talk about,
is the one I really consider the most important and the most serious. And
that has to do with the question of uncertainty and doubt. A scientist is
never certain. We all know that. We know that all our statements are
approximate statements with different degrees of certainty; that when a
statement is made, the question is not whether it is true or false but rather
how likely it is to be true or false. “Does God exist?” “When put in the
questional form, how likely is it?” It makes such a terrifying transformation
of the religious point of view, and that is why the religious point of view is
unscientific. We must discuss each question within the uncertainties that are
allowed. And as evidence grows it increases the probability perhaps that
some idea is right, or decreases it. But it never makes absolutely certain one
way or the other. Now we have found that this is of paramount importance
in order to progress. We absolutely must leave room for doubt or there is no
progress and there is no learning. There is no learning without having to
pose a question. And a question requires doubt. People search for certainty.
But there is no certainty. People are terrified–how can you live and not
know? It is not odd at all. You only think you know, as a matter of fact. And
most of your actions are based on incomplete knowledge and you really
don’t know what it is all about, or what the purpose of the world is, or know
a great deal of other things. It is possible to live and not know.

Now the freedom to doubt, which is absolutely essential for the
development of the sciences, was born from a struggle with the constituted
authorities of the time who had a solution to every problem, namely, the
church. Galileo is a symbol of that struggle–one of the most important
strugglers. And although Galileo himself apparently was forced to recant,
nobody takes the confession seriously. We do not feel that we should follow
Galileo in this way and that we should all recant. In fact, we consider the
recantation as a foolishness—that the church asked for such a foolishness
that we see again and again; and we feel sympathetic to Galileo as we feel
sympathetic to the musicians and the artists of the Soviet Union who had to
recant, and fortunately in apparently somewhat fewer numbers in recent
times. But the recantation is a meaningless thing, no matter how cleverly it
is organized. It is perfectly obvious to people from the outside that it is
nothing to consider, and that Galileo’s recantation is not something that we
need to discuss as demonstrating anything about Galileo, except that
perhaps he was an old man and that the church was very powerful. The fact
that Galileo was right is not essential to this discussion. The fact that he was
being suppressed is, of course.

We are all saddened when we look at the world and see what few
accomplishments we have made, compared to what we feel are the
potentialities of human beings. People in the past, in the nightmare of their
times, had dreams for the future. And now that the future has materialized
we see that in many ways the dreams have been surpassed, but in still more
ways many of our dreams of today are very much the dreams of people of
the past. There have, in the past, been great enthusiasms for one or
another’s method of solving a problem. One was that education should
become universal, for then all men would become Voltaires, and then we
would have everything straightened out. Universal education is probably a
good thing, but you could teach bad as well as good–you [could] teach
falsehood as well as truth. The communication between nations as it
develops through a technical development of science should certainly
improve the relations between nations. Well, it depends what you
communicate. You can communicate truth and you can communicate lies.
You can communicate threats or kindnesses. There was a great hope that the
applied sciences would free man of his physical struggles, and particularly
in medicine it seems, for example, that all is to the good. Yes, but while we
are talking, scientists are working in hidden secret laboratories trying to
develop, as best they can, diseases which the other man can’t cure. Perhaps
today we have the dream that economic satiation of all men is the solution
to the problem. I mean everybody should have enough stuff. I don’t mean,
of course, that we shouldn’t try to do that. I don’t mean, by what I’m
saying, that we should not educate, or that we should not communicate, or
that we shouldn’t produce economic satiation. But that this is the solution
all by itself, of all problems, is doubtful. Because in those places where we
have a certain degree of economic satisfaction, we have a whole host of
new problems, or probably old problems that just look a little different
because we happen to know enough about history.

So today we are not very well off, we don’t see that we have done too
well. Men, philosophers of all ages, have tried to find the secret of
existence, the meaning of it all. Because if they could find the real meaning
of life, then all this human effort, all this wonderful potentiality of human
beings, could then be moved in the correct direction and we would march
forward with great success. So therefore we tried these different ideas. But
the question of the meaning of the whole world, of life, and of human
beings, and so on, has been answered very many times by very many
people. Unfortunately all the answers are different; and the people with one
answer look with horror at the actions and behavior of the people with
another answer. Horror, because they see the terrible things that are done;
the way man is being pushed into a blind alley by this rigid view of the
meaning of the world. In fact, it is really perhaps by the fantastic size of the
horror that it becomes clear how great are the potentialities of human
beings, and it is possibly this which makes us hope that if we could move
things in the right direction, things would be much better.

What then is the meaning of the whole world? We do not know what the
meaning of existence is. We say, as the result of studying all of the views
that we have had before, we find that we do not know the meaning of
existence; but in saying that we do not know the meaning of existence, we
have probably found the open channel–if we will allow only that, as we
progress, we leave open opportunities for alternatives, that we do not
become enthusiastic for the fact, the knowledge, the absolute truth, but
remain always uncertain–[that we] “hazard it.” The English, who have
developed their government in this direction, call it “muddling through,”
and although a rather silly, stupid sounding thing, it is the most scientific
way of progressing. To decide upon the answer is not scientific. In order to
make progress, one must leave the door to the unknown ajar–ajar only. We
are only at the beginning of the development of the human race; of the
development of the human mind, of intelligent life–we have years and years
in the future. It is our responsibility not to give the answer today as to what
it is all about, to drive everybody down in that direction and to say: “This is
a solution to it all.” Because we will be chained then to the limits of our
present imagination. We will only be able to do those things that we think
today are the things to do. Whereas, if we leave always some room for
doubt, some room for discussion, and proceed in a way analogous to the
sciences, then this difficulty will not arise. I believe, therefore, that although
it is not the case today, that there may some day come a time, I should hope,
when it will be fully appreciated that the power of government should be
limited; that governments ought not to be empowered to decide the validity
of scientific theories, that that is a ridiculous thing for them to try to do; that
they are not to decide the various descriptions of history or of economic
theory or of philosophy. Only in this way can the real possibilities of the
future human race be ultimately developed.

______
*Chairman of the conference. Ed.
*Conservation of charge and parity, one of the fundamental conservation
laws of physics, which says that the total electric charge and parity, an
intrinsic symmetry property of subatomic particles, going into an
interaction will be the same coming out of that interaction. Ed.

*Actually it was Immanuel Velikovsky: Worlds in Collision (Doubleday,
New York, 1950). Ed.

5

THERE’S PLENTY OF ROOM AT THE BOTTOM

In this famous talk to the American Physical Society on December 29, 1959,
at Caltech, Feynman, the “father of nanotechnology,” expounds, decades
ahead of his time, on the future of miniaturization: how to put the entire
Encyclopaedia Brittanica on the head of a pin, the drastic reduction in size
of both biological and inanimate objects, and the problems of lubricating
machines smaller than the period at the end of this sentence. Feynman
makes his famous wager, challenging young scientists to construct a
working motor no more than 1/64 of an inch on all sides.

An Invitation to Enter a New Field of Physics

I imagine experimental physicists must often look with envy at men like
Kamerlingh-Onnes,* who discovered a field like low temperature, which
seems to be bottomless and in which one can go down and down. Such a
man is then a leader and has some temporary monopoly in a scientific
adventure. Percy Bridgman,* in designing a way to obtain higher pressures,
opened up another new field and was able to move into it and to lead us all
along. The development of ever higher vacuum was a continuing
development of the same kind.

I would like to describe a field in which little has been done, but in which
an enormous amount can be done in principle. This field is not quite the
same as the others in that it will not tell us much of fundamental physics (in
the sense of, “What are the strange particles?”), but it is more like solid-
state physics in the sense that it might tell us much of great interest about
the strange phenomena that occur in complex situations. Furthermore, a
point that is most important is that it would have an enormous number of
technical applications.

What I want to talk about is the problem of manipulating and controlling
things on a small scale.

As soon as I mention this, people tell me about miniaturization, and how
far it has progressed today. They tell me about electric motors that are the
size of the nail on your small finger. And there is a device on the market,
they tell me, by which you can write the Lord’s Prayer on the head of a pin.
But that’s nothing; that’s the most primitive, halting step in the direction I
intend to discuss. It is a staggeringly small world that is below. In the year
2000, when they look back at this age, they will wonder why it was not
until the year 1960 that anybody began seriously to move in this direction.

Why cannot we write the entire 24 volumes of the Encyclopaedia
Brittanica on the head of a pin?

Let’s see what would be involved. The head of a pin is a sixteenth of an
inch across. If you magnify it by 25,000 diameters, the area of the head of
the pin is then equal to the area of all the pages of the Encyclopaedia
Brittanica. Therefore, all it is necessary to do is to reduce in size all the
writing in the Encyclopaedia by 25,000 times. Is that possible? The
resolving power of the eye is about 1/120 of an inch–that is roughly the
diameter of one of the little dots on the fine half-tone reproductions in the
Encyclopaedia. This, when you demagnify it by 25,000 times, is still 80
angstroms* in diameter–32 atoms across, in an ordinary metal. In other
words, one of those dots still would contain in its area 1,000 atoms. So,
each dot can easily be adjusted in size as required by the photoengraving,
and there is no question that there is enough room on the head of a pin to
put all of the Encyclopaedia Brittanica.
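
The arithmetic here is easy to verify. A minimal Python sketch, using Feynman’s figures; the 2.5-angstrom atomic spacing for an ordinary metal is an assumed round value, not a number from the talk:

```python
# Check the pinhead arithmetic from the talk. The reduction factor and
# the eye's resolving power are Feynman's numbers; the 2.5-angstrom
# atomic spacing is an assumed round value for an ordinary metal.
INCH_TO_ANGSTROM = 2.54e8                # 1 inch = 2.54 cm = 2.54e8 angstroms

demag = 25_000                           # linear reduction factor
eye_dot = (1 / 120) * INCH_TO_ANGSTROM   # smallest dot the eye resolves

dot_small = eye_dot / demag              # the dot after reduction, ~80 angstroms
atoms_across = dot_small / 2.5           # ~32 atoms across, in an ordinary metal
atoms_per_dot = atoms_across ** 2        # ~1,000 atoms in the dot's area

print(f"dot after reduction: {dot_small:.0f} angstroms")
print(f"atoms across one dot: {atoms_across:.0f}")
print(f"atoms in one dot's area: {atoms_per_dot:,.0f}")
```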

Furthermore, it can be read if it is so written. Let’s imagine that it is
written in raised letters of metal; that is, where the black is in the
Encyclopaedia, we have raised letters of metal that are actually 1/25,000 of
their ordinary size. How would we read it?

If we had something written in such a way, we could read it using
techniques in common use today. (They will undoubtedly find a better way
when we do actually have it written, but to make my point conservatively I
shall just take techniques we know today.) We would press the metal into a
plastic material and make a mold of it, then peel the plastic off very
carefully, evaporate silica into the plastic to get a very thin film, then
shadow it by evaporating gold at an angle against the silica so that all the
little letters will appear clearly, dissolve the plastic away from the silica
film, and then look through it with an electron microscope!

There is no question that if the thing were reduced by 25,000 times in the
form of raised letters on the pin, it would be easy for us to read it today.
Furthermore, there is no question that we would find it easy to make copies
of the master; we would just need to press the same metal plate again into
plastic and we would have another copy.

How Do We Write Small?

The next question is: How do we write it? We have no standard technique to
do this now. But let’s argue that it is not as difficult as it first appears to be.
We can reverse the lenses of the electron microscope in order to demagnify
as well as magnify. A source of ions, sent through the microscope lenses in
reverse, could be focused to a very small spot. We could write with that spot
like we write in a TV cathode ray oscilloscope, by going across in lines, and
having an adjustment which determines the amount of material which is
going to be deposited as we scan in lines.

This method might be very slow because of space charge limitations.
There will be more rapid methods. We could first make, perhaps by some
photo process, a screen which has holes in it in the form of the letters. Then
we would strike an arc behind the holes and draw metallic ions through the
holes; then we could again use our system of lenses and make a small
image in the form of ions, which would deposit the metal on the pin.

A simpler way might be this (though I am not sure it would work): We
take light and, through an optical microscope running backwards, we focus
it onto a very small photoelectric screen. Then electrons come away from
the screen where the light is shining. These electrons are focused down in
size by the electron microscope lenses to impinge directly upon the surface
of the metal. Will such a beam etch away the metal if it is run long enough?
I don’t know. If it doesn’t work for a metal surface, it must be possible to
find some surface with which to coat the original pin so that, where the
electrons bombard, a change is made which we could recognize later.

There is no intensity problem in these devices–not what you are used to in
magnification, where you have to take a few electrons and spread them over
a bigger and bigger screen; it is just the opposite. The light which we get
from a page is concentrated onto a very small area so it is very intense. The
few electrons which come from the photoelectric screen are demagnified
down to a very tiny area so that, again, they are very intense. I don’t know
why this hasn’t been done yet!

That’s the Encyclopaedia Brittanica on the head of a pin, but let’s
consider all the books in the world. The Library of Congress has
approximately 9 million volumes; the British Museum Library has 5 million
volumes; there are also 5 million volumes in the National Library in France.
Undoubtedly there are duplications, so let us say that there are some 24
million volumes of interest in the world.

What would happen if I print all this down at the scale we have been
discussing? How much space would it take? It would take, of course, the
area of about a million pinheads because, instead of there being just the 24
volumes of the Encyclopaedia, there are 24 million volumes. The million
pinheads can be put in a square of a thousand pins on a side, or an area of
about 3 square yards. That is to say, the silica replica with the paper-thin
backing of plastic, with which we have made the copies, with all this
information, is on an area of approximately the size of 35 pages of the
Encyclopaedia. That is about half as many pages as there are in this
magazine. All of the information which all of mankind has ever recorded in
books can be carried around in a pamphlet in your hand–and not written in
code, but a simple reproduction of the original pictures, engravings, and
everything else on a small scale without loss of resolution.
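
These figures check out directly; a short sketch using only the numbers quoted above:

```python
# Check the "all the books in the world" estimate, using the volume
# counts and the 1/16-inch pin head from the talk.
volumes_world = 24_000_000
volumes_encyclopaedia = 24
pinheads = volumes_world // volumes_encyclopaedia   # one pinhead per 24 volumes

pins_per_side = int(pinheads ** 0.5)                # a thousand pins on a side
side_yards = pins_per_side * (1 / 16) / 36          # pin head = 1/16 inch
area_sq_yards = side_yards ** 2                     # about 3 square yards

print(f"{pinheads:,} pinheads, {pins_per_side:,} on a side")
print(f"square of {side_yards:.2f} yards, area {area_sq_yards:.1f} sq yards")
```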

What would our librarian at Caltech say, as she runs all over from one
building to another, if I tell her that, ten years from now, all of the
information that she is struggling to keep track of–120,000 volumes,
stacked from the floor to the ceiling, drawers full of cards, storage rooms
full of the older books–can be kept on just one library card! When the
University of Brazil, for example, finds that their library is burned, we can
send them a copy of every book in our library by striking off a copy from
the master plate in a few hours and mailing it in an envelope no bigger or
heavier than any other ordinary air mail letter.

Now, the name of this talk is “There Is Plenty of Room at the Bottom”–
not just “There Is Room at the Bottom.” What I have demonstrated is that
there is room–that you can decrease the size of things in a practical way. I
now want to show that there is plenty of room. I will not now discuss how
we are going to do it, but only what is possible in principle–in other words,
what is possible according to the laws of physics. I am not inventing anti-
gravity, which is possible someday only if the laws are not what we think. I
am telling you what could be done if the laws are what we think; we are not
doing it simply because we haven’t yet gotten around to it.

Information on a Small Scale

Suppose that, instead of trying to reproduce the pictures and all the
information directly in its present form, we write only the information
content in a code of dots and dashes, or something like that, to represent the
various letters. Each letter represents six or seven “bits” of information; that
is, you need only about six or seven dots or dashes for each letter. Now,
instead of writing everything, as I did before, on the surface of the head of a
pin, I am going to use the interior of the material as well.
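
The six-or-seven-bit figure is just a count: a character set of 64 symbols needs six bits (2^6 = 64), and 128 symbols need seven. A one-line check, with the symbol counts my assumption rather than stated in the talk:

```python
# Bits needed per letter for an alphabet of a given size.
import math

for symbols in (64, 128):
    print(f"{symbols} symbols -> {math.log2(symbols):.0f} bits per letter")
```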

Let us represent a dot by a small spot of one metal, the next dash by an
adjacent spot of another metal, and so on. Suppose, to be conservative, that
a bit of information is going to require a little cube of atoms 5 times 5 times
5–that is, 125 atoms. Perhaps we need a hundred and some odd atoms to
make sure that the information is not lost through diffusion, or through
some other process.

I have estimated how many letters there are in the Encyclopaedia, and I
have assumed that each of my 24 million books is as big as an
Encyclopaedia volume, and have calculated, then, how many bits of
information there are (10¹⁵). For each bit I allow 100 atoms. And it turns
out that all of the information that man has carefully accumulated in all the
books in the world can be written in this form in a cube of material one
two-hundredth of an inch wide–which is the barest piece of dust that can
be made out by the human eye. So there is plenty of room at the bottom!
Don’t tell me about microfilm!
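
The cube estimate can be reproduced the same way; a sketch assuming, as before, about 2.5 angstroms per atom:

```python
# Check the "cube of dust" estimate: 10**15 bits for all books, 100 atoms
# per bit (both from the talk); 2.5 angstroms per atom is an assumption.
bits = 1e15
atoms = bits * 100                       # 1e17 atoms in total

atoms_per_side = atoms ** (1 / 3)        # side of the cube, counted in atoms
side_inches = atoms_per_side * 2.5 / 2.54e8

print(f"cube side: {side_inches:.4f} inch, about 1/{1 / side_inches:.0f} inch")
```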

This fact–that enormous amounts of information can be carried in an
exceedingly small space–is, of course, well known to the biologists, and
resolves the mystery which existed before we understood all this clearly, of
how it could be that, in the tiniest cell, all of the information for the
organization of a complex creature such as ourselves can be stored. All this
information–whether we have brown eyes, or whether we think at all, or
that in the embryo the jawbone should first develop with a little hole in the
side so that later a nerve can grow through it–all this information is
contained in a very tiny fraction of the cell in the form of long-chain DNA
molecules in which approximately 50 atoms are used for one bit of
information about the cell.

Better Electron Microscopes

If I have written in a code, with 5 times 5 times 5 atoms to a bit, the
question is: How could I read it today? The electron microscope is not quite
good enough; with the greatest care and effort, it can only resolve about 10
angstroms. I would like to try and impress upon you while I am talking
about all of these things on a small scale, the importance of improving the
electron microscope by a hundred times. It is not impossible; it is not
against the laws of diffraction of the electron. The wave length of the
electron in such a microscope is only 1/20 of an angstrom. So it should be
possible to see the individual atoms. What good would it be to see
individual atoms distinctly?

We have friends in other fields–in biology, for instance. We physicists
often look at them and say, “You know the reason you fellows are making
so little progress?” (Actually I don’t know any field where they are making
more rapid progress than they are in biology today.) “You should use more
mathematics, like we do.” They could answer us–but they’re polite, so I’ll
answer for them: “What you should do in order for us to make more rapid
progress is to make the electron microscope 100 times better.”

What are the most central and fundamental problems of biology today?
They are questions like: What is the sequence of bases in the DNA? What
happens when you have a mutation? How is the base order in the DNA
connected to the order of amino acids in the protein? What is the structure
of the RNA; is it single-chain or double-chain, and how is it related in its
order of bases to the DNA? What is the organization of the microsomes?
How are proteins synthesized? Where does the RNA go? How does it sit?
Where do the proteins sit? Where do the amino acids go in? In
photosynthesis, where is the chlorophyll; how is it arranged; where are the
carotenoids involved in this thing? What is the system of the conversion of
light into chemical energy?

It is very easy to answer many of these fundamental biological questions;
you just look at the thing! You will see the order of bases in the chain; you
will see the structure of the microsome. Unfortunately, the present
microscope sees at a scale which is just a bit too crude. Make the
microscope one hundred times more powerful, and many problems of
biology would be made very much easier. I exaggerate, of course, but the
biologists would surely be very thankful to you–and they would prefer that
to the criticism that they should use more mathematics.

The theory of chemical processes today is based on theoretical physics. In
this sense, physics supplies the foundation of chemistry. But chemistry also
has analysis. If you have a strange substance and you want to know what it
is, you go through a long and complicated process of chemical analysis.
You can analyze almost anything today, so I am a little late with my idea.
But if the physicists wanted to, they could also dig under the chemists in the
problem of chemical analysis. It would be very easy to make an analysis of
any complicated chemical substance; all one would have to do would be to
look at it and see where the atoms are. The only trouble is that the electron
microscope is one hundred times too poor. (Later, I would like to ask the
question: Can the physicists do something about the third problem of
chemistry–namely, synthesis? Is there a physical way to synthesize any
chemical substance?)

The reason the electron microscope is so poor is that the f-value of the
lenses is only 1 part to 1,000; you don’t have a big enough numerical
aperture. And I know that there are theorems which prove that it is
impossible, with axially symmetrical stationary field lenses, to produce an
f-value any bigger than so and so; and therefore the resolving power at the
present time is at its theoretical maximum. But in every theorem there are
assumptions. Why must the field be symmetrical? I put this out as a
challenge: Is there no way to make the electron microscope more powerful?

The Marvelous Biological System

The biological example of writing information on a small scale has inspired
me to think of something that should be possible. Biology is not simply
writing information; it is doing something about it. A biological system can
be exceedingly small. Many of the cells are very tiny, but they are very
active; they manufacture various substances; they walk around; they
wiggle; and they do all kinds of marvelous things–all on a very small scale.
Also, they store information. Consider the possibility that we too can make
a thing very small which does what we want–that we can manufacture an
object that maneuvers at that level!

There may even be an economic point to this business of making things
very small. Let me remind you of some of the problems of computing
machines. In computers we have to store an enormous amount of
information. The kind of writing that I was mentioning before, in which I
had everything down as a distribution of metal, is permanent. Much more
interesting to a computer is a way of writing, erasing, and writing
something else. (This is usually because we don’t want to waste the
material on which we have just written. Yet if we could write it in a very
small space, it wouldn’t make any difference; it could just be thrown away
after it was read. It doesn’t cost very much for the material.)

Miniaturizing the Computer

I don’t know how to do this on a small scale in a practical way, but I do
know that computing machines are very large; they fill rooms. Why can’t
we make them very small, make them of little wires, little elements–and by
little, I mean little. For instance, the wires should be 10 or 100 atoms in
diameter, and the circuits should be a few thousand angstroms across.
Everybody who has analyzed the logical theory of computers has come to
the conclusion that the possibilities of computers are very interesting–if
they could be made to be more complicated by several orders of magnitude.
If they had millions of times as many elements, they could make judgments.
They would have time to calculate what is the best way to make the
calculation that they are about to make. They could select the method of
analysis which, from their experience, is better than the one that we would
give to them. And in many other ways, they would have new qualitative
features.

If I look at your face I immediately recognize that I have seen it before.
(Actually, my friends will say I have chosen an unfortunate example here
for the subject of this illustration. At least I recognize that it is a man and
not an apple.) Yet there is no machine which, with that speed, can take a
picture of a face and say even that it is a man; and much less that it is the
same man that you showed it before–unless it is exactly the same picture. If
the face is changed; if I am closer to the face; if I am further from the face;
if the light changes–I recognize it anyway. Now, this little computer I carry
in my head is easily able to do that. The computers that we build are not
able to do that. The number of elements in this bone box of mine is
enormously greater than the number of elements in our “wonderful”
computers. But our mechanical computers are too big; the elements in this
box are microscopic. I want to make some that are submicroscopic.

If we wanted to make a computer that had all these marvelous extra
qualitative abilities, we would have to make it, perhaps, the size of the
Pentagon. This has several disadvantages. First, it requires too much
material; there may not be enough germanium in the world for all the
transistors which would have to be put into this enormous thing. There is
also the problem of heat generation and power consumption; TVA would be
needed to run the computer. But an even more practical difficulty is that the
computer would be limited to a certain speed. Because of its large size,
there is finite time required to get the information from one place to
another. The information cannot go any faster than the speed of light–so,
ultimately, when our computers get faster and faster and more and more
elaborate, we will have to make them smaller and smaller.

But there is plenty of room to make them smaller. There is nothing that I
can see in the physical laws that says the computer elements cannot be
made enormously smaller than they are now. In fact, there may be certain
advantages.

Miniaturization by Evaporation

How can we make such a device? What kind of manufacturing processes
would we use? One possibility we might consider, since we have talked
about writing by putting atoms down in a certain arrangement, would be to
evaporate the material, then evaporate the insulator next to it. Then, for the
next layer, evaporate another position of a wire, another insulator, and so
on. So, you simply evaporate until you have a block of stuff which has the
elements–coils and condensers, transistors and so on–of exceedingly fine
dimensions.

But I would like to discuss, just for amusement, that there are other
possibilities. Why can’t we manufacture these small computers somewhat
like we manufacture the big ones? Why can’t we drill holes, cut things,
solder things, stamp things out, mold different shapes all at an infinitesimal
level? What are the limitations as to how small a thing has to be before you
can no longer mold it? How many times when you are working on
something frustratingly tiny, like your wife’s wrist-watch, have you said to
yourself, “If I could only train an ant to do this!” What I would like to
suggest is the possibility of training an ant to train a mite to do this. What
are the possibilities of small but movable machines? They may or may not
be useful, but they surely would be fun to make.

Consider any machine–for example, an automobile–and ask about the
problems of making an infinitesimal machine like it. Suppose, in the
particular design of the automobile, we need a certain precision of the parts;
we need an accuracy, let’s suppose, of 4/10,000 of an inch. If things are
more inaccurate than that in the shape of the cylinder and so on, it isn’t
going to work very well. If I make the thing too small, I have to worry
about the size of the atoms; I can’t make a circle of “balls” so to speak, if
the circle is too small. So, if I make the error, corresponding to 4/10,000 of
an inch, correspond to an error of 10 atoms, it turns out that I can reduce the
dimensions of an automobile 4,000 times, approximately–so that it is 1 mm
across. Obviously, if you redesign the car so that it would work with a much
larger tolerance, which is not at all impossible, then you could make a much
smaller device.
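
A sketch of that reduction arithmetic: the tolerance and the ten-atom error budget are Feynman’s, while the atomic spacing and the roughly four-meter car length are my assumptions:

```python
# How far can the car shrink before a 10-atom error eats the tolerance?
tolerance_angstrom = (4 / 10_000) * 2.54e8   # 4/10,000 inch, in angstroms
error_budget = 10 * 2.5                      # ten atoms at ~2.5 angstroms each

reduction = tolerance_angstrom / error_budget    # ~4,000-fold reduction
car_mm = 4_000 / reduction                       # a ~4 m (4,000 mm) car shrinks

print(f"reduction factor: {reduction:.0f}")
print(f"tiny car size: {car_mm:.1f} mm across")
```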

It is interesting to consider what the problems are in such small machines.
Firstly, with parts stressed to the same degree, the forces go as the area you
are reducing, so that things like weight and inertia are of relatively no
importance. The strength of material, in other words, is very much greater
in proportion. The stresses and expansion of the flywheel from centrifugal
force, for example, would be the same proportion only if the rotational
speed is increased in the same proportion as we decrease the size. On the
other hand, the metals that we use have a grain structure, and this would be
very annoying at small scale because the material is not homogeneous.
Plastics and glass and things of this amorphous nature are very much more
homogeneous, and so we would have to make our machines out of such
materials.

There are problems associated with the electrical part of the system–with
the copper wires and the magnetic parts. The magnetic properties on a very
small scale are not the same as on a large scale; there is the “domain”
problem involved. A big magnet made of millions of domains can only be
made on a small scale with one domain. The electrical equipment won’t
simply be scaled down; it has to be redesigned. But I can see no reason why
it can’t be redesigned to work again.

Problems of Lubrication

Lubrication involves some interesting points. The effective viscosity of oil
would be higher and higher in proportion as we went down (and if we
increase the speed as much as we can). If we don’t increase the speed so
much, and change from oil to kerosene or some other fluid, the problem is
not so bad. But actually we may not have to lubricate at all! We have a lot
of extra force. Let the bearings run dry; they won’t run hot because the heat
escapes away from such a small device very, very rapidly. This rapid heat
loss would prevent the gasoline from exploding, so an internal combustion
engine is impossible. Other chemical reactions, liberating energy when
cold, can be used. Probably an external supply of electrical power would be
most convenient for such small machines.

What would be the utility of such machines? Who knows? Of course, a
small automobile would only be useful for the mites to drive around in, and
I suppose our Christian interests don’t go that far. However, we did note the
possibility of the manufacture of small elements for computers in
completely automatic factories, containing lathes and other machine tools at
the very small level. The small lathe would not have to be exactly like our
big lathe. I leave to your imagination the improvement of the design to take
full advantage of the properties of things on a small scale, and in such a
way that the fully automatic aspect would be easiest to manage.

A friend of mine (Albert R. Hibbs)* suggests a very interesting possibility
for relatively small machines. He says that, although it is a very wild idea, it
would be interesting in surgery if you could swallow the surgeon. You put
the mechanical surgeon inside the blood vessel and it goes into the heart
and “looks” around. (Of course the information has to be fed out.) It finds
out which valve is the faulty one and takes a little knife and slices it out.
Other small machines might be permanently incorporated in the body to
assist some inadequately-functioning organ.

Now comes the interesting question: How do we make such a tiny
mechanism? I leave that to you. However, let me suggest one weird
possibility. You know, in the atomic energy plants they have materials and
machines that they can’t handle directly because they have become
radioactive. To unscrew nuts and put on bolts and so on, they have a set of
master and slave hands, so that by operating a set of levers here, you control
the “hands” there, and can turn them this way and that so you can handle
things quite nicely.

Most of these devices are actually made rather simply, in that there is a
particular cable, like a marionette string, that goes directly from the controls
to the “hands.” But, of course, things also have been made using servo
motors, so that the connection between the one thing and the other is
electrical rather than mechanical. When you turn the levers, they turn a
servo motor, and it changes the electrical currents in the wires, which
repositions a motor at the other end.

Now, I want to build much the same device–a master-slave system which
operates electrically. But I want the slaves to be made especially carefully
by modern large-scale machinists so that they are one-fourth the scale of the
“hands” that you ordinarily maneuver. So you have a scheme by which you
can do things at one-quarter scale anyway–the little servo motors with little
hands play with little nuts and bolts; they drill little holes; they are four
times smaller. Aha! So I manufacture a quarter-size lathe; I manufacture
quarter-size tools; and I make, at the one-quarter scale, still another set of
hands again relatively one-quarter size! This is one-sixteenth size, from my
point of view. And after I finish doing this I wire directly from my large-
scale system, through transformers perhaps, to the one-sixteenth-size servo
motors. Thus I can now manipulate the one-sixteenth-size hands.

Well, you get the principle from there on. It is rather a difficult program,
but it is a possibility. You might say that one can go much farther in one
step than from one to four. Of course, this has all to be designed very
carefully and it is not necessary simply to make it like hands. If you thought
of it very carefully, you could probably arrive at a much better system for
doing such things.

If you work through a pantograph, even today, you can get much more
than a factor of four in even one step. But you can’t work directly through a
pantograph which makes a smaller pantograph which then makes a smaller
pantograph–because of the looseness of the holes and the irregularities of
construction. The end of the pantograph wiggles with a relatively greater
irregularity than the irregularity with which you move your hands. In going
down this scale, I would find the end of the pantograph on the end of the
pantograph on the end of the pantograph shaking so badly that it wasn’t
doing anything sensible at all.

At each stage, it is necessary to improve the precision of the apparatus. If,
for instance, having made a small lathe with a pantograph, we find its lead
screw irregular–more irregular than the large-scale one–we could lap the
lead screw against breakable nuts that you can reverse in the usual way
back and forth until this lead screw is, at its scale, as accurate as our
original lead screws, at our scale.

We can make flats by rubbing unflat surfaces in triplicates together–in
three pairs–and the flats then become flatter than the thing you started with.
Thus, it is not impossible to improve precision on a small scale by the
correct operations. So, when we build this stuff, it is necessary at each step
to improve the accuracy of the equipment by working for a while down
there, making accurate lead screws, Johansen blocks, and all the other
materials which we use in accurate machine work at the higher level. We
have to stop at each level and manufacture all the stuff to go to the next
level–a very long and very difficult program. Perhaps you can figure a
better way than that to get down to small scale more rapidly.

Yet, after all this, you have just got one little baby lathe four thousand
times smaller than usual. But we were thinking of making an enormous
computer, which we were going to build by drilling holes on this lathe to
make little washers for the computer. How many washers can you
manufacture on this one lathe?

A Hundred Tiny Hands

When I make my first set of slave “hands” at one-fourth scale, I am going
to make ten sets. I make ten sets of “hands,” and I wire them to my original
levers so they each do exactly the same thing at the same time in parallel.
Now, when I am making my new devices one-quarter again as small, I let
each one manufacture ten copies, so that I would have a hundred “hands” at
the 1/16 size.

Where am I going to put the million lathes that I am going to have? Why,
there is nothing to it; the volume is much less than that of even one full-
scale lathe. For instance, if I made a billion little lathes, each 1/4000 of the
scale of a regular lathe, there are plenty of materials and space available
because in the billion little ones there is less than 2 percent of the materials
in one big lathe. It doesn’t cost anything for materials, you see. So I want to
build a billion tiny factories, models of each other, which are manufacturing
simultaneously, drilling holes, stamping parts, and so on.
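
The bookkeeping behind this cascade is worth making explicit; a sketch using the talk’s own numbers (ten copies per stage, a factor of four per stage, a 1/4000 final scale):

```python
# Ten hands per stage, each stage a quarter the size of the last.
for stage in range(1, 5):
    count = 10 ** stage
    denom = 4 ** stage
    print(f"stage {stage}: {count:,} hands at 1/{denom} scale")

# Material cost: volume scales as the cube of the linear scale.
n_lathes = 1_000_000_000
fraction = n_lathes * (1 / 4000) ** 3    # under 2 percent of one big lathe
print(f"a billion 1/4000-scale lathes use {fraction:.1%} of one big lathe")
```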

As we go down in size, there are a number of interesting problems that
arise. All things do not simply scale down in proportion. There is the
problem that materials stick together by the molecular (Van der Waals*)
attractions. It would be like this: After you have made a part and you
unscrew the nut from a bolt, it isn’t going to fall down because the gravity
isn’t appreciable; it would even be hard to get it off the bolt. It would be
like those old movies of a man with his hands full of molasses, trying to get
rid of a glass of water. There will be several problems of this nature that we
will have to be ready to design for.

Rearranging the Atoms

But I am not afraid to consider the final question as to whether, ultimately–
in the great future–we can arrange the atoms the way we want; the very
atoms, all the way down! What would happen if we could arrange the atoms
one by one the way we want them (within reason, of course; you can’t put
them so that they are chemically unstable, for example)?

Up to now, we have been content to dig in the ground to find minerals.
We heat them and we do things on a large scale with them, and we hope to
get a pure substance with just so much impurity, and so on. But we must
always accept some atomic arrangement that nature gives us. We haven’t
got anything, say, with a “checkerboard” arrangement, with the impurity
atoms exactly arranged 1,000 angstroms apart, or in some other particular
pattern.

What could we do with layered structures with just the right layers? What
would the properties of materials be if we could really arrange the atoms the
way we want them? They would be very interesting to investigate
theoretically. I can’t see exactly what would happen, but I can hardly doubt
that when we have some control of the arrangement of things on a small
scale, we will get an enormously greater range of possible properties that
substances can have, and of different things that we can do.

Consider, for example, a piece of material in which we make little coils
and condensers (or their solid-state analogs) 1,000 or 10,000 angstroms in a
circuit, one right next to the other, over a large area, with little antennas
sticking out at the other end–a whole series of circuits. Is it possible, for
example, to emit light from a whole set of antennas, like we emit radio
waves from an organized set of antennas to beam the radio programs to
Europe? The same thing would be to beam light out in a definite direction
with very high intensity. (Perhaps such a beam is not very useful technically
or economically.)

I have thought about some of the problems of building electric circuits on
a small scale, and the problem of resistance is serious. If you build a
corresponding circuit on a small scale, its natural frequency goes up, since
the wave length goes down as the scale; but the skin depth only decreases
with the square root of the scale ratio, and so resistive problems are of
increasing difficulty. Possibly we can beat resistance through the use of
superconductivity if the frequency is not too high, or by other tricks.
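
That scaling argument can be written out; a sketch under the standard skin-effect relation (depth falling as one over the square root of frequency), with the proportionalities taken from textbook physics rather than quoted from the talk:

```python
# Relative AC resistance of a circuit scaled down by a linear factor s.
import math

for s in (1.0, 1e-2, 1e-4):
    freq = 1 / s                      # natural frequency ~ 1/wavelength ~ 1/s
    skin_depth = math.sqrt(1 / freq)  # skin depth ~ 1/sqrt(freq) ~ sqrt(s)
    # AC resistance ~ length / (perimeter * skin depth) ~ s / (s * sqrt(s))
    r_relative = s / (s * skin_depth)
    print(f"scale {s:g}: relative resistance ~ {r_relative:g}")
```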

Atoms in a Small World

When we get to the very, very small world–say, circuits of seven atoms–we
have a lot of new things that would happen that represent completely new
opportunities for design. Atoms on a small scale behave like nothing on a
large scale, for they satisfy the laws of quantum mechanics. So, as we go
down and fiddle around with the atoms down there, we are working with
different laws, and we can expect to do different things. We can
manufacture in different ways. We can use, not just circuits, but some
system involving the quantized energy levels, or the interactions of
quantized spins, etc.

Another thing we will notice is that, if we go down far enough, all of our
devices can be mass produced so that they are absolutely perfect copies of
one another. We cannot build two large machines so that the dimensions are
exactly the same. But if your machine is only 100 atoms high, you only
have to get it correct to one-half of one percent to make sure the other
machine is exactly the same size–namely, 100 atoms high!

At the atomic level, we have new kinds of forces and new kinds of
possibilities, new kinds of effects. The problems of manufacture and
reproduction of materials will be quite different. I am, as I said, inspired by
the biological phenomena in which chemical forces are used in repetitious
fashion to produce all kinds of weird effects (one of which is the author).
The principles of physics, as far as I can see, do not speak against the
possibility of maneuvering things atom by atom. It is not an attempt to
violate any laws; it is something, in principle, that can be done; but in
practice, it has not been done because we are too big.

Ultimately, we can do chemical synthesis. A chemist comes to us and
says, “Look, I want a molecule that has the atoms arranged thus and so;
make me that molecule.” The chemist does a mysterious thing when he
wants to make a molecule. He sees that it has got that ring, so he mixes this
and that, and he shakes it, and he fiddles around. And, at the end of a
difficult process, he usually does succeed in synthesizing what he wants. By
the time I get my devices working, so that we can do it by physics, he will
have figured out how to synthesize absolutely anything, so that this will
really be useless.

But it is interesting that it would be, in principle, possible (I think) for a
physicist to synthesize any chemical substance that the chemist writes
down. Give the orders and the physicist synthesizes it. How? Put the atoms
down where the chemist says, and so you make the substance. The
problems of chemistry and biology can be greatly helped if our ability to
see what we are doing, and to do things on an atomic level, is ultimately
developed–a development which I think cannot be avoided. Now, you
might say, “Who should do this and why should they do it?” Well, I pointed
out a few of the economic applications, but I know that the reason that you
would do it might be just for fun. But have some fun! Let’s have a
competition between laboratories. Let one laboratory make a tiny motor
which it sends to another lab which sends it back with a thing that fits inside
the shaft of the first motor.

High School Competition

Just for the fun of it, and in order to get kids interested in this field, I would
propose that someone who has some contact with the high schools think of
making some kind of high school competition. After all, we haven’t even
started in this field, and even the kids can write smaller than has ever been
written before. They could have competition in high schools. The Los
Angeles high school could send a pin to the Venice high school on which it
says, “How’s this?” They get the pin back, and in the dot of the “i” it says,
“Not so hot.”

Perhaps this doesn’t excite you to do it, and only economics will do so.
Then I want to do something; but I can’t do it at the present moment,
because I haven’t prepared the ground. It is my intention to offer a prize of
$1,000 to the first guy who can take the information on the page of a book
and put it on an area 1/25,000 smaller in linear scale in such manner that it
can be read by an electron microscope.

And I want to offer another prize–if I can figure out how to phrase it so
that I don’t get into a mess of arguments about definitions–of another
$1,000 to the first guy who makes an operating electric motor–a rotating
electric motor which can be controlled from the outside and, not counting
the lead-in wires, is only 1/64 inch cube.

I do not expect that such prizes will have to wait very long for claimants.

Ultimately Feynman had to make good on both challenges. The following
is from the overview to Feynman and Computation, edited by Anthony J.G.
Hey (Perseus, Reading MA, 1998), reprinted with permission. Ed.

He paid out on both–the first, less than a year later, to Bill McLellan, a
Caltech alumnus, for a miniature motor which satisfied the specifications
but which was somewhat of a disappointment to Feynman in that it required
no new technical advances. Feynman gave an updated version of his talk in
1983 to the Jet Propulsion Laboratory. He predicted ‘that with today’s
technology we can easily . . . construct motors a fortieth of that size in each
dimension, 64,000 times smaller than . . . McLellan’s motor, and we can
make thousands of them at a time.’

It was not for another 26 years that he had to pay out on the second prize,
this time to a Stanford graduate student named Tom Newman. The scale of
Feynman’s challenge was equivalent to writing all twenty-four volumes of
the Encyclopaedia Brittanica on the head of a pin: Newman calculated that
each individual letter would be only about fifty atoms wide. Using electron-
beam lithography when his thesis advisor was out of town, he was
eventually able to write the first page of Charles Dickens’ A Tale of Two
Cities at 1/25,000 reduction in scale. Feynman’s paper is often credited with
starting the field of nanotechnology and there are now regular ‘Feynman
Nanotechnology Prize’ competitions.

______
*Heike Kamerlingh-Onnes (1853–1926), winner of the 1913 Physics
Nobel Prize for investigations of the properties of matter at low
temperatures, which led to the production of liquid helium. Ed.

*(1882–1961) Winner of the 1946 Physics Nobel Prize for his invention
of an apparatus for producing extremely high pressures, and further work in
high pressure physics. Ed.

*One angstrom = one ten-billionth of a meter. Ed.
*A student and later a colleague of Feynman. Ed.

*Van der Waals forces: Weak attractive forces between atoms or
molecules. Johannes Diderik Van der Waals (1837–1923) received the 1910
Nobel Prize in Physics for his work on the equation of state for gases and
liquids. Ed.

6

THE VALUE OF SCIENCE

Of all its many values, the greatest must be the freedom to
doubt.

In Hawaii, Feynman learns a lesson in humility while touring a Buddhist
temple: “To every man is given the key to the gates of heaven; the same key
opens the gates of hell.” This is one of Feynman’s most eloquent pieces,
reflecting on science’s relevance to the human experience and vice versa.
He also gives a lesson to fellow scientists on their responsibility to the
future of civilization.

From time to time, people suggest to me that scientists ought to give more
consideration to social problems–especially that they should be more
responsible in considering the impact of science upon society. This same
suggestion must be made to many other scientists, and it seems to be
generally believed that if the scientists would only look at these very
difficult social problems and not spend so much time fooling with the less
vital scientific ones, great success would come of it.

It seems to me that we do think about these problems from time to time,
but we don’t put full-time effort into them–the reason being that we know
we don’t have any magic formula for solving problems, that social
problems are very much harder than scientific ones, and that we usually
don’t get anywhere when we do think about them.

I believe that a scientist looking at nonscientific problems is just as dumb
as the next guy–and when he talks about a nonscientific matter, he will
sound as naive as anyone untrained in the matter. Since the question of the
value of science is not a scientific subject, this discussion is dedicated to
proving my point–by example.

The first way in which science is of value is familiar to everyone. It is that
scientific knowledge enables us to do all kinds of things and to make all
kinds of things. Of course if we make good things, it is not only to the
credit of science; it is also to the credit of the moral choice which led us to
good work. Scientific knowledge is an enabling power to do either good or
bad–but it does not carry instructions on how to use it. Such power has
evident value–even though the power may be negated by what one does.

I learned a way of expressing this common human problem on a trip to
Honolulu. In a Buddhist temple there, the man in charge explained a little
bit about the Buddhist religion for tourists, and then ended his talk by
telling them he had something to say to them that they would never forget–
and I have never forgotten it. It was a proverb of the Buddhist religion:

“To every man is given the key to the gates of heaven; the same key
opens the gates of hell.”

What, then, is the value of the key to heaven? It is true that if we lack
clear instructions that determine which is the gate to heaven and which the
gate to hell, the key may be a dangerous object to use, but it obviously has
value. How can we enter heaven without it?

The instructions, also, would be of no value without the key. So it is
evident that, in spite of the fact that science could produce enormous horror
in the world, it is of value because it can produce something.

Another value of science is the fun called intellectual enjoyment which
some people get from reading and learning and thinking about it, and which
others get from working in it. This is a very real and important point and
one which is not considered enough by those who tell us it is our social
responsibility to reflect on the impact of science on society.

Is this mere personal enjoyment of value to society as a whole? No! But it
is also a responsibility to consider the value of society itself. Is it, in the last
analysis, to arrange things so that people can enjoy things? If so, the
enjoyment of science is as important as anything else.

But I would like not to underestimate the value of the worldview which is
the result of scientific effort. We have been led to imagine all sorts of things infinitely more marvelous than the imaginings of poets and dreamers of the
past. It shows that the imagination of nature is far, far greater than the
imagination of man. For instance, how much more remarkable it is for us all
to be stuck–half of us upside down–by a mysterious attraction, to a spinning
ball that has been swinging in space for billions of years, than to be carried
on the back of an elephant supported on a tortoise swimming in a
bottomless sea.

I have thought about these things so many times alone that I hope you
will excuse me if I remind you of some thoughts that I am sure you have all
had–or this type of thought–which no one could ever have had in the past,
because people then didn’t have the information we have about the world
today.

For instance, I stand at the seashore, alone, and start to think. There are
the rushing waves . . . mountains of molecules, each stupidly minding its
own business . . . trillions apart . . . yet forming white surf in unison.

Ages on ages . . . before any eyes could see . . . year after year . . .
thunderously pounding the shore as now. For whom, for what? . . . on a
dead planet, with no life to entertain.

Never at rest . . . tortured by energy . . . wasted prodigiously by the sun . . . poured into space. A mite makes the sea roar.

Deep in the sea, all molecules repeat the patterns of one another till
complex new ones are formed. They make others like themselves . . . and a
new dance starts.

Growing in size and complexity . . . living things, masses of atoms, DNA,
protein . . . dancing a pattern ever more intricate.

Out of the cradle onto the dry land . . . here it is standing . . . atoms with
consciousness . . . matter with curiosity.

Stands at the sea . . . wonders at wondering . . . I . . . a universe of atoms . . . an atom in the universe.

The Grand Adventure

The same thrill, the same awe and mystery, come again and again when we
look at any problem deeply enough. With more knowledge comes deeper,
more wonderful mystery, luring one on to penetrate deeper still. Never concerned that the answer may prove disappointing, but with pleasure and
confidence we turn over each new stone to find unimagined strangeness
leading on to more wonderful questions and mysteries–certainly a grand
adventure!

It is true that few unscientific people have this particular type of religious
experience. Our poets do not write about it; our artists do not try to portray
this remarkable thing. I don’t know why. Is nobody inspired by our present
picture of the universe? The value of science remains unsung by singers, so
you are reduced to hearing–not a song or a poem, but an evening lecture
about it. This is not yet a scientific age.

Perhaps one of the reasons is that you have to know how to read the
music. For instance, the scientific article says, perhaps, something like this:
“The radioactive phosphorus content of the cerebrum of the rat decreases to
one-half in a period of two weeks.” Now, what does that mean?

It means that phosphorus that is in the brain of a rat (and also in mine, and
yours) is not the same phosphorus as it was two weeks ago, but that all of
the atoms that are in the brain are being replaced, and the ones that were
there before have gone away.

So what is this mind, what are these atoms with consciousness? Last
week’s potatoes! They now can remember what was going on in my
mind a year ago–a mind which has long ago been replaced.

That is what it means when one discovers how long it takes for the atoms
of the brain to be replaced by other atoms, to note that the thing which I call
my individuality is only a pattern or dance. The atoms come into my brain,
dance a dance, then go out; always new atoms but always doing the same
dance, remembering what the dance was yesterday.

The Remarkable Idea

When we read about this in the newspaper, it says, “The scientist says that
this discovery may have importance in the cure of cancer.” The paper is
only interested in the use of the idea, not the idea itself. Hardly anyone can
understand the importance of an idea, it is so remarkable. Except that,
possibly, some children catch on. And when a child catches on to an idea
like that, we have a scientist. These ideas do filter down (in spite of all the conversation about TV replacing thinking), and lots of kids get the spirit–
and when they have the spirit you have a scientist. It’s too late for them to
get the spirit when they are in our universities, so we must attempt to
explain these ideas to children.

I would now like to turn to a third value that science has. It is a little more
indirect, but not much. The scientist has a lot of experience with ignorance
and doubt and uncertainty, and this experience is of very great importance, I
think. When a scientist doesn’t know the answer to a problem, he is
ignorant. When he has a hunch as to what the result is, he is uncertain. And
when he is pretty darn sure of what the result is going to be, he is in some
doubt. We have found it of paramount importance that in order to progress
we must recognize the ignorance and leave room for doubt. Scientific
knowledge is a body of statements of varying degrees of certainty–some
most unsure, some nearly sure, none absolutely certain.

Now, we scientists are used to this, and we take it for granted that it is
perfectly consistent to be unsure–that it is possible to live and not know.
But I don’t know whether everyone realizes that this is true. Our freedom to
doubt was born of a struggle against authority in the early days of science.
It was a very deep and strong struggle. Permit us to question–to doubt,
that’s all–not to be sure. And I think it is important that we do not forget the
importance of this struggle and thus perhaps lose what we have gained.
Here lies a responsibility to society.

We are all sad when we think of the wondrous potentialities human
beings seem to have, as contrasted with their small accomplishments. Again
and again people have thought that we could do much better. They of the
past saw in the nightmare of their times a dream for the future. We, of their
future, see that their dreams, in certain ways surpassed, have in many ways
remained dreams. The hopes for the future today are, in good share, those
of yesterday.

Education, for Good and Evil

Once some thought that the possibilities people had were not developed
because most of those people were ignorant. With education universal, could all men be Voltaires? Bad can be taught at least as efficiently as good.
Education is a strong force, but for either good or evil.

Communications between nations must promote understanding: So went
another dream. But the machines of communication can be channeled or
choked. What is communicated can be truth or lie. Communication is a
strong force also, but for either good or bad.

The applied sciences should free men of material problems at least.
Medicine controls diseases. And the record here seems all to the good. Yet
there are men patiently working to create great plagues and poisons. They
are to be used in warfare tomorrow.

Nearly everybody dislikes war. Our dream today is peace. In peace, man
can develop best the enormous possibilities he seems to have. But maybe
future men will find that peace, too, can be good and bad. Perhaps peaceful
men will drink out of boredom. Then perhaps drink will become the great
problem which seems to keep man from getting all he thinks he should out
of his abilities.

Clearly, peace is a great force, as is sobriety, as are material power,
communication, education, honesty, and the ideals of many dreamers.

We have more of these forces to control than did the ancients. And maybe
we are doing a little better than most of them could do. But what we ought
to be able to do seems gigantic compared with our confused
accomplishments.

Why is this? Why can’t we conquer ourselves?

Because we find that even great forces and abilities do not seem to carry
with them clear instructions on how to use them. As an example, the great
accumulation of understanding as to how the physical world behaves only
convinces one that this behavior seems to have a kind of meaninglessness.
The sciences do not directly teach good and bad.

Through all ages men have tried to fathom the meaning of life. They have
realized that if some direction or meaning could be given to our actions,
great human forces would be unleashed. So, very many answers must have
been given to the question of the meaning of it all. But they have been of all
different sorts, and the proponents of one answer have looked with horror at
the actions of the believers in another. Horror, because from a disagreeing
point of view all the great potentialities of the race were being channeled into a false and confining blind alley. In fact, it is from the history of the
enormous monstrosities created by false belief that philosophers have
realized the apparently infinite and wondrous capacities of human beings.
The dream is to find the open channel.

What, then, is the meaning of it all? What can we say to dispel the
mystery of existence?

If we take everything into account, not only what the ancients knew, but
all of what we know today that they didn’t know, then I think that we must
frankly admit that we do not know.

But, in admitting this, we have probably found the open channel.

This is not a new idea; this is the idea of the age of reason. This is the
philosophy that guided the men who made the democracy that we live
under. The idea that no one really knew how to run a government led to the
idea that we should arrange a system by which new ideas could be
developed, tried out, tossed out, more new ideas brought in; a trial and error
system. This method was a result of the fact that science was already
showing itself to be a successful venture at the end of the 18th century.
Even then it was clear to socially minded people that the openness of the
possibilities was an opportunity, and that doubt and discussion were
essential to progress into the unknown. If we want to solve a problem that
we have never solved before, we must leave the door to the unknown ajar.

Our Responsibility as Scientists

We are at the very beginning of time for the human race. It is not
unreasonable that we grapple with problems. There are tens of thousands of
years in the future. Our responsibility is to do what we can, learn what we
can, improve the solutions and pass them on. It is our responsibility to leave
the men of the future a free hand. In the impetuous youth of humanity, we
can make grave errors that can stunt our growth for a long time. This we
will do if we say we have the answers now, so young and ignorant; if we
suppress all discussion, all criticism, saying, “This is it, boys, man is
saved!” and thus doom man for a long time to the chains of authority,
confined to the limits of our present imagination. It has been done so many
times before.

It is our responsibility as scientists, knowing the great progress and great
value of a satisfactory philosophy of ignorance, the great progress that is the
fruit of freedom of thought, to proclaim the value of this freedom, to teach
how doubt is not to be feared but welcomed and discussed, and to demand
this freedom as our duty to all coming generations.

7

RICHARD P. FEYNMAN’S MINORITY
REPORT TO THE SPACE SHUTTLE
CHALLENGER INQUIRY

When the Space Shuttle Challenger exploded shortly after its launch on
January 28, 1986, six professional astronauts and one school-teacher were
tragically killed. The nation was devastated, and NASA was shaken out of
its complacency, brought on by years of successful–or at least nonlethal–
space missions. A commission was formed, led by former Secretary of State William P. Rogers and composed of politicians, astronauts, military men,
and one scientist, to investigate the cause of the accident and to recommend
steps to prevent such a disaster from ever happening again. The fact that
Richard Feynman was that one scientist may have made the difference
between answering the question of why the Challenger failed and eternal
mystery. Feynman was gutsier than most men, not afraid to jet all over the
country to talk to the men on the ground, the engineers who had recognized
the fact that propaganda was taking the lead over care and safety in the
shuttle program. His report, which was perceived by the Commission as
embarrassing to NASA, was almost suppressed by the Commission, but
Feynman fought to have it included; it was relegated to an appendix. When
the Commission held a live press conference to answer questions, Feynman
did his now-famous tabletop experiment with one of the shuttle’s gaskets, or
O-rings, and a cup of ice water. It dramatically proved that those key
gaskets had failed because the warning of the engineers that it was too cold
outside to go ahead with the launch went unheeded by managers eager to impress their bosses with the punctuality of their mission schedule. Here is
that historic report.

Introduction

It appears that there are enormous differences of opinion as to the
probability of a failure with loss of vehicle and of human life. The estimates
range from roughly 1 in 100 to 1 in 100,000. The higher figures come from
working engineers, and the very low figures from management. What are
the causes and consequences of this lack of agreement? Since 1 part in
100,000 would imply that one could put a Shuttle up each day for 300 years
expecting to lose only one, we could more properly ask “What is the cause
of management’s fantastic faith in the machinery?”
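
As an illustrative aside (not part of Feynman’s report), the arithmetic behind that remark is easy to check with a few lines of Python. Ed.

    # Check of the 1-in-100,000 figure: how many years of daily
    # launches would pass before one expected loss at that rate?
    failure_probability = 1 / 100_000          # management's estimate
    expected_flights_per_loss = 1 / failure_probability
    years_of_daily_launches = expected_flights_per_loss / 365
    print(f"{years_of_daily_launches:.0f} years")  # about 274; the report rounds to 300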

We have also found that certification criteria used in Flight Readiness
Reviews often develop a gradually decreasing strictness. The argument that
the same risk was flown before without failure is often accepted as an
argument for the safety of accepting it again. Because of this, obvious
weaknesses are accepted again and again, sometimes without a sufficiently
serious attempt to remedy them, or to delay a flight because of their
continued presence.

There are several sources of information. There are published criteria for
certification, including a history of modifications in the form of waivers and
deviations. In addition, the records of the Flight Readiness Reviews for
each flight document the arguments used to accept the risks of the flight.
Information was obtained from the direct testimony and the reports of the
range safety officer, Louis J. Ullian, with respect to the history of success of
solid fuel rockets. There was a further study by him (as chairman of the
launch abort safety panel [LASP]) in an attempt to determine the risks
involved in possible accidents leading to radioactive contamination from
attempting to fly a plutonium power supply (RTG) for future planetary
missions. The NASA study of the same question is also available. For the
history of the Space Shuttle Main Engines, interviews with management
and engineers at Marshall, and informal interviews with engineers at
Rocketdyne, were made. An independent (Caltech) mechanical engineer
who consulted for NASA about engines was also interviewed informally. A visit to Johnson was made to gather information on the reliability of the
avionics (computers, sensors, and effectors). Finally there is a report, “A
Review of Certification Practices Potentially Applicable to Man-rated
Reusable Rocket Engines,” prepared at the Jet Propulsion Laboratory by N.
Moore, et al., in February 1986, for NASA Headquarters, Office of Space
Flight. It deals with the methods used by the FAA and the military to certify
their gas turbine and rocket engines. These authors were also interviewed
informally.

Solid Fuel Rockets (SRB)

An estimate of the reliability of solid fuel rockets was made by the range
safety officer, by studying the experience of all previous rocket flights. Out
of a total of nearly 2,900 flights, 121 failed (1 in 25). This includes,
however, what may be called early errors, rockets flown for the first few
times in which design errors are discovered and fixed. A more reasonable
figure for the mature rockets might be 1 in 50. With special care in the
selection of the parts and in inspection, a figure of below 1 in 100 might be
achieved but 1 in 1,000 is probably not attainable with today’s technology.
(Since there are two rockets on the Shuttle, these rocket failure rates must
be doubled to get Shuttle failure rates from Solid Rocket Booster failure.)
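
The range safety officer’s arithmetic can be reproduced directly from the figures quoted above. The sketch below is an editorial illustration; the final doubling assumes two independent boosters and small failure probabilities. Ed.

    total_flights = 2900                # all previous solid-rocket flights
    failures = 121
    overall_rate = failures / total_flights  # about 1 in 24; the report rounds to 1 in 25
    mature_rate = 1 / 50                # after discounting early design errors
    shuttle_rate = 2 * mature_rate      # two boosters per Shuttle, so 1 in 25
    print(f"overall: 1 in {1 / overall_rate:.0f}; "
          f"per Shuttle: 1 in {1 / shuttle_rate:.0f}")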

NASA officials argue that the figure is much lower. They point out that
these figures are for unmanned rockets but since the Shuttle is a manned
vehicle “the probability of mission success is necessarily very close to 1.0.”
It is not very clear what this phrase means. Does it mean it is close to 1 or
that it ought to be close to 1? They go on to explain “Historically this
extremely high degree of mission success has given rise to a difference in
philosophy between manned space flight programs and unmanned
programs; i.e., numerical probability usage versus engineering judgement.”
(These quotations are from “Space Shuttle Data for Planetary Mission RTG
Safety Analysis,” pages 3-1, 3-2, February 15, 1985, NASA, JSC.) It is true
that if the probability of failure was as low as 1 in 100,000 it would take an
inordinate number of tests to determine it (for you would get nothing but a string of perfect flights, from which no precise figure could be inferred, other than that the probability of failure is likely less than about 1 in the number of such flights in the string so far). But, if the real probability is not so small, flights would show troubles, near failures, and possibly actual failures, from which a reasonable estimate could be made. In fact,
previous NASA experience had shown, on occasion, just such difficulties,
near accidents, and accidents, all giving warning that the probability of
flight failure was not so very small. The inconsistency of the argument not
to determine reliability through historical experience, as the range safety
officer did, is that NASA also appeals to history, beginning “Historically
this high degree of mission success . . .” Finally, if we are to replace
standard numerical probability usage with engineering judgment, why do
we find such an enormous disparity between the management estimate and
the judgment of the engineers? It would appear that, for whatever purpose,
be it for internal or external consumption, the management of NASA
exaggerates the reliability of its product, to the point of fantasy.

The history of the certification and Flight Readiness Reviews will not be
repeated here. (See other part of Commission reports.) The phenomenon of
accepting for flight seals that had shown erosion and blow-by in previous
flights is very clear. The Challenger flight is an excellent example. There
are several references to flights that had gone before. The acceptance and
success of these flights is taken as evidence of safety. But erosion and blow-
by are not what the design expected. They are warnings that something is
wrong. The equipment is not operating as expected, and therefore there is a
danger that it can operate with even wider deviations in this unexpected and
not thoroughly understood way. The fact that this danger did not lead to a
catastrophe before is no guarantee that it will not the next time, unless it is
completely understood. When playing Russian roulette the fact that the first
shot got off safely is little comfort for the next. The origin and
consequences of the erosion and blow-by were not understood. They did
not occur equally on all flights and all joints; sometimes more, and
sometimes less. Why not sometime, when whatever conditions determined
it were right, still more, leading to catastrophe?

In spite of these variations from case to case, officials behaved as if they
understood it, giving apparently logical arguments to each other often
depending on the “success” of previous flights. For example, in determining
if flight 51-L was safe to fly in the face of ring erosion in flight 51-C, it was
noted that the erosion depth was only one-third of the radius. It had been
noted in an experiment cutting the ring that cutting it as deep as one radius was necessary before the ring failed. Instead of being very concerned that
variations of poorly understood conditions might reasonably create a deeper
erosion this time, it was asserted, there was “a safety factor of three.” This
is a strange use of the engineer’s term “safety factor.” If a bridge is built to
withstand a certain load without the beams permanently deforming,
cracking, or breaking, it may be designed for the materials used to actually
stand up under three times the load. This “safety factor” is to allow for
uncertain excesses of load, or unknown extra loads, or weaknesses in the
material that might have unexpected flaws, etc. If now the expected load
comes on to the new bridge and a crack appears in a beam, this is a failure
of the design. There was no safety factor at all; even though the bridge did
not actually collapse because the crack only went one-third of the way
through the beam. The O-rings of the Solid Rocket Boosters were not
designed to erode. Erosion was a clue that something was wrong. Erosion
was not something from which safety can be inferred.

There was no way, without full understanding, that one could have
confidence that conditions the next time might not produce erosion three
times more severe than the time before. Nevertheless, officials fooled
themselves into thinking they had such understanding and confidence, in
spite of the peculiar variations from case to case. A mathematical model
was made to calculate erosion. This was a model based not on physical
understanding but on empirical curve fitting. To be more detailed, it was
supposed a stream of hot gas impinged on the O-ring material, and the heat
was determined at the point of stagnation (so far, with reasonable physical,
thermodynamic laws). But to determine how much rubber eroded it was
assumed this depended only on this heat by a formula suggested by data on
a similar material. A logarithmic plot suggested a straight line, so it was
supposed that the erosion varied as the .58 power of the heat, the .58 being
determined by a nearest fit. At any rate, adjusting some other numbers, it
was determined that the model agreed with the erosion (to a depth of one-
third the radius of the ring). There is nothing much so wrong with this as
believing the answer! Uncertainties appear everywhere. How strong the gas
stream might be was unpredictable, it depended on holes formed in the
putty. Blow-by showed that the ring might fail even though not, or only
partially, eroded through. The empirical formula was known to be uncertain, for it did not go directly through the very data points by which it was determined. There was a cloud of points, some twice above and some twice below the fitted curve, so erosions twice those predicted were reasonable
from that cause alone. Similar uncertainties surrounded the other constants
in the formula, etc., etc. When using a mathematical model, careful
attention must be given to uncertainties in the model.
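
To make the procedure concrete, here is an editorial sketch of the kind of log-log curve fit described; the data points are invented for illustration and are not the report’s measurements. Ed.

    import numpy as np

    # Fit a power law erosion = c * heat**n as a straight line in
    # log-log space, as the empirical model described above did.
    heat = np.array([1.0, 2.0, 4.0, 8.0])      # hypothetical heating values
    erosion = np.array([0.9, 1.6, 2.1, 3.5])   # hypothetical erosion depths

    slope, intercept = np.polyfit(np.log(heat), np.log(erosion), 1)
    print(f"fitted exponent: {slope:.2f}")     # analogous to the '.58 power'

    # Points scattered a factor of two about the fitted line mean the
    # prediction itself is uncertain by at least that same factor.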

Liquid Fuel Engine (SSME)

During the flight of 51-L the three Space Shuttle Main Engines all worked
perfectly, even, at the last moment, beginning to shut down the engines as
the fuel supply began to fail. The question arises, however, as to whether,
had it failed, and we were to investigate it in as much detail as we did the
Solid Rocket Booster, we would find a similar lack of attention to faults and
a deteriorating reliability. In other words, were the organization weaknesses
that contributed to the accident confined to the Solid Rocket Booster sector
or were they a more general characteristic of NASA? To that end the Space
Shuttle Main Engines and the avionics were both investigated. No similar
study of the Orbiter or the External Tank was made.

The engine is a much more complicated structure than the Solid Rocket
Booster, and a great deal more detailed engineering goes into it. Generally,
the engineering seems to be of high quality and apparently considerable
attention is paid to deficiencies and faults found in operation.

The usual way that such engines are designed (for military or civilian
aircraft) may be called the component system, or bottom-up design. First it
is necessary to thoroughly understand the properties and limitations of the
materials to be used (for turbine blades, for example), and tests are begun in
experimental rigs to determine those. With this knowledge larger
component parts (such as bearings) are designed and tested individually. As
deficiencies and design errors are noted they are corrected and verified with
further testing. Since one tests only parts at a time, these tests and
modifications are not overly expensive. Finally one works up to the final
design of the entire engine, to the necessary specifications. There is a good
chance, by this time, that the engine will generally succeed, or that any
failures are easily isolated and analyzed because the failure modes, limitations of materials, etc., are so well understood. There is a very good
chance that the modifications to the engine to get around the final
difficulties are not very hard to make, for most of the serious problems have
already been discovered and dealt with in the earlier, less expensive, stages
of the process.

The Space Shuttle Main Engine was handled in a different manner, top
down, we might say. The engine was designed and put together all at once
with relatively little detailed preliminary study of the material and
components. Then when troubles are found in the bearings, turbine blades,
coolant pipes, etc., it is more expensive and difficult to discover the causes
and make changes. For example, cracks have been found in the turbine
blades of the high pressure oxygen turbopump. Are they caused by flaws in
the material, the effect of the oxygen atmosphere on properties of the
material, the thermal stresses of startup or shutdown, the vibration and
stresses of steady running, or mainly at some resonance at certain speeds,
etc.? How long can we run from crack initiation to crack failure, and how
does this depend on power level? Using the completed engine as a test bed
to resolve such questions is extremely expensive. One does not wish to lose
entire engines in order to find out where and how failure occurs. Yet, an
accurate knowledge of this information is essential to acquire a confidence
in the engine reliability in use. Without detailed understanding, confidence
cannot be attained.

A further disadvantage of the top-down method is that, if an
understanding of a fault is obtained, a simple fix, such as a new shape for
the turbine housing, may be impossible to implement without a redesign of
the entire engine.

The Space Shuttle Main Engine is a very remarkable machine. It has a
greater ratio of thrust to weight than any previous engine. It is built at the
edge of, or outside of, previous engineering experience. Therefore, as
expected, many different kinds of flaws and difficulties have turned up.
Because, unfortunately, it was built in the top-down manner, they are
difficult to find and to fix. The design aim of a lifetime of 55 mission
equivalent firings (27,000 seconds of operation, either in a mission of 500
seconds, or on a test stand) has not been obtained. The engine now requires
very frequent maintenance and replacement of important parts, such as turbopumps, bearings, sheet metal housings, etc. The high-pressure fuel
turbopump had to be replaced every three or four mission equivalents
(although that may have been fixed, now) and the high-pressure oxygen
turbopump every five or six. This is at most ten percent of the original
specification. But our main concern here is the determination of reliability.

In a total of about 250,000 seconds of operation, the engines have failed
seriously perhaps 16 times. Engineering pays close attention to these
failings and tries to remedy them as quickly as possible. This it does by test
studies on special rigs experimentally designed for the flaw in question, by
careful inspection of the engine for suggestive clues (like cracks), and by
considerable study and analysis. In this way, in spite of the difficulties of
top-down design, through hard work many of the problems have apparently
been solved.

A list of some of the problems follows. Those followed by an asterisk (*)
are probably solved:

Turbine blade cracks in high pressure fuel turbopumps (HPFTP).
(May have been solved.)

Turbine blade cracks in high pressure oxygen turbopumps
(HPOTP).

Augmented Spark Igniter (ASI) line rupture.*
Purge check valve failure.*
ASI chamber erosion.*
HPFTP turbine sheet metal cracking.
HPFTP coolant liner failure.*
Main combustion chamber outlet elbow failure.*
Main combustion chamber inlet elbow weld offset.*
HPOTP subsynchronous whirl.*
Flight acceleration safety cutoff system (partial failure in a redundant system).*
Bearing spalling (partially solved).
A vibration at 4,000 Hertz making some engines inoperable, etc.

Many of these solved problems are the early difficulties of a new design,
for 13 of them occurred in the first 125,000 seconds and only three in the second 125,000 seconds. Naturally, one can never be sure that all the bugs
are out, and, for some, the fix may not have addressed the true cause. Thus,
it is not unreasonable to guess there may be at least one surprise in the next
250,000 seconds, a probability of 1/500 per engine per mission. On a
mission there are three engines, but some accidents would possibly be
contained, and only affect one engine. The system can abort with only two
engines. Therefore let us say that the unknown surprises do not, even of
themselves, permit us to guess that the probability of mission failure due to
the Space Shuttle Main Engine is less than 1/500. To this we must add the
chance of failure from known, but as yet unsolved, problems (those without
the asterisk in the list above). These we discuss below. (Engineers at
Rocketdyne, the manufacturer, estimate the total probability as 1/10,000.
Engineers at Marshall estimate it as 1/300, while NASA management, to
whom these engineers report, claims it is 1/100,000. An independent
engineer consulting for NASA thought 1 or 2 per 100 a reasonable
estimate.)
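
The 1/500 figure follows from simple division; this editorial sketch of the guess described above makes the step explicit. Ed.

    # One surprise failure guessed in the next 250,000 seconds of
    # operation; each mission burns about 500 seconds per engine.
    seconds_of_experience = 250_000
    mission_seconds = 500
    missions_worth = seconds_of_experience / mission_seconds  # 500 missions
    print(f"floor on failure rate: 1 in {missions_worth:.0f} "
          f"per engine per mission")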

The history of the certification principles for these engines is confusing
and difficult to explain. Initially the rule seems to have been that two
sample engines must each have had twice the time operating without
failure, as the operating time of the engine to be certified (rule of 2x). At
least that is the FAA practice, and NASA seems to have adopted it,
originally expecting the certified time to be 10 missions (hence 20 missions
for each sample). Obviously the best engines to use for comparison would
be those of greatest total (flight plus test) operating time–the so-called “fleet
leaders.” But what if a third sample and several others fail in a short time?
Surely we will not be safe because two were unusual in lasting longer. The
short time might be more representative of the real possibilities, and in the
spirit of the safety factor of 2, we should only operate at half the time of the
short-lived samples.

The slow shift toward decreasing safety factor can be seen in many
examples. We take that of the HPFTP turbine blades. First of all the idea of
testing an entire engine was abandoned. Each engine number has had many
important parts (like the turbopumps themselves) replaced at frequent
intervals, so that the rule must be shifted from engines to components. We
accept an HPFTP for a certification time if two samples have each run successfully for twice that time (and of course, as a practical matter, no longer insisting that this time be as large as 10 missions). But what is “successful”? The FAA calls a turbine blade crack a failure, in order, in
practice, to really provide a safety factor greater than 2. There is some time
that an engine can run between the time a crack originally starts until the
time it has grown large enough to fracture. (The FAA is contemplating new
rules that take this extra safety time into account, but only if it is very
carefully analyzed through known models within a known range of
experience and with materials thoroughly tested. None of these conditions
apply to the Space Shuttle Main Engine.)

Cracks were found in many second stage HPFTP turbine blades. In one
case three were found after 1,900 seconds, while in another they were not
found after 4,200 seconds, although usually these longer runs showed
cracks. To follow this story further we shall have to realize that the stress
depends a great deal on the power level. The Challenger flight was to be at,
and previous flights had been at, a power level called 104% of rated power
level during most of the time the engines were operating. Judging from
some material data it is supposed that at the level 104% of rated power
level, the time to crack is about twice that at 109% or full power level
(FPL). Future flights were to be at this level because of heavier payloads,
and many tests were made at this level. Therefore dividing time at 104% by
2, we obtain units called equivalent full power level (EFPL). (Obviously,
some uncertainty is introduced by that, but it has not been studied.) The
earliest cracks mentioned above occurred at 1,375 EFPL.
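
The EFPL bookkeeping amounts to a one-line conversion. The sketch below is an editorial illustration, with the factor of 2 taken from the material data cited above. Ed.

    def equivalent_full_power_seconds(seconds_at_104, seconds_at_fpl=0.0):
        """Time at 104% of rated power is assumed to stress blades half
        as fast as time at 109% (full power level), so it counts half."""
        return seconds_at_104 / 2 + seconds_at_fpl

    # 2,750 seconds at 104% count as 1,375 seconds EFPL, exactly the
    # limit at which the earliest cracks were seen.
    print(equivalent_full_power_seconds(2750))   # 1375.0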

Now the certification rule becomes “limit all second stage blades to a
maximum of 1,375 seconds EFPL.” If one objects that the safety factor of 2
is lost, it is pointed out that the one turbine ran for 3,800 seconds EFPL
without cracks, and half of this is 1,900 so we are being more conservative.
We have fooled ourselves in three ways. First we have only one sample, and
it is not the fleet leader, for the other two samples of 3,800 or more seconds
had 17 cracked blades between them. (There are 59 blades in the engine.)
Next we have abandoned the 2x rule and substituted equal time. And
finally, 1,375 is where we did see a crack. We can say that no crack had
been found below 1,375, but the last time we looked and saw no cracks was
1,100 seconds EFPL. We do not know when the crack formed between these times; for example, cracks may have formed at 1,150 seconds EFPL.
(Approximately 2/3 of the blade sets tested in excess of 1,375 seconds
EFPL had cracks. Some recent experiments have, indeed, shown cracks as
early as 1,150 seconds.) It was important to keep the number high, for the
Challenger was to fly an engine very close to the limit by the time the flight
was over.

Finally it is claimed that the criteria are not abandoned, and the system is
safe, by giving up the FAA convention that there should be no cracks, and
considering only a completely fractured blade a failure. With this definition
no engine has yet failed. The idea is that since there is sufficient time for a
crack to grow to fracture we can insure that all is safe by inspecting all
blades for cracks. If they are found, replace them, and if none are found we
have enough time for a safe mission. This makes the crack problem not a
flight safety problem, but merely a maintenance problem.

This may in fact be true. But how well do we know that cracks always
grow slowly enough that no fracture can occur in a mission? Three engines
have run for long times with a few cracked blades (about 3,000 seconds
EFPL) with no blades broken off.

But a fix for this cracking may have been found. By changing the blade
shape, shot-peening the surface, and covering with insulation to exclude
thermal shock, the blades have not cracked so far.

A very similar story appears in the history of certification of the HPOTP,
but we shall not give the details here.

It is evident, in summary, that the Flight Readiness Reviews and
certification rules show a deterioration for some of the problems of the
Space Shuttle Main Engine that is closely analogous to the deterioration
seen in the rules for the Solid Rocket Booster.

Avionics

By “avionics” is meant the computer system on the Orbiter as well as its
input sensors and output actuators. At first we will restrict ourselves to the
computers proper and not be concerned with the reliability of the input
information from the sensors of temperature, pressure, etc., or with whether the computer output is faithfully followed by the actuators of rocket firings,
mechanical controls, displays to astronauts, etc.

The computing system is very elaborate, having over 250,000 lines of
code. It is responsible, among many other things, for the automatic control
of the entire ascent to orbit, and for the descent until well into the
atmosphere (below Mach 1) once one button is pushed deciding the landing
site desired. It would be possible to make the entire landing automatically
(except that the landing gear lowering signal is expressly left out of
computer control, and must be provided by the pilot, ostensibly for safety
reasons) but such an entirely automatic landing is probably not as safe as a
pilot controlled landing. During orbital flight it is used in the control of
payloads, in displaying information to the astronauts, and the exchange of
information to the ground. It is evident that the safety of flight requires
guaranteed accuracy of this elaborate system of computer hardware and
software.

In brief, the hardware reliability is ensured by having four essentially
independent identical computer systems. Where possible each sensor also
has multiple copies, usually four, and each copy feeds all four of the
computer lines. If the inputs from the sensors disagree, depending on
circumstances, certain averages, or a majority selection is used as the
effective input. The algorithm used by each of the four computers is exactly
the same, so their inputs (since each sees all copies of the sensors) are the
same. Therefore at each step the results in each computer should be
identical. From time to time they are compared, but because they might
operate at slightly different speeds a system of stopping and waiting at
specified times is instituted before each comparison is made. If one of the
computers disagrees, or is too late in having its answer ready, the three
which do agree are assumed to be correct and the errant computer is taken
completely out of the system. If, now, another computer fails, as judged by
the agreement of the other two, it is taken out of the system, and the rest of
the flight canceled, and descent to the landing site is instituted, controlled
by the two remaining computers. It is seen that this is a redundant system
since the failure of only one computer does not affect the mission. Finally,
as an extra feature of safety, there is a fifth independent computer, whose
memory is loaded with only the programs for ascent and descent, and which is capable of controlling the descent if there is a failure of more than two of
the computers of the main line of four.
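
As an editorial illustration of the majority selection described above, here is a toy sketch; it is not the actual flight software, and the computer names are invented. Ed.

    from collections import Counter

    def vote(outputs):
        """Return the majority output and the computers that dissent;
        a dissenting (or too-late) machine is dropped from the system."""
        majority_value, _ = Counter(outputs.values()).most_common(1)[0]
        dissenters = [name for name, value in outputs.items()
                      if value != majority_value]
        return majority_value, dissenters

    value, failed = vote({"C1": 42.0, "C2": 42.0, "C3": 42.0, "C4": 41.9})
    print(value, failed)   # 42.0 ['C4']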

There is not enough room in the memory of the main line computers for
all the programs of ascent, descent, and payload programs in flight, so the
memory is loaded about four times from tapes, by the astronauts.

Because of the enormous effort required to replace the software for such
an elaborate system, and for checking a new system out, no change has
been made in the hardware since the system began about fifteen years ago.
The actual hardware is obsolete; for example, the memories are of the old
ferrite core type. It is becoming more difficult to find manufacturers to
supply such old-fashioned computers reliably and of high quality. Modern
computers are very much more reliable, can run much faster, simplifying
circuits, and allowing more to be done, and would not require so much
loading of memory, for their memories are much larger.

The software is checked very carefully in a bottom-up fashion. First, each
new line of code is checked, then sections of codes or modules with special
function are verified. The scope is increased step by step until the new
changes are incorporated into a complete system and checked. This
complete output is considered the final product, newly released. But
completely independently there is an independent verification group, that
takes an adversary attitude to the software development group, and tests and
verifies the software as if it were a customer of a delivered product. There is
additional verification in using the new programs in simulators, etc. A
discovery of an error during the verification testing is considered very
serious, and its origin studied very carefully to avoid such mistakes in the
future. Such unexpected errors have been found only about six times in all
the programming and program changing (for new or altered payloads) that
have been done. The principle that is followed is that all the verification is
not an aspect of program safety, it is merely a test of that safety, in a non-
catastrophic verification. Flight safety is to be judged solely on how well
the programs do in the verification tests. A failure here generates
considerable concern.

To summarize, then, the computer software checking system and attitude
is of highest quality. There appears to be no process of gradually fooling
oneself while degrading standards so characteristic of the Solid Rocket Booster or Space Shuttle Main Engine safety systems. To be sure, there
have been recent suggestions by management to curtail such elaborate and
expensive tests as being unnecessary at this late date in Shuttle history. This
must be resisted for it does not appreciate the mutual subtle influences, and
sources of error generated by even small changes of one part of a program
on another. There are perpetual requests for changes as new payloads and
new demands and modifications are suggested by the users. Changes are
expensive because they require extensive testing. The proper way to save
money is to curtail the number of requested changes, not the quality of
testing for each.

One might add that the elaborate system could be very much improved by
more modern hardware and programming techniques. Any outside
competition would have all the advantages of starting over, and whether
that is a good idea for NASA now should be carefully considered.

Finally, returning to the sensors and actuators of the avionics system, we
find that the attitude to system failure and reliability is not nearly as good as
for the computer system. For example, a difficulty was found with certain
temperature sensors sometimes failing. Yet 18 months later the same
sensors were still being used, still sometimes failing, until a launch had to
be scrubbed because two of them failed at the same time. Even on a
succeeding flight this unreliable sensor was used again. Again reaction
control systems, the rocket jets used for reorienting and control in flight,
still are somewhat unreliable. There is considerable redundancy, but a long
history of failures, none of which has yet been extensive enough to
seriously affect a flight. The action of the jets is checked by sensors, and if
they fail to fire, the computers choose another jet to fire. But they are not
designed to fail, and the problem should be solved.

Conclusions

If a reasonable launch schedule is to be maintained, engineering often
cannot be done fast enough to keep up with the expectations of originally
conservative certification criteria designed to guarantee a very safe vehicle.
In these situations, subtly, and often with apparently logical arguments, the
criteria are altered so that flights may still be certified in time. They therefore fly in a relatively unsafe condition, with a chance of failure of the
order of a percent (it is difficult to be more accurate).

Official management, on the other hand, claims to believe the probability
of failure is a thousand times less. One reason for this may be an attempt to
assure the government of NASA perfection and success in order to ensure
the supply of funds. The other may be that they sincerely believe it to be
true, demonstrating an almost incredible lack of communication between
themselves and their working engineers.

In any event this has had very unfortunate consequences, the most serious
of which is to encourage ordinary citizens to fly in such a dangerous
machine, as if it had attained the safety of an ordinary airliner. The
astronauts, like test pilots, should know their risks, and we honor them for
their courage. Who can doubt that McAuliffe was equally a person of great
courage, who was closer to an awareness of the true risk than NASA
management would have us believe?

Let us make recommendations to ensure that NASA officials deal in a
world of reality in understanding technological weaknesses and
imperfections well enough to be actively trying to eliminate them. They
must live in reality in comparing the costs and utility of the Shuttle to other
methods of entering space. And they must be realistic in making contracts,
in estimating costs, and the difficulty of the projects. Only realistic flight
schedules should be proposed, schedules that have a reasonable chance of
being met. If in this way the government would not support them, then so
be it. NASA owes it to the citizens from whom it asks support to be frank,
honest, and informative, so that these citizens can make the wisest decisions
for the use of their limited resources.

For a successful technology, reality must take precedence over public
relations, for nature cannot be fooled.

8

WHAT IS SCIENCE?

What is science? It is common sense! Or is it? In April 1966 the master
teacher delivered an address to the National Science Teachers’ Association
in which he gave his fellow teachers lessons on how to teach their students
to think like a scientist and how to view the world with curiosity, open-
mindedness, and, above all, doubt. This talk is also a tribute to the
enormous influence Feynman’s father–a uniforms salesman–had on
Feynman’s way of looking at the world.

I thank Mr. DeRose for the opportunity to join you science teachers. I also
am a science teacher. I have much experience only in teaching graduate
students in physics, and as a result of that experience I know that I don’t
know how to teach.

I am sure that you who are real teachers working at the bottom level of
this hierarchy of teachers, instructors of teachers, experts on curricula, also
are sure that you, too, don’t know how to do it; otherwise you wouldn’t
bother to come to the Convention.

The subject “What Is Science?” is not my choice. It was Mr. DeRose’s
subject. But I would like to say that I think that “What Is Science?” is not at
all equivalent to “how to teach science,” and I must call that to your
attention for two reasons. In the first place, from the way that I am
preparing to give this lecture, it may seem that I am trying to tell you how
to teach science–I am not at all in any way, because I don’t know anything
about small children. I have one, so I know that I don’t know. The other is I
think that most of you (because there is so much talk and so many papers and so many experts in the field) have some kind of a feeling of lack of
self-confidence. In some way you are always being lectured on how things
are not going too well and how you should learn to teach better. I am not
going to berate you for the bad works you are doing and indicate how it can
definitely be improved; that is not my intention.

As a matter of fact, we have very good students coming into Caltech, and
during the years we found them getting better and better. Now how it is
done, I don’t know. I wonder if you know. I don’t want to interfere with the
system; it’s very good.

Only two days ago we had a conference in which we decided that we
don’t have to teach a course in elementary quantum mechanics in the
graduate school anymore. When I was a student, they didn’t even have a
course in quantum mechanics in the graduate school; it was considered too
difficult a subject. When I first started to teach, we had one. Now we teach
it to undergraduates. We discover now that we don’t have to have
elementary quantum mechanics for graduates from other schools. Why is it
getting pushed down? Because we are able to teach better in the university,
and that is because the students coming up are better trained.

What is science? Of course you all must know, if you teach it. That’s
common sense. What can I say? If you don’t know, every teacher’s edition
of every textbook gives a complete discussion of the subject. There is some
kind of distorted distillation and watered-down and mixed-up words of
Francis Bacon from some centuries ago, words which then were supposed
to be the deep philosophy of science. But one of the greatest experimental
scientists of the time who was really doing something, William Harvey,*
said that what Bacon said science was, was the science that a lord
chancellor would do. He spoke of making observations, but omitted the
vital factor of judgment about what to observe and what to pay attention to.

And so what science is, is not what the philosophers have said it is and
certainly not what the teacher editions say it is. What it is, is a problem
which I set for myself after I said I would give this talk.

After some time I was reminded of a little poem.

A centipede was happy quite, until a toad in fun
Said, “Pray, which leg comes after which?”

