Searle, Subsymbolic Functionalism and Synthetic Intelligence

Diane Law

Department of Computer Sciences
The University of Texas at Austin

[email protected]
Technical Report AI94-222

June 1994

Abstract

John Searle's Chinese Room argument raises many important questions for traditional, symbolic AI, which none of the standard replies adequately refutes. Even so, the argument does not necessarily imply that machines will never be truly able to think. Nevertheless, we must be willing to make some changes to the foundations of traditional AI in order to answer Searle's questions satisfactorily. This paper argues that if we constrain the sorts of architectures we consider as appropriate models of the mind other than the brain, such that they resemble the physical structure of the brain more closely, we gain several desirable properties. It is possible that these properties may help us solve the hard problems of intentionality, qualia and consciousness that Computational Functionalism has so far not been able to address in satisfactory ways.

Introduction

It may seem evident that if one is in basic agreement with the points John Searle makes with his Chinese Room Argument (1980, 1984), that would be reason enough to abandon all attempts at any form of what he calls strong AI. After all, the whole argument is meant to demonstrate the futility of endeavors in that direction. In this paper, I will attempt to show that even if Searle is correct, there is hope for success.

To begin, it is important to note that there are really several different paradigms within the area of Artificial Intelligence, whereas Searle directly addresses only what he calls "strong AI" in his argument. Of the different areas, two in particular (the symbolic and subsymbolic paradigms) seem to fit his definition and thus are possible targets of the Chinese Room Argument (CRA). The purpose of this paper is to examine both of these styles of AI, differentiating them from other types, in order to determine whether they are different in ways that can contribute significantly to the refutation of Searle's argument.

For its first few decades, the discipline of Artificial Intelligence was chiefly, if not exclusively, devoted to the symbolic paradigm. This is the sort of AI that everyone was doing when Searle introduced the CRA. Most of its practitioners have counted themselves as Computational or AI Functionalists in terms of their philosophical affiliation. As such, they have proffered various defenses against Searle's argument. This is hardly surprising, since one of the basic tenets of this philosophy is the idea that we can best characterize the workings of the mind as a series of operations performed on formal symbol structures (Bechtel, 1988; Newell, 1976). Adherence to this notion entails a high-level view of the mind, in the sense that it dismisses the function of individual neurons (and even of neuronal structures) in the brain as being unnecessary to an understanding of cognitive processes.

In the last ten to fifteen years, however, the field of AI has seen the growth of connectionism as a major paradigm. Although there is great variety in the aims and interests of connectionists, just as there is among the proponents of symbolic AI, there are many who are specifically interested in biologically plausible models of the brain and who feel that even our present-day, admittedly crude models of neural networks may give us new insight into the way the mind works. When Searle introduced the CRA, he was obviously targeting symbolic AI (in particular, the work of Schank's group at Yale; see Schank and Riesbeck, 1981 for an overview), so it is possible to hypothesize that the CRA may not apply to the subsymbolic paradigm. The proponents of the symbolic approach are quick to point out that connectionists still have not proved that it is possible in practice to use artificial neural networks (ANNs) to model high-level cognitive functions effectively. Nonetheless, this paper will be concerned with defending connectionism from a more theoretical, rather than a purely practical, point of view. What we would like to discover is whether connectionism might be able to lead us into new ways of thinking about the mind, productive ways of modeling it and, ultimately, whether connectionist systems might be the key to true thinking machines.


Different Sorts of AI

Before we can discuss the possibilities for AI, we must first define exactly what our goals are. As it turns out, various researchers have entirely different aims. Some are not interested at all in anything we might call cognitive plausibility; their work lies more in the realm of engineering than in the cognitive sciences. This is the sort of work that the Department of Defense often contracts, and it is generally motivated by a requirement for a very specific result in a very limited domain. We call upon techniques from AI simply because we cannot accomplish a particular goal with the ordinary algorithms that we study in other areas of computer science, such as systems or numerical programming.

Most people intuitively insist that intelligence involves something more than a computer blindly and deterministically executing a program in which a programmer explicitly defines the actions to be taken in each of several (carefully limited) situations. Thus, in order to accept that these programs exhibit something we might call artificial intelligence, we shall also have to accept that the modifier "artificial" places very severe constraints on our normal definition of intelligence. In some sense, this seems consistent with the constraints of meaning that are implied when we place "artificial" in front of other nouns. Surely no one would argue, for example, that there is any significant relationship between an artificial flower and a real flower other than its general physical shape, nor that an artificial limb is a really satisfactory replacement for a natural one. Since work with such concrete and practical aims is in no way concerned with human cognition, we need consider it no further.

There is a second group of AI researchers with a very different objective. As we shall see, we can further subdivide even this group, but for now, if we confine ourselves to considering only their most general high-level goals, we can temporarily put them all into a single category. This group includes all those people whose aim is to somehow emulate the workings of the human mind, whether with the intent of simply gaining a better understanding of the brain, with the intent of passing the Turing test, or with the much more ambitious desire to eventually build the sorts of perfect androids that are popular in science fiction movies or on futuristic television programs.

As I noted, this latter group is not homogeneous. Searle would argue that those who rely on computer simulations only as a tool to better understand the brain are not in the same category as those wishing to simulate a mind in all its aspects. I believe there are relatively few people who truly belong to the former category and who also consider their work part of AI. Most of them work outside the field of Computer Science, and they are largely neuroscientists or psychologists. These are people who are concerned with real, human intelligence. As such, Searle would categorize their work as "weak AI," and he explicitly exempts them from his argument.

On the other side, we have a group of researchers who are at least nominally the target of the CRA. These are the people who really are trying to lay the groundwork for the eventual construction of something at least very similar to a human mind, either in silicon or perhaps in some other medium different from the human brain. This paper argues that one can count oneself in this group but, at the same time, be in agreement with most of what Searle says in all the variations of the CRA. This may seem paradoxical, but the fact is that we can still subdivide this second group once again.


On the one hand, we have the proponents of what Searle calls "strong AI." His early definition of strong AI is this:

    [T]he computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states (Searle, 1980).

A decade later, he makes this definition more concise, saying, "Strong AI claims that thinking is merely the manipulation of formal symbols" (1990). His American Philosophical Association address goes on to further clarify the problems that arise when symbolic manipulation forms the entire basis for programs which purport to think, claiming that since syntax is not defined in terms of anything physical,

    computation is not discovered in the physics, it is assigned to it. Certain physical phenomena [presumably such as the patterns of bits in registers and/or memory, in the case of a von Neumann-type computer] are assigned or used or programmed or interpreted syntactically (1991).

From these remarks, it is clear that Searle's complaints are chiefly with the Computational Functionalists, or the adherents of traditional, symbolic AI. Of course there are many standard philosophical criticisms of functionalism (Bechtel, 1988; Churchland, 1986), and it is not the aim of this paper to repeat them all here. Still, it is important to note that the foundation of this school of thought is a complete reliance on symbolic processing. These are people who take Newell and Simon's Physical Symbol System Hypothesis (1976) both extremely seriously and very literally. The argument is that we can reduce all intelligence in some way to the manipulation of symbolic tokens. In fact, many in the field of AI seem to define intelligence as just such manipulation. As a result, it follows that intelligence can be realized in a wide range of physical media.

There is yet one more style of research within the field of artificial intelligence: the relatively new school of connectionism. Before we proceed with a discussion of this last paradigm, I want to make it clear that there are all sorts of people who use connectionist models (in fields such as physics, mathematics and engineering) who are merely seeking practical answers to underspecified or intractable problems, just as there are in symbolic AI. No effort is made in this sort of research to ensure biological plausibility; indeed, the learning algorithms often come from formal disciplines such as nonlinear optimization and statistical mechanics. For this reason, we are not particularly concerned here with such work. On the other hand, there is a large number of connectionists whose interest is precisely the study of the way the brain works and who see connectionism as the most reasonable tool to carry out their investigations.

We must still exercise some care, however, when we say that many connectionists use artificial neural networks as a tool to discover how the brain might work, because quite often they cannot truly count themselves among the proponents of weak AI. On the contrary, many of them see connectionism as the best hope for creating intelligence in some sort of machine other than the human brain. Although it is at present far beyond our capabilities to build an ANN of sufficient complexity to mimic the brain, as far as we can tell, it is not impossible in principle to do so. The idea is that if we could implement such a network, we would have reason to hope that it would "really be a mind," as Searle says.

This sort of work is an attempt at something we might more appropriately call "synthetic intelligence." We can make a distinction between artificial and synthetic intelligence in the same way we make a distinction between artificial and synthetic rubies. An artificial ruby is not a ruby at all and shares none of its chemical properties, while a synthetic ruby is truly a ruby. The difference is simply that a synthetic ruby is man-made in a laboratory, while a natural ruby is not. Given that connectionist techniques continue to improve, providing truer models of both neurons and neuronal structures, we might appropriately consider a connectionist success to be something more similar to synthetic intelligence.


In the original version of the CRA, Searle admits that he was not considering connectionist systems (hardly surprising, since work in this area was far from the mainstream in 1980). Nevertheless, he claims that a variant of the argument applies equally to them (1990). He calls this variation the Chinese Gym:

    ...[Consider] a hall containing many monolingual, English speaking men. These men would carry out the same operations as the nodes and synapses in a connectionist architecture...and the outcome would be the same as having one man manipulate symbols according to a rule book. No one in the gym speaks a word of Chinese, and there is no way for the system as a whole to learn the meanings of any Chinese words.

In fact, if we take the system in isolation, Searle is right. This much alone does not improve the situation encountered in the original argument. Without a context, by which I mean some sort of causal connections to the world that allow the system to associate words and concepts with their referents, the network does not understand anything. Still, I find this argument to be somewhat problematic. Searle replaces the artificial neurons with men in a gym and then points out that no one of these men understands Chinese. If we map back now to the neurons the men represent, the argument seems to rest on the idea that to understand Chinese, each individual neuron must understand Chinese. This is not a realistic notion, since we know that, as processors, biological neurons are far too simple to understand a natural language. If our neurons do not individually understand language, then it is not fair to demand that the individual men in the room understand either. Thus, this part of the argument fails. The real problem with the Chinese Gym (and the reason that I agree that it is not a system that understands) is that the system is isolated from the environment. The words that it processes have no referents and thus no semantics. If we add real causal connections to the system, in the form of transducers that serve as its only input, and effectors that serve as its only output, then we have the start of a grounded system where semantics play a crucial role. At this point, Searle's argument cannot stand.
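
To make that proposal concrete, the fragment below is a minimal sketch, written for this discussion rather than taken from any published architecture, of such a grounded arrangement: the network's only input comes from a transducer reading a (here, simulated) world, and its only output drives an effector that acts back on that same world. The one-unit "network" and the thermostat-like environment are invented for the example.

    # A minimal sketch of a grounded sensorimotor loop: transducer ->
    # network -> effector. The single-weight "network" and the toy world
    # are invented for illustration; the point is only the causal wiring.
    import random

    class Environment:
        def __init__(self):
            self.temperature = 20.0
        def read(self):                 # transducer: the network's only input
            return self.temperature + random.gauss(0, 0.1)
        def act(self, heater_on):       # effector: the network's only output
            self.temperature += 0.5 if heater_on else -0.5

    def network(sensed, weight=1.0, threshold=18.0):
        """A trivial one-unit 'network': its state is caused by the sensor."""
        return weight * (threshold - sensed) > 0

    world = Environment()
    for step in range(50):
        sensed = world.read()       # input fixed by the world, not by us
        world.act(network(sensed))  # output alters the very world it senses
    print(round(world.temperature, 1))  # hovers near the 18-degree threshold

The arithmetic is trivial; what matters is the wiring. The value on the input unit is determined by the world itself, so there is nothing left over for an outside interpreter to assign.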

Differentiating Connectionism and Symbolic AI

The Chinese Gym argument is not Searle's only reason for believing that connectionist systems are no more adequate for building a thinking mind than are symbolic systems. Despite the parallel processing afforded by ANNs (as well as by the brain), Searle points out that

    Any function that can be computed on a parallel machine can also be computed on a serial machine...Computationally, serial and parallel systems are equivalent...(1990).

Searle is not the only person to make this objection. Actually, it is quite common and has its origins in proofs that all sorts of systems are Turing-computable, connectionist models being one such class. The idea is that if all these systems are able to compute the same things, there can be no substantive difference between them. This is what Smolensky calls the implementationalist view (1988). I will argue, as have many others, that ANNs provide us with something besides Turing-computability.

Despite arguments to the contrary (such as Smolensky's), as recently as 1993 Marinov devoted an entire paper to defending the proposition that there is no substantive difference between symbolic and connectionist models of cognition.

Marinov's argument relies critically on a comparison of one particular sort of ANN (those using the back propagation learning rule) to a variety of standard symbolic machine-learning classification algorithms working on a single, narrowly defined task. His first claim is that such neural networks are strikingly unlike anything we know about the brain. In this he is correct, but since within the connectionist community back propagation is only one of many learning mechanisms (many others exist that are lower level and closer to what little we understand about how the brain learns), it seems as unreasonable to condemn all of connectionism on such a basis as it is to condemn all machines on the basis of the poor performance of the first attempt at powered flight.


Marinov does not attack nor dismiss the idea of biological plausibility as a desirable goal, however, so it is fair to assume that he agrees that this is indeed something worth seeking. Unfortunately, it is at least as difficult to defend the biological (or even the cognitive) plausibility of the standard symbolic machine-learning algorithms. Let us examine them to see why.

Machine-learning algorithms often use a standard technique of building decision trees based on some variation of the following general scheme. The programmer chooses a set of features, the values of which allow categorization of various examples. Quite often the training set consists of positive and negative examples of a single class, although they may also represent two or more different classes. The program computes the probability of an example belonging to a particular class, which means that it must see all the examples and count the representatives of each class before it can begin to categorize. The next step is to compute a "gain" for each feature. Roughly speaking, this number is a measure of how much closer we are to being able to say that a particular example belongs to a given class. Computing this term involves a rather complicated summation of probabilities and logarithms. The program chooses the feature that gives us the largest "gain" to be the root of the decision tree. The process is repeated recursively for each feature until the tree is complete.
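
Since the computation is only gestured at here, a small sketch may help. The following is in the style of the ID3 family of decision-tree learners, which is what the summation of probabilities and logarithms describes; the teacup/mug features and the toy data are invented for illustration.

    # A minimal sketch of the entropy-based "gain" computation described
    # above, in the style of ID3-like decision-tree learners. The features
    # and the toy teacup/mug data are invented for illustration.
    import math
    from collections import Counter

    def entropy(labels):
        """Shannon entropy of a list of class labels, in bits."""
        counts = Counter(labels)
        total = len(labels)
        return -sum((n / total) * math.log2(n / total)
                    for n in counts.values())

    def gain(examples, labels, feature):
        """How much knowing `feature` reduces uncertainty about the class."""
        total = len(examples)
        remainder = 0.0
        for value in set(ex[feature] for ex in examples):
            subset = [lab for ex, lab in zip(examples, labels)
                      if ex[feature] == value]
            remainder += (len(subset) / total) * entropy(subset)
        return entropy(labels) - remainder

    # Hand-chosen features, just as the text points out.
    examples = [{"handle": "large", "tapered": "yes"},
                {"handle": "large", "tapered": "no"},
                {"handle": "small", "tapered": "yes"},
                {"handle": "small", "tapered": "no"}]
    labels = ["teacup", "mug", "teacup", "mug"]

    # The feature with the largest gain becomes the root of the tree.
    print(max(examples[0], key=lambda f: gain(examples, labels, f)))
    # -> "tapered": in this toy data it alone separates the two classes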

There are several problems with this, from the point of view of cognitive plausibility. First, the features that allow categorization come from outside the system. If we were to try to relate this to human learning, it would be something like telling someone learning the difference between a teacup and a mug that he or she should look at the ratio between the circumference of the bottom versus the circumference at the top, the thickness of the china, the height versus the width, and the size of the handle. In other words, a great deal of knowledge is built in from the beginning. The program need not discover the features that are important, as is generally the case for humans learning to categorize; they are given.

The second problem comes from the need to see a representative number of positive and negative examples before categorization can begin. This would be tantamount to having to discover the proportion of teacups in the world that actually have the same size top and bottom circumference, the proportion of mugs that have a handle so small that at most one finger will fit into it, etc., before we could begin to understand the difference between the two. Obviously, this is not something that humans need to do.

The third problem may only appear to be a problem; that is, we may find out that we are doing exactly what the machine-learning algorithms do. Still, intuitively, it does not seem quite right to have to solve a lot of complicated equations in order to tell a teacup from a mug, at least not explicitly. It is true that introspection can be a very bad indicator of what really goes on in the brain, but if we can trust it at all, then it appears that we choose features for categorization without so much formal mathematics. In fact, in the case of distinguishing types of cups, it seems that processing is quite parallel. We look at all the features of the example in question and make a decision based on all the information that our senses can give us.

Furthermore, the discussion above really only addresses cognitive plausibility, saying nothing of biology. Machine-learning techniques tell us nothing about how the brain might carry out these processes, whereas the connectionist counterparts at least show that it is possible for a large number of very simple processors working in concert (such as neurons in the brain) to learn to categorize.

A second major point that Marinov makes is that it is a straightforward matter to convert the decision trees that the machine-learning algorithms induce into explicit production rules which humans can easily understand and apply. In contrast, connectionist models store knowledge in a distributed fashion which is difficult, if not impossible, to extract from the trained network. Whether or not this is a disadvantage depends to some extent on the goals of the categorization task. If the aim is to provide a set of rules that will allow a human to distinguish a cup from a non-cup (to use Winston's famous example), then there is no contest; machine learning wins, hands down. On the other hand, if our goal is to gain some understanding of the way that humans might distinguish members of a category from non-members, the connectionist system may give us a truer picture of the process. After all, it certainly doesn't seem as if we use explicit rules to figure out whether something is a cup or not. It is actually more likely to be a much lower level process, relying on visual perception and a fair amount of parallel processing, leading to simple object recognition. It seems odd that Marinov should demand biological plausibility in one breath, yet reject it in the next, if it turns out that biology doesn't produce the tidy results he desires.


In his response to the Marinov article, Clark (1993) makes some much more pointed distinctions. As he says, although the machine-learning algorithms can indeed employ microfeatures to induce decision trees, the researcher predetermines what those microfeatures will be.[1] We have already mentioned some of the problems that this occasions. On the other hand, ANNs that do not enjoy the benefits of specially prepared input data discover useful microfeatures on their own. Quite often, these do not correspond in any way to those that conscious thought posits, although they produce distinctions just as effectively. Since a great deal of categorization is not the result of conscious deliberation, it is at least worth speculating that perhaps the brain uses exactly such non-intuitive sorts of features in classification tasks. It seems plausible that it might, since the mechanics of processing in the brain bears more physical resemblance to the processing that occurs in ANNs than to that of symbolic programs.

[1] There are connectionist models that take advantage of the same idea, hand encoding input to make the learning task as fast and simple as possible. It is interesting that a number of connectionists regard these systems as a form of cheating, preferring to concentrate research effort on developing new learning algorithms, new models of "neural" units and new automatic structuring techniques, rather than to have to partially solve the problem before the training ever begins.

Connectionist models have another strength that Marinov ignores, but that Clark mentions. This is their ability to interpolate and to take advantage of what are often called soft constraints (see also Smolensky, 1988). Given that a network is trained on data that includes example I1 (producing output O1) and example I2 (producing output O2), when presented with a novel input I1.5 that lies between the two trained inputs, it will produce an output O1.5 (or possibly O1.4 or O1.6). Now, of course it is possible that such an output is in some way nonsensical, but it is also a way for the network to say, in effect, "well, it's something like I1, but it's also something like I2." Human beings seem to learn new concepts in this way quite often, using what people in educational psychology call "cognitive hooks" upon which to build new understanding. On the other hand, symbolic systems are incapable of handling this sort of input. A novel item either conforms to something it knows or it does not. The difference is simply that connectionist systems are inherently continuous, whereas symbolic systems are just as inherently and unavoidably discrete.
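
A sketch makes the behavior vivid. Below, a single linear unit is trained by a simple delta rule on just the two input-output pairs, then probed with the novel in-between input; the numbers are invented, and a realistic connectionist model would be nonlinear and multi-unit, but the interpolating behavior is the same in kind.

    # A minimal sketch of interpolation from soft constraints: one linear
    # unit trained only on I1 -> O1 and I2 -> O2, probed with I1.5. All
    # numbers are invented for illustration.
    def train(pairs, lr=0.1, epochs=500):
        w, b = 0.0, 0.0
        for _ in range(epochs):
            for x, target in pairs:
                err = target - (w * x + b)
                w += lr * err * x       # simple delta-rule weight update
                b += lr * err
        return w, b

    I1, O1 = 1.0, 10.0
    I2, O2 = 2.0, 20.0
    w, b = train([(I1, O1), (I2, O2)])

    print(round(w * 1.5 + b, 2))  # ~15.0: "like O1, but also like O2"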

Of course we admit that there are ways to "fuzzify" knowledge representations in symbolic systems, but they typically require the introduction of some probabilistic measures, such as certainty factors, which are difficult, if not impossible, to obtain accurately. With a connectionist system, the probabilities are gathered automatically and precisely as the system learns. We may see this as a purely practical problem for symbolic systems, since we might argue that we could simply use an ANN to gather statistics and then pass them on to a symbolic system. Since the connectionist system can already make the correct decisions, the advantage to be gained would simply be increased explanatory power. The symbolic system operates with a set of rules that we can print out when a human user wants to know why the system made the decision that it did. As ANNs become more structured and more sophisticated, it is possible that they will be able to give rule-like explanations as well. In that case they would have a great advantage over symbolic systems, since they would not only be able to give explanations of why they produced a certain output, they would also be more explanatory in terms of how the brain does what it does, and they would have no need to rely on an outside system for any of their computation.
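
For readers who have not met certainty factors, the classic MYCIN-style combination rule gives the flavor; the evidence values fed to it below are invented, and they are exactly the sort of hand-estimated numbers that are so difficult to obtain accurately.

    # A sketch of MYCIN-style certainty-factor combination for two pieces
    # of positive evidence. The rule is standard; the evidence values are
    # the hand-estimated quantities a knowledge engineer must supply.
    def combine(cf1, cf2):
        """Combine two positive certainty factors (each in [0, 1])."""
        return cf1 + cf2 * (1.0 - cf1)

    print(combine(0.6, 0.4))  # 0.76: more certain than either piece alone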


Another important way in which connectionist models differ from their symbolic counterparts is in the way they represent information. As Clark points out, representations in connectionist systems are distributed and superpositional. There are several advantages to this sort of representation. The first seems in some sense to be a purely practical one: a distributed representation makes it possible to store a great many concepts in a relatively small space, since each unit participates in the representation of many different items. Still, this advantage is somewhat more than simply a means to get around having to buy computers with ever-greater amounts of memory. The fact is that the brain itself has a finite number of neurons, and this is one means of explaining how it can store so overwhelmingly many facts, procedures, and episodic memories.

Not only that, but the distributed representation also automatically affords a content-addressable, associative memory. This comes "for free," and seems to be just the answer we need for questions such as how it is that humans can so often bring just the right piece of information immediately to the fore, with no apparent search, or why it is that when we are "thinking, musing or reasoning, one thought reminds us of another" (Dellarosa, 1988).
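
A minimal sketch in the style of Hopfield's (1982) associative networks shows both properties at once: two patterns are superposed on a single weight matrix, and a corrupted cue settles back to the stored pattern it most resembles, with no search. The 8-unit patterns are invented for illustration.

    # Content-addressable memory, Hopfield style: patterns superposed on
    # one weight matrix; a noisy cue settles to the nearest stored pattern.
    def store(patterns, n):
        w = [[0.0] * n for _ in range(n)]
        for p in patterns:              # Hebbian outer-product learning
            for i in range(n):
                for j in range(n):
                    if i != j:
                        w[i][j] += p[i] * p[j]
        return w

    def recall(w, cue, steps=5):
        s = list(cue)
        for _ in range(steps):          # settle toward a stored attractor
            s = [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0
                 else -1 for i in range(len(s))]
        return s

    A = [1, 1, 1, 1, -1, -1, -1, -1]
    B = [1, -1, 1, -1, 1, -1, 1, -1]
    w = store([A, B], 8)

    noisy_A = [1, 1, -1, 1, -1, -1, -1, -1]  # A with one unit flipped
    print(recall(w, noisy_A) == A)           # True: the cue retrieves A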

We have also successfully used artificial neural networks to solve problems for which we have no satisfactory algorithms, most notably pattern-matching tasks. Handwriting recognition is one such area. To my knowledge, there is no symbolic method to solve this problem and, although the connectionist systems that we use to perform this job are not perfect, they are at least able to solve the problem to an extent acceptable for practical applications. Furthermore, even humans sometimes have trouble recognizing non-standard handwriting. This is a case where connectionist systems are definitely capable of doing something that we have not been able to do with symbolic systems, Turing equivalence notwithstanding. In cases where we know of no algorithm that can produce the desired computation, ANNs can at least sometimes give us the solution we require.

There is yet another crucial difference between connectionist and symbolic models which is more important than any of the preceding arguments, since it represents an advantage for which there is no equivalent in any symbolic system: it is relatively straightforward to situate ANNs in such a way as to give them causal connections with the world. Strangely enough, however, few researchers have consciously taken advantage of this feature, even though this is the very thing that saves us from having to throw in the towel, even if we do believe that Searle is basically right. The problem with most connectionist models is that they treat the network as a "brain in a vat," unconnected from the world in which it exists (whether that world is the real one or a simulated one probably doesn't much matter, at least for purposes of early research). As Lakoff put it in his reply to Smolensky's target article,

    Smolensky's discussion makes what I consider a huge omission: the body. The neural networks in the brain do not exist in isolation: they are connected to the sensorimotor system. For example, the neurons in a topographic map of the retina are not just firing in isolation for the hell of it. They are firing in response to retinal input, which is in turn dependent on what is in front of one's eyes. An activation pattern in the topographic map of the retina is therefore not merely a meaningless mathematical object in some dynamical system; it is meaningful...One cannot just arbitrarily assign meaning to activation patterns over neural networks that are connected to the sensorimotor system. The nature of the hookup to the body will make such an activation pattern meaningful and play a role in fixing its meaning (1988).

This is a direct reply to Searle's main objection, and it is a reply that I see as much more difficult to refute than any that have gone before. In some ways, it is similar to the standard replies, and it falls most squarely within the spirit of the systems reply. Yet the systems reply that I have seen defended so many times is wrong; not for the reason that Searle gives, but because the systems reply makes no requirement for causal connections with the environment within which a system functions. When it is precisely that environment which provides the input and which the output affects, the system either survives and prospers because it learns to understand the meaning of things and events around it and to respond in appropriate ways, or it suffers and fails because it does not.


The reason that a connectionist model can have this special property, whereas a symbolic system cannot, is that inputs to a neural network have a direct causal effect on the state of the network; values on the input units determine the values throughout the system. To put it another way, the network builds its own internal representations of things as an immediate consequence of the input it receives. In this sense, the semantics of the model are an integral part of the system, and as such neither admit nor require arbitrary outside interpretation.

This is very different from the situation we find in symbolic systems. There, human users impose all the meaning of the symbols that a program manipulates by interpreting the output in appropriate ways. The fact that I attach the symbol 'female to the property list of the symbol 'Mary means nothing to the computer, but when the program prints the symbols "Mary is a female," it means something to a human observer. Most of us (Searle, quite notably) are not convinced that this constitutes understanding at all. We ask that the symbols mean something to the computer, in the same way that they mean something to us. This is what Harnad calls the symbol grounding problem (1990). He states the problem in this way:

    How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their shapes, be grounded in anything but other meaningless symbols?

Recently, a number of researchers (including Harnad himself, 1991) have begun an attempt to solve this problem (Feldman, Lakoff, et al., 1990; Stolcke, 1990; Regier, 1991; Sopena, 1988; Jefferson, Collins et al., 1990; Pfeifer and Verschure, 1991; Nenov, 1991). Without such a solution, we can have no hope for an intentional system. Tellingly, all of the researchers mentioned use neural networks as their tool of choice. This is not simply a coincidence. By design, connectionist systems are uniquely capable of doing what the brain seems to do. First, sensory neurons receive information directly from the environment (or from other neurons, in the case of recurrent networks). This input then directly determines the activation state of the rest of the network. In the case of object recognition, then, the perception of the object induces a particular pattern of activation in the network. This pattern is an internal representation of the object. The network can then attach a symbol (i.e., a name, which is also instantiated as a pattern of activation) to that internal representation. In this fashion, the symbol is grounded. It has meaning. Nothing but this particular meaning produces this particular pattern of activation. It is the object that directly causes it. We are not free to impose arbitrary interpretations on such representations; the representation is a representation of one specific thing and not of any other.
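
The causal chain just described is simple enough to sketch. In the toy fragment below (my own illustration, not a model from the grounding literature cited above), a sensory pattern standing in for the perceived object is tied to a name pattern by a Hebbian outer product, so that afterwards the sensed pattern itself, and nothing else, evokes the name.

    # A minimal sketch of grounding a name: the sensed pattern (caused by
    # the object) is Hebbian-associated with a name pattern. The 4-unit
    # "retina" code and the 2-unit name code are invented for illustration.
    def associate(sensory, name):
        """Hebbian outer product: ties a name pattern to a sensed pattern."""
        return [[n * s for s in sensory] for n in name]

    def perceive(weights, sensory):
        """The sensed pattern causally drives the name units."""
        return [1 if sum(w * s for w, s in zip(row, sensory)) >= 0 else -1
                for row in weights]

    cup_on_retina = [1, 1, -1, -1]  # activation caused by seeing the cup
    name_CUP = [1, -1]              # the name, itself a pattern of activation

    w = associate(cup_on_retina, name_CUP)
    print(perceive(w, cup_on_retina) == name_CUP)  # True: the object evokes its name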


With the above points we have several significant differences between the symbolic and connectionist paradigms. It is important to recognize that they are differences in kind, proven Turing equivalence notwithstanding. The last point demonstrates that, despite the undisputed fact that connectionist systems are still crude and unsophisticated and that they are still weak in practice, in principle at least they provide a crucial capability that symbolic systems cannot. Turing computability implies only that if an algorithm exists for a procedure, then some Turing machine equivalent can carry out that computation. Obviously, since ANNs can provide for grounded meanings and since ANNs are Turing equivalent, some Turing machine can produce the same results, but apparently it can do so only by simulating an ANN. This implies that ANNs have certain functional capabilities that no other Turing machine has. Although we have no algorithm for enforcing that representations mean something to a machine, we find that there is a way to ensure that they do, if we would only exploit it. It is unfortunate that even many connectionists do not.

A Somewhat Different Functionalism

At least nominally, believing that it might be possible to construct a synthetic mind implies that one is a functionalist, since it also implies that the actual medium is not a critical feature of the properties that the mind possesses. Yet when we look at the traditional definitions of Computational Functionalism as a philosophical stance, we see that "it views the mind as carrying out formal operations on symbols encoded within it" (Bechtel, 1988). This gives us a great deal of latitude in terms of the sorts of machines we have at our disposal for implementation of the algorithms we hypothesize that the brain is carrying out. Nevertheless, it may be wise to give up some of that freedom, restricting ourselves instead to using machines that are brain-like in important ways (i.e., multiple, relatively simple processors connected in significant and complex ways). If we do, we also gain a certain freedom, in the sense that we no longer have to restrict ourselves to purely algorithmic processing. There is certainly no a priori reason for specifying that functionalism concern itself strictly with symbolic computation, nor for assuming that the architecture of the brain is immaterial to the sorts of things that it does. We are still left with the notion that we can implement the functions of the mind in some medium other than the brain, even if we have narrowed the field of potential physical realizations of this processing.

What else do we gain if we dispense with the stipulation that all cognition is the result of formal symbol manipulation and that we should therefore model it as such? We can start with the least controversial gains: those that are admitted by those proponents of symbolic AI whom Smolensky refers to as revisionists (1988). Many people readily accept that we can best model certain so-called low-level aspects of cognition with correspondingly low-level connectionist systems. These include such things as olfactory, auditory and visual processing. It is clear that this sort of sensory processing is massively parallel in nature and that it deals routinely with noisy and even contradictory information. Perceptual systems must also have methods of processing analog data. Since these are precisely the specialties of ANNs, even many hard-liners in the symbolic camp admit that perhaps it is best to concede this part of the field to the connectionists.

The parallel aspects of the brain are apparently not confined to processing sensory information, however, and it seems unwise to suggest that the only explanatory power that connectionist systems might have is in the realm of these low-level processes. Indeed, there seem to be many activities that go on in parallel which a traditionalist would consider purely symbolic processing. An example of such parallel activity is the familiar phenomenon that occurs when we cannot remember a given fact, such as a name, or solve a particular problem. We think very hard about it at a conscious level without success, but quite often, once we have abandoned the conscious mental search, the answer comes to us, seemingly "out of the blue." Minsky hypothesizes that this might be the result of demons that we set to work when we originally come across the problem (1986, p. 274). This might be a satisfactory high-level explanation, but the question remains: what exactly are these demons (other than programs), and how do they go about their work in the subconscious while we engage in unrelated mental activity at the conscious level?

In general, we will need to explain high-level processes in terms of lower-level ones. Since what we know about the brain indicates that all its activity consists of massively parallel neuronal firing, it is obvious that at some point we will need to explain how the activity of massive numbers of relatively simple processors can account for all of cognition. It may turn out that we cannot find any way that the brain could implement certain high-level theories. We may also discover that such an implementation is far more complex and unnatural than that required by a lower-level, connectionist explanation. In either case, we will have to admit that even though our high-level explanations seem to explain mental activity, in the end they are at best speculative.


Even though it may be the case that we can find symbolic, rule-like explanations for much of mental processing, or that we can identify features that allow for effective categorization, it does not necessarily follow that the brain uses them in its own processing. Indeed, as we saw above in the discussion of Marinov's paper, a connectionist system may do at least as good a job of classification using features that are not only counter-intuitive, but which are sometimes completely opaque to conscious understanding. Thus it would seem to be a mistake to assume that the brain must do things (particularly those things that we do unconsciously) in the way that we can most easily describe. We can generally accept that introspection does not always lead to correct theories of the mind.

The previous paragraph sounds suspiciously like something the eliminative materialists might suggest, and indeed, they may turn out to be right. On the other hand, there is so far no reason to believe that all the explanations for mental phenomena that cognitive science and symbolic AI have theorized are wrong. It may be that we will have to revise or replace at least some of them. Still, the fact that connectionist systems are Turing equivalent indicates that we could find a way to implement many of these theories in connectionist architectures, even though several unsolved problems currently stand in the way. That is not to say that following the implementationalist path is the proper thing to do, since we may find more explanatory theories by simply allowing ANNs to find their own ways of solving problems.

At the same time, there is one important argument against trying to reduce all of cognition to rule-following behavior. Considering the amount of attention that we must pay to the procedure of much of conscious explicit rule-following behavior, and the care that we must take to perform the correct steps accurately and in the right order, it is difficult to explain how such behavior can take place in the subconscious with rules that we quite often cannot even state. Laird, Newell and Rosenbloom (1987) have proposed "chunking" as one possible solution to this problem, and Anderson's Act* (1983) proposes the compilation of "macro-operators" as a similar solution. The problem with these models is that they assume an initial phase of explicit reasoning, following equally explicit symbolic rules, as a prerequisite to building more automatic means for solving problems. Yet we find no evidence of such explicit behavior for much of what we do (using our native language is a case in point), nor evidence of explicit knowledge of the rules that we might conceivably have used to "reason through" the problem initially. It may be more productive to proceed with the idea that these "rules" are not at all like the symbolic rules that a production system (for example) uses. It is difficult to imagine exactly what these rules might be like unless we are familiar with connectionist systems, where we find that behavior that we can describe with rules occurs routinely without any rules in any sort of symbolic form being present in the system. If it makes us more comfortable, we can simply say that the "rules" are distributed among the weights in the system, just as the representations for other entities are.

This is not to suggest that we should simply discard all theories that rely on rule-following behavior without further ado. For one thing, it is obvious that we at least seem to use explicit algorithms for conscious problem-solving. Thus, any theory of mind must be able to explain this phenomenon in some way. For this reason, current, ongoing research on the variable-binding problem in connectionist systems, although still preliminary, is exceedingly important (see Shastri and Ajjanagadde, 1993; Smolensky, 1990), since, as I mentioned above, we will ultimately need to explain all such high-level behavior in terms of the sorts of processing that the physical brain can perform. Furthermore, it may well be that for certain purposes, the more transparent explanations that explicit-rule-based theories offer will be more useful and easier to manipulate. Such purposes would include ordinary folk-psychology predictions of the behavior of our fellow human beings and methods to deal with problematic behavior such as learning disabilities, neuroses, and antisocial behavior. It doesn't seem helpful in such cases to simply say, "Well, Johnny's brain is just wired up wrong. Short of adjusting all his synapses, there is nothing to be done!"


Difficult Problems for Synthetic Intelligence

There are many mental phenomena that neither traditional AI nor Computational Functionalism has been able to explain. Searle says that there are basically four features of mental phenomena that make the mind-brain problem intractable: consciousness, intentionality, subjectivity and mental causation (Searle, 1984). These are the really hard problems for synthetic intelligence, for Cognitive Science and for philosophy in general. Some Computational Functionalists (along with the eliminative materialists) have "solved" them by denying their existence to some extent or another; others have presumed that a machine running the "right" program would somehow produce them by some as yet unspecified means. Still others are satisfied with a purely behavioristic test of intelligence such as the Turing test, saying in essence that whether a machine simulation of the mind actually includes these features is immaterial, as long as the machine produces the appropriate outputs.

Yet these problems do not seem so easy to dismiss. Certainly, the average "man in the street" feels that these are important aspects of the mind, that they are at the heart of the "mark of the mental," and that without them we cannot grant that a machine is truly intelligent. We evidently must deal with these features in a more constructive way if we are to satisfy our most intuitive requirements for intelligence.

We have already seen part of a solution to the problem of intentionality, when we considered the importance of connecting the system to the environment. Nonetheless, we have not yet solved the problem entirely. For one thing, at best, merely adding causal connections between machine-mind and world can only provide referents for concrete objects and events. Obviously, this does not take care of referents for things that do not exist, although we might reasonably surmise that many of these things are composites of things that do exist (e.g., unicorns or Santa Claus). Forming such composites is a strong point of connectionist systems, which excel at interpolation tasks. We cannot dispose of other sorts of referents so easily. Many of the things for which every natural language has terms are simply not so concrete nor so compositional.

One major class of referents that belongs to this group comprises the referents for subjective sensations or internal mental states. All of our terms for emotions, for example, fall into this category. Notwithstanding the idea that we may learn critical notions about such states as pain from the behavior they produce, as Wittgenstein argues (1953), intuitively our most intimate understanding of the word "pain" comes from personal experience of painful feelings. We can make similar arguments for other words that refer to internal states, whether they be sensations or emotions. It is not at all clear that we shall ever be able to make a machine feel pain (or anything else, for that matter), and thus there may not be any way to ground such terms. Yet it seems important to attempt to do so, since such feelings apparently have causal powers.

I have no easy answer to this problem, but I do have some hope that we shall find an answer. We are generally reluctant to grant that single-celled organisms feel even primitive sensations such as pain or even hunger (supposedly unlearned responses; Rolls, 1990), and of course it seems odd to imagine that they might feel happy or jealous. It is less strange to think about vertebrates feeling pain, although most people are still not willing to attribute the full range of human emotions to any animals other than humans themselves. The difference seems to be that as we observe more and more evolved organisms, we can more easily imagine that they are capable of an ever wider range of feelings.

If there is anything to our intuitions, and if it is indeed the case that certain primitive emotions are innate rather than learned, then it would seem that the most productive course to follow would be the course of evolution. In the absence of a fully explanatory theory of subjective sensation, the use of genetic algorithms (Holland, 1975; Goldberg, 1989) may be our best bet in an effort to produce artificial neural networks that function just as people do when they experience a sensation such as pain. According to theories extant in experimental psychology, we learn more "sophisticated" emotions, such as fear, through the mediation of primary reinforcers such as pain (Rolls, 1990). If this is the case, then there is some hope that if we can achieve primitive internal states in an ANN through evolutionary processes, then other emotional states could follow through learning.
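
As a sketch of what that course might look like in its very simplest form, the toy genetic algorithm below, in the style of Holland (1975), evolves a single connection weight so that a trivial one-reflex network comes to withdraw from a noxious stimulus. The world, the fitness measure and all the numbers are invented for illustration; a serious attempt would evolve whole network architectures.

    # A toy genetic algorithm: evolve one weight so that a one-reflex
    # "network" withdraws from noxious stimuli. Fitness is (inverted)
    # pain; the world and all numbers are invented for illustration.
    import random

    def lifetime_pain(weight):
        """Run one agent; count painful contacts it fails to avoid."""
        pain = 0
        for trial in range(20):
            stimulus = random.uniform(-1, 1)     # positive means noxious
            withdraws = weight * stimulus > 0.5  # the network's sole reflex
            if stimulus > 0.5 and not withdraws:
                pain += 1
        return pain

    random.seed(0)
    population = [random.uniform(-2, 2) for _ in range(30)]
    for generation in range(40):
        population.sort(key=lifetime_pain)       # fittest feel least pain
        survivors = population[:10]
        population = [w + random.gauss(0, 0.2)   # reproduce with mutation
                      for w in survivors for _ in range(3)]

    print(lifetime_pain(population[0]) <= 1)     # avoidance has evolved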


In some sense, this is not a completely satisfactory solution, since it is possible to imagine an embodied network that does "all the right things" when it runs into a table at full speed, for example, and yet which feels nothing, despite all its sensors. It might indeed be possible to use genetic algorithms to produce such robots, selectively reinforcing those networks that avoid painful situations whenever possible and which react in convincing ways (shouting and nursing the injured part, for instance) when avoidance is not possible. Still, we have no way to tell if they really experience something subjectively awful or if their reactions are purely behavioristic. On the other hand, we have no way to tell that about each other, either. If we ask why we are the way we are, we have no better answer than to say that in all probability we are that way because evolution made us so. If we use genetic algorithms to produce (perhaps only part of) synthetic minds, our answer to the question of why they behave the way they do is exactly the same.

In a more positive vein, it is difficult to conceive of an evolved ANN that would behave in convincingly appropriate ways while responding to combinations of stimuli[2] and still not be in some special state that we might reasonably identify as a pain state. If pain is instantiated in humans via particular brain states, then we are at least close to an answer. Furthermore, if we consider pain or other subjective experiences to be particular mental states with causal efficacy (i.e., able to cause other mental states to occur or able to provoke physical reactions), then we may identify the states produced in an artificial neural network[3] as just such states. Another way of looking at this is to ask what is happening in the brain of a human who feels something in particular, such as pain. There are certain physical changes that take place. For example, there are changes in the concentrations of certain hormones and transmitters, and the firing rate of certain neurons changes (Rolls, 1990). The exact meaning of all these changes is not completely clear, but certainly, if an ANN were to undergo similar changes of state (ignoring for the moment hormonal changes; we can treat changes in levels of transmitters as changes in connection strength) in response to external or internal stimuli, it would seem fair to surmise that the network might actually be feeling something. At least, it is definitely in a state that is not normal, and it has undergone the same sorts of state changes that occur in the human brain. If appropriate behavior accompanies these state changes, then we have a reasonably strong reason to believe that internal states similar to our own exist. At the very least, the system can now ground the word "pain" in its more social meaning. Furthermore, an agent capable of these state changes could understand at least something of what others go through in similar situations. If we were to expand our understanding of computers to include chemical processes as well as electrical ones, and we could show that the state changes are similar to those of humans, the claim grows even stronger. Whether evolutionary techniques can actually produce these effects is an empirical question.

[2] We can imagine a creature engaged in some intensely pleasurable pursuit hardly noticing a mildly painful experience, while the same creature might react violently to the same stimulus if it were already tired, frustrated or otherwise stressed.

[3] We must assume that the network in question is specifically one that is evolved through genetic algorithms and that we determine the fitness of such networks on the basis of their appropriate reactions to events that would cause particular subjective internal states in humans. Of course this is a behavioristic measure, but I see no other alternative.

With the last few paragraphs, I have outlined some concrete ways to try, at least, to deal with both intentionality and subjectivity, methods that have no direct counterparts in purely symbolic processing. According to Searle, we still need to consider consciousness and mental causation. When he speaks of the latter, we can presume he is talking about various causal powers of the mind. For instance, certain mental states can lead to other mental states or to motor action that has an effect on the environment. If this is all, then this is the easiest to attain of the four properties. Since the states of connectionist systems are by nature associative, it is obvious that certain states would lead naturally to other related states. This accounts for internal causality. On the other hand, if an ANN controls effectors as we have outlined above, then surely we cannot deny that the capability exists for the state of the artificial mind to alter facets of the environment.


Consciousness is, of course, the most difficult of all. It is of necessity the hardest problem, if for no other reason than that we really don't have anything more than a vague intuitive notion of what it is. Patricia Churchland (1986, p. 370) relates a surprising story of the famous patient H.M., in which she notes that although he can solve the Towers of Hanoi puzzle, he does not remember having done it before and he does not realize that he has the skills necessary to do it. It is, she says, "as though some part of H.M.'s nervous system knows what he is doing and has the relevant complex intentions, but H.M. does not." This seems extremely odd to us because it flies in the face of our intuitions about consciousness. How can we be "aware" (at the level of the nervous system) without being consciously aware? We talk about dreams as being the product of the subconscious mind, opposing it to the conscious mind, and yet while we are dreaming it seems very much like the sort of thing that our minds do while we are conscious. Indeed, the brain waves of dreaming subjects are very similar to those produced by alert subjects and quite unlike the brain waves produced in other states (Shepherd, 1983). We can take it for granted that, at least while we are alert, our brains are doing many things in parallel, processing all sorts of sensory information while we think consciously about an upcoming beach vacation or try to remember what else we needed at the grocery store. Perhaps at the same time, we have a nagging feeling that there is something else important to which we really should be attending. Indeed, we cannot "turn off" the train of conscious thought as long as we are awake, no matter how hard we try.

I suspect that consciousness has a great deal to do with attention, or perhaps it even is identical to attention. To put it in terms of structured artificial neural networks, we can imagine that we might have a connectionist system built out of many interrelated "specialist" networks, each of which performs specific tasks. Some of these networks are gating networks that determine which other networks can contribute their outputs to some larger process or computation. We can imagine heterarchies of such gating networks[4] that compete for dominance. Emergency situations would take immediate precedence, for example, while problem-solving would involve gating the outputs of inference-performing networks. These networks in turn feed and are fed by an appropriate associative memory module. It may be that our conscious thought reduces to nothing more exotic than this. If this is the case, then it appears quite possible that we could account for conscious thought via assemblies of ANNs.

[4] Some systems like this exist already, with the gating networks being trained to do their jobs along with the other networks (e.g., see Jacobs, Jordan and Barto, 1991; Jordan and Jacobs, 1993).
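
A toy sketch may help fix the idea; the specialists and their hand-set bid strengths below are invented, while trained versions of the gating scheme are the systems cited in the footnote above. Each specialist offers an output with a strength, and the gate simply lets the strongest bidder through, so that an emergency preempts whatever was being "attended to."

    # A minimal sketch of competing specialist networks under a gate. The
    # specialists and their bid strengths are invented for illustration;
    # trained gating networks appear in Jacobs, Jordan and Barto (1991).
    def emergency_net(state):
        return ("withdraw!", 10.0 if state["pain"] > 0.5 else 0.0)

    def inference_net(state):
        return ("next inference step", 1.0)

    def daydream_net(state):
        return ("beach vacation imagery", 0.5)

    def conscious_output(state, specialists):
        """Gate: only the most strongly bidding specialist gets through."""
        bids = [net(state) for net in specialists]
        return max(bids, key=lambda bid: bid[1])[0]

    specialists = [emergency_net, inference_net, daydream_net]
    print(conscious_output({"pain": 0.0}, specialists))  # next inference step
    print(conscious_output({"pain": 0.9}, specialists))  # withdraw!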

Conclusions

Clearly we face many problems and uncertainties in the quest for a theory of mind. It is possible that some of the things that seem so hard are difficult simply because we are looking at them in the wrong way. Since the human brain is the only intelligent machine with which we are familiar, it does seem unwise to try to divorce intelligence from it completely and attempt to study cognition with purely abstract and formal methods. As the maxim goes in the field of aesthetic design, "form follows function." We know that evolution is a satisficer rather than an optimizer, but it does seem worth considering that the architecture of the brain is the way it is for some good reason.

One problem for the field of Artificial Intelligence is the way we go about designing our programs. We do it (just as they teach us in our first-year programming courses) top-down. We are trying to simulate very high-level mental phenomena, but (ignoring what they teach us at school) we never bother to decompose the problem down to its low-level details. Of course there is one very good reason for this. If we did continue our design work down to the lowest levels, there would undoubtedly be several generations of us who would never get out of the design phase. That would mean several generations of researchers who would rarely have any reason to publish papers, which would indeed be a dire circumstance! Fortunately, there is a simple alternative: we simply do not start so high up. It is hard to imagine that we will be very successful if we keep trying to set a high-level process on top of a void. In some sense, Marvin Minsky's late-'60s program for stacking blocks did just that: it kept trying to place the top block first (Minsky, 1989).


One problem for the field of Artificial Intelligence is the way we go about designing our programs. We do it (just as they teach us in our first-year programming courses) top-down. We are trying to simulate very high-level mental phenomena, but (ignoring what they teach us at school) we never bother to decompose the problem down to its low-level details. Of course there is one very good reason for this. If we did continue our design work down to the lowest levels, there would undoubtedly be several generations of us who would never get out of the design phase. That would mean several generations of researchers who would rarely have any reason to publish papers, which would indeed be a dire circumstance! Fortunately, there is a simple alternative: we simply do not start so high. It is hard to imagine that we will be very successful if we keep trying to set a high-level process on top of a void. In some sense, Marvin Minsky’s late-60s program for stacking blocks did just that: it kept trying to place the top block first (Minsky, 1989).

It seems likely that we will not only need to think about the problems in a more bottom-up fashion (some might argue that this is a giant step backwards), but will probably also have to change our emphasis in terms of the tools we use. Obviously, I think that we will find it useful to increase our reliance on neural networks; but I also believe that we cannot afford to keep using the very crude models we have at present, and will have to continue to refine them and find ways to make them more realistic. Some work is already being done in this respect: for example, Nenov (1991) has recently built a much more sophisticated neural model of memory than anything we have seen so far and is now working on biologically inspired models of attentional mechanisms. Shepherd et al. (1989) have similarly shown that more realistic models of cortical pyramidal neurons have significantly greater computational powers than the usual artificial neuron.
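What “more realistic” buys is easiest to see in a toy contrast. The paper does not prescribe a particular neuron model, so the leaky integrate-and-fire unit below is only one illustrative choice, with arbitrary parameter values; unlike the usual weighted-sum-and-squash unit, its response depends on the timing of its inputs, not merely their sum.

    # Illustration only: a leaky integrate-and-fire unit with arbitrary
    # parameters. Its output depends on *when* input arrives, a temporal
    # dimension the standard artificial neuron simply does not have.
    import numpy as np

    def lif_spikes(input_current, dt=1.0, tau=10.0, threshold=1.0):
        """Integrate input over time with leak; spike and reset at threshold."""
        v, spikes = 0.0, []
        for t, i_t in enumerate(input_current):
            v += dt * (-v / tau + i_t)   # leaky integration
            if v >= threshold:
                spikes.append(t)
                v = 0.0                  # reset after spiking
        return spikes

    burst = np.array([0.5] * 4 + [0.0] * 16)   # input clustered in time
    spread = np.array([0.1] * 20)              # same total input, spread out
    print(lif_spikes(burst), lif_spikes(spread))

The burst and the spread-out input deliver the same total current, yet only the burst drives the unit over threshold.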
We may also need to rethink our ideas about computers themselves, incorporating chemical processes as well as electricity. I also believe that it will be necessary for us to fall back on less deterministic methods. After all, the human brain was not built over the course of a few months or years, nor was it designed first and then implemented. My feeling is that we are not likely to be able to do a great deal better than Nature, and so I would guess that we will need to let evolution play a large role in shaping the architectures of mind that we employ. It is quite likely that if we can succeed in using this sort of tool to implement true synthetic intelligence, we won’t end up with a copy of the human brain, but that may be so much the better: if we find that the exact structure of the brain is not essential for intelligence, that in itself will tell us a very great deal of what we would like to know. It is my hope that we will not have to go so far as to simulate a brain exactly, neuron for neuron, but I am fairly certain that we do need to begin at a level that is much closer to the neuron than to the symbolic representation of abstract ideas. Whether we succeed or not is still very much an open question, but it seems obvious that if we are to do so, we shall need to avail ourselves of many of the solutions that Nature has already derived.
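To make the evolutionary suggestion concrete, here is a minimal genetic-algorithm loop in the style of Holland (1975) and Goldberg (1989), both listed in the references. The bit-string genome and the bit-counting fitness function are hypothetical stand-ins; evolving genuine network architectures would demand a far richer encoding and evaluation.

    # Minimal genetic-algorithm sketch (after Holland, 1975; Goldberg, 1989).
    # The genome and fitness function are placeholders for an architecture
    # encoding and a behavioral measure of the resulting network.
    import random

    random.seed(0)
    GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 40

    def fitness(genome):                 # stand-in objective: count of 1 bits
        return sum(genome)

    def mutate(genome, rate=0.02):
        return [1 - g if random.random() < rate else g for g in genome]

    def crossover(a, b):                 # single-point crossover
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]    # truncation selection: keep best half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children

    print(max(fitness(g) for g in pop))  # fitness climbs toward GENOME_LEN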
References

Anderson, J.R. 1983. The Architecture of Cognition. Cambridge, MA: Harvard University Press.

Bechtel, W. 1988. Philosophy of Mind: An Overview for Cognitive Science. Hillsdale, NJ: Lawrence Erlbaum Associates.

Churchland, P.M. 1986. Matter and Consciousness. Cambridge, MA: MIT Press/Bradford Books.

Churchland, P.S. 1986. Neurophilosophy: Toward a Unified Science of the Mind-Brain. Cambridge, MA: MIT Press/Bradford Books.

Clark, A. 1993. Superpositional Connectionism: A Reply to Marinov. Minds and Machines 3:3, pp. 271-281. Kluwer Academic Publishers.

Dellarosa, D. 1988. The Psychological Appeal of Connectionism. The Behavioral and Brain Sciences 11:1, pp. 28-29. Cambridge University Press.


Feldman, J.A., G. Lakoff, A. Stolcke and S.H. Weber 1990. Miniature Language Acquisition: A Touchstone for Cognitive Science. Proceedings of the 12th Annual Meeting of the Cognitive Science Society.

Goldberg, D.E. 1989. Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison-Wesley.

Harnad, S. 1990. The Symbol Grounding Problem. Physica D 42:1-3, pp. 335-346.

Harnad, S. 1992. Connecting Object to Symbol in Modeling Cognition. In A. Clark and R. Lutz (eds.) Connectionism in Context. Springer-Verlag.

Harnad, S., S.J. Hanson and J. Lubin 1991. Categorical Perception and the Evolution of Supervised Learning in Neural Nets. Presented at the American Association for Artificial Intelligence Symposium on Symbol Grounding: Problem and Practice, Stanford University, March 1991.

Holland, J.H. 1975. Adaptation in Natural and Artificial Systems. Ann Arbor: University of Michigan Press.

Jacobs, R.A., M.I. Jordan and A.G. Barto 1991. Task Decomposition Through Competition in a Modular Connectionist Architecture: The What and Where Vision Tasks. Cognitive Science 15, pp. 219-250.

Jefferson, D., R. Collins, C. Cooper, M. Dyer, M. Flowers, R. Korf, C. Taylor and A. Wang 1990. Evolution as a Theme in Artificial Life: The Genesys/Tracker System. TR-UCLA-AI-90-09.

Jordan, M.I. and R.A. Jacobs 1993. Hierarchical Mixtures of Experts and the EM Algorithm. A.I. Memo No. 1440, MIT.

Laird, J.E., A. Newell and P.S. Rosenbloom 1987. Soar: An Architecture for General Intelligence. Artificial Intelligence 33:1, pp. 1-64.

Lakoff, G. 1988. Smolensky, Semantics and the Sensorimotor System. The Behavioral and Brain Sciences 11:1, pp. 39-40. Cambridge University Press.

Marinov, M.S. 1993. On the Spuriousness of the Symbolic/Subsymbolic Distinction. Minds and Machines 3:3, pp. 253-271. Kluwer Academic Publishers.

Minsky, M. 1986. The Society of Mind. N.Y.: Simon and Schuster.

Minsky, M. 1989. The Intelligence Transplant. Discover 10:10, pp. 52-58.

Nenov, V.I. 1991. Perceptually Grounded Language Acquisition: A Neural/Procedural Hybrid Model. TR-UCLA-AI-91-07.

Newell, A. and H.A. Simon 1976. Computer Science as Empirical Inquiry: Symbols and Search. Reprinted in J.L. Garfield (ed.) Foundations of Cognitive Science: The Essential Readings, 1990, pp. 113-138. N.Y.: Paragon House.

Pfeifer, R. and P. Verschure 1991. Distributed Adaptive Control: A Paradigm for Designing Autonomous Agents. In F.J. Varela and P. Bourgine (eds.) Proceedings of the First European Conference on Artificial Life: Toward a Practice of Autonomous Systems. Cambridge, MA: MIT Press/Bradford Books, pp. 21-30.

Regier, T. 1991. Learning Perceptually-Grounded Semantics in the L0 Project. Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics.

Rolls, E.T. 1990. A Theory of Emotion, and its Application to Understanding the Neural Basis of Emotion. Cognition and Emotion 4:3, pp. 161-190.

Schank, R.C. and C.K. Riesbeck 1981. Inside Computer Understanding. Hillsdale, NJ: Lawrence Erlbaum.

Searle, J.R. 1980. Minds, Brains and Programs. The Behavioral and Brain Sciences 3, pp. 417-458.


Searle, J.R. 1984. Minds, Brains and Science. Cambridge, MA: Harvard University Press.

Searle, J.R. 1990. Is the Brain’s Mind a Computer Program? Scientific American, Jan. 1990, pp. 26-31.

Searle, J.R. 1991. Is the Brain a Digital Computer? Proceedings of the American Philosophical Association 64:3, pp. 21-37.

Shastri, L. and V. Ajjanagadde 1993. From Simple Associations to Systematic Reasoning: A Connectionist Representation of Rules, Variables and Dynamic Bindings Using Temporal Synchrony. The Behavioral and Brain Sciences 16, pp. 417-494.

Shepherd, G.M. 1983. Neurobiology. N.Y.: Oxford University Press, p. 478.

Shepherd, G.M., T.B. Woolf and N.T. Carnevale 1989. Comparisons Between Active Properties of Distal Dendritic Branches and Spines: Implications for Neuronal Computations. Journal of Cognitive Neuroscience 1:3, pp. 273-286.

Smolensky, P. 1988. On the Proper Treatment of Connectionism. The Behavioral and Brain Sciences 11:1, pp. 1-74. Cambridge University Press.

Smolensky, P. 1990. Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Systems. Artificial Intelligence 46, pp. 159-216.

Sopena, J.M. 1988. Verbal Description of Visual Blocks World Using Neural Networks. UB-DPB-8810. Universitat de Barcelona.

Stolcke, A. 1990. Learning Feature-Based Semantics with Simple Recurrent Networks. TR-90-015, International Computer Science Institute, Berkeley, CA.

Wittgenstein, L. 1953. Philosophical Investigations. N.Y.: Macmillan.
