Chapter 6 | Learning 193
Although your friend is fine and you determine that you have intestinal flu (the food is not the culprit),
you’ve developed a taste aversion; the next time you are at a restaurant and someone orders curry, you
immediately feel ill. Although the chicken dish is not what made you sick, you have been conditioned to
be averse to a food after a single, negative experience.
How does this occur—conditioning based on a single instance and involving an extended time lapse
between eating the food and feeling ill? Research into taste aversion suggests that this response may
be an evolutionary adaptation designed to help organisms quickly learn to avoid harmful foods (Garcia
& Rusiniak, 1980; Garcia & Koelling, 1966). Not only may this contribute to species survival via natural
selection, but it may also help us develop strategies for challenges such as helping cancer patients through
the nausea induced by certain treatments (Holmes, 1993; Jacobsen et al., 1993; Hutton, Baracos, & Wismer,
2007; Skolin et al., 2006).
Once we have established the connection between the unconditioned stimulus and the conditioned
stimulus, how do we break that connection and get the dog, cat, or child to stop responding? In Tiger’s
case, imagine what would happen if you stopped using the electric can opener for her food and began to
use it only for human food. Now, Tiger would hear the can opener, but she would not get food. In classical
conditioning terms, you would be giving the conditioned stimulus, but not the unconditioned stimulus.
Pavlov explored this scenario in his experiments with dogs: sounding the tone without giving the dogs the
meat powder. Soon the dogs stopped responding to the tone. Extinction is the decrease in the conditioned
response when the unconditioned stimulus is no longer presented with the conditioned stimulus. When
presented with the conditioned stimulus alone, the dog, cat, or other organism would show a weaker and
weaker response, and finally no response. In classical conditioning terms, there is a gradual weakening
and disappearance of the conditioned response.
What happens when learning is not used for a while—when what was learned lies dormant? As we
just discussed, Pavlov found that when he repeatedly presented the tone (conditioned stimulus) without
the meat powder (unconditioned stimulus), extinction occurred; the dogs stopped salivating to the tone.
However, after a couple of hours of resting from this extinction training, the dogs again began to salivate
when Pavlov sounded the tone. What do you think would happen with Tiger’s behavior if your electric can
opener broke, and you did not use it for several months? When you finally got it fixed and started using
it to open Tiger’s food again, Tiger would remember the association between the can opener and her
food—she would get excited and run to the kitchen when she heard the sound. The behavior of Pavlov’s
dogs and Tiger illustrates a concept Pavlov called spontaneous recovery: the return of a previously
extinguished conditioned response following a rest period (Figure 6.7).
Figure 6.7 This is the curve of acquisition, extinction, and spontaneous recovery. The rising curve shows the
conditioned response quickly getting stronger through the repeated pairing of the conditioned stimulus and the
unconditioned stimulus (acquisition). Then the curve decreases, which shows how the conditioned response weakens
when only the conditioned stimulus is presented (extinction). After a break or pause from conditioning, the
conditioned response reappears (spontaneous recovery).
Of course, these processes also apply in humans. For example, let’s say that every day when you walk to
campus, an ice cream truck passes your route. Day after day, you hear the truck’s music (neutral stimulus),
so you finally stop and purchase a chocolate ice cream bar. You take a bite (unconditioned stimulus) and
then your mouth waters (unconditioned response). This initial period of learning is known as acquisition,
when you begin to connect the neutral stimulus (the sound of the truck) and the unconditioned stimulus
(the taste of the chocolate ice cream in your mouth). During acquisition, the conditioned response gets
stronger and stronger through repeated pairings of the conditioned stimulus and unconditioned stimulus.
Several days (and ice cream bars) later, you notice that your mouth begins to water (conditioned response)
as soon as you hear the truck’s musical jingle—even before you bite into the ice cream bar. Then one day
you head down the street. You hear the truck’s music (conditioned stimulus), and your mouth waters
(conditioned response). However, when you get to the truck, you discover that they are all out of ice cream.
You leave disappointed. The next few days you pass by the truck and hear the music, but don’t stop to
get an ice cream bar because you’re running late for class. You begin to salivate less and less when you
hear the music, until by the end of the week, your mouth no longer waters when you hear the tune. This
illustrates extinction. The conditioned response weakens when only the conditioned stimulus (the sound
of the truck) is presented, without being followed by the unconditioned stimulus (chocolate ice cream in
the mouth). Then the weekend comes. You don’t have to go to class, so you don’t pass the truck. Monday
morning arrives and you take your usual route to campus. You round the corner and hear the truck again.
What do you think happens? Your mouth begins to water again. Why? After a break from conditioning,
the conditioned response reappears, which indicates spontaneous recovery.
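The shape of the curve in Figure 6.7 can be reproduced qualitatively in a few lines of code. The error-correction update rule below and every numeric parameter (learning rate, trial counts, recovery fraction) are illustrative assumptions made for this sketch; they are not drawn from the chapter or from any specific learning model discussed here.

```python
# Toy sketch of acquisition, extinction, and spontaneous recovery.
# The update rule and all numeric parameters are illustrative
# assumptions, not values taken from the chapter.

def run_trials(strength, trials, us_present, rate=0.3, ceiling=1.0):
    """Nudge conditioned-response strength toward the ceiling when the
    US follows the CS (acquisition), or toward zero when the CS is
    presented alone (extinction)."""
    history = []
    for _ in range(trials):
        target = ceiling if us_present else 0.0
        strength += rate * (target - strength)
        history.append(round(strength, 3))
    return strength, history

cr = 0.0
cr, acq = run_trials(cr, 10, us_present=True)   # repeated CS-US pairings
cr, ext = run_trials(cr, 10, us_present=False)  # CS alone: response fades
cr = 0.5 * acq[-1]     # rest period: partial, not full, return of the CR
print(acq[-1], ext[-1], round(cr, 3))
```

Running the sketch shows the three phases of Figure 6.7 in miniature: the values in `acq` rise toward a ceiling, those in `ext` fall toward zero, and the final value of `cr` sits between the two, mimicking spontaneous recovery after a rest period.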
Acquisition and extinction involve the strengthening and weakening, respectively, of a learned
association. Two other learning processes—stimulus discrimination and stimulus generalization—are
involved in distinguishing which stimuli will trigger the learned association. Animals (including humans)
need to distinguish between stimuli—for example, between sounds that predict a threatening event and
sounds that do not—so that they can respond appropriately (such as running away if the sound is
threatening). When an organism learns to respond differently to various stimuli that are similar, it is called
stimulus discrimination. In classical conditioning terms, the organism demonstrates the conditioned
response only to the conditioned stimulus. Pavlov’s dogs discriminated between the basic tone that
sounded before they were fed and other tones (e.g., the doorbell), because the other sounds did not predict
This OpenStax book is available for free at http://cnx.org/content/col11629/1.5
the arrival of food. Similarly, Tiger, the cat, discriminated between the sound of the can opener and the
sound of the electric mixer. When the electric mixer is going, Tiger is not about to be fed, so she does not
come running to the kitchen looking for food.
On the other hand, when an organism demonstrates the conditioned response to stimuli that are similar to
the conditioned stimulus, it is called stimulus generalization, the opposite of stimulus discrimination. The
more similar a stimulus is to the conditioned stimulus, the more likely the organism is to give the conditioned
response. For instance, if the electric mixer sounds very similar to the electric can opener, Tiger may come
running after hearing its sound. But if you do not feed her following the electric mixer sound, and you
continue to feed her consistently after the electric can opener sound, she will quickly learn to discriminate
between the two sounds (provided they are sufficiently dissimilar that she can tell them apart).
Sometimes, repeated exposure to a stimulus leads to habituation. Habituation occurs when we learn not to
respond to a stimulus that is presented repeatedly without change. As the stimulus occurs over and over,
we learn not to focus our attention on it. For example, imagine that your neighbor or roommate constantly
has the television blaring. This background noise is distracting and makes it difficult for you to focus when
you’re studying. However, over time, you become accustomed to the stimulus of the television noise, and
eventually you hardly notice it any longer.
BEHAVIORISM
John B. Watson, shown in Figure 6.8, is considered the founder of behaviorism. Behaviorism is a school
of thought that arose during the first part of the 20th century, which incorporates elements of Pavlov’s
classical conditioning (Hunt, 2007). In stark contrast with Freud, who considered the reasons for behavior
to be hidden in the unconscious, Watson championed the idea that all behavior can be studied as a
simple stimulus-response reaction, without regard for internal processes. Watson argued that in order for
psychology to become a legitimate science, it must shift its concern away from internal mental processes
because mental processes cannot be seen or measured. Instead, he asserted that psychology must focus on
outward observable behavior that can be measured.
Figure 6.8 John B. Watson used the principles of classical conditioning in the study of human emotion.
Watson’s ideas were influenced by Pavlov’s work. According to Watson, human behavior, just like animal
behavior, is primarily the result of conditioned responses. Whereas Pavlov’s work with dogs involved the
conditioning of reflexes, Watson believed the same principles could be extended to the conditioning of
human emotions (Watson, 1919). Thus began Watson’s work with his graduate student Rosalie Rayner
and a baby called Little Albert. Through their experiments with Little Albert, Watson and Rayner (1920)
demonstrated how fears can be conditioned.
In 1920, Watson was the chair of the psychology department at Johns Hopkins University. Through his
position at the university he came to meet Little Albert’s mother, Arvilla Merritte, who worked at a campus
hospital (DeAngelis, 2010). Watson offered her a dollar to allow her son to be the subject of his experiments
in classical conditioning. Through these experiments, Little Albert was exposed to and conditioned to fear
certain things. Initially he was presented with various neutral stimuli, including a rabbit, a dog, a monkey,
masks, cotton wool, and a white rat. He was not afraid of any of these things. Then Watson, with the
help of Rayner, conditioned Little Albert to associate these stimuli with an emotion—fear. For example,
Watson handed Little Albert the white rat, and Little Albert enjoyed playing with it. Then Watson made
a loud sound, by striking a hammer against a metal bar hanging behind Little Albert’s head, each time
Little Albert touched the rat. Little Albert was frightened by the sound—demonstrating a reflexive fear of
sudden loud noises—and began to cry. Watson repeatedly paired the loud sound with the white rat. Soon
Little Albert became frightened by the white rat alone. In this case, what are the UCS, CS, UCR, and CR?
Days later, Little Albert demonstrated stimulus generalization—he became afraid of other furry things:
a rabbit, a furry coat, and even a Santa Claus mask (Figure 6.9). Watson had succeeded in conditioning
a fear response in Little Albert, thus demonstrating that emotions could become conditioned responses.
It had been Watson’s intention to produce a phobia—a persistent, excessive fear of a specific object or
situation—through conditioning alone, thus countering Freud’s view that phobias are caused by deep,
hidden conflicts in the mind. However, there is no evidence that Little Albert experienced phobias in later
years. Little Albert’s mother moved away, ending the experiment, and Little Albert himself died a few
years later of unrelated causes. While Watson’s research provided new insight into conditioning, it would
be considered unethical by today’s standards.
Figure 6.9 Through stimulus generalization, Little Albert came to fear furry things, including Watson in a Santa
Claus mask.
LINK TO LEARNING
View scenes from John Watson’s experiment (http://openstaxcollege.org/l/
Watson1) in which Little Albert was conditioned to respond in fear to furry objects.
As you watch the video, look closely at Little Albert’s reactions and the manner in
which Watson and Rayner present the stimuli before and after conditioning. Based
on what you see, would you come to the same conclusions as the researchers?
EVERYDAY CONNECTION
Advertising and Associative Learning
Advertising executives are pros at applying the principles of associative learning. Think about the car
commercials you have seen on television. Many of them feature an attractive model. By associating the model
with the car being advertised, you come to see the car as being desirable (Cialdini, 2008). You may be asking
yourself, does this advertising technique actually work? According to Cialdini (2008), men who viewed a car
commercial that included an attractive model later rated the car as being faster, more appealing, and better
designed than did men who viewed an advertisement for the same car minus the model.
Have you ever noticed how quickly advertisers cancel contracts with a famous athlete following a scandal?
As far as the advertiser is concerned, that athlete is no longer associated with positive feelings; therefore, the
athlete cannot be used as an unconditioned stimulus to condition the public to associate positive feelings (the
unconditioned response) with their product (the conditioned stimulus).
Now that you are aware of how associative learning works, see if you can find examples of these types of
advertisements on television, in magazines, or on the Internet.
6.3 Operant Conditioning
Learning Objectives
By the end of this section, you will be able to:
• Define operant conditioning
• Explain the difference between reinforcement and punishment
• Distinguish between reinforcement schedules
The previous section of this chapter focused on the type of associative learning known as classical
conditioning. Remember that in classical conditioning, something in the environment triggers a reflex
automatically, and researchers train the organism to react to a different stimulus. Now we turn to the
second type of associative learning, operant conditioning. In operant conditioning, organisms learn to
associate a behavior and its consequence (Table 6.1). A pleasant consequence makes that behavior more
likely to be repeated in the future. For example, Spirit, a dolphin at the National Aquarium in Baltimore,
does a flip in the air when her trainer blows a whistle. The consequence is that she gets a fish.
Table 6.1 Classical and Operant Conditioning Compared

Classical conditioning
Conditioning approach: An unconditioned stimulus (such as food) is paired with a neutral stimulus (such as a bell). The neutral stimulus eventually becomes the conditioned stimulus, which brings about the conditioned response (salivation).
Stimulus timing: The stimulus occurs immediately before the response.

Operant conditioning
Conditioning approach: The target behavior is followed by reinforcement or punishment to either strengthen or weaken it, so that the learner is more likely to exhibit the desired behavior in the future.
Stimulus timing: The stimulus (either reinforcement or punishment) occurs soon after the response.
Psychologist B. F. Skinner saw that classical conditioning is limited to existing behaviors that are
reflexively elicited, and it doesn’t account for new behaviors such as riding a bike. He proposed a theory
about how such behaviors come about. Skinner believed that behavior is motivated by the consequences
we receive for the behavior: the reinforcements and punishments. His idea that learning is the result of
consequences is based on the law of effect, which was first proposed by psychologist Edward Thorndike.
According to the law of effect, behaviors that are followed by consequences that are satisfying to the
organism are more likely to be repeated, and behaviors that are followed by unpleasant consequences are
less likely to be repeated (Thorndike, 1911). Essentially, if an organism does something that brings about
a desired result, the organism is more likely to do it again. If an organism does something that does not
bring about a desired result, the organism is less likely to do it again. An example of the law of effect is in
employment. One of the reasons (and often the main reason) we show up for work is that we get paid
to do so. If we stop getting paid, we will likely stop showing up—even if we love our job.
Working with Thorndike’s law of effect as his foundation, Skinner began conducting scientific experiments
on animals (mainly rats and pigeons) to determine how organisms learn through operant conditioning
(Skinner, 1938). He placed these animals inside an operant conditioning chamber, which has come to be
known as a “Skinner box” (Figure 6.10). A Skinner box contains a lever (for rats) or disk (for pigeons) that
the animal can press or peck for a food reward via the dispenser. Speakers and lights can be associated
with certain behaviors. A recorder counts the number of responses made by the animal.
Figure 6.10 (a) B. F. Skinner developed operant conditioning for systematic study of how behaviors are
strengthened or weakened according to their consequences. (b) In a Skinner box, a rat presses a lever in an operant
conditioning chamber to receive a food reward. (credit a: modification of work by "Silly rabbit"/Wikimedia Commons)
LINK TO LEARNING
Watch this brief video clip (http://openstaxcollege.org/l/skinner1) to learn more
about operant conditioning: Skinner is interviewed, and operant conditioning of
pigeons is demonstrated.
In discussing operant conditioning, we use several everyday words—positive, negative, reinforcement,
and punishment—in a specialized manner. In operant conditioning, positive and negative do not mean
good and bad. Instead, positive means you are adding something, and negative means you are taking
something away. Reinforcement means you are increasing a behavior, and punishment means you are
decreasing a behavior. Reinforcement can be positive or negative, and punishment can also be positive
or negative. All reinforcers (positive or negative) increase the likelihood of a behavioral response. All
punishers (positive or negative) decrease the likelihood of a behavioral response. Now let’s combine
these four terms: positive reinforcement, negative reinforcement, positive punishment, and negative
punishment (Table 6.2).
Table 6.2 Positive and Negative Reinforcement and Punishment

Positive reinforcement: Something is added to increase the likelihood of a behavior.
Negative reinforcement: Something is removed to increase the likelihood of a behavior.
Positive punishment: Something is added to decrease the likelihood of a behavior.
Negative punishment: Something is removed to decrease the likelihood of a behavior.
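The four combinations in Table 6.2 reduce to two yes/no questions: is a stimulus added or removed, and does the behavior increase or decrease? That logic can be written out directly; the function name and its boolean flags are our own labels, invented purely for illustration.

```python
# The chapter's four-term grid as a two-question lookup:
# "positive" = something added, "negative" = something removed;
# reinforcement increases a behavior, punishment decreases it.

def classify(stimulus_added, behavior_increases):
    sign = "positive" if stimulus_added else "negative"
    effect = "reinforcement" if behavior_increases else "punishment"
    return f"{sign} {effect}"

# A seatbelt beeper stops (something removed) when you buckle up
# (the behavior increases):
print(classify(stimulus_added=False, behavior_increases=True))
# -> negative reinforcement
```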
REINFORCEMENT
The most effective way to teach a person or animal a new behavior is with positive reinforcement. In
positive reinforcement, a desirable stimulus is added to increase a behavior.
For example, you tell your five-year-old son, Jerome, that if he cleans his room, he will get a toy. Jerome
quickly cleans his room because he wants a new art set. Let’s pause for a moment. Some people might
say, “Why should I reward my child for doing what is expected?” But in fact we are constantly and
consistently rewarded in our lives. Our paychecks are rewards, as are high grades and acceptance into
our preferred school. Being praised for doing a good job and for passing a driver’s test is also a reward.
Positive reinforcement as a learning tool is extremely effective. It has been found that one of the most
effective ways to increase achievement in school districts with below-average reading scores was to pay
the children to read. Specifically, second-grade students in Dallas were paid $2 each time they read a book
and passed a short quiz about the book. The result was a significant increase in reading comprehension
(Fryer, 2010). What do you think about this program? If Skinner were alive today, he would probably
think this was a great idea. He was a strong proponent of using operant conditioning principles to
influence students’ behavior at school. In fact, in addition to the Skinner box, he also invented what
he called a teaching machine that was designed to reward small steps in learning (Skinner, 1961)—an
early forerunner of computer-assisted learning. His teaching machine tested students’ knowledge as
they worked through various school subjects. If students answered questions correctly, they received
immediate positive reinforcement and could continue; if they answered incorrectly, they did not receive
any reinforcement. The idea was that students would spend additional time studying the material to
increase their chance of being reinforced the next time (Skinner, 1961).
In negative reinforcement, an undesirable stimulus is removed to increase a behavior. For example, car
manufacturers use the principles of negative reinforcement in their seatbelt systems, which go “beep,
beep, beep” until you fasten your seatbelt. The annoying sound stops when you exhibit the desired
behavior, increasing the likelihood that you will buckle up in the future. Negative reinforcement is also
used frequently in horse training. Riders apply pressure—by pulling the reins or squeezing their legs—and
then remove the pressure when the horse performs the desired behavior, such as turning or speeding up.
The pressure is the negative stimulus that the horse wants to remove.
PUNISHMENT
Many people confuse negative reinforcement with punishment in operant conditioning, but they are two
very different mechanisms. Remember that reinforcement, even when it is negative, always increases a
behavior. In contrast, punishment always decreases a behavior. In positive punishment, you add an
undesirable stimulus to decrease a behavior. An example of positive punishment is scolding a student to
get the student to stop texting in class. In this case, a stimulus (the reprimand) is added in order to decrease
the behavior (texting in class). In negative punishment, you remove a pleasant stimulus to decrease
behavior. For example, when a child misbehaves, a parent can take away a favorite toy. In this case, a
stimulus (the toy) is removed in order to decrease the behavior.
Punishment, especially when it is immediate, is one way to decrease undesirable behavior. For example,
imagine your four-year-old son, Brandon, hit his younger brother. You have Brandon write 100 times
“I will not hit my brother” (positive punishment). Chances are he won’t repeat this behavior. While
strategies like this are common today, in the past children were often subject to physical punishment,
such as spanking. It’s important to be aware of some of the drawbacks in using physical punishment on
children. First, punishment may teach fear. A child who is spanked may come to fear not only the situation
in which the punishment occurred but also the person who delivered the punishment—you, the parent.
Similarly, children who
are punished by teachers may come to fear the teacher and try to avoid school (Gershoff et al., 2010).
Consequently, most schools in the United States have banned corporal punishment. Second, punishment
may cause children to become more aggressive and prone to antisocial behavior and delinquency
(Gershoff, 2002). They see their parents resort to spanking when they become angry and frustrated, so, in
turn, they may act out this same behavior when they become angry and frustrated. For example, because
you spank Brenda when you are angry with her for her misbehavior, she might start hitting her friends
when they won’t share their toys.
While positive punishment can be effective in some cases, Skinner suggested that the use of punishment
should be weighed against the possible negative effects. Today’s psychologists and parenting experts favor
reinforcement over punishment—they recommend that you catch your child doing something good and
reward her for it.
Shaping
In his operant conditioning experiments, Skinner often used an approach called shaping. Instead of
rewarding only the target behavior, in shaping, we reward successive approximations of a target behavior.
Why is shaping needed? Remember that in order for reinforcement to work, the organism must first
display the behavior. Shaping is needed because it is extremely unlikely that an organism will display
anything but the simplest of behaviors spontaneously. In shaping, behaviors are broken down into many
small, achievable steps. The specific steps used in the process are the following:
1. Reinforce any response that resembles the desired behavior.
2. Then reinforce the response that more closely resembles the desired behavior. You will no longer
reinforce the previously reinforced response.
3. Next, begin to reinforce the response that even more closely resembles the desired behavior.
4. Continue to reinforce closer and closer approximations of the desired behavior.
5. Finally, only reinforce the desired behavior.
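The five steps above can be sketched as a loop in which the reinforcement criterion tightens each time it is met. The numeric "quality" scores and criteria below are invented for this sketch; real shaping works with observed behavior, not numbers.

```python
# Shaping as successive approximation: reinforce only attempts that
# meet the current criterion, then raise the criterion. The quality
# scores and criteria lists are invented for illustration.

def shape(attempts, criteria):
    step = 0
    rewarded = []
    for quality in attempts:
        if step < len(criteria) and quality >= criteria[step]:
            rewarded.append(quality)  # reinforce this closer approximation
            step += 1                 # stop reinforcing the earlier response
    return rewarded

# Criteria tighten until only the target behavior (quality 1.0) is rewarded.
print(shape([0.2, 0.4, 0.3, 0.6, 0.8, 1.0], [0.2, 0.5, 0.75, 1.0]))
# -> [0.2, 0.6, 0.8, 1.0]
```

Note how the attempts at quality 0.4 and 0.3 go unrewarded once the criterion has moved past 0.2, mirroring step 2 of the procedure: the previously reinforced response no longer earns a reward.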
Shaping is often used in teaching a complex behavior or chain of behaviors. Skinner used shaping to
teach pigeons not only such relatively simple behaviors as pecking a disk in a Skinner box, but also many
unusual and entertaining behaviors, such as turning in circles, walking in figure eights, and even playing
ping pong; the technique is commonly used by animal trainers today. An important part of shaping is
stimulus discrimination. Recall Pavlov’s dogs—he trained them to respond to the tone of a bell, and not
to similar tones or sounds. This discrimination is also important in operant conditioning and in shaping
behavior.
LINK TO LEARNING
Here is a brief video (http://openstaxcollege.org/l/pingpong) of Skinner’s pigeons
playing ping pong.
It’s easy to see how shaping is effective in teaching behaviors to animals, but how does shaping work with
humans? Let’s consider parents whose goal is to have their child learn to clean his room. They use shaping
to help him master steps toward the goal. Instead of performing the entire task, they set up these steps and
reinforce each step. First, he cleans up one toy. Second, he cleans up five toys. Third, he chooses whether
to pick up ten toys or put his books and clothes away. Fourth, he cleans up everything except two toys.
Finally, he cleans his entire room.
PRIMARY AND SECONDARY REINFORCERS
Rewards such as stickers, praise, money, toys, and more can be used to reinforce learning. Let’s go back
to Skinner’s rats again. How did the rats learn to press the lever in the Skinner box? They were rewarded
with food each time they pressed the lever. For animals, food would be an obvious reinforcer.
What would be a good reinforcer for humans? For your son Jerome, it was the promise of a toy if he
cleaned his room. How about Joaquin, the soccer player? If you gave Joaquin a piece of candy every time
he made a goal, you would be using a primary reinforcer. Primary reinforcers are reinforcers that have
innate reinforcing qualities. These kinds of reinforcers are not learned. Water, food, sleep, shelter, sex, and
touch, among others, are primary reinforcers. Pleasure is also a primary reinforcer. Organisms do not lose
their drive for these things. For most people, jumping in a cool lake on a very hot day would be reinforcing
and the cool lake would be innately reinforcing—the water would cool the person off (a physical need), as
well as provide pleasure.
A secondary reinforcer has no inherent value and only has reinforcing qualities when linked with a
primary reinforcer. Praise, linked to affection, is one example of a secondary reinforcer, as when you called
out “Great shot!” every time Joaquin made a goal. Another example, money, is only worth something
when you can use it to buy other things—either things that satisfy basic needs (food, water, shelter—all
primary reinforcers) or other secondary reinforcers. If you were on a remote island in the middle of the
Pacific Ocean and you had stacks of money, the money would not be useful if you could not spend it. What
about the stickers on the behavior chart? They also are secondary reinforcers.
Sometimes, instead of stickers on a sticker chart, a token is used. Tokens, which are also secondary
reinforcers, can then be traded in for rewards and prizes. Entire behavior management systems, known
as token economies, are built around the use of these kinds of token reinforcers. Token economies have
been found to be very effective at modifying behavior in a variety of settings such as schools, prisons,
and mental hospitals. For example, a study by Cangi and Daly (2013) found that use of a token economy
increased appropriate social behaviors and reduced inappropriate behaviors in a group of autistic school
children. Autistic children tend to exhibit disruptive behaviors such as pinching and hitting. When the
children in the study exhibited appropriate behavior (not hitting or pinching), they received a “quiet
hands” token. When they hit or pinched, they lost a token. The children could then exchange specified
amounts of tokens for minutes of playtime.
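The bookkeeping behind a token economy like the "quiet hands" system is simple enough to sketch. The point values and exchange rate below are illustrative assumptions, not figures reported by Cangi and Daly (2013).

```python
# Minimal token-economy bookkeeping: earn a token for appropriate
# behavior, lose one for inappropriate behavior (response cost), and
# trade tokens in for playtime. All numbers are illustrative.

class TokenEconomy:
    def __init__(self, minutes_per_token=2):
        self.tokens = 0
        self.minutes_per_token = minutes_per_token

    def appropriate_behavior(self):
        self.tokens += 1

    def inappropriate_behavior(self):
        self.tokens = max(0, self.tokens - 1)  # never drops below zero

    def exchange(self):
        minutes = self.tokens * self.minutes_per_token
        self.tokens = 0                        # tokens are spent
        return minutes

chart = TokenEconomy()
for _ in range(4):
    chart.appropriate_behavior()   # four "quiet hands" tokens earned
chart.inappropriate_behavior()     # one token lost for hitting
print(chart.exchange())            # 3 tokens x 2 minutes -> prints 6
```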
EVERYDAY CONNECTION
Behavior Modification in Children
Parents and teachers often use behavior modification to change a child’s behavior. Behavior modification
uses the principles of operant conditioning to accomplish behavior change so that undesirable behaviors are
switched for more socially acceptable ones. Some teachers and parents create a sticker chart, in which several
behaviors are listed (Figure 6.11). Sticker charts are a form of token economies, as described in the text. Each
time children perform the behavior, they get a sticker, and after a certain number of stickers, they get a prize,
or reinforcer. The goal is to increase acceptable behaviors and decrease misbehavior. Remember, it is best
to reinforce desired behaviors, rather than to use punishment. In the classroom, the teacher can reinforce a
wide range of behaviors, from students raising their hands, to walking quietly in the hall, to turning in their
homework. At home, parents might create a behavior chart that rewards children for things such as putting
away toys, brushing their teeth, and helping with dinner. In order for behavior modification to be effective, the
reinforcement needs to be connected with the behavior; the reinforcement must matter to the child and be
done consistently.
Figure 6.11 Sticker charts are a form of positive reinforcement and a tool for behavior modification. Once
this little girl earns a certain number of stickers for demonstrating a desired behavior, she will be rewarded
with a trip to the ice cream parlor. (credit: Abigail Batchelder)
Time-out is another popular technique used in behavior modification with children. It operates on the principle
of negative punishment. When a child demonstrates an undesirable behavior, she is removed from the
desirable activity at hand (Figure 6.12). For example, say that Sophia and her brother Mario are playing with
building blocks. Sophia throws some blocks at her brother, so you give her a warning that she will go to time-
out if she does it again. A few minutes later, she throws more blocks at Mario. You remove Sophia from the
room for a few minutes. When she comes back, she doesn’t throw blocks.
There are several important points that you should know if you plan to implement time-out as a behavior
modification technique. First, make sure the child is being removed from a desirable activity and placed in a
less desirable location. If the activity is something undesirable for the child, this technique will backfire because
it is more enjoyable for the child to be removed from the activity. Second, the length of the time-out is important.
The general rule of thumb is one minute for each year of the child’s age. Sophia is five; therefore, she sits in
a time-out for five minutes. Setting a timer helps children know how long they have to sit in time-out. Finally,
as a caregiver, keep several guidelines in mind over the course of a time-out: remain calm when directing your
child to time-out; ignore your child during time-out (because caregiver attention may reinforce misbehavior);
and give the child a hug or a kind word when time-out is over.
Figure 6.12 Time-out is a popular form of negative punishment used by caregivers. When a child
misbehaves, he or she is removed from a desirable activity in an effort to decrease the unwanted behavior.
For example, (a) a child might be playing on the playground with friends and push another child; (b) the child
who misbehaved would then be removed from the activity for a short period of time. (credit a: modification of
work by Simone Ramella; credit b: modification of work by “JefferyTurner”/Flickr)
This OpenStax book is available for free at http://cnx.org/content/col11629/1.5
REINFORCEMENT SCHEDULES
Remember, the best way to teach a person or animal a behavior is to use positive reinforcement. For
example, Skinner used positive reinforcement to teach rats to press a lever in a Skinner box. At first, the rat
might randomly hit the lever while exploring the box, and out would come a pellet of food. After eating
the pellet, what do you think the hungry rat did next? It hit the lever again, and received another pellet
of food. Each time the rat hit the lever, a pellet of food came out. When an organism receives a reinforcer
each time it displays a behavior, it is called continuous reinforcement. This reinforcement schedule is the
quickest way to teach someone a behavior, and it is especially effective in training a new behavior. Let’s
look back at the dog that was learning to sit earlier in the chapter. Now, each time he sits, you give him
a treat. Timing is important here: you will be most successful if you present the reinforcer immediately
after he sits, so that he can make an association between the target behavior (sitting) and the consequence
(getting a treat).
LINK TO LEARNING
Watch this video clip (http://openstaxcollege.org/l/sueyin) where veterinarian Dr.
Sophia Yin shapes a dog’s behavior using the steps outlined above.
Once a behavior is trained, researchers and trainers often turn to another type of reinforcement
schedule—partial reinforcement. In partial reinforcement, also referred to as intermittent reinforcement,
the person or animal does not get reinforced every time they perform the desired behavior. There are
several different types of partial reinforcement schedules (Table 6.3). These schedules are described as
either fixed or variable, and as either interval or ratio. Fixed refers to the number of responses between
reinforcements, or the amount of time between reinforcements, which is set and unchanging. Variable
refers to the number of responses or amount of time between reinforcements, which varies or changes.
Interval means the schedule is based on the time between reinforcements, and ratio means the schedule is
based on the number of responses between reinforcements.
Table 6.3 Reinforcement Schedules

Fixed interval
Description: Reinforcement is delivered at predictable time intervals (e.g., after 5, 10, 15, and 20 minutes).
Result: Moderate response rate with significant pauses after reinforcement.
Example: Hospital patient uses patient-controlled, doctor-timed pain relief.

Variable interval
Description: Reinforcement is delivered at unpredictable time intervals (e.g., after 5, 7, 10, and 20 minutes).
Result: Moderate yet steady response rate.
Example: Checking Facebook.

Fixed ratio
Description: Reinforcement is delivered after a predictable number of responses (e.g., after 2, 4, 6, and 8 responses).
Result: High response rate with pauses after reinforcement.
Example: Piecework; a factory worker gets paid for every x number of items manufactured.

Variable ratio
Description: Reinforcement is delivered after an unpredictable number of responses (e.g., after 1, 4, 5, and 9 responses).
Result: High and steady response rate.
Example: Gambling.
Now let’s combine these four terms. A fixed interval reinforcement schedule is when behavior is
rewarded after a set amount of time. For example, June undergoes major surgery in a hospital. During
recovery, she is expected to experience pain and will require prescription medications for pain relief. June
is given an IV drip with a patient-controlled painkiller. Her doctor sets a limit: one dose per hour. June
pushes a button when pain becomes difficult, and she receives a dose of medication. Since the reward
(pain relief) only occurs on a fixed interval, there is no point in exhibiting the behavior when it will not be
rewarded.
With a variable interval reinforcement schedule, the person or animal gets the reinforcement based
on varying amounts of time, which are unpredictable. Say that Manuel is the manager at a fast-food
restaurant. Every once in a while someone from the quality control division comes to Manuel’s restaurant.
If the restaurant is clean and the service is fast, everyone on that shift earns a $20 bonus. Manuel never
knows when the quality control person will show up, so he always tries to keep the restaurant clean and
ensures that his employees provide prompt and courteous service. His performance in providing prompt
service and keeping a clean restaurant is steady because he wants his crew to earn the bonus.
With a fixed ratio reinforcement schedule, there are a set number of responses that must occur before
the behavior is rewarded. Carla sells glasses at an eyeglass store, and she earns a commission every time
she sells a pair of glasses. She always tries to sell people more pairs of glasses, including prescription
sunglasses or a backup pair, so she can increase her commission. She does not care whether the person really
needs the prescription sunglasses; Carla just wants her bonus. The quality of what Carla sells does not
matter because her commission is not based on quality; it’s only based on the number of pairs sold.
This distinction in the quality of performance can help determine which reinforcement method is most
appropriate for a particular situation. Fixed ratios are better suited to optimize the quantity of output,
whereas a fixed interval, in which the reward is not quantity based, can lead to a higher quality of output.
In a variable ratio reinforcement schedule, the number of responses needed for a reward varies. This is
the most powerful partial reinforcement schedule. An example of the variable ratio reinforcement schedule
is gambling. Imagine that Sarah—generally a smart, thrifty woman—visits Las Vegas for the first time.
She is not a gambler, but out of curiosity she puts a quarter into the slot machine, and then another, and
another. Nothing happens. Two dollars in quarters later, her curiosity is fading, and she is just about to
quit. But then, the machine lights up, bells go off, and Sarah gets 50 quarters back. That’s more like it!
Sarah gets back to inserting quarters with renewed interest, and a few minutes later she has used up all
her gains and is $10 in the hole. Now might be a sensible time to quit. And yet, she keeps putting money
into the slot machine because she never knows when the next reinforcement is coming. She keeps thinking
that with the next quarter she could win $50, or $100, or even more. Because most types of gambling
operate on a variable ratio schedule, people keep trying and hoping that the next time
they will win big. This is one of the reasons that gambling is so addictive—and so resistant to extinction.
In operant conditioning, extinction of a reinforced behavior occurs at some point after reinforcement stops,
and the speed at which this happens depends on the reinforcement schedule. In a variable ratio schedule,
the point of extinction comes very slowly, as described above. But in the other reinforcement schedules,
extinction may come quickly. For example, if June presses the button for the pain relief medication before
the allotted time her doctor has approved, no medication is administered. She is on a fixed interval
reinforcement schedule (dosed hourly), so extinction occurs quickly when reinforcement doesn’t come at
the expected time. Among the reinforcement schedules, variable ratio is the most productive and the most
resistant to extinction. Fixed interval is the least productive and the easiest to extinguish (Figure 6.13).
Figure 6.13 The four reinforcement schedules yield different response patterns. The variable ratio schedule is
unpredictable and yields high and steady response rates, with little if any pause after reinforcement (e.g., gambler). A
fixed ratio schedule is predictable and produces a high response rate, with a short pause after reinforcement (e.g.,
eyeglass saleswoman). The variable interval schedule is unpredictable and produces a moderate, steady response
rate (e.g., restaurant manager). The fixed interval schedule yields a scallop-shaped response pattern, reflecting a
significant pause after reinforcement (e.g., surgery patient).
CONNECT THE CONCEPTS
Gambling and the Brain
Skinner (1953) stated, “If the gambling establishment cannot persuade a patron to turn over money with no return,
it may achieve the same effect by returning part of the patron's money on a variable-ratio schedule” (p. 397).
Skinner uses gambling as an example of the power and effectiveness of conditioning behavior based on a variable
ratio reinforcement schedule. In fact, Skinner was so confident in his knowledge of gambling addiction that he
even claimed he could turn a pigeon into a pathological gambler (“Skinner’s Utopia,” 1971). Beyond the power
of variable ratio reinforcement, gambling seems to work on the brain in the same way as some addictive drugs.
The Illinois Institute for Addiction Recovery (n.d.) reports evidence suggesting that pathological gambling is an
addiction similar to a chemical addiction (Figure 6.14). Specifically, gambling may activate the reward centers
of the brain, much like cocaine does. Research has shown that some pathological gamblers have lower levels
of the neurotransmitter (brain chemical) known as norepinephrine than do normal gamblers (Roy et al., 1988).
According to a study conducted by Alec Roy and colleagues, norepinephrine is secreted when a person feels
stress, arousal, or thrill; pathological gamblers use gambling to increase their levels of this neurotransmitter.
Another researcher, neuroscientist Hans Breiter, has done extensive research on gambling and its effects on the
brain. Breiter (as cited in Franzen, 2001) reports that “Monetary reward in a gambling-like experiment produces
brain activation very similar to that observed in a cocaine addict receiving an infusion of cocaine” (para. 1).
Deficiencies in serotonin (another neurotransmitter) might also contribute to compulsive behavior, including a
gambling addiction.
It may be that pathological gamblers’ brains are different from those of other people, and perhaps this
difference somehow led to their gambling addiction, as these studies seem to suggest. However, it is very difficult
to ascertain the cause because it is impossible to conduct a true experiment (it would be unethical to try to turn
randomly assigned participants into problem gamblers). Therefore, it may be that causation actually moves in the
opposite direction—perhaps the act of gambling somehow changes neurotransmitter levels in some gamblers’
brains. It also is possible that some overlooked factor, or confounding variable, played a role in both the gambling
addiction and the differences in brain chemistry.
Figure 6.14 Some research suggests that pathological gamblers use gambling to compensate for abnormally
low levels of the neurotransmitter norepinephrine, which is associated with stress and is secreted in moments of arousal
and thrill. (credit: Ted Murphy)
COGNITION AND LATENT LEARNING
Although strict behaviorists such as Skinner and Watson refused to believe that cognition (such as
thoughts and expectations) plays a role in learning, another behaviorist, Edward C. Tolman, had a
different opinion. Tolman’s experiments with rats demonstrated that organisms can learn even if they
do not receive immediate reinforcement (Tolman & Honzik, 1930; Tolman, Ritchie, & Kalish, 1946). This
finding conflicted with the prevailing idea at the time that reinforcement must be immediate in order
for learning to occur, and it suggested a cognitive aspect to learning.
In the experiments, Tolman placed hungry rats in a maze with no reward for finding their way through
it. He also studied a comparison group that was rewarded with food at the end of the maze. As the
unreinforced rats explored the maze, they developed a cognitive map: a mental picture of the layout of
the maze (Figure 6.15). After 10 sessions in the maze without reinforcement, food was placed in a goal
box at the end of the maze. As soon as the rats became aware of the food, they found their way through
the maze just as quickly as the comparison group, which had been rewarded with food all
along. This is known as latent learning: learning that occurs but is not observable in behavior until there
is a reason to demonstrate it.
Figure 6.15 Psychologist Edward Tolman found that rats use cognitive maps to navigate through a maze. Have you
ever worked your way through various levels on a video game? You learned when to turn left or right, move up or
down. In that case you were relying on a cognitive map, just like the rats in a maze. (credit: modification of work by
"FutUndBeidl"/Flickr)
Latent learning also occurs in humans. Children may learn by watching the actions of their parents but
only demonstrate it at a later date, when the learned material is needed. For example, suppose that Ravi’s
dad drives him to school every day. In this way, Ravi learns the route from his house to his school, but
he’s never driven there himself, so he has not had a chance to demonstrate that he’s learned the way. One
morning Ravi’s dad has to leave early for a meeting, so he can’t drive Ravi to school. Instead, Ravi follows
the same route on his bike that his dad would have taken in the car. This demonstrates latent learning.
Ravi had learned the route to school, but had no need to demonstrate this knowledge earlier.
EVERYDAY CONNECTION
This Place Is Like a Maze
Have you ever gotten lost in a building and couldn’t find your way back out? While that can be frustrating,
you’re not alone. At one time or another we’ve all gotten lost in places like a museum, hospital, or university
library. Whenever we go someplace new, we build a mental representation—or cognitive map—of the location,
as Tolman’s rats built a cognitive map of their maze. However, some buildings are confusing because they
include many areas that look alike or have short lines of sight. Because of this, it’s often difficult to predict
what’s around a corner or decide whether to turn left or right to get out of a building. Psychologist Laura Carlson
(2010) suggests that what we place in our cognitive map can impact our success in navigating through the
environment. She suggests that paying attention to specific features upon entering a building, such as a picture
on the wall, a fountain, a statue, or an escalator, adds information to our cognitive map that can be used later
to help find our way out of the building.
LINK TO LEARNING
Watch this video (http://openstaxcollege.org/l/carlsonmaps) to learn more about
Carlson’s studies on cognitive maps and navigation in buildings.
6.4 Observational Learning (Modeling)
Learning Objectives
By the end of this section, you will be able to:
• Define observational learning
• Discuss the steps in the modeling process
• Explain the prosocial and antisocial effects of observational learning
Previous sections of this chapter focused on classical and operant conditioning, which are forms of
associative learning. In observational learning, we learn by watching others and then imitating, or
modeling, what they do or say. The individuals performing the imitated behavior are called models.
Research suggests that this imitative learning involves a specific type of neuron, called a mirror neuron
(Hickok, 2010; Rizzolatti, Fadiga, Fogassi, & Gallese, 2002; Rizzolatti, Fogassi, & Gallese, 2006).
Humans and other animals are capable of observational learning. As you will see, the phrase “monkey see,
monkey do” really is accurate (Figure 6.16). The same could be said about other animals. For example,
in a study of social learning in chimpanzees, researchers gave juice boxes with straws to two groups of
captive chimpanzees. The first group dipped the straw into the juice box, and then sucked on the small
amount of juice at the end of the straw. The second group sucked through the straw directly, getting much
more juice. When the first group, the “dippers,” observed the second group, “the suckers,” what do you
think happened? All of the “dippers” in the first group switched to sucking through the straws directly. By
simply observing the other chimps and modeling their behavior, they learned that this was a more efficient
method of getting juice (Yamamoto, Humle, & Tanaka, 2013).
Figure 6.16 This spider monkey learned to drink water from a plastic bottle by seeing the behavior modeled by a
human. (credit: U.S. Air Force, Senior Airman Kasey Close)
Imitation is much more obvious in humans, but is imitation really the sincerest form of flattery? Consider
Claire’s experience with observational learning. Claire’s nine-year-old son, Jay, was getting into trouble at
school and was defiant at home. Claire feared that Jay would end up like her brothers, two of whom were
in prison. One day, after yet another bad day at school and another negative note from the teacher, Claire,
at her wit’s end, beat her son with a belt to get him to behave. Later that night, as she put her children to
bed, Claire witnessed her four-year-old daughter, Anna, take a belt to her teddy bear and whip it. Claire
was horrified, realizing that Anna was imitating her mother. It was then that Claire knew she wanted to
discipline her children in a different manner.
Like Tolman, whose experiments with rats suggested a cognitive component to learning, psychologist
Albert Bandura’s ideas about learning were different from those of strict behaviorists. Bandura and other
researchers proposed a brand of behaviorism called social learning theory, which took cognitive processes
into account. According to Bandura, pure behaviorism could not explain why learning can take place in
the absence of external reinforcement. He felt that internal mental states must also have a role in learning
and that observational learning involves much more than imitation. In imitation, a person simply copies
what the model does. Observational learning is much more complex. According to Lefrançois (2012), there
are several ways that observational learning can occur:
1. You learn a new response. After watching your coworker get chewed out by your boss for coming
in late, you start leaving home 10 minutes earlier so that you won’t be late.
2. You choose whether or not to imitate the model depending on what you saw happen to the model.
Remember Julian and his father? When learning to surf, Julian might watch how his father pops
up successfully on his surfboard and then attempt to do the same thing. On the other hand, Julian
might learn not to touch a hot stove after watching his father get burned on a stove.
3. You learn a general rule that you can apply to other situations.
Bandura identified three kinds of models: live, verbal, and symbolic. A live model demonstrates a behavior
in person, as when Ben stood up on his surfboard so that Julian could see how he did it. A verbal
instructional model does not perform the behavior, but instead explains or describes the behavior, as when
a soccer coach tells his young players to kick the ball with the side of the foot, not with the toe. A symbolic
model can be fictional characters or real people who demonstrate behaviors in books, movies, television
shows, video games, or Internet sources (Figure 6.17).
Figure 6.17 (a) Yoga students learn by observation as their yoga instructor demonstrates the correct stance and
movement for her students (live model). (b) Models don’t have to be present for learning to occur: through symbolic
modeling, this child can learn a behavior by watching someone demonstrate it on television. (credit a: modification of
work by Tony Cecala; credit b: modification of work by Andrew Hyde)
LINK TO LEARNING
Latent learning and modeling are used all the time in the world of marketing and
advertising. In this commercial (http://openstaxcollege.org/l/jeter), which played
for months across the New York, New Jersey, and Connecticut areas, Derek Jeter, an
award-winning baseball player for the New York Yankees, advertises a Ford. The
commercial aired in a part of the country where Jeter is an incredibly well-known
athlete. He is wealthy, and considered very loyal and good looking. What message are the advertisers
sending by having him featured in the ad? How effective do you think it is?
STEPS IN THE MODELING PROCESS
Of course, we don’t learn a behavior simply by observing a model. Bandura described specific steps in the
process of modeling that must be followed if learning is to be successful: attention, retention, reproduction,
and motivation. First, you must be focused on what the model is doing—you have to pay attention. Next,
you must be able to retain, or remember, what you observed; this is retention. Then, you must be able to
perform the behavior that you observed and committed to memory; this is reproduction. Finally, you must
have motivation. You need to want to copy the behavior, and whether or not you are motivated depends
on what happened to the model. If you saw that the model was reinforced for her behavior, you will be
more motivated to copy her. This is known as vicarious reinforcement. On the other hand, if you observed
the model being punished, you would be less motivated to copy her. This is called vicarious punishment.
For example, imagine that four-year-old Allison watched her older sister Kaitlyn playing in their mother’s
makeup, and then saw Kaitlyn get a time out when their mother came in. After their mother left the room,
Allison was tempted to play in the make-up, but she did not want to get a time-out from her mother. What
do you think she did? Once you actually demonstrate the new behavior, the reinforcement you receive
plays a part in whether or not you will repeat the behavior.
Bandura researched modeling behavior, particularly children’s modeling of adults’ aggressive and violent
behaviors (Bandura, Ross, & Ross, 1961). He conducted an experiment with a five-foot inflatable doll that
he called a Bobo doll. In the experiment, children’s aggressive behavior was influenced by whether the
teacher was punished for her behavior. In one scenario, a teacher acted aggressively with the doll, hitting,
throwing, and even punching the doll, while a child watched. There were two types of responses by the
children to the teacher’s behavior. When the teacher was punished for her bad behavior, the children
decreased their tendency to act as she had. When the teacher was praised or ignored (and not punished
for her behavior), the children imitated what she did, and even what she said. They punched, kicked, and
yelled at the doll.
LINK TO LEARNING
Watch this video clip (http://openstaxcollege.org/l/bobodoll) to see a portion of
the famous Bobo doll experiment, including an interview with Albert Bandura.
What are the implications of this study? Bandura concluded that we watch and learn, and that this learning
can have both prosocial and antisocial effects. Prosocial (positive) models can be used to encourage socially
acceptable behavior. Parents in particular should take note of this finding. If you want your children to
read, then read to them. Let them see you reading. Keep books in your home. Talk about your favorite
books. If you want your children to be healthy, then let them see you eat right and exercise, and spend time
engaging in physical fitness activities together. The same holds true for qualities like kindness, courtesy,
and honesty. The main idea is that children observe and learn from their parents, even their parents’
morals, so be consistent and toss out the old adage “Do as I say, not as I do,” because children tend to copy
what you do instead of what you say. Besides parents, many public figures, such as Martin Luther King,
Jr. and Mahatma Gandhi, are viewed as prosocial models who are able to inspire global social change. Can
you think of someone who has been a prosocial model in your life?
The antisocial effects of observational learning are also worth mentioning. As you saw from the example
of Claire at the beginning of this section, her daughter viewed Claire’s aggressive behavior and copied
it. Research suggests that this may help to explain why abused children often grow up to be abusers
themselves (Murrell, Christoff, & Henning, 2007). In fact, about 30% of abused children become abusive
parents (U.S. Department of Health & Human Services, 2013). We tend to do what we know. Abused
children, who grow up witnessing their parents deal with anger and frustration through violent and
aggressive acts, often learn to behave in that manner themselves. Sadly, it’s a vicious cycle that’s difficult
to break.
Some studies suggest that violent television shows, movies, and video games may also have antisocial
effects (Figure 6.18), although further research is needed to understand the correlational and
causal aspects of media violence and behavior. Some studies have found a link between viewing
violence and aggression in children (Anderson & Gentile, 2008; Kirsch, 2010; Miller, Grabell, Thomas,
Bermann, & Graham-Bermann, 2012). These findings may not be surprising, given that a child graduating
from high school has been exposed to around 200,000 violent acts including murder, robbery, torture,
bombings, beatings, and rape through various forms of media (Huston et al., 1992). Not only might
viewing media violence affect aggressive behavior by teaching people to act that way in real-life situations,
but repeated exposure to violent acts may also desensitize people to violence.
Psychologists are working to understand this dynamic.
Figure 6.18 Can video games make us violent? Psychological researchers study this topic. (credit:
"woodleywonderworks"/Flickr)
LINK TO LEARNING
View this video (http://openstaxcollege.org/l/videogamevio) to hear Brad
Bushman, a psychologist who has published extensively on human aggression and
violence, discuss his research.
Key Terms
acquisition period of initial learning in classical conditioning in which a human or an animal begins to
connect a neutral stimulus and an unconditioned stimulus so that the neutral stimulus will begin to elicit
the conditioned response
associative learning form of learning that involves connecting certain stimuli or events that occur
together in the environment (classical and operant conditioning)
classical conditioning learning in which the stimulus or experience occurs before the behavior and then
gets paired or associated with the behavior
cognitive map mental picture of the layout of the environment
conditioned response (CR) response caused by the conditioned stimulus
conditioned stimulus (CS) stimulus that elicits a response due to its being paired with an unconditioned
stimulus
continuous reinforcement rewarding a behavior every time it occurs
extinction decrease in the conditioned response when the unconditioned stimulus is no longer paired
with the conditioned stimulus
fixed interval reinforcement schedule behavior is rewarded after a set amount of time
fixed ratio reinforcement schedule set number of responses must occur before a behavior is rewarded
habituation learning not to respond to a stimulus that is presented repeatedly without change
higher-order conditioning (also, second-order conditioning) using a conditioned stimulus to condition a
neutral stimulus
instinct unlearned knowledge, involving complex patterns of behavior; instincts are thought to be more
prevalent in lower animals than in humans
latent learning learning that occurs but is not evident until there is a reason to demonstrate it
law of effect behavior that is followed by consequences satisfying to the organism will be repeated and
behaviors that are followed by unpleasant consequences will be discouraged
learning change in behavior or knowledge that is the result of experience
model person who performs a behavior that serves as an example (in observational learning)
negative punishment taking away a pleasant stimulus to decrease or stop a behavior
negative reinforcement taking away an undesirable stimulus to increase a behavior
neutral stimulus (NS) stimulus that does not initially elicit a response
observational learning type of learning that occurs by watching others
operant conditioning form of learning in which the stimulus/experience happens after the behavior is
demonstrated
partial reinforcement rewarding behavior only some of the time
positive punishment adding an undesirable stimulus to stop or decrease a behavior
positive reinforcement adding a desirable stimulus to increase a behavior
primary reinforcer has innate reinforcing qualities (e.g., food, water, shelter, sex)
punishment implementation of a consequence in order to decrease a behavior
reflex unlearned, automatic response by an organism to a stimulus in the environment
reinforcement implementation of a consequence in order to increase a behavior
secondary reinforcer has no inherent value unto itself and only has reinforcing qualities when linked
with something else (e.g., money, gold stars, poker chips)
shaping rewarding successive approximations toward a target behavior
spontaneous recovery return of a previously extinguished conditioned response
stimulus discrimination ability to respond differently to similar stimuli
stimulus generalization demonstrating the conditioned response to stimuli that are similar to the
conditioned stimulus
unconditioned response (UCR) natural (unlearned) behavior to a given stimulus
unconditioned stimulus (UCS) stimulus that elicits a reflexive response
variable interval reinforcement schedule behavior is rewarded after unpredictable amounts of time
have passed
variable ratio reinforcement schedule number of responses needed before a behavior is rewarded varies
vicarious punishment process where the observer sees the model punished, making the observer less
likely to imitate the model’s behavior
vicarious reinforcement process where the observer sees the model rewarded, making the observer
more likely to imitate the model’s behavior
Summary
6.1 What Is Learning?
Instincts and reflexes are innate behaviors—they occur naturally and do not involve learning. In contrast,
learning is a change in behavior or knowledge that results from experience. There are three main types
of learning: classical conditioning, operant conditioning, and observational learning. Both classical and
operant conditioning are forms of associative learning where associations are made between events that
occur together. Observational learning is just as it sounds: learning by observing others.
6.2 Classical Conditioning
Pavlov’s pioneering work with dogs contributed greatly to what we know about learning. His experiments
explored the type of associative learning we now call classical conditioning. In classical conditioning,
organisms learn to associate events that repeatedly happen together, and researchers study how a reflexive
response to a stimulus can be mapped to a different stimulus—by training an association between the
two stimuli. Pavlov’s experiments show how stimulus-response bonds are formed. Watson, the founder
of behaviorism, was greatly influenced by Pavlov’s work. He tested humans by conditioning fear in an
infant known as Little Albert. His findings suggest that classical conditioning can explain how some fears
develop.
6.3 Operant Conditioning
Operant conditioning is based on the work of B. F. Skinner. Operant conditioning is a form of learning in
which the motivation for a behavior happens after the behavior is demonstrated. An animal or a human
receives a consequence after performing a specific behavior. The consequence is either a reinforcer or
a punisher. All reinforcement (positive or negative) increases the likelihood of a behavioral response.
All punishment (positive or negative) decreases the likelihood of a behavioral response. Several types of
reinforcement schedules are used to reward behavior; these depend on either a fixed or variable period of
time (interval schedules) or a fixed or variable number of responses (ratio schedules).
6.4 Observational Learning (Modeling)
According to Bandura, learning can occur by watching others and then modeling what they do or say.
This is known as observational learning. There are specific steps in the process of modeling that must
be followed if learning is to be successful. These steps include attention, retention, reproduction, and
motivation. Through modeling, Bandura has shown that children learn many things both good and bad
simply by watching their parents, siblings, and others.
Review Questions

1. Which of the following is an example of a reflex that occurs at some point in the development of a
human being?
a. child riding a bike
b. teen socializing
c. infant sucking on a nipple
d. toddler walking

2. Learning is best defined as a relatively permanent change in behavior that ________.
a. is innate
b. occurs as a result of experience
c. is found only in humans
d. occurs by observing others

3. Two forms of associative learning are ________ and ________.
a. classical conditioning; operant conditioning
b. classical conditioning; Pavlovian conditioning
c. operant conditioning; observational learning
d. operant conditioning; learning conditioning

4. In ________ the stimulus or experience occurs before the behavior and then gets paired with the
behavior.
a. associative learning
b. observational learning
c. operant conditioning
d. classical conditioning

5. A stimulus that does not initially elicit a response in an organism is a(n) ________.
a. unconditioned stimulus
b. neutral stimulus
c. conditioned stimulus
d. unconditioned response

6. In Watson and Rayner’s experiments, Little Albert was conditioned to fear a white rat, and then he
began to be afraid of other furry white objects. This demonstrates ________.
a. higher order conditioning
b. acquisition
c. stimulus discrimination
d. stimulus generalization

7. Extinction occurs when ________.
a. the conditioned stimulus is presented repeatedly without being paired with an unconditioned
stimulus
b. the unconditioned stimulus is presented repeatedly without being paired with a conditioned
stimulus
c. the neutral stimulus is presented repeatedly without being paired with an unconditioned stimulus
d. the neutral stimulus is presented repeatedly without being paired with a conditioned stimulus
8. In Pavlov’s work with dogs, the psychic secretions were ________.
a. unconditioned responses
b. conditioned responses
c. unconditioned stimuli
d. conditioned stimuli

9. ________ is when you take away a pleasant stimulus to stop a behavior.
a. positive reinforcement
b. negative reinforcement
c. positive punishment
d. negative punishment

10. Which of the following is not an example of a primary reinforcer?
a. food
b. money
c. water
d. sex

11. Rewarding successive approximations toward a target behavior is ________.
a. shaping
b. extinction
c. positive reinforcement
d. negative reinforcement

12. Slot machines reward gamblers with money according to which reinforcement schedule?
a. fixed ratio
b. variable ratio
c. fixed interval
d. variable interval

13. The person who performs a behavior that serves as an example is called a ________.
a. teacher
b. model
c. instructor
d. coach

14. In Bandura’s Bobo doll study, when the children who watched the aggressive model were placed in a
room with the doll and other toys, they ________.
a. ignored the doll
b. played nicely with the doll
c. played with tinker toys
d. kicked and threw the doll

15. Which is the correct order of steps in the modeling process?
a. attention, retention, reproduction, motivation
b. motivation, attention, reproduction, retention
c. attention, motivation, retention, reproduction
d. motivation, attention, retention, reproduction

16. Who proposed observational learning?
a. Ivan Pavlov
b. John Watson
c. Albert Bandura
d. B. F. Skinner
Critical Thinking Questions
17. Compare and contrast classical and operant conditioning. How are they alike? How do they differ?
18. What is the difference between a reflex and a learned behavior?
19. If the sound of your toaster popping up toast causes your mouth to water, what are the UCS, CS, and
CR?
20. Explain how the processes of stimulus generalization and stimulus discrimination are considered
opposites.
21. How does a neutral stimulus become a conditioned stimulus?
22. What is a Skinner box and what is its purpose?
23. What is the difference between negative reinforcement and punishment?
24. What is shaping and how would you use shaping to teach a dog to roll over?
25. What is the effect of prosocial modeling and antisocial modeling?
26. Cara is 17 years old. Cara’s mother and father both drink alcohol every night. They tell Cara that
drinking is bad and she shouldn’t do it. Cara goes to a party where beer is being served. What do you think
Cara will do? Why?
Personal Application Questions
27. What is your personal definition of learning? How do your ideas about learning compare with the
definition of learning presented in this text?
28. What kinds of things have you learned through the process of classical conditioning? Operant
conditioning? Observational learning? How did you learn them?
29. Can you think of an example in your life of how classical conditioning has produced a positive
emotional response, such as happiness or excitement? How about a negative emotional response, such as
fear, anxiety, or anger?
30. Explain the difference between negative reinforcement and punishment, and provide several
examples of each based on your own experiences.
31. Think of a behavior that you have that you would like to change. How could you use behavior
modification, specifically positive reinforcement, to change your behavior? What is your positive
reinforcer?
32. What is something you have learned how to do after watching someone else?
Chapter 7
Thinking and Intelligence
Figure 7.1 Thinking is an important part of our human experience, and one that has captivated people for centuries.
Today, it is one area of psychological study. The 19th-century Girl with a Book by José Ferraz de Almeida Júnior, the
20th-century sculpture The Thinker by Auguste Rodin, and Shi Ke’s 10th-century painting Huike Thinking all reflect the
fascination with the process of human thought. (credit “middle”: modification of work by Jason Rogers; credit “right”:
modification of work by Tang Zu-Ming)
Chapter Outline
7.1 What Is Cognition?
7.2 Language
7.3 Problem Solving
7.4 What Are Intelligence and Creativity?
7.5 Measures of Intelligence
7.6 The Source of Intelligence
Introduction
Why is it so difficult to break habits—like reaching for your ringing phone even when you shouldn’t, such
as when you’re driving? How does a person who has never seen or touched snow in real life develop an
understanding of the concept of snow? How do young children acquire the ability to learn language with
no formal instruction? Psychologists who study thinking explore questions like these.
Cognitive psychologists also study intelligence. What is intelligence, and how does it vary from person
to person? Are “street smarts” a kind of intelligence, and if so, how do they relate to other types of
intelligence? What does an IQ test really measure? These questions and more will be explored in this
chapter as you study thinking and intelligence.
In other chapters, we discussed the cognitive processes of perception, learning, and memory. In this
chapter, we will focus on high-level cognitive processes. As a part of this discussion, we will consider
thinking and briefly explore the development and use of language. We will also discuss problem solving
and creativity before ending with a discussion of how intelligence is measured and how our biology
and environments interact to affect intelligence. After finishing this chapter, you will have a greater
appreciation of the higher-level cognitive processes that contribute to our distinctiveness as a species.
7.1 What Is Cognition?
Learning Objectives
By the end of this section, you will be able to:
• Describe cognition
• Distinguish concepts and prototypes
• Explain the difference between natural and artificial concepts
Imagine all of your thoughts as if they were physical entities, swirling rapidly inside your mind. How is it
possible that the brain is able to move from one thought to the next in an organized, orderly fashion? The
brain is endlessly perceiving, processing, planning, organizing, and remembering—it is always active. Yet,
you don’t notice most of your brain’s activity as you move throughout your daily routine. This is only one
facet of the complex processes involved in cognition. Simply put, cognition is thinking, and it encompasses
the processes associated with perception, knowledge, problem solving, judgment, language, and memory.
Scientists who study cognition are searching for ways to understand how we integrate, organize, and
utilize our conscious cognitive experiences without being aware of all of the unconscious work that our
brains are doing (for example, Kahneman, 2011).
COGNITION
Upon waking each morning, you begin thinking—contemplating the tasks that you must complete that
day. In what order should you run your errands? Should you go to the bank, the cleaners, or the grocery
store first? Can you get these things done before you head to class or will they need to wait until school
is done? These thoughts are one example of cognition at work. Exceptionally complex, cognition is an
essential feature of human consciousness, yet not all aspects of cognition are consciously experienced.
Cognitive psychology is the field of psychology dedicated to examining how people think. It attempts
to explain how and why we think the way we do by studying the interactions among human thinking,
emotion, creativity, language, and problem solving, in addition to other cognitive processes. Cognitive
psychologists strive to determine and measure different types of intelligence, why some people are better
at problem solving than others, and how emotional intelligence affects success in the workplace, among
countless other topics. They also sometimes focus on how we organize thoughts and information gathered
from our environments into meaningful categories of thought, which will be discussed later.
CONCEPTS AND PROTOTYPES
The human nervous system is capable of handling endless streams of information. The senses serve as
the interface between the mind and the external environment, receiving stimuli and translating them into
neural impulses that are transmitted to the brain. The brain then processes this information and uses the
relevant pieces to create thoughts, which can then be expressed through language or stored in memory
for future use. To make this process more complex, the brain does not gather information from external
environments only. When thoughts are formed, the brain also pulls information from emotions and
memories (Figure 7.2). Emotion and memory are powerful influences on both our thoughts and behaviors.
Figure 7.2 Sensations and information are received by our brains, filtered through emotions and memories, and
processed to become thoughts.
In order to organize this staggering amount of information, the brain has developed a file cabinet of sorts
in the mind. The different files stored in the file cabinet are called concepts. Concepts are categories or
groupings of linguistic information, images, ideas, or memories, such as life experiences. Concepts are,
in many ways, big ideas that are generated by observing details, and categorizing and combining these
details into cognitive structures. You use concepts to see the relationships among the different elements of
your experiences and to keep the information in your mind organized and accessible.
Concepts are informed by our semantic memory (you learned about this concept when you studied
memory) and are present in every aspect of our lives; however, one of the easiest places to notice concepts
is inside a classroom, where they are discussed explicitly. When you study United States history, for
example, you learn about more than just individual events that have happened in America’s past. You
absorb a large quantity of information by listening to and participating in discussions, examining maps,
and reading first-hand accounts of people’s lives. Your brain analyzes these details and develops an overall
understanding of American history. In the process, your brain gathers details that inform and refine your
understanding of related concepts like democracy, power, and freedom.
Concepts can be complex and abstract, like justice, or more concrete, like types of birds. In psychology,
for example, Piaget’s stages of development are abstract concepts. Some concepts, like tolerance, are
agreed upon by many people, because they have been used in various ways over many years. Other
concepts, like the characteristics of your ideal friend or your family’s birthday traditions, are personal and
individualized. In this way, concepts touch every aspect of our lives, from our many daily routines to the
guiding principles behind the way governments function.
Another technique used by your brain to organize information is the identification of prototypes for the
concepts you have developed. A prototype is the best example or representation of a concept. For example,
for the category of civil disobedience, your prototype could be Rosa Parks. Her peaceful resistance to
segregation on a city bus in Montgomery, Alabama, is a recognizable example of civil disobedience.
Or your prototype could be Mohandas Gandhi, sometimes called Mahatma Gandhi (“Mahatma” is an
honorific title) (Figure 7.3).
Figure 7.3 In 1930, Mohandas Gandhi led a group in peaceful protest against a British tax on salt in India.
Mohandas Gandhi served as a nonviolent force for independence for India while simultaneously
demanding that Buddhist, Hindu, Muslim, and Christian leaders—both Indian and British—collaborate
peacefully. Although he was not always successful in preventing violence around him, his life provides
a steadfast example of the civil disobedience prototype (Constitutional Rights Foundation, 2013). Just as
concepts can be abstract or concrete, we can make a distinction between concepts that are functions of our
direct experience with the world and those that are more artificial in nature.
NATURAL AND ARTIFICIAL CONCEPTS
In psychology, concepts can be divided into two categories, natural and artificial. Natural concepts
are created “naturally” through your experiences and can be developed from either direct or indirect
experiences. For example, if you live in Essex Junction, Vermont, you have probably had a lot of direct
experience with snow. You’ve watched it fall from the sky, you’ve seen lightly falling snow that barely
covers the windshield of your car, and you’ve shoveled out 18 inches of fluffy white snow as you’ve
thought, “This is perfect for skiing.” You’ve thrown snowballs at your best friend and gone sledding down
the steepest hill in town. In short, you know snow. You know what it looks like, smells like, tastes like,
and feels like. If, however, you’ve lived your whole life on the island of Saint Vincent in the Caribbean,
you may never have actually seen snow, much less tasted, smelled, or touched it. You know snow from
the indirect experience of seeing pictures of falling snow—or from watching films that feature snow as
part of the setting. Either way, snow is a natural concept because you can construct an understanding of it
through direct observations or experiences of snow (Figure 7.4).
Figure 7.4 (a) Our concept of snow is an example of a natural concept—one that we understand through direct
observation and experience. (b) In contrast, artificial concepts are ones that we know by a specific set of
characteristics that they always exhibit, such as what defines different basic shapes. (credit a: modification of work by
Maarten Takens; credit b: modification of work by “Shayan (USA)”/Flickr)
An artificial concept, on the other hand, is a concept that is defined by a specific set of characteristics.
Various properties of geometric shapes, like squares and triangles, serve as useful examples of artificial
concepts. A triangle always has three angles and three sides. A square always has four equal sides and
four right angles. Mathematical formulas, like the equation for area (length × width) are artificial concepts
defined by specific sets of characteristics that are always the same. Artificial concepts can enhance the
understanding of a topic by building on one another. For example, before learning the concept of “area of
a square” (and the formula to find it), you must understand what a square is. Once the concept of “area
of a square” is understood, an understanding of area for other geometric shapes can be built upon the
original understanding of area. The use of artificial concepts to define an idea is crucial to communicating
with others and engaging in complex thought. According to Goldstone and Kersten (2003), concepts act as
building blocks and can be connected in countless combinations to create complex thoughts.
SCHEMATA
A schema is a mental construct consisting of a cluster or collection of related concepts (Bartlett, 1932).
There are many different types of schemata, and they all have one thing in common: schemata are a
method of organizing information that allows the brain to work more efficiently. When a schema is
activated, the brain makes immediate assumptions about the person or object being observed.
There are several types of schemata. A role schema makes assumptions about how individuals in certain
roles will behave (Callero, 1994). For example, imagine you meet someone who introduces himself as a
firefighter. When this happens, your brain automatically activates the “firefighter schema” and begins
making assumptions that this person is brave, selfless, and community-oriented. Despite not knowing
this person, already you have unknowingly made judgments about him. Schemata also help you fill in
gaps in the information you receive from the world around you. While schemata allow for more efficient
information processing, there can be problems with schemata, regardless of whether they are accurate:
Perhaps this particular firefighter is not brave; he just works as a firefighter to pay the bills while studying
to become a children’s librarian.
An event schema, also known as a cognitive script, is a set of behaviors that can feel like a routine. Think
about what you do when you walk into an elevator (Figure 7.5). First, the doors open and you wait to
let exiting passengers leave the elevator car. Then, you step into the elevator and turn around to face
the doors, looking for the correct button to push. You never face the back of the elevator, do you? And
when you’re riding in a crowded elevator and you can’t face the front, it feels uncomfortable, doesn’t it?
Interestingly, event schemata can vary widely among different cultures and countries. For example, while
it is quite common for people to greet one another with a handshake in the United States, in Tibet, you
greet someone by sticking your tongue out at them, and in Belize, you bump fists (Cairns Regional Council,
n.d.).
Figure 7.5 What event schema do you perform when riding in an elevator? (credit: “Gideon”/Flickr)
Because event schemata are automatic, they can be difficult to change. Imagine that you are driving home
from work or school. This event schema involves getting in the car, shutting the door, and buckling your
seatbelt before putting the key in the ignition. You might perform this script two or three times each day.
As you drive home, you hear your phone’s ring tone. Typically, the event schema that occurs when you
hear your phone ringing involves locating the phone and answering it or responding to your latest text
message. So without thinking, you reach for your phone, which could be in your pocket, in your bag, or
on the passenger seat of the car. This powerful event schema is informed by your pattern of behavior and
the pleasurable stimulation that a phone call or text message gives your brain. Because it is a schema, it is
extremely challenging for us to stop reaching for the phone, even though we know that we endanger our
own lives and the lives of others while we do it (Neyfakh, 2013) (Figure 7.6).
Figure 7.6 Texting while driving is dangerous, but it is a difficult event schema for some people to resist.
Remember the elevator? It feels almost impossible to walk in and not face the door. Our powerful event
schema dictates our behavior in the elevator, and it is no different with our phones. Current research
suggests that it is the habit, or event schema, of checking our phones in many different situations that
makes refraining from checking them while driving especially difficult (Bayer & Campbell, 2012). Because
texting and driving has become a dangerous epidemic in recent years, psychologists are looking at ways
to help people interrupt the “phone schema” while driving. Event schemata like these are the reason why
many habits are difficult to break once they have been acquired. As we continue to examine thinking, keep
in mind how powerful the forces of concepts and schemata are to our understanding of the world.
7.2 Language
Learning Objectives
By the end of this section, you will be able to:
• Define language and demonstrate familiarity with the components of language
• Understand how the use of language develops
• Explain the relationship between language and thinking
Language is a communication system that involves using words and systematic rules to organize those
words to transmit information from one individual to another. While language is a form of
communication, not all communication is language. Many species communicate with one another through
their postures, movements, odors, or vocalizations. This communication is crucial for species that need to
interact and develop social relationships with their conspecifics. However, many people have asserted that
it is language that makes humans unique among all of the animal species (Corballis & Suddendorf, 2007;
Tomasello & Rakoczy, 2003). This section will focus on what distinguishes language as a special form of
communication, how the use of language develops, and how language affects the way we think.
COMPONENTS OF LANGUAGE
Language, be it spoken, signed, or written, has specific components: a lexicon and grammar. Lexicon refers
to the words of a given language. Thus, lexicon is a language’s vocabulary. Grammar refers to the set
of rules that are used to convey meaning through the use of the lexicon (Fernández & Cairns, 2011). For
instance, English grammar dictates that most verbs receive an “-ed” at the end to indicate past tense.
Words are formed by combining the various phonemes that make up the language. A phoneme (e.g., the
sounds “ah” vs. “eh”) is a basic sound unit of a given language, and different languages have different
sets of phonemes. Phonemes are combined to form morphemes, which are the smallest units of language
that convey some type of meaning (e.g., “I” is both a phoneme and a morpheme). We use semantics and
syntax to construct language. Semantics and syntax are part of a language’s grammar. Semantics refers to
the process by which we derive meaning from morphemes and words. Syntax refers to the way words are
organized into sentences (Chomsky, 1965; Fernández & Cairns, 2011).
We apply the rules of grammar to organize the lexicon in novel and creative ways, which allow us to
communicate information about both concrete and abstract concepts. We can talk about our immediate
and observable surroundings as well as the surface of unseen planets. We can share our innermost
thoughts, our plans for the future, and debate the value of a college education. We can provide detailed
instructions for cooking a meal, fixing a car, or building a fire. The flexibility that language provides to
relay vastly different types of information is a property that makes language so distinct as a mode of
communication among humans.
LANGUAGE DEVELOPMENT
Given the remarkable complexity of a language, one might expect that mastering a language would
be an especially arduous task; indeed, for those of us trying to learn a second language as adults, this
might seem to be true. However, young children master language very quickly with relative ease. B. F.
Skinner (1957) proposed that language is learned through reinforcement. Noam Chomsky (1965) criticized
this behaviorist approach, asserting instead that the mechanisms underlying language acquisition are
biologically determined. The use of language develops in the absence of formal instruction and appears
to follow a very similar pattern in children from vastly different cultures and backgrounds. It would
seem, therefore, that we are born with a biological predisposition to acquire a language (Chomsky, 1965;
Fernández & Cairns, 2011). Moreover, it appears that there is a critical period for language acquisition,
such that this proficiency at acquiring language is maximal early in life; generally, as people age, the ease
with which they acquire and master new languages diminishes (Johnson & Newport, 1989; Lenneberg,
1967; Singleton, 1995).
Children begin to learn about language from a very early age (Table 7.1). In fact, it appears that this is
occurring even before we are born. Newborns show preference for their mother’s voice and appear to be
able to discriminate between the language spoken by their mother and other languages. Babies are also
attuned to the languages being used around them and show preferences for videos of faces that are moving
in synchrony with the audio of spoken language versus videos that do not synchronize with the audio
(Blossom & Morgan, 2006; Pickens, 1994; Spelke & Cortelyou, 1981).
Table 7.1 Stages of Language and Communication Development
Stage Age Developmental Language and Communication
1 0–3 months Reflexive communication
2 3–8 months Reflexive communication; interest in others
3 8–13 months Intentional communication; sociability
4 12–18 months First words
5 18–24 months Simple sentences of two words
6 2–3 years Sentences of three or more words
7 3–5 years Complex sentences; has conversations
DIG DEEPER
The Case of Genie
In the fall of 1970, a social worker in the Los Angeles area found a 13-year-old girl who was being raised in
extremely neglectful and abusive conditions. The girl, who came to be known as Genie, had lived most of her
life tied to a potty chair or confined to a crib in a small room that was kept closed with the curtains drawn. For a
little over a decade, Genie had virtually no social interaction and no access to the outside world. As a result of
these conditions, Genie was unable to stand up, chew solid food, or speak (Fromkin, Krashen, Curtiss, Rigler,
& Rigler, 1974; Rymer, 1993). The police took Genie into protective custody.
Genie’s abilities improved dramatically following her removal from her abusive environment, and early on, it
appeared she was acquiring language—much later than would be predicted by critical period hypotheses that
had been posited at the time (Fromkin et al., 1974). Genie managed to amass an impressive vocabulary in
a relatively short amount of time. However, she never developed a mastery of the grammatical aspects of
language (Curtiss, 1981). Perhaps being deprived of the opportunity to learn language during a critical period
impeded Genie’s ability to fully acquire and use language.
You may recall that each language has its own set of phonemes that are used to generate morphemes,
words, and so on. Babies can discriminate among the sounds that make up a language (for example, they
can tell the difference between the “s” in vision and the “ss” in fission); early on, they can differentiate
between the sounds of all human languages, even those that do not occur in the languages that are used in
their environments. However, by the time that they are about 1 year old, they can only discriminate among
those phonemes that are used in the language or languages in their environments (Jensen, 2011; Werker &
Lalonde, 1988; Werker & Tees, 1984).
LINK TO LEARNING
Visit this website (http://openstaxcollege.org/l/language) to learn more about how
babies lose the ability to discriminate among all possible human phonemes as they
age.
After the first few months of life, babies enter what is known as the babbling stage, during which time they
tend to produce single syllables that are repeated over and over. As time passes, more variations appear in
the syllables that they produce. During this time, it is unlikely that the babies are trying to communicate;
they are just as likely to babble when they are alone as when they are with their caregivers (Fernández &
Cairns, 2011). Interestingly, babies who are raised in environments in which sign language is used will also
begin to show babbling in the gestures of their hands during this stage (Petitto, Holowka, Sergio, Levy, &
Ostry, 2004).
Generally, a child’s first word is uttered sometime between the ages of 1 year and 18 months, and for the
next few months, the child will remain in the “one word” stage of language development. During this
time, children know a number of words, but they only produce one-word utterances. The child’s early
vocabulary is limited to familiar objects or events, often nouns. Although children in this stage only make
one-word utterances, these words often carry larger meaning (Fernández & Cairns, 2011). So, for example,
a child saying “cookie” could be identifying a cookie or asking for a cookie.
As a child’s lexicon grows, she begins to utter simple sentences and to acquire new vocabulary at a very
rapid pace. In addition, children begin to demonstrate a clear understanding of the specific rules that
apply to their language(s). Even the mistakes that children sometimes make provide evidence of just how
much they understand about those rules. This is sometimes seen in the form of overgeneralization. In
this context, overgeneralization refers to an extension of a language rule to an exception to the rule. For
example, in English, it is usually the case that an “s” is added to the end of a word to indicate plurality.
For example, we speak of one dog versus two dogs. Young children will overgeneralize this rule to cases
that are exceptions to the “add an s to the end of the word” rule and say things like “those two gooses” or
“three mouses.” Clearly, the rules of the language are understood, even if the exceptions to the rules are
still being learned (Moskowitz, 1978).
LANGUAGE AND THOUGHT
When we speak one language, we agree that words are representations of ideas, people, places, and events.
The given language that children learn is connected to their culture and surroundings. But can words
themselves shape the way we think about things? Psychologists have long investigated the question of
whether language shapes thoughts and actions, or whether our thoughts and beliefs shape our language.
Two researchers, Edward Sapir and Benjamin Lee Whorf, began this investigation in the 1940s. They
wanted to understand how the language habits of a community encourage members of that community
to interpret language in a particular manner (Sapir, 1941/1964). Sapir and Whorf proposed that language
determines thought, suggesting, for example, that a person whose community language did not have past-
tense verbs would be challenged to think about the past (Whorf, 1956). Researchers have since identified
this view as too absolute, pointing out a lack of empiricism behind what Sapir and Whorf proposed
(Abler, 2013; Boroditsky, 2011; van Troyer, 1994). Today, psychologists continue to study and debate the
relationship between language and thought.
WHAT DO YOU THINK?
The Meaning of Language
Think about what you know of other languages; perhaps you even speak multiple languages. Imagine for
a moment that your closest friend fluently speaks more than one language. Do you think that friend thinks
differently, depending on which language is being spoken? You may know a few words that are not translatable
from their original language into English. For example, the Portuguese word saudade originated during the
15th century, when Portuguese sailors left home to explore the seas and travel to Africa or Asia. Those left
behind described the emptiness and fondness they felt as saudade (Figure 7.7). The word came to express
many meanings, including loss, nostalgia, yearning, warm memories, and hope. There is no single word in
English that includes all of those emotions in a single description. Do words such as saudade indicate that
different languages produce different patterns of thought in people? What do you think?
Figure 7.7 These two works of art depict saudade. (a) Saudade de Nápoles, which is translated into
“missing Naples,” was painted by Bertha Worms in 1895. (b) Almeida Júnior painted Saudade in 1899.
Language may indeed influence the way that we think, an idea known as linguistic determinism. One
recent demonstration of this phenomenon involved differences in the way that English and Mandarin
Chinese speakers talk and think about time. English speakers tend to talk about time using terms that
describe changes along a horizontal dimension, for example, saying something like “I’m running behind
schedule” or “Don’t get ahead of yourself.” While Mandarin Chinese speakers also describe time in
horizontal terms, it is not uncommon to also use terms associated with a vertical arrangement. For
example, the past might be described as being “up” and the future as being “down.” It turns out that these
differences in language translate into differences in performance on cognitive tests designed to measure
how quickly an individual can recognize temporal relationships. Specifically, when given a series of
tasks with vertical priming, Mandarin Chinese speakers were faster at recognizing temporal relationships
between months. Indeed, Boroditsky (2001) sees these results as suggesting that “habits in language
encourage habits in thought” (p. 12).
One group of researchers who wanted to investigate how language influences thought compared how
English speakers and the Dani people of Papua New Guinea think and speak about color. The Dani have
two words for color: one word for light and one word for dark. In contrast, the English language has 11
color words. Researchers hypothesized that the number of color terms could limit the ways that the Dani
people conceptualized color. However, the Dani were able to distinguish colors with the same ability as
English speakers, despite having fewer words at their disposal (Berlin & Kay, 1969). A recent review of
research aimed at determining how language might affect something like color perception suggests that
language can influence perceptual phenomena, especially in the left hemisphere of the brain. You may
recall from earlier chapters that the left hemisphere is associated with language for most people. However,
the right (less linguistic) hemisphere of the brain is less affected by linguistic influences on perception
(Regier & Kay, 2009).
7.3 Problem Solving
Learning Objectives
By the end of this section, you will be able to:
• Describe problem solving strategies
• Define algorithm and heuristic
• Explain some common roadblocks to effective problem solving
People face problems every day—usually, multiple problems throughout the day. Sometimes these
problems are straightforward: To double a recipe for pizza dough, for example, all that is required is
that each ingredient in the recipe be doubled. Sometimes, however, the problems we encounter are more
complex. For example, say you have a work deadline, and you must mail a printed copy of a report to your
supervisor by the end of the business day. The report is time-sensitive and must be sent overnight. You
finished the report last night, but your printer will not work today. What should you do? First, you need
to identify the problem and then apply a strategy for solving the problem.
PROBLEM-SOLVING STRATEGIES
When you are presented with a problem—whether it is a complex mathematical problem or a broken
printer—how do you solve it? Before a solution can be found, the problem must first be clearly
identified. After that, one of many problem-solving strategies can be applied, hopefully resulting in a
solution.
A problem-solving strategy is a plan of action used to find a solution. Different strategies have different
action plans associated with them (Table 7.2). For example, a well-known strategy is trial and error. The
old adage, “If at first you don’t succeed, try, try again” describes trial and error. In terms of your broken
printer, you could try checking the ink levels, and if that doesn’t work, you could check to make sure the
paper tray isn’t jammed. Or maybe the printer isn’t actually connected to your laptop. When using trial
and error, you would continue to try different solutions until you solved your problem. Although trial and
error is not typically one of the most time-efficient strategies, it is a commonly used one.
Table 7.2 Problem-Solving Strategies
Method | Description | Example
Trial and error | Continue trying different solutions until problem is solved | Restarting phone, turning off WiFi, turning off Bluetooth in order to determine why your phone is malfunctioning
Algorithm | Step-by-step problem-solving formula | Instruction manual for installing new software on your computer
Heuristic | General problem-solving framework | Working backwards; breaking a task into steps
Another type of strategy is an algorithm. An algorithm is a problem-solving formula that provides you
with step-by-step instructions used to achieve a desired outcome (Kahneman, 2011). You can think of an
algorithm as a recipe with highly detailed instructions that produce the same result every time they are
performed. Algorithms are used frequently in our everyday lives, especially in computer science. When
you run a search on the Internet, search engines like Google use algorithms to decide which entries will
appear first in your list of results. Facebook also uses algorithms to decide which posts to display on your
newsfeed. Can you identify other situations in which algorithms are used?
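Because an algorithm’s steps are fully specified, it can be written directly as code. As a minimal illustrative sketch (the ingredient names and quantities are invented for the example, not taken from the text), here is the recipe-doubling problem from the start of this section expressed as a Python algorithm:

```python
def scale_recipe(ingredients, factor):
    """Step-by-step algorithm: multiply every quantity by the factor.

    Given the same inputs, this always produces the same output --
    the defining property of an algorithm.
    """
    return {name: amount * factor for name, amount in ingredients.items()}

# Doubling a pizza-dough recipe (quantities are illustrative):
dough = {"flour_cups": 2.0, "water_cups": 0.75, "yeast_tsp": 1.0}
print(scale_recipe(dough, 2))
```

Run with the same inputs, the function returns the same scaled recipe every time, which is exactly what separates an algorithm from trial and error or a heuristic.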
A heuristic is another type of problem-solving strategy. While an algorithm must be followed exactly
to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman,
1974). You can think of these as mental shortcuts that are used to solve problems. A “rule of thumb” is an
example of a heuristic. Such a rule saves the person time and energy when making a decision, but despite
its time-saving characteristics, it is not always the best method for making a rational decision. Different
types of heuristics are used in different types of situations, but the impulse to use a heuristic occurs when
one of five conditions is met (Pratkanis, 1989):
• When one is faced with too much information
• When the time to make a decision is limited
• When the decision to be made is unimportant
• When there is access to very little information to use in making the decision
• When an appropriate heuristic happens to come to mind in the same moment
Working backwards is a useful heuristic in which you begin solving the problem by focusing on the end
result. Consider this example: You live in Washington, D.C. and have been invited to a wedding at 4 PM
on Saturday in Philadelphia. Knowing that Interstate 95 tends to back up any day of the week, you need to
plan your route and time your departure accordingly. If you want to be at the wedding service by 3:30 PM,
and it takes 2.5 hours to get to Philadelphia without traffic, what time should you leave your house? You
use the working backwards heuristic to plan the events of your day on a regular basis, probably without
even thinking about it.
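The wedding example can be made concrete with a little arithmetic: working backwards from the 3:30 PM target and subtracting the 2.5-hour drive gives a 1:00 PM departure, before any buffer for traffic. A minimal Python sketch (the date is arbitrary, chosen only so the times can be represented):

```python
from datetime import datetime, timedelta

def departure_time(arrive_by, drive_hours):
    """Work backwards: start from the goal and subtract the travel time."""
    return arrive_by - timedelta(hours=drive_hours)

arrival = datetime(2024, 6, 1, 15, 30)   # 3:30 PM target arrival
leave = departure_time(arrival, 2.5)
print(leave.strftime("%I:%M %p"))        # prints "01:00 PM"
```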
Another useful heuristic is the practice of accomplishing a large goal or task by breaking it into a series
of smaller steps. Students often use this common method to complete a large research project or long
essay for school. For example, students typically brainstorm, develop a thesis or main topic, research the
chosen topic, organize their information into an outline, write a rough draft, revise and edit the rough
draft, develop a final draft, organize the references list, and proofread their work before turning in the
project. The large task becomes less overwhelming when it is broken down into a series of small steps.
EVERYDAY CONNECTION
Solving Puzzles
Problem-solving abilities can improve with practice. Many people challenge themselves every day with puzzles
and other mental exercises to sharpen their problem-solving skills. Sudoku puzzles appear daily in most
newspapers. Typically, a sudoku puzzle is a 9×9 grid. The simple sudoku below (Figure 7.8) is a 4×4 grid. To
solve the puzzle, fill in the empty boxes with a single digit: 1, 2, 3, or 4. Here are the rules: The numbers must
total 10 in each bolded box, each row, and each column; however, each digit can only appear once in a bolded
box, row, and column. Time yourself as you solve this puzzle and compare your time with a classmate.
Figure 7.8 How long did it take you to solve this sudoku puzzle? (You can see the answer at the end of this
section.)
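A brute-force sudoku solver makes the link between trial and error and algorithms explicit: backtracking is systematic trial and error carried out by a step-by-step procedure. The sketch below handles a generic 4×4 grid; the starting grid is an invented example, not the puzzle in Figure 7.8:

```python
def valid(grid, r, c, digit):
    """Check the row, column, and 2x2 box constraints for one placement."""
    if digit in grid[r]:
        return False
    if digit in (grid[i][c] for i in range(4)):
        return False
    br, bc = 2 * (r // 2), 2 * (c // 2)  # top-left corner of the 2x2 box
    return all(grid[br + i][bc + j] != digit for i in range(2) for j in range(2))

def solve(grid):
    """Backtracking solver for a 4x4 sudoku: systematic trial and error.

    Each row, column, and 2x2 box must contain 1-4 exactly once.
    0 marks an empty cell; the grid is filled in place.
    """
    for r in range(4):
        for c in range(4):
            if grid[r][c] == 0:
                for digit in range(1, 5):
                    if valid(grid, r, c, digit):
                        grid[r][c] = digit
                        if solve(grid):
                            return True
                        grid[r][c] = 0   # undo and try the next digit
                return False             # no digit fits: backtrack
    return True                          # no empty cells left: solved

puzzle = [[1, 0, 0, 4],
          [0, 4, 1, 0],
          [2, 0, 4, 0],
          [0, 3, 0, 1]]
if solve(puzzle):
    for row in puzzle:
        print(row)
```

Each digit is tried in turn, and any placement that leads to a dead end is undone, mirroring the “try, try again” strategy described above.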
Here is another popular type of puzzle (Figure 7.9) that challenges your spatial reasoning skills. Connect all
nine dots with four connecting straight lines without lifting your pencil from the paper:
Figure 7.9 Did you figure it out? (The answer is at the end of this section.) Once you understand how to
crack this puzzle, you won’t forget.
Take a look at the “Puzzling Scales” logic puzzle below (Figure 7.10). Sam Loyd, a well-known puzzle master,
created and refined countless puzzles throughout his lifetime (Cyclopedia of Puzzles, n.d.).
Figure 7.10 What steps did you take to solve this puzzle? You can read the solution at the end of this
section.
PITFALLS TO PROBLEM SOLVING
Not all problems are successfully solved, however. What challenges stop us from successfully solving a
problem? Albert Einstein once said, “Insanity is doing the same thing over and over again and expecting a
different result.” Imagine a person in a room that has four doorways. One doorway that has always been
open in the past is now locked. The person, accustomed to exiting the room by that particular doorway,
keeps trying to get out through the same doorway even though the other three doorways are open. The
person is stuck—but she just needs to go to another doorway, instead of trying to get out through the
locked doorway. A mental set occurs when you persist in approaching a problem in a way that has worked in
the past but is clearly not working now.
Functional fixedness is a type of mental set in which you cannot perceive an object as being used for something
other than what it was designed for. During the Apollo 13 mission to the moon, NASA engineers at Mission
Control had to overcome functional fixedness to save the lives of the astronauts aboard the spacecraft.
An explosion in a module of the spacecraft damaged multiple systems. The astronauts were in danger of
being poisoned by rising levels of carbon dioxide because of problems with the carbon dioxide filters. The
engineers found a way for the astronauts to use spare plastic bags, tape, and air hoses to create a makeshift
air filter, which saved the lives of the astronauts.
LINK TO LEARNING
Check out this Apollo 13 scene (http://openstaxcollege.org/l/Apollo13) where the
group of NASA engineers are given the task of overcoming functional fixedness.
Researchers have investigated whether functional fixedness is affected by culture. In one experiment,
individuals from the Shuar group in Ecuador were asked to use an object for a purpose other than that
for which the object was originally intended. For example, the participants were told a story about a bear
and a rabbit that were separated by a river and asked to select among various objects, including a spoon,
a cup, erasers, and so on, to help the animals. The spoon was the only object long enough to span the
imaginary river, but if the spoon was presented in a way that reflected its normal usage, it took participants
longer to choose the spoon to solve the problem (German & Barrett, 2005). The researchers wanted to
know if exposure to highly specialized tools, as occurs with individuals in industrialized nations, affects
their ability to transcend functional fixedness. It was determined that functional fixedness is experienced
in both industrialized and nonindustrialized cultures (German & Barrett, 2005).
In order to make good decisions, we use our knowledge and our reasoning. Often, this knowledge and
reasoning is sound and solid. Sometimes, however, we are swayed by biases or by others manipulating a
situation. For example, let’s say you and three friends wanted to rent a house and had a combined target
budget of $1,600. The realtor shows you only very run-down houses for $1,600 and then shows you a
very nice house for $2,000. Might you ask each person to pay more in rent to get the $2,000 home? Why
would the realtor show you the run-down houses and the nice house? The realtor may be challenging your
anchoring bias. An anchoring bias occurs when you focus on one piece of information when making a
decision or solving a problem. In this case, you’re so focused on the amount of money you are willing to
spend that you may not recognize what kinds of houses are available at that price point.
The confirmation bias is the tendency to focus on information that confirms your existing beliefs. For
example, if you think that your professor is not very nice, you notice all of the instances of rude behavior
exhibited by the professor while ignoring the countless pleasant interactions he is involved in on a daily
basis. Hindsight bias leads you to believe that the event you just experienced was predictable, even
though it really wasn’t. In other words, you knew all along that things would turn out the way they did.
Representative bias describes a faulty way of thinking, in which you unintentionally stereotype someone
or something; for example, you may assume that your professors spend their free time reading books and
engaging in intellectual conversation, because the idea of them spending their time playing volleyball or
visiting an amusement park does not fit in with your stereotypes of professors.
Finally, the availability heuristic is a heuristic in which you make a decision based on an example,
information, or recent experience that is readily available to you, even though it may not be the best
example to inform your decision. Biases tend to “preserve that which is already established—to maintain
our preexisting knowledge, beliefs, attitudes, and hypotheses” (Aronson, 1995; Kahneman, 2011). These
biases are summarized in Table 7.3.
Table 7.3 Summary of Decision Biases
Bias | Description
Anchoring | Tendency to focus on one particular piece of information when making decisions or problem-solving
Confirmation | Focuses on information that confirms existing beliefs
Hindsight | Belief that the event just experienced was predictable
Representative | Unintentional stereotyping of someone or something
Availability | Decision is based upon either an available precedent or an example that may be faulty
LINK TO LEARNING
Please visit this site (http://openstaxcollege.org/l/CogBias) to see a clever music
video that a high school teacher made to explain these and other cognitive biases to
his AP psychology students.
Were you able to determine how many marbles are needed to balance the scales in Figure 7.10? You need
nine. Were you able to solve the problems in Figure 7.8 and Figure 7.9? Here are the answers (Figure
7.11).
Figure 7.11
7.4 What Are Intelligence and Creativity?
Learning Objectives
By the end of this section, you will be able to:
• Define intelligence
• Explain the triarchic theory of intelligence
• Identify the difference between intelligence theories
• Explain emotional intelligence
A four-and-a-half-year-old boy sits at the kitchen table with his father, who is reading a new story aloud
to him. He turns the page to continue reading, but before he can begin, the boy says, “Wait, Daddy!” He
points to the words on the new page and reads aloud, “Go, Pig! Go!” The father stops and looks at his son.
“Can you read that?” he asks. “Yes, Daddy!” And he points to the words and reads again, “Go, Pig! Go!”
This father was not actively teaching his son to read, even though the child constantly asked questions
about letters, words, and symbols that they saw everywhere: in the car, in the store, on the television. The
dad wondered about what else his son might understand and decided to try an experiment. Grabbing a
sheet of blank paper, he wrote several simple words in a list: mom, dad, dog, bird, bed, truck, car, tree. He
put the list down in front of the boy and asked him to read the words. “Mom, dad, dog, bird, bed, truck,
car, tree,” he read, slowing down to carefully pronounce bird and truck. Then, “Did I do it, Daddy?” “You
sure did! That is very good.” The father gave his little boy a warm hug and continued reading the story
about the pig, all the while wondering if his son’s abilities were an indication of exceptional intelligence
or simply a normal pattern of linguistic development. Like the father in this example, psychologists have
wondered what constitutes intelligence and how it can be measured.
CLASSIFYING INTELLIGENCE
What exactly is intelligence? The way that researchers have defined the concept of intelligence has been
modified many times since the birth of psychology. British psychologist Charles Spearman believed
intelligence consisted of one general factor, called g, which could be measured and compared among
individuals. Spearman focused on the commonalities among various intellectual abilities and
deemphasized what made each unique. Long before modern psychology developed, however, ancient
philosophers, such as Aristotle, held a similar view (Cianciolo & Sternberg, 2004).
Other psychologists believe that instead of a single factor, intelligence is a collection of distinct abilities.
In the 1940s, Raymond Cattell proposed a theory of intelligence that divided general intelligence into
two components: crystallized intelligence and fluid intelligence (Cattell, 1963). Crystallized intelligence
is characterized as acquired knowledge and the ability to retrieve it. When you learn, remember, and
recall information, you are using crystallized intelligence. You use crystallized intelligence all the time in
your coursework by demonstrating that you have mastered the information covered in the course. Fluid
intelligence encompasses the ability to see complex relationships and solve problems. Navigating your
way home after being detoured onto an unfamiliar route because of road construction would draw upon
your fluid intelligence. Fluid intelligence helps you tackle complex, abstract challenges in your daily life,
whereas crystallized intelligence helps you overcome concrete, straightforward problems (Cattell, 1963).
Other theorists and psychologists believe that intelligence should be defined in more practical terms. For
example, what types of behaviors help you get ahead in life? Which skills promote success? Think about
this for a moment. Being able to recite all 44 presidents of the United States in order is an excellent party
trick, but will knowing this make you a better person?
Robert Sternberg developed another theory of intelligence, which he titled the triarchic theory of
intelligence because it sees intelligence as composed of three parts (Sternberg, 1988): practical, creative,
and analytical intelligence (Figure 7.12).
Figure 7.12 Sternberg’s theory identifies three types of intelligence: practical, creative, and analytical.
Practical intelligence, as proposed by Sternberg, is sometimes compared to “street smarts.” Being practical
means you find solutions that work in your everyday life by applying knowledge based on your
experiences. This type of intelligence appears to be separate from traditional understanding of IQ;
individuals who score high in practical intelligence may or may not have comparable scores in creative
and analytical intelligence (Sternberg, 1988).
This story about the 2007 Virginia Tech shootings illustrates both high and low practical intelligence.
During the incident, one student left her class to go get a soda in an adjacent building. She planned to
return to class, but when she returned to her building after getting her soda, she saw that the door she used
to leave was now chained shut from the inside. Instead of thinking about why there was a chain around
the door handles, she went to her class’s window and crawled back into the room. She thus potentially
exposed herself to the gunman. Thankfully, she was not shot. On the other hand, a pair of students was
walking on campus when they heard gunshots nearby. One friend said, “Let’s go check it out and see what
is going on.” The other student said, “No way, we need to run away from the gunshots.” They did just
that. As a result, both avoided harm. The student who crawled through the window demonstrated some
creative intelligence but did not use common sense. She would have low practical intelligence. The student
who encouraged his friend to run away from the sound of gunshots would have much higher practical
intelligence.
Analytical intelligence is closely aligned with academic problem solving and computations. Sternberg
says that analytical intelligence is demonstrated by an ability to analyze, evaluate, judge, compare, and
contrast. When reading a classic novel for literature class, for example, it is usually necessary to compare
the motives of the main characters of the book or analyze the historical context of the story. In a science
course such as anatomy, you must study the processes by which the body uses various minerals in
different human systems. In developing an understanding of this topic, you are using analytical
intelligence. When solving a challenging math problem, you would apply analytical intelligence to analyze
different aspects of the problem and then solve it section by section.
Creative intelligence is marked by inventing or imagining a solution to a problem or situation. Creativity
in this realm can include finding a novel solution to an unexpected problem or producing a beautiful work
of art or a well-developed short story. Imagine for a moment that you are camping in the woods with some
friends and realize that you’ve forgotten your camp coffee pot. The person in your group who figures out
a way to successfully brew coffee for everyone would be credited as having higher creative intelligence.
Multiple Intelligences Theory was developed by Howard Gardner, a Harvard psychologist and former
student of Erik Erikson. Gardner’s theory, which has been refined for more than 30 years, is a more
recent development among theories of intelligence. In Gardner’s theory, each person possesses at least
eight intelligences. Among these eight intelligences, a person typically excels in some and falters in others
(Gardner, 1983). Table 7.4 describes each type of intelligence.
Table 7.4 Multiple Intelligences
Intelligence Type | Characteristics | Representative Career
Linguistic intelligence | Perceives different functions of language, different sounds and meanings of words, may easily learn multiple languages | Journalist, novelist, poet, teacher
Logical-mathematical intelligence | Capable of seeing numerical patterns, strong ability to use reason and logic | Scientist, mathematician
Musical intelligence | Understands and appreciates rhythm, pitch, and tone; may play multiple instruments or perform as a vocalist | Composer, performer
Bodily kinesthetic intelligence | High ability to control the movements of the body and use the body to perform various physical tasks | Dancer, athlete, athletic coach, yoga instructor
Spatial intelligence | Ability to perceive the relationship between objects and how they move in space | Choreographer, sculptor, architect, aviator, sailor
Interpersonal intelligence | Ability to understand and be sensitive to the various emotional states of others | Counselor, social worker, salesperson
Intrapersonal intelligence | Ability to access personal feelings and motivations, and use them to direct behavior and reach personal goals | Key component of personal success over time
Naturalist intelligence | High capacity to appreciate the natural world and interact with the species within it | Biologist, ecologist, environmentalist
Gardner’s theory is relatively new and needs additional research to better establish empirical support. At
the same time, his ideas challenge the traditional idea of intelligence to include a wider variety of abilities,
although it has been suggested that Gardner simply relabeled what other theorists called “cognitive styles”
as “intelligences” (Morgan, 1996). Furthermore, developing traditional measures of Gardner’s intelligences
is extremely difficult (Furnham, 2009; Gardner & Moran, 2006; Klein, 1997).
Gardner’s inter- and intrapersonal intelligences are often combined into a single type: emotional
intelligence. Emotional intelligence encompasses the ability to understand the emotions of yourself and
others, show empathy, understand social relationships and cues, and regulate your own emotions and
respond in culturally appropriate ways (Parker, Saklofske, & Stough, 2009). People with high emotional
intelligence typically have well-developed social skills. Some researchers, including Daniel Goleman, the
author of Emotional Intelligence: Why It Can Matter More than IQ, argue that emotional intelligence is a better
predictor of success than traditional intelligence (Goleman, 1995). However, emotional intelligence has
been widely debated, with researchers pointing out inconsistencies in how it is defined and described,
as well as questioning results of studies on a subject that is difficult to measure and study empirically
(Locke, 2005; Mayer, Salovey, & Caruso, 2004).
Intelligence can also have different meanings and values in different cultures. If you live on a small island,
where most people get their food by fishing from boats, it would be important to know how to fish
and how to repair a boat. If you were an exceptional angler, your peers would probably consider you
intelligent. If you were also skilled at repairing boats, your intelligence might be known across the whole
island. Think about your own family’s culture. What values are important for Latino families? Italian
families? In Irish families, hospitality and telling an entertaining story are marks of the culture. If you are
a skilled storyteller, other members of Irish culture are likely to consider you intelligent.
Some cultures place a high value on working together as a collective. In these cultures, the importance of
the group supersedes the importance of individual achievement. When you visit such a culture, how well
you relate to the values of that culture exemplifies your cultural intelligence, sometimes referred to as
cultural competence.
CREATIVITY
Creativity is the ability to generate, create, or discover new ideas, solutions, and possibilities. Very creative
people often have intense knowledge about something, work on it for years, look at novel solutions, seek
out the advice and help of other experts, and take risks. Although creativity is often associated with the
arts, it is actually a vital form of intelligence that drives people in many disciplines to discover something
new. Creativity can be found in every area of life, from the way you decorate your residence to a new way
of understanding how a cell works.
Creativity is often assessed as a function of one’s ability to engage in divergent thinking. Divergent
thinking can be described as thinking “outside the box;” it allows an individual to arrive at unique,
multiple solutions to a given problem. In contrast, convergent thinking describes the ability to provide a
correct or well-established answer or solution to a problem (Cropley, 2006; Guilford, 1967).
EVERYDAY CONNECTION
Creativity
Dr. Tom Steitz, the Sterling Professor of Biochemistry and Biophysics at Yale University, has spent his career
looking at the structure and specific aspects of RNA molecules and how their interactions could help produce
antibiotics and ward off diseases. As a result of his lifetime of work, he won the Nobel Prize in Chemistry in
2009. He wrote, “Looking back over the development and progress of my career in science, I am reminded
how vitally important good mentorship is in the early stages of one's career development and constant face-to-
face conversations, debate and discussions with colleagues at all stages of research. Outstanding discoveries,
insights and developments do not happen in a vacuum” (Steitz, 2010, para. 39). Based on Steitz’s comment, it
becomes clear that someone’s creativity, although an individual strength, benefits from interactions with others.
Think of a time when your creativity was sparked by a conversation with a friend or classmate. How did that
person influence you and what problem did you solve using creativity?
This OpenStax book is available for free at http://cnx.org/content/col11629/1.5
Chapter 7 | Thinking and Intelligence 239
7.5 Measures of Intelligence
Learning Objectives
By the end of this section, you will be able to:
• Explain how intelligence tests are developed
• Describe the history of the use of IQ tests
• Describe the purposes and benefits of intelligence testing
While you’re likely familiar with the term “IQ” and associate it with the idea of intelligence, what does
IQ really mean? IQ stands for intelligence quotient and describes a score earned on a test designed to
measure intelligence. You’ve already learned that there are many ways psychologists describe intelligence
(or more aptly, intelligences). Similarly, IQ tests—the tools designed to measure intelligence—have been
the subject of debate throughout their development and use.
When might an IQ test be used? What do we learn from the results, and how might people use this
information? IQ tests are expensive to administer and must be given by a licensed psychologist.
Intelligence testing has been considered both a bane and a boon for education and social policy. In
this section, we will explore what intelligence tests measure, how they are scored, and how they were
developed.
MEASURING INTELLIGENCE
It seems that the human understanding of intelligence is somewhat limited when we focus on traditional or
academic-type intelligence. How then, can intelligence be measured? And when we measure intelligence,
how do we ensure that we capture what we’re really trying to measure (in other words, that IQ tests
function as valid measures of intelligence)? In the following paragraphs, we will explore how
intelligence tests were developed and the history of their use.
The IQ test has been synonymous with intelligence for over a century. In the late 1800s, Sir Francis
Galton developed the first broad test of intelligence (Flanagan & Kaufman, 2004). Although he was not
a psychologist, his contributions to the concepts of intelligence testing are still felt today (Gordon, 1995).
Reliable intelligence testing (you may recall from earlier chapters that reliability refers to a test’s ability to
produce consistent results) began in earnest during the early 1900s with a researcher named Alfred Binet
(Figure 7.13). Binet was asked by the French government to develop an intelligence test to use on children
to determine which ones might have difficulty in school; it included many verbally based tasks. American
researchers soon realized the value of such testing. Lewis Terman, a Stanford professor, modified Binet’s
work by standardizing the administration of the test and tested thousands of different-aged children to
establish an average score for each age. As a result, the test was normed and standardized, which means
that the test was administered consistently to a large enough representative sample of the population that
the range of scores resulted in a bell curve (bell curves will be discussed later). Standardization means that
the manner of administration, scoring, and interpretation of results is consistent. Norming involves giving
a test to a large population so data can be collected comparing groups, such as age groups. The resulting
data provide norms, or referential scores, by which to interpret future scores. Norms are not expectations
of what a given group should know but a demonstration of what that group does know. Norming and
standardizing the test ensures that new scores are reliable. This new version of the test was called the
Stanford-Binet Intelligence Scale (Terman, 1916). Remarkably, an updated version of this test is still widely
used today.
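The norming logic described above can be sketched numerically. Assuming a large, representative norming sample, a new test-taker’s raw score is interpreted relative to the sample’s mean and standard deviation and rescaled onto the familiar IQ metric. The sample data and function below are hypothetical illustrations, not scores from any actual test:

```python
import statistics

# Hypothetical raw scores from a representative norming sample
# (a real norming group would contain thousands of test-takers).
norm_sample = [52, 47, 55, 61, 43, 50, 49, 58, 45, 53, 48, 51]

mean = statistics.mean(norm_sample)   # average raw score of the norming group
sd = statistics.pstdev(norm_sample)   # spread of raw scores in the group

def to_standard_score(raw, mean, sd, new_mean=100, new_sd=15):
    """Rescale a raw score onto the conventional IQ metric (mean 100, SD 15)."""
    z = (raw - mean) / sd             # distance from the norm in standard deviations
    return new_mean + new_sd * z

# A future test-taker's raw score of 58 is interpreted against the norms.
print(round(to_standard_score(58, mean, sd)))  # prints 121
```

This is why norming matters: the same raw score would yield a different standard score against a different norming group, so the norms, not the raw count of correct answers, give the score its meaning.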
Figure 7.13 (a) French psychologist Alfred Binet helped to develop intelligence testing. (b) This page is from a 1908
version of the Binet-Simon Intelligence Scale. Children being tested were asked which face, of each pair, was prettier.
In 1939, David Wechsler, a psychologist who spent part of his career working with World War I veterans,
developed a new IQ test in the United States. Wechsler combined several subtests from other intelligence
tests used between 1880 and World War I. These subtests tapped into a variety of verbal and nonverbal
skills, because Wechsler believed that intelligence encompassed “the global capacity of a person to act
purposefully, to think rationally, and to deal effectively with his environment” (Wechsler, 1958, p. 7). He
named the test the Wechsler-Bellevue Intelligence Scale (Wechsler, 1981). This combination of subtests
became one of the most extensively used intelligence tests in the history of psychology. Although its name
was later changed to the Wechsler Adult Intelligence Scale (WAIS) and has been revised several times,
the aims of the test remain virtually unchanged since its inception (Boake, 2002). Today, there are three
intelligence tests credited to Wechsler: the Wechsler Adult Intelligence Scale, fourth edition (WAIS-IV);
the Wechsler Intelligence Scale for Children, fifth edition (WISC-V); and the Wechsler Preschool and Primary
Scale of Intelligence, fourth edition (WPPSI-IV) (Wechsler, 2012). These tests are used widely in schools and communities
throughout the United States, and they are periodically normed and standardized as a means of
recalibration. Interestingly, the periodic recalibrations have led to an observation known as the
Flynn effect. Named after James Flynn, who was among the first to describe this trend, the Flynn effect
refers to the observation that each generation has a significantly higher IQ than the last. Flynn himself
argues, however, that increased IQ scores do not necessarily mean that younger generations are more
intelligent per se (Flynn, Shaughnessy, & Fulgham, 2012). As a part of the recalibration process, the WISC-
V was given to thousands of children across the country, and children taking the test today are compared
with their same-age peers (Figure 7.13).
The WISC-V is composed of 14 subtests, which comprise five indices, which then render an IQ score. The
five indices are Verbal Comprehension, Visual Spatial, Fluid Reasoning, Working Memory, and Processing
Speed. When the test is complete, individuals receive a score for each of the five indices and a Full Scale IQ
score. The method of scoring reflects the understanding that intelligence is composed of multiple abilities
in several cognitive realms and focuses on the mental processes that the child used to arrive at his or her
answers to each test item.
Ultimately, we are still left with the question of how valid intelligence tests are. Certainly, the most modern
versions of these tests tap into more than verbal competencies, yet the specific skills that should be assessed
in IQ testing, the degree to which any test can truly measure an individual’s intelligence, and the use of the
results of IQ tests are still issues of debate (Gresham & Witt, 1997; Flynn, Shaughnessy, & Fulgham, 2012;
Richardson, 2002; Schlinger, 2003).
WHAT DO YOU THINK?
Intellectually Disabled Criminals and Capital Punishment
The case of Atkins v. Virginia was a landmark case in the United States Supreme Court. On August 16, 1996,
two men, Daryl Atkins and William Jones, robbed, kidnapped, and then shot and killed Eric Nesbitt, a local
airman from the U.S. Air Force. A clinical psychologist evaluated Atkins and testified at the trial that Atkins had
an IQ of 59. The mean IQ score is 100. The psychologist concluded that Atkins was mildly mentally retarded.
The jury found Atkins guilty, and he was sentenced to death. Atkins and his attorneys appealed to the Supreme
Court. In June 2002, the Supreme Court reversed a previous decision and ruled that executions of mentally
retarded criminals are ‘cruel and unusual punishments’ prohibited by the Eighth Amendment. The court wrote
in their decision:
Clinical definitions of mental retardation require not only subaverage intellectual functioning, but
also significant limitations in adaptive skills. Mentally retarded persons frequently know the
difference between right and wrong and are competent to stand trial. Because of their impairments,
however, by definition they have diminished capacities to understand and process information, to
communicate, to abstract from mistakes and learn from experience, to engage in logical reasoning,
to control impulses, and to understand others’ reactions. Their deficiencies do not warrant an
exemption from criminal sanctions, but diminish their personal culpability (Atkins v. Virginia, 2002,
par. 5).
The court also decided that there was a consensus among state legislatures against the execution of the mentally
retarded and that this consensus should stand for all of the states. The Supreme Court ruling left it up to
the states to determine their own definitions of mental retardation and intellectual disability. The definitions
vary among states as to who can be executed. In the Atkins case, a jury decided that, because of his many
contacts with his lawyers and the intellectual stimulation those contacts provided, his IQ had reportedly
increased, and he was now smart enough to be executed. He was given an execution date and then received a stay of
execution after it was revealed that lawyers for co-defendant, William Jones, coached Jones to “produce a
testimony against Mr. Atkins that did match the evidence” (Liptak, 2008). After the revelation of this misconduct,
Atkins was re-sentenced to life imprisonment.
Atkins v. Virginia (2002) highlights several issues regarding society’s beliefs around intelligence. In the Atkins
case, the Supreme Court decided that intellectual disability does affect decision making and therefore should
affect the nature of the punishment such criminals receive. Where, however, should the lines of intellectual
disability be drawn? In May 2014, the Supreme Court ruled in a related case (Hall v. Florida) that IQ scores
cannot be used as a final determination of a prisoner’s eligibility for the death penalty (Roberts, 2014).
THE BELL CURVE
The results of intelligence tests follow the bell curve, a graph in the general shape of a bell. When the bell
curve is used in psychological testing, the graph demonstrates a normal distribution of a trait, in this case,
intelligence, in the human population. Many human traits naturally follow the bell curve. For example,
if you lined up all your female schoolmates according to height, it is likely that a large cluster of them
would be the average height for an American woman: 5’4”–5’6”. This cluster would fall in the center of
the bell curve, representing the average height for American women (Figure 7.14). There would be fewer
women who stand closer to 4’11”. The same would be true for women of above-average height: those who
stand closer to 5’11”. The trick to finding a bell curve in nature is to use a large sample size. Without a
large sample size, it is less likely that the bell curve will represent the wider population. A representative
sample is a subset of the population that accurately represents the general population. If, for example, you
measured the height of the women in your classroom only, you might not actually have a representative
sample. Perhaps the women’s basketball team wanted to take this course together, and they are all in your
class. Because basketball players tend to be taller than average, the women in your class may not be a good
representative sample of the population of American women. But if your sample included all the women
at your school, it is likely that their heights would form a natural bell curve.
Figure 7.14 Are you of below-average, average, or above-average height?
The same principles apply to intelligence tests scores. Individuals earn a score called an intelligence
quotient (IQ). Over the years, different types of IQ tests have evolved, but the way scores are interpreted
remains the same. The average score on an IQ test is 100. Standard deviations describe how data are
dispersed in a population and give context to large data sets. The bell curve uses the standard deviation
to show how all scores are dispersed from the average score (Figure 7.15). In modern IQ testing, one
standard deviation is 15 points. So a score of 85 would be described as “one standard deviation below
the mean.” How would you describe a score of 115 and a score of 70? Any IQ score that falls within one
standard deviation above and below the mean (between 85 and 115) is considered average, and about 68% of the
population has IQ scores in this range. An IQ score of 130 or above is considered a superior level.
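The standard-deviation arithmetic above can be checked directly. Assuming the modern convention of a mean of 100 and a standard deviation of 15, a score’s distance from the mean, measured in standard deviations, is its z-score:

```python
def z_score(iq, mean=100, sd=15):
    """How many standard deviations an IQ score lies from the mean."""
    return (iq - mean) / sd

print(z_score(85))   # -1.0: one standard deviation below the mean
print(z_score(115))  #  1.0: one standard deviation above the mean
print(z_score(70))   # -2.0: two standard deviations below the mean
```

So a score of 115 is one standard deviation above the mean, and a score of 70 is two standard deviations below it.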