The Behavioral Model

Learning Objectives

  • Describe the key concepts and applications of the behavioral approach to psychopathology

The Behavioral Perspective

In contrast to the psychodynamic approaches of Freud and the neo-Freudians, which focus on how mental illness connects childhood experiences to inner (unconscious) processes and defense mechanisms, the learning, or behavioral, approaches focus only on observable behavior. The early behaviorists were primarily concerned with the scientific study of human learning and behavior; they rejected Freud’s theories and treatments largely because those relied so heavily on Freud’s own theories and “interpretations” of clients’ behaviors. They emphasized the study of what is observable: actual behaviors of animals and human beings, rather than things that could not be seen or tested. In a 1913 article, John B. Watson wrote, “Psychology as the behaviorist views it is a purely objective experimental branch of natural science. Its theoretical goal is the prediction and control of behavior. Introspection forms no essential part of its methods, nor is the scientific value of its data dependent upon the readiness with which they lend themselves to interpretation.”[1]

Behaviorists also do not believe in biological determinism: they do not see personality traits as inborn. Instead, they view personality as significantly shaped by the reinforcements and behavioral consequences that occur in the environment. In other words, people behave in a consistent manner based on prior learning. B. F. Skinner, a strict behaviorist, believed that the environment was solely responsible for all behavior, including the enduring, consistent behavior patterns studied by personality theorists. He proposed that we demonstrate consistent behavior patterns because we have developed certain response tendencies (Skinner, 1953). In other words, we learn to behave in particular ways. We increase the behaviors that lead to positive consequences, and we decrease the behaviors that lead to negative consequences.
Skinner disagreed with Freud’s idea that personality is fixed in childhood. He argued that personality develops over our entire life, not only in the first few years. Our responses can change as we come across new situations; therefore, we can expect more variability over time in personality than Freud would anticipate. For example, consider a young woman, Greta, a risk-taker. She drives fast and participates in dangerous sports such as hang gliding and kiteboarding. But after she gets married and has children, the system of reinforcements and punishments in her environment changes. Speeding and extreme sports are no longer reinforced, so she no longer engages in those behaviors. In fact, Greta now describes herself as a cautious person.
Over time and through ongoing studies, the field of behaviorism has changed and paved the way toward integrating cognitive elements that also influence learning and shape behavior. Yet, the early principles are still useful and play an important role in treating many types of mental illness. The behavioral model is generally viewed as including three major areas: classical conditioning, operant conditioning, and observational learning/social learning.

Classical Conditioning

Two figures are usually central in reviews of the principles of classical conditioning. The first was Ivan Pavlov (1849–1936), a Nobel Prize-winning Russian physiologist who discovered the basic principles of classical conditioning through his studies of dogs (Figure 1).

A portrait shows Ivan Pavlov.

Figure 1. Ivan Pavlov’s research on the digestive system of dogs unexpectedly led to his discovery of the learning process now known as classical conditioning.

Pavlov studied the digestive system of dogs (Hunt, 2007). Over time, Pavlov (1927) noted that the dogs began to salivate not only at the taste of food, but also at the sight of food, at the sight of an empty food bowl, and even at the sound of the laboratory assistants’ footsteps. Salivating to food in the mouth is reflexive, so no learning is involved. However, dogs don’t naturally salivate at the sight of an empty bowl or the sound of footsteps. To explore this phenomenon, Pavlov designed a series of carefully controlled experiments. He trained the dogs to salivate in response to stimuli that had nothing to do with food, such as the sound of a bell, a light, and a touch on the leg. Through his experiments, Pavlov realized that an organism has two types of responses to its environment: (1) unconditioned (unlearned) responses, or reflexes, and (2) conditioned (learned) responses. (If you remember that unlearned = unconditioned, and learned = conditioned, it is easier to make sense of the information that follows.)

In his experiments, the dogs salivated each time meat powder was presented to them. The meat powder in this situation was an unconditioned stimulus (UCS): a stimulus that elicits a reflexive response in an organism. The dogs’ salivation was an unconditioned response (UCR): a natural (unlearned) reaction to a given stimulus. Before conditioning, think of the dogs’ stimulus and response like this:

    Meat powder (UCS) → Salivation (UCR)   

In classical conditioning, a neutral stimulus is presented immediately before an unconditioned stimulus. Pavlov would sound a tone (like ringing a bell) and then give the dogs the meat powder (Figure 2 below). The tone was the neutral stimulus (NS), which is a stimulus that does not naturally elicit a response. Prior to conditioning, the dogs did not salivate when they just heard the tone because the tone had no association or meaning for the dogs. Quite simply, this pairing means the following:

Tone (NS) + Meat Powder (UCS) → Salivation (UCR)

When Pavlov repeatedly paired the tone with the meat powder, the previously neutral stimulus (the tone) also began to elicit salivation from the dogs. Thus, the neutral stimulus had changed and became the conditioned (learned) stimulus (CS), which is a stimulus that elicits a response after repeatedly being paired with an unconditioned stimulus. Eventually, the dogs began to salivate to the tone alone, just as they previously had salivated at the sound of the assistants’ footsteps. The behavior caused by the conditioned stimulus is called the conditioned response (CR). In the case of Pavlov’s dogs, they had learned to associate the tone (conditioned stimulus or CS) with being fed, and they began to salivate (conditioned response or CR) in anticipation of food.

Tone (CS) → Salivation (CR)

Two illustrations are labeled “before conditioning” and show a dog salivating over a dish of food, and a dog not salivating while a bell is rung. An illustration labeled “during conditioning” shows a dog salivating over a bowl of food while a bell is rung. An illustration labeled “after conditioning” shows a dog salivating while a bell is rung.

Figure 2. Before conditioning, an unconditioned stimulus (food) produces an unconditioned response (salivation), and a neutral stimulus (bell) does not produce a response. During conditioning, the unconditioned stimulus (food) is presented repeatedly just after the presentation of the neutral stimulus (bell). After conditioning, the neutral stimulus alone produces a conditioned response (salivation), thus becoming a conditioned stimulus.
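The gradual strengthening of the tone-salivation association can be illustrated with a toy simulation. The update rule below follows the Rescorla-Wagner model, a formal account of classical conditioning developed decades after Pavlov; the function name and parameter values here are illustrative choices, not part of Pavlov's work or this chapter's sources:

```python
# Toy simulation of acquisition in classical conditioning, using a
# Rescorla-Wagner update (a later formal model; values are illustrative).
# v is the associative strength of the tone (CS): 0 while the tone is
# still neutral, approaching lambda_ (the maximum strength the meat
# powder, the UCS, can support) over repeated tone + food pairings.

def rescorla_wagner(trials, alpha=0.3, lambda_=1.0):
    """Return the associative strength v after each CS-UCS pairing."""
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha * (lambda_ - v)  # each pairing closes part of the gap
        history.append(round(v, 3))
    return history

print(rescorla_wagner(10))  # strength rises quickly at first, then levels off
```

Notice that strength plateaus near its maximum, which mirrors why extra tone-food pairings eventually add little: the tone already fully predicts the food.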

Try It

Application to Human Beings

A photograph shows John B. Watson.

Figure 3. John B. Watson used the principles of classical conditioning in the study of human emotion.

The second figure central to classical conditioning was John B. Watson, often regarded as the founder of behaviorism. Watson’s ideas were influenced by Pavlov’s work. According to Watson, human behavior, just like animal behavior, is primarily the result of conditioned responses. While Pavlov’s work with dogs involved the conditioning of reflexes, Watson believed the same principles could be extended to the conditioning of human emotions (Watson, 1919). He first tested this together with his graduate student Rosalie Rayner in a series of studies with a baby called Little Albert. Although this study would be considered unethical today in how it was carried out, Watson and Rayner (1920) demonstrated how fears can be conditioned in humans.

In 1920, Watson was the chair of the psychology department at Johns Hopkins University. There, he met Little Albert’s mother, Arvilla Merritte, who worked at a campus hospital (DeAngelis, 2010). Watson offered her a dollar to allow her son to be the subject of his experiments in classical conditioning. Initially, Little Albert was presented with various neutral stimuli, including a rabbit, a dog, a monkey, masks, cotton wool, and a white rat. He was not afraid of any of these things. Then Watson, with the help of Rayner, conditioned Little Albert to associate these stimuli with an emotion: fear.

A photograph shows a man wearing a mask with a white beard; his face is close to a baby who is crawling away. A caption reads, “Now he fears even Santa Claus.”

Figure 4. Through stimulus generalization, Little Albert came to fear furry things, including Watson in a Santa Claus mask.

For example, Watson handed Little Albert the white rat, and Little Albert enjoyed playing with it (neutral stimulus). Then Watson made a loud sound, by striking a hammer against a metal bar hanging behind Little Albert’s head, each time Little Albert touched the rat. Little Albert was frightened by the sound (demonstrating a reflexive fear of sudden loud noises) and began to cry. Watson repeatedly paired the loud sound with the white rat. Soon Little Albert became frightened by the white rat alone. In this case, what are the UCS, CS, UCR, and CR? (Write out your answers to practice.) Days later, Little Albert demonstrated stimulus generalization: he became afraid of other furry things, including a rabbit, a furry coat, and even a Santa Claus mask (Figure 4). Watson had succeeded in conditioning a fear response in Little Albert, thus demonstrating that emotions could become conditioned responses. It had been Watson’s intention to produce a phobia (a persistent, excessive fear of a specific object or situation) through conditioning alone, thus countering Freud’s view that phobias are caused by deep, hidden conflicts in the mind. However, there is no evidence that Little Albert experienced phobias in later years. Little Albert’s mother moved away, ending the experiment, and Little Albert himself died a few years later of unrelated causes.

Try It

Examples of Classical Conditioning Related to Mental Illness

Classical conditioning occurs in many everyday contexts: food aversions (getting sick after eating something and then feeling nauseated at the sight of the same food later), emotional reactions to people or places, and reactions to chemotherapy (experiencing nausea when seeing a doctor or nurse involved in the treatment, even outside the hospital or clinic). It can also play a significant role in some forms of mental illness. For example, fear conditioning contributes to many anxiety disorders in humans, such as phobias and panic disorder, in which people associate cues (such as closed spaces or a shopping mall) with panic or other emotional trauma (see Mineka & Zinbarg, 2006). Here, rather than a physical response (like dogs drooling), the conditioned stimulus (CS) triggers an emotional reaction. Simple examples include a child who sees a dog (previously a neutral stimulus) and then gets bitten, increasing the chance of developing a phobia of dogs (now a CS), or a student who, when younger, gave a presentation in class and was laughed at, contributing to social anxiety. Remember that the biopsychosocial model emphasizes multiple interactions in the development of mental illness, but these types of learning experiences are certainly relevant and can contribute to it.

Classical conditioning can also play a role in drug or alcohol addictions. When a drug is consumed, it can become paired with previously neutral cues that are present at the same time (e.g., rooms, odors, or drug paraphernalia). In this regard, if someone associates a particular smell with the high from the drug, whenever that person smells the same odor later, it may cue behavioral or emotional responses that encourage continued use. But drug cues have an even more interesting property: they can elicit physical or physiological responses that represent the body attempting to compensate for the upcoming effect of the drug (see Siegel, 1989). For example, morphine suppresses pain; however, if someone is used to taking morphine, a cue that signals the “drug is coming soon” can actually make the person more sensitive to pain. Because the person knows a pain suppressant will soon be administered, the body becomes more sensitive, anticipating that “the drug will soon take care of it.” Remarkably, such conditioned compensatory responses, in turn, decrease the impact of the drug on the body—because the body has become more sensitive to pain.

This conditioned compensatory response has many implications. For instance, a drug user will be most tolerant to the drug in the presence of cues that have been linked to it (because such cues generate compensatory responses), and so the user increases the dose to get the same high. As a result, overdose is often due not to an increase in dosage, but to taking the drug in a new place without the familiar cues that would otherwise have allowed the user to tolerate the dose being taken (see Siegel, Hinson, Krank, & McCully, 1982). Conditioned compensatory responses (which include heightened pain sensitivity and decreased body temperature, among others) can also cause discomfort, motivating the drug user to continue using the drug to reduce them. This is one of several ways classical conditioning might be a factor in drug addiction and dependence.

Operant Conditioning

The second major area of the behavioral model relevant to mental disorders is operant conditioning. In operant conditioning, organisms learn to associate a behavior with its consequence (Table 1). A pleasant consequence makes that behavior more likely to be repeated in the future. For example, Spirit, a dolphin at the National Aquarium in Baltimore, does a flip in the air when her trainer blows a whistle. The consequence is that she gets a fish, making her more likely to flip again when she hears a whistle.

Table 1. Classical and Operant Conditioning Compared

Conditioning approach
  • Classical conditioning: An unconditioned stimulus (such as food) is paired with a neutral stimulus (such as a bell). The neutral stimulus eventually becomes the conditioned stimulus, which brings about the conditioned response (salivation).
  • Operant conditioning: The target behavior is followed by reinforcement or punishment to either strengthen or weaken it, so that the learner is more or less likely to exhibit the behavior in the future.

Stimulus timing
  • Classical conditioning: The stimulus occurs immediately before the response.
  • Operant conditioning: The stimulus (either reinforcement or punishment) occurs soon after the response.

Psychologist B. F. Skinner realized that classical conditioning is limited to existing behaviors that are mostly reflexive; it does not account for new behaviors such as riding a bike. He proposed a theory about how such behaviors are acquired. Skinner believed that behavior is motivated by the consequences we receive for the behavior: the reinforcements and punishments. His idea that learning is the result of consequences is based on the law of effect, which was first proposed by psychologist Edward Thorndike. According to the law of effect, behaviors engaged in by the organism (instead of an external neutral stimulus) that are followed by consequences that are satisfying to the organism are more likely to be repeated, and behaviors that are followed by unpleasant consequences are less likely to be repeated (Thorndike, 1911). Essentially, if an organism does something that brings about a desired result, the organism is more likely to do it again. If an organism does something that does not bring about a desired result, the organism is less likely to do it again. An example of the law of effect is in employment. One of the reasons (and often the main reason) we show up for work is because we get paid to do so. If we stop getting paid, we will likely stop showing up—even if we love our job.

Working with Thorndike’s law of effect as his foundation, Skinner began conducting scientific experiments on animals (mainly rats and pigeons) to determine how organisms learn through operant conditioning (Skinner, 1938). He placed these animals inside an operant conditioning chamber, which has come to be known as a Skinner box (Figure 4). A Skinner box contains a lever (for rats) or disk (for pigeons) that the animal can press or peck for a food reward via the dispenser. Speakers and lights can be associated with certain behaviors. Because the rat has no natural association between pressing a lever and getting food, the rat has to learn this connection. At first, the rat may simply explore its cage, climbing on top of things, burrowing under things, in search of food. Eventually while poking around its cage, the rat accidentally presses the lever, and a food pellet drops in. This voluntary behavior is called an operant behavior because it operates on the environment (i.e., it is an action that the animal itself makes). A consequence that increases the frequency of a behavior is called reinforcement or a reinforcer, while consequences that decrease how often a behavior occurs are referred to as punishment or punishers.

Now, once the rat recognizes that it receives a piece of food every time it presses the lever, the behavior of lever-pressing becomes reinforced. That is, the food pellets serve as reinforcers because they strengthen the rat’s desire to engage with the environment in this particular manner.
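The law-of-effect dynamic just described can be sketched as a small simulation (an illustrative toy model of my own, not an established model from this chapter's sources): each reinforced lever press makes the next press a little more likely.

```python
import random

# Toy simulation of the law of effect: a simulated rat starts with a
# small chance of pressing the lever by accident; each reinforced press
# (food pellet) nudges that probability upward. All parameter names and
# values here are illustrative assumptions.

def simulate_rat(trials, p_press=0.05, boost=0.15, seed=42):
    """Return the lever-press probability after each trial."""
    rng = random.Random(seed)
    history = []
    for _ in range(trials):
        if rng.random() < p_press:  # the rat happens to press the lever
            # reinforcement: the food pellet strengthens the behavior
            p_press = min(1.0, p_press + boost * (1.0 - p_press))
        history.append(p_press)
    return history

probs = simulate_rat(200)
print(f"start: {probs[0]:.2f}  end: {probs[-1]:.2f}")
```

Early trials mostly pass without a press (exploration), but once presses start being reinforced, the behavior snowballs, much as the rat's accidental press becomes a reliable habit.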

A photograph shows B.F. Skinner. An illustration shows a rat in a Skinner box: a chamber with a speaker, lights, a lever, and a food dispenser.

Figure 5. (a) B. F. Skinner developed operant conditioning for the systematic study of how behaviors are strengthened or weakened according to consequences. (b) In a Skinner box, a rat presses a lever in an operant conditioning chamber to receive a food reward. (credit a: modification of work by “Silly rabbit”/Wikimedia Commons)

In an example related to humans, imagine that you are playing a street-racing video game. As you drive through one city course multiple times, you try a number of different streets to get to the finish line. On one of these trials, you discover a shortcut that dramatically improves your overall time, and you feel excited and pleased. You have learned this new path through operant conditioning. That is, by engaging with your environment (operant responses), you performed a sequence of behaviors that was positively reinforced (i.e., you found the shortest distance to the finish line) and experienced satisfaction and pleasure. And now that you’ve learned how to drive this course, you will perform that same sequence of driving behaviors (just as the rat presses the lever) to receive your reward of a faster finish and feeling good.

Another relevant example is striving for a good grade in your class, which could be considered a reward for students (i.e., it produces a positive emotional response). In order to get that reward (similar to the rat learning to press the lever), a student needs to modify their behavior. For example, a student may learn that speaking up in class earns them participation points (a reinforcer), so the student speaks up repeatedly. However, the student also learns that they shouldn’t speak up about just anything; talking about topics unrelated to school actually costs points (punishment). Therefore, through the student’s freely chosen behaviors, they learn which behaviors are reinforced and which are punished. Keep in mind that in operant conditioning, the terms reinforcement and punishment are not used the way they are in everyday speech. The terms positive and negative are likewise used in a distinct, almost mathematical way: positive means a reinforcer or punisher is added as a consequence, while negative (think of negative numbers in math) means a situation or event is subtracted or removed.
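A small illustrative helper (my own construction for this point, not terminology from the chapter's sources) can make the positive/negative and reinforcement/punishment distinctions concrete:

```python
# The four operant terms: "positive" = a stimulus is ADDED, "negative" =
# a stimulus is REMOVED; "reinforcement" = the behavior INCREASES,
# "punishment" = the behavior DECREASES. This function is only an
# illustrative mnemonic.

def operant_term(stimulus_added: bool, behavior_increases: bool) -> str:
    sign = "positive" if stimulus_added else "negative"
    effect = "reinforcement" if behavior_increases else "punishment"
    return f"{sign} {effect}"

# Participation points are added, and speaking up increases:
print(operant_term(stimulus_added=True, behavior_increases=True))    # positive reinforcement
# Points are taken away, and off-topic talk decreases:
print(operant_term(stimulus_added=False, behavior_increases=False))  # negative punishment
```

The classroom example above thus combines positive reinforcement (points added for speaking up) with negative punishment (points removed for off-topic talk).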

An important distinction of operant conditioning is that it provides a method for studying how consequences influence voluntary behavior. The rat’s decision to press the lever is voluntary in the sense that the rat is free to make and repeat that response whenever it wants. Classical conditioning, on the other hand, is just the opposite—depending instead on involuntary behavior (e.g., the dog doesn’t choose to drool; it just does). So, whereas the rat must actively participate and perform some kind of behavior to attain its reward, the dog in Pavlov’s experiment is a passive participant. One of the lessons of operant conditioning research, then, is that voluntary behavior is strongly influenced by its consequences.

Interactions between Classical and Operant Conditioning

Classical and operant conditioning are usually studied separately. But outside of the laboratory, they frequently occur at the same time. For example, a person who is reinforced for drinking alcohol or eating excessively learns these behaviors in the presence of certain stimuli—a pub, a set of friends, a restaurant, or possibly the couch in front of the TV. These stimuli are also available for association with the reinforcer. In this way, classical and operant conditioning are often intertwined. Generally speaking, any operant behavior that is reinforced or punished usually occurs in the presence of some stimulus or set of stimuli such as the behaviors of others (such as parents), specific environments or situations, stressors, etc. For one thing, the stimulus will come to evoke a system of responses (remember systems means interdependent elements) that help the organism prepare for the consequence. For example, a drinker may undergo changes in body temperature when exposed to cues related to drinking alcohol; the eater may salivate and have an increase in insulin secretion in preparing to digest the food. In addition, the stimulus will evoke approach behaviors, moving to continue or complete the behaviors if the outcome is positive, or retreat or avoidance if the outcome is negative/punishing.

Another effect of classical cues is that they motivate ongoing operant behavior (see Balleine, 2005). For example, if a rat has learned via operant conditioning that pressing a lever will give it a drug, in the presence of cues that signal the “drug is coming soon” (like the sound of the lever squeaking), the rat will work harder to press the lever than if those cues weren’t present (i.e., there is no squeaking lever sound). Similarly, in the presence of food-associated cues (e.g., smells), a rat (or an overeater) will work harder for food. And finally, even in the presence of punishment cues (like something that signals fear), a rat, a human, or any other organism will work harder to avoid those situations that might lead to trauma. Classical CSs thus have many effects that can contribute to significant patterns of operant behavior.

Extinction: Diminishing Learned Associations

Although the term extinction is often associated with dinosaurs or the disappearance of species, it has a specific meaning in behavior theory. After conditioned learning has occurred, the response to the CS can be eliminated if the CS is presented repeatedly without the UCS. This effect is called extinction, and the response is said to become “extinguished.” For example, if Pavlov kept ringing the bell but never gave the dog any food afterward, eventually the dog’s CR (drooling) would no longer happen when it heard the CS (the bell), because the bell would no longer be a predictor of food. Extinction is important for many reasons. For one thing, it is the basis for many therapies that clinical psychologists use to eliminate maladaptive and unwanted behaviors.
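A toy sketch can illustrate this logic, using the same kind of illustrative Rescorla-Wagner-style update often applied to conditioning (the function and numbers are assumptions for demonstration, not from this chapter's sources). During acquisition the association grows toward a maximum; during extinction, when the CS appears without the UCS, it shrinks back toward zero. Note that this simple model treats extinction as unlearning, whereas the research discussed later in this section suggests extinction inhibits rather than erases the original association.

```python
# Toy sketch of acquisition followed by extinction (illustrative
# Rescorla-Wagner-style update; parameters are assumed for demonstration).

def condition(v, trials, alpha=0.3, target=1.0):
    """Move associative strength v toward `target` a bit each trial."""
    for _ in range(trials):
        v += alpha * (target - v)
    return v

v = condition(0.0, 10, target=1.0)    # acquisition: bell is paired with food
print(f"after acquisition: {v:.2f}")  # strong association: bell elicits drooling
v = condition(v, 10, target=0.0)      # extinction: bell presented, no food
print(f"after extinction:  {v:.2f}")  # response to the bell fades toward zero
```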

Take the example of a person who has a debilitating fear of spiders: one approach might include systematic exposure to spiders. Initially, the person has a CR (e.g., extreme fear) every time they see the CS (e.g., a spider); after repeatedly being shown pictures of spiders under neutral conditions, the CS no longer predicts the CR (i.e., the person no longer has the fear reaction when seeing spiders, having learned that spiders no longer serve as a cue for that fear). Here, repeated exposure to spiders without an aversive consequence causes extinction.

Psychologists must accept one important fact about extinction, however: it does not necessarily destroy the original learning (see Bouton, 2004). For example, imagine you strongly associate the smell of chalkboards with the agony of middle school detention. Now imagine that, after years of encountering chalkboards, the smell of them no longer recalls the agony of detention (an example of extinction). However, one day, after entering a new building for the first time, you suddenly catch a whiff of a chalkboard and WHAM!, the agony of detention returns. This is called spontaneous recovery: following a lapse in exposure to the CS after extinction has occurred, sometimes re-exposure to the CS (e.g., the smell of chalkboards) can evoke the CR again (e.g., the agony of detention).

Another related phenomenon is the renewal effect: After extinction, if the CS is tested in a new context, such as a different room or location, the CR can also return. In the chalkboard example, the action of entering a new building—where you don’t expect to smell chalkboards—suddenly renews the sensations associated with detention. These effects have been interpreted to suggest that extinction inhibits rather than erases the learned behavior, and this inhibition is mainly expressed in the context in which it is learned.

This does not mean that extinction is a bad treatment for behavior disorders. Instead, clinicians can increase its effectiveness by using basic research on learning to help defeat these relapse effects (see Craske et al., 2008). For example, conducting extinction therapies in contexts where patients might be most vulnerable to relapsing (e.g., at work), might be a good strategy for enhancing the therapy’s success.

Observational Learning and Social Learning

The third major area of the behavioral model is related to observational learning, which paved the way to the introduction of cognition into behavioral theory, an approach that is now termed social learning theory (Bandura, 1977) because of its emphasis on social interactions and context.[2] Observational learning is essentially a cognitive process involving perception and interpretation of behaviors and is important because not all forms of learning are accounted for entirely by classical and operant conditioning. Imagine a child walking up to a group of children playing a game on the playground. The game looks fun, but it is new and unfamiliar. Rather than joining the game immediately, the child opts to sit back and watch the other children play a round or two. Through watching others, the child takes note of the ways in which they behave while playing the game. This helps the child figure out the rules of the game and even some strategies for doing well at the game. The child may then be able to join in and participate in the game, continuing to learn as they do so.

Albert Bandura’s key contribution to learning theory was the idea that much learning is vicarious. Observational learning does not necessarily require reinforcement, but instead hinges on the presence of others, referred to as social models. Social models are typically of higher status or authority than the observer; examples include parents, teachers, and police officers. In the example above, the children who already know how to play the game could be thought of as authorities, and therefore social models, even though they are the same age as the observer. By observing how the social models behave, an individual is able to learn how to act in a certain situation. Other examples of observational learning might include a child learning to place her napkin in her lap by watching her parents at the dinner table, or a customer learning where to find the ketchup and mustard after observing other customers at a hot dog stand. Drawing on the behaviorists’ ideas about reinforcement, Bandura suggested that whether we choose to imitate a model’s behavior depends on whether we see the model reinforced or punished, a process he called vicarious reinforcement. Through observational learning, we come to learn what behaviors are acceptable and rewarded in our culture, and we also learn to inhibit deviant or socially unacceptable behaviors by seeing what behaviors are punished.

A related concept that also demonstrates that learning can occur outside of classical or operant conditioning is latent learning. In latent learning, an animal or a person observes behavior or gathers information without classical associations or operant rewards or punishers and is capable of demonstrating the behavior later when there is motivation to do so. For example, a young boy may watch his father go out and mow the lawn several times. His parents later buy him a toy lawnmower, and the next time his father goes out to mow, the boy takes his toy mower and follows his father who laughs and smiles at him (reinforcer). The boy had learned the behavioral pattern for mowing the lawn even though he could not use the equipment, was not given formal instruction in what to do, and received no rewards for mowing behavior. However, given an opportunity later with his toy, he successfully imitated the behavior which was now reinforced through play/fun and his father’s attention. Some psychologists have suggested this is a significant factor, for instance, in how animals and human beings learn to navigate their physical environment.

Bandura also proposed the concept of reciprocal determinism: the idea that behavior is determined both by the individual, through cognitive processes, and by the environment, through external social stimulus events. We can see the principles of reciprocal determinism at work in observational learning. For example, personal factors determine which behaviors in the environment a person chooses to imitate, and those environmental events in turn are processed cognitively according to other personal factors. One person may experience receiving attention as reinforcing and may therefore be more inclined to imitate behaviors such as boasting when a model has been reinforced for similar behavior. Others may view boasting negatively despite the attention that might result, or may perceive heightened attention as being scrutinized. In either case, the person may be less likely to imitate those behaviors, even though the reasons for not doing so differ. An example of reciprocal determinism could occur when a child acts out in school. The child doesn’t like going to school, so they act out in class. This behavior leads the school’s teachers and administrators to dislike having the child around. When confronted, the child admits hating school and says that other students don’t like them, and continues to act inappropriately. The administrators, who already dislike having the child around, respond by creating a more restrictive environment. In this way the child’s behavior, personal factors, and environment continually influence one another, producing an ongoing cycle across all three levels.

Watch It

Watch this video to review the differences between classical and operant conditioning.

You can view the transcript for “The difference between classical and operant conditioning – Peggy Andover” here (opens in new window).

Try It


neutral stimulus (NS): a stimulus that does not naturally generate a reaction from the animal or person

unconditioned stimulus (UCS): a stimulus that naturally generates a reaction from an organism, usually a reflexive reaction

unconditioned response (UCR): a natural and expected reaction to an unlearned stimulus, such as dogs salivating to food or Little Albert crying at a loud noise

conditioned stimulus (CS): a previously neutral stimulus that, through the process of learning in classical conditioning, has become a learned stimulus that the organism reacts to (a tone, footsteps, or a white rat)

conditioned response (CR): a learned response to the previously neutral stimulus (now a conditioned stimulus) such as crying and trying to avoid the white rat

stimulus generalization: a learned reaction where the organism expands the conditioned stimulus to include related or similar stimuli; the organism has generalized their emotional reaction to multiple stimuli beyond the original CS

law of effect: a principle elaborated by Edward Thorndike stating that behaviors that result in a “satisfying effect” are more likely to occur again in that situation while behaviors that result in a “discomforting effect” are less likely to occur again

operant behavior: a behavior voluntarily emitted by an organism in response to its environment (it operates on the environment)

reinforcement/reinforcer: a consequence of an operant behavior that increases the frequency of its occurrence in the future

punishment/punisher: a consequence of an operant behavior that decreases the frequency of that behavior in the future

extinction: the process by which a previously learned association is weakened to the point that it no longer regularly occurs

spontaneous recovery: following a period of extinction, exposure to the stimulus or environment that was originally associated with the behavior causes the behavior to occur again

observational learning: a process of vicarious learning that occurs through the observation of others and the consequences of their behavior

latent learning: a form of learning that occurs through observation and experience but is not manifested until later when there is motivation to do so

reciprocal determinism: the principle that a person’s behaviors are influenced both by a person’s cognitive attitudes and skills as well as by the reactions or consequences of others and the environment

  1. Watson, John B. (1913). "Psychology as the Behaviorist Views It". Psychological Review. 20 (2): 158–177. doi:10.1037/h0074428
  2. Bandura, A. (1977). Social learning theory. Englewood Cliffs, NJ: Prentice Hall.