Behaviorism is a perspective on learning that focuses on changes in individuals’ observable behaviors—changes in what people say or do. At some point we all use this perspective, whether we call it “behaviorism” or something else. The first time that I drove a car, for example, I was concerned primarily with whether I could actually do the driving, not with whether I could describe or explain how to drive. For another example: when I reached the point in life where I began cooking meals for myself, I was more focused on whether I could actually produce edible food in a kitchen than with whether I could explain my recipes and cooking procedures to others. And still another example—one often relevant to new teachers: when I began my first year of teaching, I was more focused on doing the job of teaching—on day-to-day survival—than on pausing to reflect on what I was doing.
Note that in all of these examples, focusing attention on behavior instead of on “thoughts” may have been desirable at that moment, but not necessarily desirable indefinitely or all of the time. Even as a beginner, there are times when it is more important to be able to describe how to drive or to cook than to actually do these things. And there definitely are many times when reflecting on and thinking about teaching can improve teaching itself. (As a teacher-friend once said to me: “Don’t just do something; stand there!”) But focusing on behavior is not necessarily less desirable than focusing on students’ “inner” changes, such as gains in their knowledge or their personal attitudes. If you are teaching, you will need to attend to all forms of learning in students, whether inner or outward.
In classrooms, behaviorism is most useful for identifying relationships between specific actions by a student and the immediate precursors and consequences of the actions. It is less useful for understanding changes in students’ thinking; for this purpose, we need theories that are more cognitive (or thinking-oriented) or social, like the ones described later in this chapter. This fact is not a criticism of behaviorism as a perspective, but just a clarification of its particular strength or usefulness, which is to highlight observable relationships among actions, precursors, and consequences. Behaviorists use particular terms (or “lingo,” some might say) for these relationships. One variety of behaviorism that has proved especially useful to educators is operant conditioning, described in the next section.
Have you ever had the experience of taking a shower when suddenly someone in the apartment above you, or in a nearby bathroom, flushes the toilet? The shower’s relaxing warmth turns to scalding heat! You flinch, tense up, maybe even scream in pain. But soon the water returns to its former temperature, and you relax once again—but this time your ears are alert to the sound. When you hear the flush again, you anticipate the burning water and jump back even before the temperature changes. You have learned an important lesson—that there is a predictable relationship or association between two events, a sound and a change in water temperature. You learned this association through a process called classical conditioning.
Ivan Pavlov, a Russian physiologist, discovered the phenomenon of classical conditioning more than a century ago. He did this by demonstrating that dogs could “learn” to salivate at the sound of a bell that was rung before they were fed, even before they could see or smell the food.
Before a dog undergoes the conditioning process, the bell is a neutral stimulus (NS). In other words, a bell does not automatically elicit a physiological response from a dog. Food, on the other hand, automatically causes a dog to salivate. The food, therefore, is an unconditioned stimulus (UCS), meaning that it elicits a response naturally, without any prior conditioning. Salivation is an unconditioned response (UCR), a reaction that automatically follows an unconditioned stimulus.
Figure 4.3.1. Ivan Pavlov (1849–1936).
In Pavlov’s experiments, the dogs salivated each time food was presented to them. Before conditioning, think of the dogs’ stimulus and response like this:
Food (UCS) → Salivation (UCR)
In classical conditioning, a neutral stimulus is presented immediately before an unconditioned stimulus. Pavlov would sound a tone (like ringing a bell) and then give the dogs the food. The bell was the neutral stimulus (NS), which is a stimulus that does not naturally elicit a response. Prior to conditioning, the dogs did not salivate when they just heard the bell because the bell had no association for the dogs.
Bell (NS) + Food (UCS) → Salivation (UCR)
When Pavlov paired the tone with the meat powder over and over again, the previously neutral stimulus (the tone) also began to elicit salivation from the dogs. Thus, the neutral stimulus became the conditioned stimulus (CS), which is a stimulus that elicits a response after repeatedly being paired with an unconditioned stimulus. Eventually, the dogs began to salivate to the tone alone, just as they had earlier come to salivate at the sound of the footsteps of the assistants who fed them. The behavior caused by the conditioned stimulus is called the conditioned response (CR). In the case of Pavlov’s dogs, they had learned to associate the tone (CS) with being fed, and they began to salivate (CR) in anticipation of food.
Bell (CS) → Salivation (CR)
Figure 4.3.2 Before conditioning, an unconditioned stimulus (food) produces an unconditioned response (salivation), and a neutral stimulus (bell) does not produce a response. During conditioning, the unconditioned stimulus (food) is presented repeatedly just after the presentation of the neutral stimulus (bell). After conditioning, the neutral stimulus alone produces a conditioned response (salivation), thus becoming a conditioned stimulus.
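For readers who find a sketch in code helpful, the before-and-after contrast described above can be written as a tiny simulation. Everything here is illustrative: the class name, the idea of counting pairings, and the threshold of five trials are our assumptions, not part of Pavlov’s actual procedure.

```python
# A minimal, illustrative sketch of classical conditioning as repeated
# NS-UCS pairing. The threshold of five pairings is an arbitrary assumption.

class Dog:
    def __init__(self, pairings_needed=5):
        self.pairings = 0                    # times the bell has preceded food
        self.pairings_needed = pairings_needed

    def hear_bell(self):
        # After enough pairings, the bell alone elicits salivation:
        # the neutral stimulus (NS) has become a conditioned stimulus (CS).
        return "salivates" if self.pairings >= self.pairings_needed else "no response"

    def bell_then_food(self):
        # One conditioning trial: bell (NS) immediately followed by food (UCS).
        self.pairings += 1
        return "salivates"                   # food always elicits salivation (UCR)

dog = Dog()
print(dog.hear_bell())        # before conditioning: "no response"
for _ in range(5):
    dog.bell_then_food()      # conditioning trials
print(dog.hear_bell())        # after conditioning: "salivates" (the CR)
```

Before any pairings, the bell produces no response; after repeated bell-food pairings, the bell alone produces salivation, mirroring the three stages in Figure 4.3.2.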
Video 4.3.1. Classical Conditioning explains the process used in creating an association between stimuli and response.
Many learning theorists use the classical conditioning paradigm to explain how we learn relationships between environmental stimuli and behavioral, cognitive, and emotional responses. For example, how do we account for the following phenomena?
- The smell of a certain perfume reminds you of a close friend or loved one.
- You recoil at the sight of a snake when you’ve never encountered one before except in pictures or stories.
- As a first-grader, you became anxious at the sound of the school bell.
- Your professor utters the word “exam” and you get a funny feeling in your stomach.
- A familiar song on the radio creates mental images that change your mood.
What these events have in common is that a neutral stimulus (an odor, the sight of an animal, a sound, a spoken word, a song) has developed the power to evoke an emotional (a feeling of anxiety or affection), physiological (a muscle contraction or a shiver), behavioral (running away), or cognitive (a mental image) response. Thus, classical conditioning theorists propose that many of our behavioral, emotional, and cognitive responses to people, places, and things have been acquired through a process of classical conditioning.
For example, how might a learner develop a fear of math? Math, in and of itself, is a neutral stimulus. There is no natural connection between it and the emotional responses associated with fear (increased adrenalin flow, constriction of blood vessels, increased blood pressure, rapid breathing). However, there is a natural (unconditioned) association between being reprimanded (UCS) by a teacher or parent and the fear (UCR) that might immediately follow answering a question incorrectly or receiving a failing test grade. Such events, repeated over time, can condition a learner to respond with intense fear at the sight of a math test—or even the announcement that one is forthcoming.
Relevance for Teachers
As a teacher, you will want your learners to acquire positive attitudes toward you and your subject. Initially, you and your learning activities will be neutral stimuli, but over time you and how you teach can become conditioned stimuli that elicit emotions (or conditioned responses) of interest and joy, evoke approach behaviors such as studying and asking questions, and even arouse physiological responses of comfort and naturalness.
Learning theorists remind us that classical conditioning processes go on in classrooms all the time. Your role is to be aware of the classical conditioning paradigm and use it to build positive associations between your teaching activities and learning. We will offer some specific recommendations to help you achieve this goal.
While the classical conditioning paradigm can explain how children learn certain emotional, behavioral, and cognitive responses to neutral stimuli, it is not as successful in explaining how children learn to be successful in the classroom: to read and solve problems, follow directions, and work productively with others. Let’s look at a second learning paradigm, which can explain how learners develop these skills.
Operant conditioning focuses on how the consequences of a behavior affect the behavior over time. It begins with the idea that certain consequences tend to make certain behaviors happen more frequently. If I compliment a student for a good comment made during discussion, there is more of a chance that I will hear further comments from the student in the future. If a student tells a joke to classmates and they laugh at it, then the student is likely to tell more jokes in the future and so on.
Psychologist B. F. Skinner saw that classical conditioning is limited to existing behaviors that are reflexively elicited, and it doesn’t account for new behaviors such as riding a bike. He proposed a theory about how such behaviors come about. Skinner believed that behavior is motivated by the consequences we receive for the behavior: the reinforcements and punishments. His idea that learning is the result of consequences is based on the law of effect, which was first proposed by psychologist Edward Thorndike. According to the law of effect, behaviors that are followed by consequences that are satisfying to the organism are more likely to be repeated, and behaviors that are followed by unpleasant consequences are less likely to be repeated (Thorndike, 1911). Essentially, if an organism does something that brings about a desired result, the organism is more likely to do it again. If an organism does something that does not bring about the desired result, the organism is less likely to do it again. An example of the law of effect is in employment. One of the reasons (and often the main reason) we show up for work is because we get paid to do so. If we stop getting paid, we will likely stop showing up—even if we love our job.
Working with Thorndike’s law of effect as his foundation, Skinner began conducting scientific experiments on animals (mainly rats and pigeons) to determine how organisms learn through operant conditioning (Skinner, 1938). He placed these animals inside an operant conditioning chamber, which has come to be known as a “Skinner box.” A Skinner box contains a lever (for rats) or disk (for pigeons) that the animal can press or peck to receive a food reward from a dispenser. Speakers and lights can be associated with certain behaviors. A recorder counts the number of responses made by the animal.
|Table 4.3.1. Positive and Negative Reinforcement and Punishment|
| |Reinforcement|Punishment|
|Positive|Something is added to increase the likelihood of a behavior.|Something is added to decrease the likelihood of a behavior.|
|Negative|Something is removed to increase the likelihood of a behavior.|Something is removed to decrease the likelihood of a behavior.|
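The two distinctions in Table 4.3.1 (stimulus added vs. removed; behavior increased vs. decreased) combine into exactly four terms, which a short helper function can make concrete. This is just a mnemonic sketch; the function name and boolean parameters are ours.

```python
# Sketch: classify a consequence using the two distinctions in Table 4.3.1.
# "positive"/"negative" refers to adding vs. removing a stimulus;
# reinforcement increases a behavior, punishment decreases it.

def classify(stimulus_added: bool, behavior_increases: bool) -> str:
    sign = "positive" if stimulus_added else "negative"
    effect = "reinforcement" if behavior_increases else "punishment"
    return f"{sign} {effect}"

print(classify(True, True))    # praise added, behavior up -> "positive reinforcement"
print(classify(False, True))   # seatbelt beeping removed -> "negative reinforcement"
print(classify(True, False))   # reprimand added -> "positive punishment"
print(classify(False, False))  # toy taken away -> "negative punishment"
```

Note that "positive" and "negative" say nothing about pleasantness; they only record whether a stimulus was added or removed.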
The most effective way to teach a person or animal a new behavior is with positive reinforcement. In positive reinforcement, a desirable stimulus is added to increase a behavior.
For example, you tell your five-year-old son, Jerome, that if he cleans his room, he will get a toy. Jerome quickly cleans his room because he wants a new art set. Let’s pause for a moment. Some people might say, “Why should I reward my child for doing what is expected?” But in fact we are constantly and consistently rewarded in our lives. Our paychecks are rewards, as are high grades and acceptance into our preferred school. Being praised for doing a good job and for passing a driver’s test is also a reward. Positive reinforcement as a learning tool is extremely effective.

It has been found that one of the most effective ways to increase achievement in school districts with below-average reading scores was to pay the children to read. Specifically, second-grade students in Dallas were paid $2 each time they read a book and passed a short quiz about the book. The result was a significant increase in reading comprehension (Fryer, 2010). What do you think about this program? If Skinner were alive today, he would probably think this was a great idea. He was a strong proponent of using operant conditioning principles to influence students’ behavior at school.

In fact, in addition to the Skinner box, Skinner also invented what he called a teaching machine that was designed to reward small steps in learning (Skinner, 1961)—an early forerunner of computer-assisted learning. His teaching machine tested students’ knowledge as they worked through various school subjects. If students answered questions correctly, they received immediate positive reinforcement and could continue; if they answered incorrectly, they did not receive any reinforcement. The idea was that students would spend additional time studying the material to increase their chance of being reinforced the next time (Skinner, 1961).
In negative reinforcement, an undesirable stimulus is removed to increase a behavior. For example, car manufacturers use the principles of negative reinforcement in their seatbelt systems, which go “beep, beep, beep” until you fasten your seatbelt. The annoying sound stops when you exhibit the desired behavior, increasing the likelihood that you will buckle up in the future. Negative reinforcement is also used frequently in horse training. Riders apply pressure—by pulling the reins or squeezing their legs—and then remove the pressure when the horse performs the desired behavior, such as turning or speeding up. The pressure is the negative stimulus that the horse wants to remove.
Many people confuse negative reinforcement with punishment in operant conditioning, but they are two very different mechanisms. Remember that reinforcement, even when it is negative, always increases a behavior. In contrast, punishment always decreases a behavior. In positive punishment, you add an undesirable stimulus to decrease a behavior. An example of positive punishment is scolding a student to get the student to stop texting in class. In this case, a stimulus (the reprimand) is added in order to decrease the behavior (texting in class). In negative punishment, you remove a pleasant stimulus to decrease behavior. For example, when a child misbehaves, a parent can take away a favorite toy. In this case, a stimulus (the toy) is removed in order to decrease the behavior.
Punishment, especially when it is immediate, is one way to decrease undesirable behavior. For example, imagine your four-year-old son, Brandon, hit his younger brother. You have Brandon write 100 times “I will not hit my brother” (positive punishment). Chances are he won’t repeat this behavior. While strategies like this are common today, in the past children were often subject to physical punishment, such as spanking. It’s important to be aware of some of the drawbacks in using physical punishment on children. First, punishment may teach fear. Brandon may learn to stop the punished behavior, but he also may become fearful of the person who delivered the punishment (you, his parent). Similarly, children who are punished by teachers may come to fear the teacher and try to avoid school (Gershoff et al., 2010). Consequently, most schools in the United States have banned corporal punishment. Second, punishment may cause children to become more aggressive and prone to antisocial behavior and delinquency (Gershoff, 2002). They see their parents resort to spanking when they become angry and frustrated, so, in turn, they may act out this same behavior when they become angry and frustrated. For example, because you spank Brenda when you are angry with her for her misbehavior, she might start hitting her friends when they won’t share their toys.
While positive punishment can be effective in some cases, Skinner suggested that the use of punishment should be weighed against the possible negative effects. Today’s psychologists and parenting experts favor reinforcement over punishment—they recommend that you catch your child doing something good and reward her for it.
Video 4.3.2. Operant Conditioning explains the processes of positive and negative reinforcement and punishment.
Key Concepts of Conditioning
Operant conditioning is made more complicated, but also more realistic, by several additional ideas. They can be confusing because the ideas have names that sound rather ordinary, but that have special meanings within the framework of operant theory. Among the most important concepts to understand are the following:
- shaping
- extinction
- generalization and discrimination
- schedules of reinforcement (continuous and intermittent)
- cues
The paragraphs below explain each of these briefly, as well as their relevance to classroom teaching and learning.
In his operant conditioning experiments, Skinner often used an approach called shaping. Instead of rewarding only the target behavior, in shaping, we reward successive approximations of a target behavior. Why is shaping needed? Remember that in order for reinforcement to work, the organism must first display the behavior. Shaping is needed because it is extremely unlikely that an organism will display anything but the simplest of behaviors spontaneously. In shaping, behaviors are broken down into many small, achievable steps. The specific steps used in the process are the following:
- Reinforce any response that resembles the desired behavior.
- Then reinforce the response that more closely resembles the desired behavior. You will no longer reinforce the previously reinforced response.
- Next, begin to reinforce the response that even more closely resembles the desired behavior.
- Continue to reinforce closer and closer approximations of the desired behavior.
- Finally, only reinforce the desired behavior.
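The five steps above amount to a loop that reinforces only the current approximation and ignores everything else, including previously rewarded steps. The sketch below makes that logic explicit; the specific reading-related behaviors and the list name are hypothetical examples, not prescribed steps.

```python
# Illustrative sketch of shaping: reinforce successive approximations of a
# target behavior. The behaviors listed here are hypothetical examples.

approximations = [
    "looks at the book",       # any response resembling the target
    "opens the book",
    "reads one sentence",
    "reads one page",          # the desired behavior itself
]

def shape(observed_behaviors):
    step = 0                   # index of the approximation currently reinforced
    log = []
    for behavior in observed_behaviors:
        if step < len(approximations) and behavior == approximations[step]:
            log.append(("reinforce", behavior))
            step += 1          # tighten the criterion: earlier steps no longer rewarded
        else:
            log.append(("ignore", behavior))
    return log

for outcome, behavior in shape(["looks at the book",
                                "looks at the book",   # previously reinforced step
                                "opens the book"]):
    print(outcome, behavior)
```

Notice that the second "looks at the book" is ignored: once a step has been reinforced, the criterion moves on, exactly as in step 2 of the list above.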
Shaping is often used in teaching a complex behavior or chain of behaviors. Skinner used shaping to teach pigeons not only such relatively simple behaviors as pecking a disk in a Skinner box, but also many unusual and entertaining behaviors, such as turning in circles, walking in figure eights, and even playing ping pong; the technique is commonly used by animal trainers today. An important part of shaping is stimulus discrimination. Recall Pavlov’s dogs—he trained them to respond to the tone of a bell, and not to similar tones or sounds. This discrimination is also important in operant conditioning and in shaping behavior.
It’s easy to see how shaping is effective in teaching behaviors to animals, but how does shaping work with humans? Let’s consider parents whose goal is to have their child learn to clean his room. They use shaping to help him master steps toward the goal. Instead of performing the entire task, they set up these steps and reinforce each step. First, he cleans up one toy. Second, he cleans up five toys. Third, he chooses whether to pick up ten toys or put his books and clothes away. Fourth, he cleans up everything except two toys. Finally, he cleans his entire room.
Extinction refers to the disappearance of an operant behavior because of lack of reinforcement. A student who stops receiving gold stars or compliments for prolific reading of library books, for example, may extinguish (i.e. decrease or stop) book-reading behavior. A student who used to be reinforced for acting like a clown in class may stop clowning once classmates stop paying attention to the antics.
Generalization and Discrimination
Generalization refers to the incidental conditioning of behaviors similar to an original operant. If a student gets gold stars for reading library books, then we may find her reading more of other material as well (newspapers, comics, and so on), even if the activity is not reinforced directly. The “spread” of the new behavior to similar behaviors is called generalization. Generalization is a lot like the concept of transfer discussed earlier in this chapter, in that it is about extending prior learning to new situations or contexts. From the perspective of operant conditioning, though, what is being extended (or “transferred” or generalized) is a behavior, not knowledge or skill.
Discrimination means learning not to generalize. In operant conditioning, what is not overgeneralized (i.e., what is discriminated) is the operant behavior. If I am a student who is being complimented (reinforced) for contributing to discussions, I must also learn to discriminate when to make verbal contributions from when not to make them—such as when classmates or the teacher are busy with other tasks. Discrimination learning usually results from the combined effects of reinforcement of the target behavior and extinction of similar generalized behaviors. In a classroom, for example, a teacher might praise a student for speaking during discussion, but ignore him for making very similar remarks out of turn.

Schedules of Reinforcement

In operant conditioning, the schedule of reinforcement refers to the pattern or frequency by which reinforcement is linked with the operant. If a teacher praises me for my work, does she do it every time, or only sometimes? Frequently or only once in a while? In classical (respondent) conditioning, by contrast, the schedule in question is the pattern by which the conditioned stimulus is paired with the unconditioned stimulus. If I am a student with Mr. Horrible as my teacher, does he scowl every time he is in the classroom, or only sometimes? Frequently or rarely?
Behavioral psychologists have studied intermittent schedules extensively (for example, Ferster et al., 1997; Mazur, 2005) and found a number of interesting effects of different schedules. For teachers, however, the most important finding may be this: partial or intermittent schedules of reinforcement generally cause learning to take longer, but also cause extinction of learning to take longer. This dual principle is important for teachers because so much of the reinforcement we give is partial or intermittent. Typically, if I am teaching, I can compliment a student a lot of the time, for example, but there will inevitably be occasions when I cannot do so because I am busy elsewhere in the classroom. For teachers concerned both about motivating students and about minimizing inappropriate behaviors, this is both good news and bad. The good news is that the benefits of my praising students’ constructive behavior will be more lasting, because students will not extinguish their constructive behaviors immediately if I fail to support them every single time they happen. The bad news is that students’ negative behaviors may take longer to extinguish as well, because those too may have developed through partial reinforcement. A student who clowns around inappropriately in class, for example, may not be “supported” by classmates’ laughter every time it happens, but only some of the time. Once the inappropriate behavior is learned, though, it will take somewhat longer to disappear even if everyone—both teacher and classmates—makes a concerted effort to ignore (or extinguish) it.
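One way to picture the difference between continuous and intermittent schedules is as simple rules that decide, for each response, whether reinforcement is delivered. The sketch below shows three such rules; the particular parameters (every 5th response, a 20% chance) are illustrative assumptions, not values from the research cited above.

```python
import random

# Illustrative sketches of reinforcement schedules as rules over a
# response count n. Parameter values are arbitrary assumptions.

def continuous(n):
    # Continuous schedule: every response is reinforced.
    return True

def fixed_ratio(n, k=5):
    # Fixed-ratio schedule: every k-th response is reinforced.
    return n % k == 0

_rng = random.Random(0)       # seeded for reproducibility
def variable_ratio(n, p=0.2):
    # Variable-ratio schedule: reinforcement is unpredictable,
    # about 1 response in 5 on average.
    return _rng.random() < p

# Under the fixed-ratio rule, 20 of the first 100 responses are reinforced.
print(sum(fixed_ratio(n) for n in range(1, 101)))
```

Variable-ratio schedules (like the clowning student reinforced only sometimes by laughter) are exactly the ones associated with slow extinction, since the learner cannot tell when reinforcement has stopped for good.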
Video 4.3.4. Schedules of Reinforcement explains the various intermittent schedules.
Finally, behavioral psychologists have studied the effects of cues. In operant conditioning, a cue is a stimulus that happens just prior to the operant behavior and that signals that performing the behavior may lead to reinforcement. In the original conditioning experiments, Skinner’s rats were sometimes cued by the presence or absence of a small electric light in their cage. Reinforcement was associated with pressing a lever when, and only when, the light was on. In classrooms, cues are sometimes provided by the teacher deliberately, and sometimes simply by the established routines of the class. Calling on a student to speak, for example, can be a cue that if the student does say something at that moment, then he or she may be reinforced with praise or acknowledgment. But if that cue does not occur—if the student is not called on—speaking may not be rewarded. In more everyday, non-behaviorist terms, the cue allows the student to learn when it is acceptable to speak, and when it is not.
Primary and Secondary Reinforcers
Rewards such as stickers, praise, money, toys, and more can be used to reinforce learning. Let’s go back to Skinner’s rats again. How did the rats learn to press the lever in the Skinner box? They were rewarded with food each time they pressed the lever. For animals, food would be an obvious reinforcer.
What would be a good reinforcer for humans? For your child Jerome, it was the promise of a toy for cleaning his room. How about Sydney, the soccer player? If you gave Sydney a piece of candy every time Sydney scored a goal, you would be using a primary reinforcer. Primary reinforcers are reinforcers that have innate reinforcing qualities. These kinds of reinforcers are not learned. Water, food, sleep, shelter, sex, and touch, among others, are primary reinforcers. Pleasure is also a primary reinforcer. Organisms do not lose their drive for these things. For most people, jumping in a cool lake on a very hot day would be reinforcing, and the cool lake would be innately reinforcing—the water would cool the person off (a physical need), as well as provide pleasure.
A secondary reinforcer has no inherent value and only has reinforcing qualities when linked with a primary reinforcer. Praise, linked to affection, is one example of a secondary reinforcer, as when you called out “Great shot!” every time Sydney made a goal. Another example, money, is only worth something when you can use it to buy other things—either things that satisfy basic needs (food, water, shelter—all primary reinforcers) or other secondary reinforcers. If you were on a remote island in the middle of the Pacific Ocean and you had stacks of money, the money would not be useful if you could not spend it. What about the stickers on the behavior chart? They also are secondary reinforcers.
Sometimes, instead of stickers on a sticker chart, a token is used. Tokens, which are also secondary reinforcers, can then be traded in for rewards and prizes. Entire behavior management systems, known as token economies, are built around the use of these kinds of token reinforcers. Token economies have been found to be very effective at modifying behavior in a variety of settings such as schools, prisons, and mental hospitals. For example, a study by Cangi and Daly (2013) found that use of a token economy increased appropriate social behaviors and reduced inappropriate behaviors in a group of autistic school children. Autistic children tend to exhibit disruptive behaviors such as pinching and hitting. When the children in the study exhibited appropriate behavior (not hitting or pinching), they received a “quiet hands” token. When they hit or pinched, they lost a token. The children could then exchange specified amounts of tokens for minutes of playtime.
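A token economy like the one in the Cangi and Daly study can be sketched as a small bookkeeping class: tokens are earned for appropriate behavior, lost for inappropriate behavior, and exchanged for a backup reinforcer. The class name, the one-token penalty, and the five-tokens-per-minute exchange rate below are our illustrative assumptions, not details of the study.

```python
# Illustrative sketch of a token economy. Tokens are secondary reinforcers;
# minutes of playtime are the backup reward they can be exchanged for.

class TokenEconomy:
    def __init__(self, tokens_per_minute=5):
        self.tokens = 0
        self.tokens_per_minute = tokens_per_minute

    def appropriate_behavior(self):
        self.tokens += 1                          # earn a "quiet hands" token

    def inappropriate_behavior(self):
        self.tokens = max(0, self.tokens - 1)     # response cost: lose a token

    def exchange(self):
        # Trade tokens for whole minutes of playtime; leftovers carry over.
        minutes = self.tokens // self.tokens_per_minute
        self.tokens -= minutes * self.tokens_per_minute
        return minutes

child = TokenEconomy()
for _ in range(7):
    child.appropriate_behavior()  # 7 tokens earned
child.inappropriate_behavior()    # 1 token lost, 6 remain
print(child.exchange())           # prints 1 (minute of playtime); 1 token left over
```

The key design point is that the token itself is worthless; it reinforces behavior only because it is reliably exchangeable for something the child actually wants.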
Skinner and other behavioral psychologists experimented with using various reinforcers and operants. They also experimented with various patterns of intermittent reinforcement, as well as with various cues or signals to the animal about when reinforcement was available. It turned out that all of these factors—the operant, the reinforcement, the schedule, and the cues—affected how easily and thoroughly operant conditioning occurred. For example, reinforcement was more effective if it came immediately after the crucial operant behavior, rather than being delayed, and reinforcements that happened intermittently (only part of the time) caused learning to take longer, but also caused it to last longer.
Relevance for Teaching
Since the original research about operant conditioning used animals, it is important to ask whether operant conditioning also describes learning in human beings, and especially in students in classrooms. On this point the answer seems to be clearly “yes.” There are countless classroom examples of consequences affecting students’ behavior in ways that resemble operant conditioning, although the process certainly does not account for all forms of student learning (Alberto & Troutman, 2005). Consider the following examples. In most of them the operant behavior tends to become more frequent on repeated occasions:
- A seventh-grade boy makes a silly face (the operant) at the girl sitting next to him. Classmates sitting around them giggle in response (the reinforcement).
- A kindergarten child raises her hand in response to the teacher’s question about a story (the operant). The teacher calls on her and she makes her comment (the reinforcement).
- Another kindergarten child blurts out her comment without being called on (the operant). The teacher frowns and ignores the behavior, but before the teacher calls on a different student, classmates are listening attentively (the reinforcement) to the child, even though she did not raise her hand as she should have.
- A twelfth-grade student—a member of the track team—runs one mile during practice (the operant). He notes the time it takes him as well as his increase in speed since joining the team (the reinforcement).
- A child who is usually very restless sits for five minutes doing an assignment (the operant). The teaching assistant compliments him for working hard (the reinforcement).
- A sixth-grader takes home a book from the classroom library to read overnight (the operant). When she returns the book the next morning, her teacher puts a gold star by her name on a chart posted in the room (the reinforcement).
These examples are enough to make several points about operant conditioning. First, the process is widespread in classrooms—probably more widespread than teachers realize. This fact makes sense, given the nature of public education: to a large extent, teaching is about making certain consequences (like praise or marks) depend on students’ engaging in certain activities (like reading certain material or doing assignments). Second, learning by operant conditioning is not confined to any particular grade, subject area, or style of teaching, but by nature happens in every imaginable classroom. Third, teachers are not the only persons controlling reinforcements. Sometimes they are controlled by the activity itself (as in the track team example), or by classmates (as in the “giggling” example). This leads to the fourth point: multiple examples of operant conditioning often happen at the same time.
Because operant conditioning happens so widely, its effects on motivation are a bit complex. Operant conditioning can encourage intrinsic motivation, to the extent that the reinforcement for an activity is the activity itself. When a student reads a book for the sheer enjoyment of reading, for example, he is reinforced by the reading itself, and we can say that his reading is “intrinsically motivated.” More often, however, operant conditioning stimulates both intrinsic and extrinsic motivation at the same time. The combining of both is noticeable in the examples in the previous paragraph. In each example, it is reasonable to assume that the student felt at least partly intrinsically motivated, even when reward came from outside the student as well. This was because part of what reinforced the students’ behavior was the behavior itself—whether it was making faces, running a mile, or contributing to a discussion. At the same time, though, note that each student probably was also extrinsically motivated, meaning that another part of the reinforcement came from consequences or experiences not inherently part of the activity or behavior itself. The boy who made a face was reinforced not only by the pleasure of making a face, for example, but also by the giggles of classmates. The track student was reinforced not only by the pleasure of running itself, but also by knowledge of his improved times and speeds. Even the usually restless child sitting still for five minutes may have been reinforced partly by this brief experience of unusually focused activity, even if he was also reinforced by the teacher aide’s compliment. Note that the extrinsic part of the reinforcement may sometimes be more easily observed or noticed than the intrinsic part, which by definition may sometimes only be experienced within the individual and not also displayed outwardly.
This latter fact may contribute to an impression that sometimes occurs: that operant conditioning is really just “bribery in disguise,” and that only the external reinforcements operate on students’ behavior. It is true that external reinforcement may sometimes alter the nature or strength of internal (or intrinsic) reinforcement, but this is not the same as saying that it destroys or replaces intrinsic reinforcement. But more about this issue later!
Behavior Modification in Children
Parents and teachers often use behavior modification to change a child’s behavior. Behavior modification uses the principles of operant conditioning to accomplish behavior change so that undesirable behaviors are switched for more socially acceptable ones. Some teachers and parents create a sticker chart, in which several behaviors are listed. Sticker charts are a form of token economy, as described above. Each time children perform the behavior, they get a sticker, and after a certain number of stickers, they get a prize, or reinforcer. The goal is to increase acceptable behaviors and decrease misbehavior. Remember, it is best to reinforce desired behaviors, rather than to use punishment. In the classroom, the teacher can reinforce a wide range of behaviors, from students raising their hands, to walking quietly in the hall, to turning in their homework. At home, parents might create a behavior chart that rewards children for things such as putting away toys, brushing their teeth, and helping with dinner. In order for behavior modification to be effective, the reinforcement needs to be connected with the behavior; the reinforcement must matter to the child and be done consistently.
There are several important points that you should know if you plan to implement time-out, a form of negative punishment in which a child is briefly removed from a desirable activity, as a behavior modification technique. First, make sure the child is being removed from a desirable activity and placed in a less desirable location. If the activity is something undesirable for the child, this technique will backfire because it is more enjoyable for the child to be removed from the activity. Second, the length of the time-out is important. The general rule of thumb is one minute for each year of the child’s age: a five-year-old, for example, sits in time-out for five minutes. Setting a timer helps children know how long they have to sit in time-out. Finally, as a caregiver, keep several guidelines in mind over the course of a time-out: remain calm when directing your child to time-out; ignore your child during time-out (because caregiver attention may reinforce misbehavior); and give the child a hug or a kind word when time-out is over.