3

Basic Learning Theory and Choosing a Trainer

Learning is defined by the Merriam-Webster Dictionary as "knowledge or skill acquired by instruction or study" as well as "modification of a behavioral tendency by experience" (Merriam-Webster n.d.). Learning leads to a semi-permanent, long-lasting change in an animal's response. Learning can happen in a variety of ways, and it is important to consider how an animal's past experiences may affect their behavior. Further, understanding the fundamentals of learning theory can facilitate the treatment of behavior problems by teaching animals new skills to cope with their environments, thus strengthening the human-animal bond between clients and patients. Although teaching new skills and training specific behaviors are critical aspects of treating problem behaviors, it is equally important to understand the underlying emotional states that contribute to the development and continuation of those behaviors. While working in conjunction with positive reinforcement-based trainers can be beneficial, not all cases should be referred to trainers, and trainers are not substitutes for medical care. It is therefore imperative that veterinarians understand the basics of learning theory in order to handle problem cases appropriately.

Types of Learning

Nonassociative Learning

Nonassociative learning is a change in an animal's behavior that results from repeated exposure to a single type of stimulus. It encompasses both habituation and sensitization.

Habituation

The gradual decrease in an animal's response to repeated exposure to a stimulus is known as habituation (Mazur 2016; McGreevy and Boakes 2011). It is important to note that habituation occurs "naturally"; that is, this change in behavior occurs whether or not you want it to. For instance, a dog might initially startle at a loud noise. Over time, if the noise has no significant consequences (neither positive nor negative), the dog may learn to ignore the noise and stop reacting.

Example of Habituation

Imagine a puppy living in a busy urban environment.
Initially the puppy might be startled by the frequent sounds of sirens from emergency vehicles. However, as the puppy is repeatedly exposed to this sound without any direct consequence, they begin to learn that the sirens pose no threat, and their reaction gradually diminishes. This process, in which the puppy learns to ignore the sirens because they are irrelevant to daily life, is an example of habituation.

Sensitization

In contrast, sensitization involves an increase in an animal's response following repeated exposure to a stimulus (Mazur 2016; McGreevy and Boakes 2011). Unlike in habituation, the stimulus in sensitization is often (but not necessarily) aversive. Sensitization can develop more quickly than habituation, and it is imperative to understand that it can lead to increased reactivity and serious behavior problems in the future.

Example of Sensitization

Consider a cat that experiences a loud, unexpected noise from a vacuum cleaner every time it is used. Instead of becoming accustomed to the noise, the cat becomes increasingly anxious and fearful each time the vacuum is turned on. Even the sight of the vacuum might start to trigger a fear response. This heightened reaction to the noise and presence of the vacuum cleaner over time is an example of sensitization. In this case, the cat's response escalates because they perceive the vacuum noise as a potential threat or discomfort, even though the vacuum has never harmed them.

Associative Learning

Associative learning involves a change in behavior that results from the association between two stimuli. There are two main types of associative learning: classical conditioning and operant conditioning.

Classical Conditioning

Classical conditioning is a type of associative learning that relies on reflexes rather than conscious effort. This learning process occurs when an unconditioned stimulus, which naturally elicits an unconditioned response, is paired with a neutral stimulus.
Over time, the neutral stimulus alone begins to trigger the response, even in the absence of the unconditioned stimulus. The precise order and timing of the presentation of these stimuli are critical for classical conditioning to take place (Mazur 2016; McGreevy and Boakes 2011).

Classical conditioning can also be undone through a process called extinction. Extinction happens when the conditioned stimulus is no longer followed by the unconditioned stimulus. After enough instances in which the conditioned stimulus is not reinforced, the learned behavior diminishes, and the conditioned stimulus loses its ability to predict the unconditioned stimulus. This process should not be confused with habituation, as the mechanisms and outcomes are distinct.

A well-known example of classical conditioning is Pavlov's experiments with dogs. In these experiments, Pavlov paired the sound of a bell (neutral stimulus) with the presentation of food (unconditioned stimulus). Initially the bell had no effect on the dogs, but the food naturally caused them to salivate (unconditioned response). As Pavlov repeatedly presented the food alongside the ringing bell, the dogs began to associate the two. Eventually, the sound of the bell alone was enough to make the dogs salivate, demonstrating that the bell had become a conditioned stimulus eliciting a conditioned response.

Example of Classical Conditioning with a Puppy, a Clicker, and Food

Consider a young puppy starting their training sessions. In these sessions the trainer uses a clicker, a small device that makes a clicking sound, followed by giving the puppy a treat. At first, the clicker's sound is a neutral stimulus to the puppy: it does not naturally elicit any specific response, as the puppy has no prior association with this sound.

Figure 3.1 Using classical conditioning, a clicker can be conditioned for use in marking behaviors as part of positive reinforcement training. Source: Duncan Andison/Adobe Stock Photos.
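The acquisition and extinction dynamics described above can be sketched numerically. The snippet below uses the Rescorla-Wagner update, a standard learning-theory model that is not part of this chapter; the learning rate and trial counts are arbitrary illustrative choices.

```python
# Illustrative sketch: acquisition and extinction of a conditioned response,
# modeled with the Rescorla-Wagner update. Parameter values are arbitrary.

def rescorla_wagner(v, learning_rate, us_present):
    """One conditioning trial: move associative strength v toward the
    outcome (1.0 if the US follows the CS, 0.0 if the US is omitted)."""
    target = 1.0 if us_present else 0.0
    return v + learning_rate * (target - v)

v = 0.0  # associative strength of the neutral stimulus (e.g., the click)

# Acquisition: the click is always followed by food (US present).
for trial in range(30):
    v = rescorla_wagner(v, 0.2, us_present=True)
print(f"after acquisition: {v:.2f}")  # approaches 1.0 -> strong conditioned response

# Extinction: the click is repeatedly presented without food.
for trial in range(30):
    v = rescorla_wagner(v, 0.2, us_present=False)
print(f"after extinction:  {v:.2f}")  # decays back toward 0.0
```

Note that in this model extinction is driven by the repeated omission of the predicted unconditioned stimulus, which is the distinction the text draws between extinction and habituation.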
As the training continues a pattern emerges: each time the trainer clicks the clicker, a treat is promptly given to the puppy. The treat acts as an unconditioned stimulus, since it naturally elicits responses from the puppy such as anticipation of food and salivation (the unconditioned response). After repeated pairings of the clicker's sound with the treat, the puppy begins to expect the treat as soon as they hear the click. The clicker's sound has now become a conditioned stimulus (Figure 3.1): it elicits a conditioned response in the puppy and can now be used to mark new and desirable behaviors.

Counterconditioning

Counterconditioning involves modifying an animal's emotional reaction to a stimulus, particularly transforming a negative emotion into a positive one (Mazur 2016; McGreevy and Boakes 2011). In other words, the goal of counterconditioning is to change an established conditioned emotional response (CER) to an alternative one. The process requires pairing the original conditioned stimulus, which initially elicits a negative emotional response (CER−), with a new unconditioned stimulus that triggers a strong, positive emotional response (CER+). This positive response should be incompatible with the original negative one. Over time, the association that forms between the original conditioned stimulus and the new unconditioned stimulus decreases the magnitude of the original conditioned response, and it may eventually eliminate the conditioned response entirely.

Example of Counterconditioning: A Dog Barking at the Door

Consider a dog named Spot who has a habit of barking loudly every time someone knocks on the door. In this scenario, the sound of the knock is the conditioned stimulus that triggers Spot's barking (the conditioned response), which likely reflects a mix of alertness, fear, anticipation, and anxiety.
To modify Spot's behavior through counterconditioning, the goal is to change his reaction from barking to a calmer, quieter behavior by associating the knocking with a piece of chicken, which Spot absolutely loves and naturally responds to with excitement and anticipation. The counterconditioning process begins by establishing a new routine: each time someone knocks on the door, instead of responding to Spot's barking, his caregiver immediately gives him a piece of chicken. The key is to do this consistently and promptly, so Spot starts to anticipate the chicken as soon as he hears the knock. With consistent repetition, Spot begins to form a new association: instead of barking, he starts to sit and wait for his piece of chicken whenever he hears someone knocking. The sound of the knock, which used to trigger barking, now predicts something enjoyable, leading to a change in Spot's behavior. Through repeated practice of this new routine, Spot's response to door knocks shifts dramatically: his initial conditioned response of barking is replaced with the more composed behavior of sitting and waiting for a treat. This change is a clear indication of successful counterconditioning, in which an undesirable behavior (barking) has been replaced with a more desirable one (sitting quietly and waiting for a treat).

Fear Conditioning

Fear conditioning is a particular type of classical conditioning in which an individual learns to associate a neutral stimulus with a fear-inducing stimulus, resulting in a fear response to the previously neutral stimulus. During this learning process, a neutral stimulus (e.g., a sound or visual cue) is paired with an aversive or fear-inducing stimulus (e.g., a loud noise or an electric shock). Over time, the neutral stimulus becomes a conditioned stimulus that triggers the fear response without the need for the aversive stimulus. One example of fear conditioning involves using a shock collar to stop barking.
The shock collar itself serves as the neutral stimulus. When the dog barks excessively (the undesired behavior), the caregiver activates the shock collar, delivering an electric shock to the dog as the aversive stimulus. This action is intended to deter the dog from barking. Through repeated experiences, the dog learns to associate the presence of the shock collar with the painful electric shocks they receive when they bark excessively. Over time, the shock collar becomes a conditioned stimulus that predicts the aversive stimulus, even when it is not actively delivering a shock. As a result of this fear conditioning, the dog may display signs of fear and anxiety when the caregiver attempts to put the shock collar on, or in a particular location where the shock had previously occurred (e.g., in the yard, in the front hall, on the deck). The dog might even show signs of fear while wearing the shock collar when it is not in use. This fear response reflects a learned association between the presence of the collar and the aversive experiences the dog has endured. This is why shock collars are considered inhumane and are never recommended for training or behavior modification (Blackwell and Casey 2006; China et al. 2020; Cooper et al. 2014; Fernandes et al. 2017; Masson et al. 2018; Overall 2007; Ziv 2017).

Operant Conditioning

Operant conditioning is a process in which an animal's voluntary actions control their learning experience. This type of conditioning is centered on the concept that the consequences of an animal's behavior determine how frequently that behavior is performed in the future. In other words, the behavior itself triggers a consequence, which then influences the probability of the behavior's recurrence (Mazur 2016; McGreevy and Boakes 2011). Operant conditioning involves both positive and negative forms of punishment and reinforcement, which serve to either increase or decrease the likelihood of a behavior.

B.F. Skinner's experiments with rats are classic examples of operant conditioning. In these experiments, a rat placed in a box with a lever learned that pressing the lever resulted in receiving food. This positive reinforcement (receiving food) increased the likelihood of the lever-pressing behavior. Conversely, when the rats scratched the walls of the box, they were subjected to a loud noise. This positive punishment (the introduction of an unpleasant stimulus) decreased the frequency of the wall-scratching behavior. Through these experiments, Skinner demonstrated the fundamental principles of operant conditioning: behaviors that are rewarded (reinforced) are more likely to be repeated, and behaviors that result in an unpleasant outcome (punishment) are less likely to occur again. This principle is a cornerstone of learning and is widely applied in various settings, including animal training.

The Four Contingencies of Operant Conditioning

Operant conditioning is based on four contingencies; in this context, "negative" refers to the removal of a stimulus and "positive" to the addition of a stimulus, not to "good" or "bad." See Table 3.1.

Punishment

Punishment describes methods used to decrease the frequency of a given behavior. Punishment is commonly thought of as "mean," but the term does not necessarily indicate that the methodology is strongly aversive. There are specific rules that must be followed for punishment to be effective, and it can have significant side effects (see "Effective Punishment").

Positive Punishment (P+)

Positive punishment occurs when something is added to decrease the frequency of a behavior. For instance, spraying a cat with water to stop them from jumping on the counter is positive punishment: the water spray (the added stimulus) is meant to decrease the cat's counter-jumping behavior.

Negative Punishment (P−)

Negative punishment happens when something is taken away to decrease a behavior's frequency; for example, withdrawing attention from a dog that jumps on someone entering the home.
In this case, the removal of attention (the subtracted stimulus) is meant to reduce the jumping behavior.

Table 3.1 The four contingencies of operant conditioning.

Effective Punishment

Effective punishment in training and behavior modification must follow specific rules and guidelines to be both ethical and maximally effective. In general, punishment is never recommended for behavior modification, but for punishment to be effective the following must be met: when punishment is used, the targeted behavior should decrease immediately; if the behavior persists after one or two instances of punishment, reconsider the approach, as further punishment will not be effective.

The goal of punishment is to decrease the frequency of a behavior by either adding or removing a stimulus. Positive punishment and negative reinforcement are often confused. While they may seem similar, the outcomes are different: positive punishment decreases behavior by adding an aversive stimulus, whereas negative reinforcement increases behavior through the removal of an unpleasant stimulus.

Reinforcement

Reinforcement is used to increase the frequency of a behavior. Reinforcement does not necessarily indicate that the method is "nice" or "kind."

Positive Reinforcement (R+)

Positive reinforcement involves adding something to increase the frequency of a behavior. For example, giving a dog a treat after they sit is positive reinforcement.

Negative Reinforcement (R−)

Negative reinforcement occurs when removing something increases the frequency of a behavior. For example, easing leash pressure when a dog stops pulling on the leash is negative reinforcement: the reduction of leash tension (the removed stimulus) encourages the dog to walk without pulling.

Escape Conditioning

Escape conditioning is a specific form of operant conditioning in which an animal learns to engage in particular behaviors to remove themselves from, or escape, an unpleasant situation.
Escape conditioning can progress into avoidance conditioning, in which the animal anticipates the unpleasant situation and proactively displays behaviors to avoid it entirely. Distinct from fear conditioning, this type of behavior is typically driven by negative reinforcement rather than positive punishment. In avoidance conditioning, the animal recognizes a specific stimulus as a predictor of an unpleasant experience and alters their behavior to prevent encountering that situation. For example, as noted by Mills (1997), a dog that strongly dislikes car rides might initially resist or struggle when being put into a car. Over time, this dog may learn to associate certain cues, such as the caregiver picking up car keys or putting on shoes, with the unpleasant experience of car rides. As a result, the dog might start to hide whenever these cues appear, in an attempt to avoid the car ride altogether.

Reinforcement Schedules

Reinforcement schedules are an essential concept in behavioral psychology, particularly in operant conditioning. These schedules determine how and when a response will be followed by a reinforcer and are used to shape and maintain behavior (Table 3.2). There are two primary types of reinforcement schedules: continuous and intermittent (or partial) (Lindsay 2013).

Table 3.2 Different reinforcement schedules for teaching and maintaining behavior.
| Contingency | Description | Example |
| --- | --- | --- |
| Positive reinforcement (R+) | Adding (+) something to increase the frequency of a behavior | Giving a treat to a dog after they sit |
| Negative reinforcement (R−) | Removing (−) something to increase the frequency of a behavior | Releasing leash pressure when a dog is in the heel position |
| Positive punishment (P+) | Adding (+) something to decrease the frequency of a behavior | Spraying water on a cat to stop them from scratching furniture |
| Negative punishment (P−) | Removing (−) something to decrease the frequency of a behavior | Turning away (removing attention) as a dog jumps up to discourage jumping |
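The four contingencies form a simple 2x2 grid: whether a stimulus is added or removed, crossed with whether the behavior's frequency increases or decreases. As an illustration only (the function and argument names below are ours, not from the text), this structure can be expressed in a few lines of code:

```python
# The 2x2 structure of operant conditioning contingencies.
# "Positive"/"negative" describe the stimulus change (added/removed),
# while reinforcement/punishment describe the effect on behavior frequency.

def contingency(stimulus: str, behavior_frequency: str) -> str:
    """stimulus: 'added' or 'removed'; behavior_frequency: 'increases' or 'decreases'."""
    kind = "reinforcement" if behavior_frequency == "increases" else "punishment"
    sign = "Positive" if stimulus == "added" else "Negative"
    return f"{sign} {kind}"

print(contingency("added", "increases"))    # treat after sitting
print(contingency("removed", "increases"))  # leash pressure released
print(contingency("added", "decreases"))    # water spray
print(contingency("removed", "decreases"))  # attention withdrawn
```

The point of the encoding is the one the text makes: "positive" and "negative" name the direction of the stimulus change, never whether the method is good or bad.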
| Reinforcement schedule | Description | Response rate | Extinction resistance |
| --- | --- | --- | --- |
| Continuous reinforcement | Behavior is reinforced every single time it occurs | High | Low |
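The low extinction resistance of continuous reinforcement can be illustrated with a small simulation. This is our own toy model, not from the text: it assumes an animal "notices" extinction once it goes longer without reinforcement than it ever did during training, so under a continuous schedule a single unrewarded response is already unusual, while under an intermittent (variable-ratio) schedule long dry spells are normal and responding persists far longer.

```python
import random

# Toy model of extinction resistance under different reinforcement schedules.
# Assumption (ours): responding stops once the current unreinforced run
# exceeds the longest unreinforced run experienced during training.

def longest_dry_spell(reinforce, n_trials=500):
    """Longest run of unreinforced responses seen during training."""
    longest = current = 0
    for _ in range(n_trials):
        if reinforce():
            current = 0
        else:
            current += 1
            longest = max(longest, current)
    return longest

random.seed(0)
continuous = longest_dry_spell(lambda: True)                       # every response rewarded
variable_ratio = longest_dry_spell(lambda: random.random() < 0.2)  # roughly 1 reward per 5 responses

# Responses emitted in extinction before the dry spell exceeds anything seen in training:
print("continuous schedule persists for", continuous + 1, "responses")
print("variable-ratio schedule persists for", variable_ratio + 1, "responses")
```

Under these assumptions the continuous schedule gives up after the very first unrewarded response, while the variable-ratio schedule keeps responding through a much longer dry spell, mirroring the low versus high extinction resistance contrasted in Table 3.2.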