Various Authors

Animal Behavior for Shelter Veterinarians and Staff



other behaviors get associated with the consequence instead.

      The problem of timing is a common one for pet owners. The following scenario might be familiar: many dog owners come home to find that their dog has rummaged through the trash. In an attempt to punish trash‐rummaging, the owner scolds the dog, perhaps by yelling or by confining the dog to a crate. The problem, though, is that the dog likely rummaged through the trash hours before the owner came home. So even though the dog was peacefully chewing on its bone when the owner returned, it experienced an aversive consequence. As a result, the scolding became associated with the appropriate behavior rather than with the trash‐rummaging the owner intended to punish. Timing, or more specifically immediacy, is crucial for the development of a behavior‐consequence association.

      The second major factor that determines the effectiveness of a reinforcer or punisher in establishing a new behavior or eliminating an unwanted one is how often the behavior is followed by the consequence. Formally, how often a consequence follows a behavior is called a schedule. If a consequence follows every instance of the behavior, the consequence is on a continuous schedule. In contrast, if a consequence does not follow every occurrence of the behavior, the consequence is on an intermittent schedule. For a strong association between a behavior and a consequence to develop, the consequence needs to follow the behavior every time it occurs. This is especially true when attempting to teach a new behavior with reinforcement or to reduce an unwanted behavior with punishment (Zimmerman and Ferster 1963).

      In fixed schedules, the number of responses needed to obtain reinforcement, or the amount of time that needs to pass, is the same every time. With fixed ratio schedules, the number of responses required for reinforcement stays the same after each delivery. The number of responses can be 1, 10, or more; regardless, the same number of responses is required each time for reinforcement to occur. For example, in scent detection, dogs might not be reinforced with the target scent until the 10th bag they sniff. With fixed interval schedules, the amount of time that must pass before a response is reinforced is the same across deliveries. Whether the interval is one minute or one hour, the same amount of time must pass before a response is reinforced. For example, a dog begging at the table will not be reinforced for the begging behavior until the owner finishes dinner and gives the dog a handout.

Reinforcement schedules: definitions and examples

      Fixed interval. Definition: reinforcement is delivered at a predictable time interval. Example: letting animals out in the play yard. Every morning at 9 a.m. the animal caregiver opens the enclosure door, but the animal's behavior of checking the door to go outside isn't reinforced until it checks the door after 9 a.m.

      Variable interval. Definition: a response is reinforced after an interval of time that varies but centers around some average amount of time. Example: animal feedings. The time of feeding may vary from day to day, but on average a caregiver provides food every eight hours. Therefore, the animal's bowl‐checking response will not be reinforced until an average of eight hours has passed.

      Fixed ratio. Definition: a response is reinforced only after a specified number of responses. Example: multiple repetitions. A trainer wants an animal to do multiple repetitions of the same behavior, so the trainer delivers reinforcement after every two correct responses.

      Variable ratio. Definition: a response is reinforced after an average number of responses. Example: opening the door. An animal might paw at the door several times to be let through, and the owner lets the animal in after the animal paws five times on average.
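The four schedule types above can be sketched as simple decision rules. This is a minimal, purely illustrative simulation (the function names and parameter values are mine, not from the text); it checks that a variable ratio 5 schedule really does deliver reinforcement about once every five responses on average.

```python
import random

def reinforce_fixed_ratio(response_count, ratio):
    """FR schedule: reinforce exactly every `ratio`-th response."""
    return response_count % ratio == 0

def reinforce_variable_ratio(mean_ratio):
    """VR schedule: each response is reinforced with probability
    1/mean_ratio, so the required count varies around that average."""
    return random.random() < 1 / mean_ratio

def reinforce_fixed_interval(seconds_since_last, interval):
    """FI schedule: a response pays off only once the fixed interval
    has elapsed since the last reinforcer."""
    return seconds_since_last >= interval

# Simulate 10,000 responses on a VR 5 schedule: the average number of
# responses per reinforcer should come out near 5.
random.seed(0)
reinforcers = sum(reinforce_variable_ratio(5) for _ in range(10_000))
print(10_000 / reinforcers)  # roughly 5 responses per reinforcer
```

The point of the simulation is the unpredictability: under the VR rule no single response can be ruled in or out for reinforcement, which is exactly why (as the text goes on to note) such schedules sustain steady responding.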

      Though intermittent schedules don’t work as well as continuous reinforcement for establishing a new behavior, they work really well in maintaining an already established behavior (Jenkins and Stanley 1950). Typically, after a dog is trained to sit, trainers reduce the number of reinforcers she receives for sitting. The trainer gradually transitions the continuous schedule of reinforcement to an intermittent schedule. As long as the dog receives a treat once in a while, she reliably sits on cue. Changing a continuous schedule of reinforcement to an intermittent one is often called “schedule thinning.” This procedure is beneficial for trainers because not only does it reduce the number of reinforcers needed to maintain behavior, but it also causes the animal to perform consistently. Intermittent schedules result in unpredictable deliveries of reinforcers that essentially teach the animal to be a devoted “gambler.” Without knowing when a response will be reinforced, the animal performs the behavior consistently and reliably! Based on laboratory research, once a behavior is maintained intermittently, it can be very hard to eliminate (Harper and McLean 1992).
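Schedule thinning, as described above, can be sketched as a short simulation. This is an illustrative toy model (the stage values and function name are assumptions, not from the text): the number of training trials per stage stays constant while the ratio requirement is stepped up, so reinforcers become progressively scarcer even though the animal keeps responding.

```python
def thin_schedule(stages=(1, 2, 3, 5), trials_per_stage=20):
    """Step a fixed-ratio requirement up across stages, starting from a
    continuous schedule (ratio 1). Returns (ratio, reinforcers delivered)
    for each stage."""
    log = []
    for ratio in stages:
        reinforced = sum(
            1 for trial in range(1, trials_per_stage + 1)
            if trial % ratio == 0  # FR delivery rule at this stage
        )
        log.append((ratio, reinforced))
    return log

# Same number of trials per stage, fewer and fewer treats.
print(thin_schedule())  # [(1, 20), (2, 10), (3, 6), (5, 4)]
```

In practice trainers thin gradually and watch for the behavior breaking down; the sketch only shows the arithmetic of why thinning saves reinforcers.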

      Training Dogs to Sit Using Variable Ratio Reinforcement

      An animal trainer is training dogs in the shelter to sit when someone walks by their kennel. The trainer decides to deliver food on a variable ratio 5 (written as VR 5). This means that on average, every fifth response will receive a food reward when someone walks by. The dog might receive a piece of food on the first response (sitting when the first person walks by), sixth response, second response, eighth response, fifth response,