PSY322-Operant Conditioning

From PsychWiki - A Collaborative Psychology Wiki


Operant conditioning is defined as the study of reversible behavior maintained by schedules of reinforcement (Cerruti & Staddon, 2003). How does one distinguish operant conditioning from classical conditioning? In classical conditioning, the presentation of the two stimuli, the conditioned stimulus (CS) and the unconditioned stimulus (US), is determined by the experimenter and is completely independent of the subject's behavior, whereas in operant conditioning the organism's behavior is instrumental in bringing about the occurrence of the US (Williams, 1973). In operant learning, a stimulus is a reinforcer if it increases the frequency of occurrence of the operant (for example, giving meat powder to a dog after it sits up or speaks increases the occurrence of that response). When a particular stimulus (for instance, a shock) leads to a reduction in the frequency of the operant, it is termed an aversive or punishing stimulus (Williams, 1973).

For this to function, reinforcement is delivered according to schedules. A schedule describes the conditions under which a reinforcing event is delivered following, and contingent upon, the occurrence of the recorded response (Henton & Iversen, 1978). Reinforcement is made available to the subject only some of the time and according to certain rules, and these rules define the schedule. Different schedules give rise to characteristically different patterns of operant behavior. The simplest are:

Fixed ratio (FR): every nth response is reinforced; the ratio, in other words, is the ratio of responses to reinforcements.

Fixed interval (FI): a reinforcement becomes available after a fixed period of time following the previous reinforcement.

Variable interval (VI): a reinforcement is made available at variable intervals following a reinforcement, so reinforcement could become available at any time and there is no cue that tells the subject when it is available (Mook, 2004).

Like any other experiment, there are variables to consider. The dependent variables in operant conditioning are the rate, duration, force, and latency of the recorded response (Henton & Iversen, 1978). These measures also differ between males and females: sex differences in operant conditioning are at least in part attributable to sex differences in performance (Dalla & Shors, 2009). Female rats respond actively to aversive stimulation such as a footshock, whereas male rats show behavioral inhibition, reacting passively and freezing (Dalla & Shors, 2009). This, however, may be because female rats are far more active than male rats.
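The FR and FI rules above can be sketched as a small simulation (a toy illustration of the schedule definitions only; the function name and the event format are my own, not drawn from the cited sources):

```python
def run_schedule(schedule, param, response_times):
    """Count reinforcers earned for a stream of response times (seconds).

    schedule: "FR" reinforces every `param`-th response;
              "FI" reinforces the first response occurring at least
              `param` seconds after the previous reinforcement.
    A VI schedule would look like FI with the interval redrawn at
    random after each reinforcement.
    """
    reinforcers = 0
    responses_since = 0        # responses since last reinforcer (FR)
    last_reinforcement = 0.0   # time of last reinforcer (FI)
    for t in response_times:
        responses_since += 1
        if schedule == "FR" and responses_since >= param:
            reinforcers += 1
            responses_since = 0
        elif schedule == "FI" and t - last_reinforcement >= param:
            reinforcers += 1
            last_reinforcement = t
    return reinforcers

# One response per second for a minute:
times = [float(s) for s in range(1, 61)]
print(run_schedule("FR", 10, times))  # FR 10 -> 6 reinforcers
print(run_schedule("FI", 15, times))  # FI 15 -> 4 reinforcers
```

Note how the same response stream earns different numbers of reinforcers under the two schedules, which is why the schedules produce characteristically different response patterns.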

Example – Research

To further illustrate operant conditioning, Spira and Edelstein (2007) conducted a study of older adults with Alzheimer's disease (AD). They predicted that those with AD would acquire the experimental task and that they would demonstrate less sensitivity to changes in schedules of reinforcement than participants without AD. The study was designed to evaluate the sensitivity of participants' responding to transitions between fixed-ratio (FR), fixed-interval (FI), and extinction (EXT) schedules. Experimental sessions began when subjects were cued that they could begin button pressing and the backlit button on the experimental console was illuminated; sessions ended when the light behind the button was turned off. Button pressing was reinforced by the delivery of nickels according to an FR or an FI schedule. If participants did not begin pressing the button independently within 1 min, the experimenter manually placed their hands on the manipulandum and completed a response. In addition, verbal prompts were provided to increase the likelihood that subjects contacted the programmed contingencies (e.g., "Please press the button"). The experimenter gradually increased the number of responses required for reinforcement, with the goal of obtaining a mean interreinforcement interval (IRI) of approximately 5 s by the time responding stabilized. The FI condition began after responding stabilized; the length of the FI during this condition was programmed to equal the mean IRI obtained during stable responding in the FR condition. The EXT condition began when responding stabilized and continued until the mean response rate over two consecutive blocks fell to 25% or less of the mean response rate during stable blocks of the FI condition. The results of the experiment confirmed the hypothesis. Comparison participants performed in the normal range on all of these measures; however, two participants with AD scored above the clinical cutoff on the Geriatric Depression Scale.
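The study's stopping rule for the EXT condition (mean rate over two consecutive blocks at or below 25% of the stable FI rate) can be written as a short check. This is a sketch of that criterion only; the function name and the rates in the usage example are hypothetical, not data from the study:

```python
def extinction_complete(ext_block_rates, stable_fi_rates):
    """Return True once the mean response rate over the last two
    consecutive extinction blocks has fallen to 25% or less of the
    mean response rate during stable FI blocks."""
    if len(ext_block_rates) < 2:
        return False
    criterion = 0.25 * (sum(stable_fi_rates) / len(stable_fi_rates))
    last_two = ext_block_rates[-2:]
    return sum(last_two) / 2 <= criterion

# Suppose stable FI responding averaged 40 responses/min and
# responding declines across extinction blocks:
fi_blocks = [38, 42, 40]                          # criterion = 10
print(extinction_complete([30, 20], fi_blocks))     # False (mean 25)
print(extinction_complete([30, 12, 8], fi_blocks))  # True  (mean 10)
```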

Example – Real Life

Over the past several months, my dog had been jumping into the arms of people who entered my home. To stop him, I used a small shock device (non-lethal; it only startled him) as a punishing stimulus. The shock was delivered immediately after he jumped toward a person, so that he would associate the shock with the leaping. To make this work faster, I would invite friends over and let my dog do what naturally came to mind, which was jumping on them; each jump was followed by the shock. After several days or weeks (I am not sure how long it took), my dog finally began associating the punishment with his behavior. This inhibited his desire to jump and gradually reduced the behavior. Since there was less jumping, there was naturally less shocking as well. With the punishment no longer being delivered, the jumping behavior slowly reappeared, a recovery effect analogous to extinction, in which a previously reinforced behavior fades when its reinforcer is withheld. Once the shock contingency was reestablished, he again associated the jumping with the shock and completely stopped jumping.

References

Cerruti, D. T., & Staddon, J. E. R. (2003). Operant Conditioning. Annual Review of Psychology, 54, 115-144.

Dalla, C., & Shors, T. J. (2009). Sex differences in learning processes of classical and operant conditioning. Physiology & Behavior, 97(2), 229-238.

Davisson, M. T., Schmidt, C., & Wenger, G. R. (2004). Operant Conditioning in the Ts65Dn mouse: Learning. Behavior Genetics, 34(1), 105-119.

Henton, W. W., & Iversen, I. H. (1978). Classical Conditioning and Operant Conditioning. New York: Springer-Verlag.

Mook, D. G. (2004). Classic Experiments in Psychology. Westport, CT: Greenwood Publishing Group.

Williams, J. L. (1973). Operant Conditioning: Procedures for Changing Behavior. Monterey, CA: Brooks/Cole Publishing Company.






◄ Back to Fall 2009 PSY 322 Assignment page
