Wednesday, August 19, 2020
Instrumental Conditioning in Psychology
Another Term for Operant Conditioning
By Kendra Cherry, MS | Updated on February 24, 2020

Instrumental conditioning is another term for operant conditioning, a learning process first described by B. F. Skinner. In instrumental conditioning, reinforcement and punishment are used to increase or decrease the probability that a behavior will occur again in the future.

Examples of Operant Conditioning

For example, if a student is rewarded with praise every time she raises her hand in class, she becomes more likely to raise her hand again in the future. If she is also scolded when she speaks out of turn, she becomes less likely to interrupt the class. In these examples, the teacher uses reinforcement to strengthen the hand-raising behavior and punishment to weaken the talking-out-of-turn behavior.

Instrumental conditioning is often used in animal training as well. For example, training a dog to shake hands would involve offering a reward every time the desired behavior occurs.

History of Operant Conditioning

Psychologist E. L. Thorndike was one of the first to observe the impact of reinforcement in his puzzle box experiments with cats. During these experiments, Thorndike observed a learning process that he referred to as "trial-and-error" learning. The experiments involved placing a hungry cat in a puzzle box; in order to free itself, the cat had to figure out how to escape. Thorndike then noted how long it took the cats to free themselves on each experimental trial.

Initially, the cats engaged in ineffective escape methods, scratching and digging at the sides or top of the box. Eventually, trial and error led the cats to successfully work the escape mechanism. After each successive trial, the cats engaged less and less in the ineffective escape behaviors and performed the correct escape actions more quickly.

Thorndike referred to his observations as the Law of Effect: the strength of a response increases when it is immediately followed by a satisfier (a reinforcer), while actions that are followed by unpleasant effects are more likely to be weakened. In Thorndike's puzzle box experiments, escaping the box was the satisfier. Every time the cats successfully escaped the box, the behavior that immediately preceded the escape was reinforced and strengthened.

Thorndike's work had a tremendous effect on B. F. Skinner's later research on operant conditioning. Skinner even created his own version of Thorndike's puzzle boxes, which he referred to as an operant chamber, also known as a Skinner box.

How Operant Conditioning Works

Skinner identified two key types of behaviors. The first type is respondent behaviors: actions that occur reflexively, without any learning. If you touch something hot, you immediately draw your hand back in response. Classical conditioning focuses on these respondent behaviors. In Pavlov's classic experiments with dogs, salivating at the presentation of food was the respondent behavior.
By forming an association between the sound of a bell and the presentation of food, however, Pavlov was able to train dogs to salivate at the sound of the bell alone. Skinner realized that while classical conditioning could explain how respondent behaviors lead to learning, it could not account for every type of learning. Instead, he suggested that it is the consequences of voluntary actions that lead to the greatest amount of learning.

The second type of behaviors is what Skinner referred to as operant behaviors. He defined these as any voluntary behavior that acts on the environment to create a response. These are the voluntary behaviors that are under our conscious control, and they are also actions that can be learned. The consequences of our actions play an important role in the learning process.

Reinforcement and Punishment

Skinner identified two key aspects of the operant conditioning process: reinforcement serves to increase a behavior, while punishment serves to decrease it. There are also two different types of reinforcement and two different types of punishment.

Positive reinforcement involves presenting a favorable outcome, such as giving a child a treat after she cleans her room. Negative reinforcement involves the removal of an unpleasant stimulus, like telling a child that if she eats all her potatoes, she won't have to eat her broccoli. Since the child considers broccoli an unpleasant consequence, and eating the potatoes leads to the removal of that undesirable consequence, eating the potatoes is negatively reinforced.

Positive punishment means applying an unpleasant event after a behavior; spanking is a common example. This type of punishment is often referred to as punishment by application: a negative consequence is directly applied to reduce the unwanted behavior.

Negative punishment involves taking away something pleasant after a behavior occurs. For example, if a child fails to clean her room, her parents might tell her that she cannot go to the mall with her friends. Taking away the desirable activity acts as a negative punisher on the preceding behavior.