1. T or F A lot of the first “real” scientific Psychology was done on learning.
2. T or F Psychologists draw inferences about learning from changes in observable behavior.
3. Learning may be defined as (circle one from each pair):
1a. A relatively permanent change in behavior
1b. An absolutely permanent change in behavior
2a. Acquired through inheritance
2b. Acquired through experience
3a. Can be attributed to illness, injury, or maturation
3b. Cannot be attributed to illness, injury, or maturation
4. The ability to observe behavior can depend upon which of the following? (choose 3)
a. Motivation
b. Maturity
c. Strength of the reinforcer
d. Context
e. Capability
f. The size of the bell used in conditioning
5. Pavlov started his research in what century? 1600s | 1700s | 1800s | 1900s | 2000s
6. Provide one example of how I said that Pavlov was a careful researcher.
7. How do organisms learn? (choose 4)
a. Habituation/Extinction
b. Neoclassic Conditioning
c. Laboratory Research
d. Classical Conditioning
e. Non-Classical Conditioning
f. Instrumental or operant conditioning
g. Operant Behavior
h. Cognitive learning (Observational Learning)
8. Circle the first step of Classical conditioning [“→” means elicited or evoked and “=” means paired]:
a. NS=US→UR
b. UR=US→CS
c. CS→UR
9. Another name for Classical conditioning is: ___________________________
10. Stimulus must be which of the following? (1 correct answer) Perceived | Sensed | Not an object | Brightly colored | Emotional
11. Reflex is a(n) __________ response to a particular stimulus.
12. Reflexes are made up of both a ______________ and a _________________.
Classical conditioning is not limited to just two procedures: (1) the pairing of a conditioned stimulus with an unconditioned stimulus and (2) generalization.
Classical conditioning can occur in another way: Higher-order conditioning.
Higher-order conditioning takes place when a neutral stimulus is paired with an existing conditioned stimulus, becomes associated with it, and gains the power to elicit the same conditioned response.
Higher-order conditioning can account for many of the positive and negative feelings toward stimuli that people associate with other people or situations.
spontaneous recovery: The reappearance of an extinguished response (in a weaker form) when an organism is exposed to the original conditioned stimulus following a rest period.
generalization: In classical conditioning, the tendency to make a conditioned response to a stimulus similar to the original conditioned stimulus.
discrimination: The learned ability to distinguish between similar stimuli so that the conditioned response occurs only to the original conditioned stimulus but not to similar stimuli.
higher-order conditioning: Conditioning that occurs when a neutral stimulus is paired with an existing conditioned stimulus, becomes associated with it, and gains the power to elicit the same conditioned response.
The ideal time between conditioned and unconditioned stimuli is about 1/2 second, but it varies according to the type of response being conditioned and the nature and intensity of both the conditioned and the unconditioned stimulus.
Some studies indicate that the age of the subject may also be a variable affecting the optimal time interval.
If the conditioned stimulus occurs too long before the unconditioned stimulus, an association will not form.
The one notable exception to this general principle is conditioning of taste aversions.
The consequence, or effect, of a response will determine whether the tendency to respond in the same way in the future will be strengthened or weakened.
Responses closely followed by satisfying consequences are more likely to be repeated.
Connections between a stimulus and a response will be strengthened if the response is followed by a satisfying consequence and weakened if the response is followed by discomfort.
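This strengthen/weaken rule can be sketched as a toy simulation. The response names, starting strengths, and learning rate below are all illustrative assumptions, not from the text:

```python
def law_of_effect(trials=20, lr=0.2):
    """Toy sketch of the strengthen/weaken rule above: each connection
    strength is nudged up after a satisfying consequence and nudged
    down after discomfort (all numbers are illustrative)."""
    # hypothetical responses: one always followed by food, one by discomfort
    strength = {"press_lever": 0.5, "bite_bars": 0.5}
    for _ in range(trials):
        # satisfying consequence: strengthen the stimulus-response connection
        strength["press_lever"] += lr * (1 - strength["press_lever"])
        # discomfort: weaken the connection
        strength["bite_bars"] -= lr * strength["bite_bars"]
    return strength

print(law_of_effect())
```

After 20 trials the rewarded response's strength climbs toward 1 while the punished response's decays toward 0, mirroring the strengthened/weakened tendencies described above.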
Operant vs. Classical Conditioning

In classical conditioning:
The organism does not learn a new response; it learns to make an old or existing response to a new stimulus.
The organism cannot help but respond in expected ways; classically conditioned responses are involuntary or reflexive.
The process begins with a stimulus that evokes a reflexive response.

In operant conditioning:
The organism learns a new response and learns to apply voluntary responses.
The response comes first; the consequence that follows tends to modify this response in the future.
Consequences of behavior are manipulated to increase or decrease response frequency or to shape an entirely new response.
Behavior that is reinforced tends to be repeated.
The process does not begin with a stimulus that elicits a response.
A reinforcer is anything that strengthens or increases the probability of the response it follows. Behavior that is ignored or punished is less likely to be repeated.
shaping: An operant conditioning technique that consists of gradually molding a desired behavior (response) by reinforcing responses that become progressively closer to it.
Skinner box: A soundproof chamber with a device for delivering food and detecting behavior - usually automatically.
B. F. Skinner developed this technique, which is particularly effective in conditioning complex behaviors. Process:
Don’t wait for the desired response to occur and then reinforce it.
Reinforce any movement in the direction of the desired response.
Gradually guide responses closer and closer to the goal.
Skinner designed a soundproof apparatus, commonly called a Skinner box, with which he conducted his experiments in operant conditioning. One type of box is equipped with a lever, or bar, that a rat presses to gain a reward of food pellets or water from a dispenser. A record of the animal’s bar-pressing is registered on a device called a cumulative recorder, also invented by Skinner. Through the use of shaping, a rat in a Skinner box is conditioned to press a bar for rewards. It may be rewarded first for simply turning toward the bar. The next reward comes only when the rat moves closer to the bar. Each step closer to the bar is rewarded. Next the rat must touch the bar to receive a reward; finally, it is rewarded only when it presses the bar.
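The successive-approximation procedure described above can be sketched as a toy simulation. The random-walk "rat," the numeric goal, and all parameters are illustrative assumptions, not part of Skinner's procedure:

```python
import random

def shape(goal=0.0, start=10.0, steps=2000, seed=1):
    """Toy sketch of shaping: reinforce only responses that come
    closer to the goal than any earlier response, so behavior is
    gradually guided toward it (all numbers are illustrative)."""
    random.seed(seed)
    pos = start
    criterion = abs(start - goal)  # how close the rat must get to earn a reward
    rewards = 0
    for _ in range(steps):
        trial = pos + random.uniform(-1, 1)   # a random movement
        if abs(trial - goal) < criterion:     # closer than ever: reinforce
            pos = trial                       # reinforced movement is kept
            criterion = abs(pos - goal)       # tighten the criterion
            rewards += 1
        # unreinforced movements are abandoned (pos is unchanged)
    return pos, rewards

final_pos, rewards = shape()
print(final_pos, rewards)
```

Just as with the rat and the bar, each reward requires a step closer than the last, so the simulated behavior converges on the goal through many small reinforced approximations.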
One pleasant Saturday afternoon I surveyed my supply of dry pellets and, appealing to certain elemental theorems in arithmetic, deduced that unless I spent the rest of that afternoon and evening at the pill machine, the supply would be exhausted by ten-thirty Monday morning. Since I do not wish to deprecate the hypothetico-deductive method, I am glad to testify here to its usefulness. It led me to apply our second principle of unformalized scientific method and to ask myself why every press of the lever had to be reinforced. I was not then aware of what had happened at the Brown Laboratories, as Harold Schlosberg later told the story. A graduate student had been given the task of running a cat through a difficult discrimination experiment. One Sunday, the student found the supply of cat food exhausted. The stores were closed, and so, with a beautiful faith in the frequency-theory of learning, he ran the cat as usual and took it back to its living cage unrewarded. Schlosberg reports that the cat howled its protest continuously for nearly forty-eight hours. Unaware of this, I decided to reinforce a response only once every minute and to allow all other responses to go unreinforced. There were two results: (a) my supply of pellets lasted almost indefinitely, and (b) each rat stabilized at a fairly constant rate of responding. Now, a steady state was something I was familiar with from physical chemistry, and I therefore embarked upon the study of periodic reinforcement. I soon found that the constant rate at which the rat stabilized depended upon how hungry it was. Hungry rat, high rate; less hungry rat, lower rate. At that time I was bothered by the practical problem of controlling food deprivation. I was working half time at the Medical School (on chronaxie of subordination) and could not maintain a good schedule in working with the rats.
The rate of responding under periodic reinforcement suggested a scheme for keeping a rat at a constant level of deprivation. The argument went like this: Suppose you reinforce the rat, not at the end of a given period, but when it has completed the number of responses ordinarily emitted in that period. And suppose you use substantial pellets of food and give the rat continuous access to the lever. Except for periods when the rat sleeps, it should operate the lever at a constant rate around the clock. For, whenever it grows hungrier, it will work faster, get food faster, and become less hungry, while whenever it grows slightly less hungry, it will respond at a lower rate, get less food, and grow hungrier. By setting the reinforcement at a given number of responses, it should even be possible to hold the rat at any given level of deprivation. I visualized a machine with a dial which one could set to make available, at any time of day or night, a rat in a given state of deprivation. Of course, nothing of the sort happens. This is fixed-ratio rather than fixed-interval reinforcement and, as I soon found out, it produces a very different type of performance. This is an example of a fifth unformalized principle of scientific practice, but one which has at least been named. Walter Cannon described it with a word invented by Horace Walpole: serendipity, the art of finding one thing while looking for something else.
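Skinner's argument in this passage is a negative-feedback loop, which can be sketched as a toy simulation. The hunger-proportional response rate and every parameter value below are illustrative assumptions, not from the text:

```python
def feedback_loop(hunger=5.0, steps=200, pellet=1.0, metabolism=0.1, ratio=10):
    """Toy sketch of the argument above: reinforcement delivered after a
    fixed number of responses, with response rate assumed proportional
    to hunger, should make hunger self-stabilize (illustrative numbers)."""
    responses = 0.0
    history = []
    for _ in range(steps):
        rate = max(hunger, 0.0)       # hungrier rat -> higher response rate
        responses += rate             # lever presses emitted this time step
        while responses >= ratio:     # every `ratio` presses earns a pellet
            responses -= ratio
            hunger -= pellet          # food makes the rat less hungry
        hunger += metabolism          # deprivation grows between meals
        history.append(hunger)
    return history

h = feedback_loop()
print(h[-1])
```

In the sketch, a hungrier "rat" works faster and earns food sooner, while a less hungry one slows down and earns less, so hunger settles near a constant level instead of drifting; as the passage notes, a real fixed-ratio schedule behaves quite differently.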
Responses followed by reinforcers tend to be repeated and responses no longer followed by reinforcers will occur less and less frequently and eventually die out.
In operant conditioning, extinction occurs when reinforcers are withheld.
Spontaneous recovery also occurs in operant conditioning.
Generalization occurs in operant conditioning. Generalization occurs when a response to a reinforced stimulus carries over to nonreinforced stimuli that have common characteristics.
Discrimination in operant conditioning involves learning to distinguish between a stimulus that has been reinforced and other stimuli that may be very similar.
Discrimination is learned when the response to the original stimulus is reinforced but responses to similar stimuli are not reinforced.