
Choice and Matching

Chapter 10

Steven I. Dworkin, Ph.D.

Slide 2: Choice

• Can you think of a situation or behavior that does not involve choice?

Slide 3: Choice and Matching

• Concurrent schedule of reinforcement – the simultaneous presentation of two or more independent schedules, each of which leads to a reinforcer

• Abbreviated "conc", e.g.:
– conc VR 20 VR 50
– conc VI 30 VI 60

Slide 4: The Matching Law

• Herrnstein (1961)

• Concurrent VI schedules (e.g., VI 135" VI 270" – about 27 versus 13 reinforcers/hr)

Slide 5: The Matching Law

• Proportion of responses emitted on a particular schedule matches the proportion of reinforcers obtained on that schedule.

• Ra/(Ra + Rb) = SRa/(SRa + SRb)

Slide 6: Matching Law

• Data from a pigeon on a conc VI 30 VI 60

• Reinforcers: VI 30 = 119, VI 60 = 58

• Responses: VI 30 = 2800, VI 60 = 1450

• Proportion of reinforcers on VI 30: 119/177 = .67

• Proportion of responses on VI 30: 2800/4250 = .66
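The slide's arithmetic can be reproduced directly. Below is a minimal Python sketch (not part of the original slides) that computes the two proportions from the matching-law relation Ra/(Ra + Rb) = SRa/(SRa + SRb):

```python
def proportion(a, b):
    """Proportion of the total that falls on alternative A (the first argument)."""
    return a / (a + b)

# Pigeon data from the slide: conc VI 30 VI 60
reinforcers_vi30, reinforcers_vi60 = 119, 58
responses_vi30, responses_vi60 = 2800, 1450

# Matching law: the two proportions should be approximately equal.
print(f"Proportion of reinforcers on VI 30: {proportion(reinforcers_vi30, reinforcers_vi60):.2f}")  # 0.67
print(f"Proportion of responses on VI 30:   {proportion(responses_vi30, responses_vi60):.2f}")      # 0.66
```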

Slide 7: Matching

• Not just in the laboratory

• Not just rats and pigeons

Slide 8: Deviations from Matching

• Undermatching – the proportion of responses on the richer versus the poorer alternative is less different than matching predicts

• Occurs when there is little cost for switching from one alternative to the other

Slide 9: Deviations from Matching

• Overmatching – the proportion of responses on the richer versus the poorer alternative is more different than matching predicts

• Occurs when the cost of switching between alternatives is high

Slide 10: Deviations from Matching

• Bias – one alternative attracts a higher proportion of responses than would be predicted by matching, regardless of whether that alternative is the richer or poorer of the two alternatives.
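These three deviations are commonly summarized with the generalized matching law, which is not written out on these slides. The sketch below assumes the standard power-function form, in which a sensitivity exponent below 1 produces undermatching, an exponent above 1 produces overmatching, and a bias parameter shifts responding toward one alternative regardless of its richness:

```python
def predicted_response_ratio(sr_a, sr_b, sensitivity=1.0, bias=1.0):
    """Generalized matching law (assumed standard form):
    Ra/Rb = bias * (SRa/SRb) ** sensitivity."""
    return bias * (sr_a / sr_b) ** sensitivity

sr_a, sr_b = 119, 58  # reinforcers obtained on the richer and poorer alternatives

print(predicted_response_ratio(sr_a, sr_b))                   # strict matching: ~2.05
print(predicted_response_ratio(sr_a, sr_b, sensitivity=0.8))  # undermatching: ~1.78 (less extreme)
print(predicted_response_ratio(sr_a, sr_b, sensitivity=1.2))  # overmatching: ~2.37 (more extreme)
print(predicted_response_ratio(sr_a, sr_b, bias=1.5))         # bias toward A: ~3.08
```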

Slide 11: Deviations from Matching

Slide 12: Quality and Amount

• Matching also holds when the alternatives differ in the quality or amount of reinforcement, not just the rate

Slide 13: Application to Single Schedules

• As the relative reinforcement for an operant response increases, responding increases

• Context is important
– Outdated magazines in a doctor's office
– Covering JEABs when graduate applications are due
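The single-schedule application alluded to here is usually formalized as Herrnstein's hyperbola, which is not written out on the slide. A hedged sketch with hypothetical parameter values, where `re` stands for the reinforcement available from everything else in the context (the magazines, the JEABs):

```python
def herrnstein_hyperbola(r, k=100.0, re=20.0):
    """Response rate on a single schedule as a function of its reinforcement rate r.
    k  = maximum response rate (hypothetical value)
    re = reinforcement from all other sources in the context (hypothetical value)"""
    return k * r / (r + re)

# A richer context (larger re) pulls responding away from the target schedule.
print(herrnstein_hyperbola(60, re=20))   # sparse context: ~75 responses/min
print(herrnstein_hyperbola(60, re=120))  # rich context:   ~33 responses/min
```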

Slide 14: Melioration

• Melioration literally means "to make better"

• The distribution of behavior in a choice situation shifts toward alternatives that have a higher value, regardless of the effect on the overall amount of reinforcement

Slide 15: Problems with Melioration

• The tendency to move toward richer alternatives can result in a reduction in the overall amount of reinforcement obtained

• Conc VR 100 VI 30
– Allocation of study time to different courses

Slide 16: Problems with Melioration

• Overindulgence in a highly reinforcing alternative can often result in long-term habituation to that alternative, thus reducing its value as a reinforcer
– Too much of a good thing...
– Be careful of what you wish for...

Slide 17: Problems with Melioration

• Often the result of behavior being too strongly governed by immediate versus delayed consequences.

Slide 18: Optimization Theory

• Make decisions that maximize satisfaction

• Matching law – a description

• Optimization theory – an explanation

• Matching occurs when it is the optimal thing to do

• On concurrent VI schedules, matching maximizes the overall rate of reinforcement
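A rough back-of-the-envelope sketch of that last point (the numbers are mine, not the slides'): on concurrent VI schedules, nearly every scheduled reinforcer can be collected as long as each alternative is sampled occasionally, so distributing behavior across both alternatives yields more reinforcement than exclusive preference for the richer one:

```python
# Approximate reinforcers per hour available on a conc VI 30-s VI 60-s schedule.
vi30_rate = 3600 / 30   # ~120 reinforcers/hr available on the VI 30 alternative
vi60_rate = 3600 / 60   # ~60 reinforcers/hr available on the VI 60 alternative

exclusive_vi30 = vi30_rate           # respond only on VI 30: ~120/hr
sample_both = vi30_rate + vi60_rate  # distribute responding across both: up to ~180/hr

print(exclusive_vi30, sample_both)
```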

Slide 19: Optimization versus Matching

• Mazur (1981)
– Concurrent-chained schedules
– Matching over optimization

• Concurrent VI VR – matching

• Other studies – optimization

Slide 20: Molar versus Molecular Control

• Context

• History

• Bias

Slide 21: Momentary Maximization Theory

• Selection of the alternative with the highest value at the moment
– Size and quality of the reinforcer
– State of deprivation

• The momentary best choice is not always the best choice in the long run (self-control)

• Is there order in moment-to-moment patterns?

• Gambling experiment

• Data suggest an absence of momentary maximization if the number of responses is considered

• Maybe maximization if time is considered

Slide 22: Delay Reduction

• Choice related to reduction in delay to reinforcement

Slide 23: Self-Control Choices

• A small immediate reinforcer versus a larger delayed reinforcer
– Impulsivity versus self-control

Slide 24: Self-Control

• Controlling responses versus controlled responses (Skinner)

• Types of controlling responses
– Physical restraint
– Deprivation and satiation
– Doing something else
– Self-reinforcement and self-punishment

Slide 25: Self-Control

• Temporal issue
– Lack of self-control arises from the fact that our behavior is more heavily influenced by immediate consequences than by delayed consequences

            Immediate consequence    Delayed consequence
Quitting    Withdrawal               Improved health
Smoking     Nicotine high            Deterioration of health

Slide 26: Self-Control

• Self-control – preference for larger later reward

• Impulsiveness – preference for smaller sooner reward

Slide 27: Ainslie-Rachlin Model

• Preference for self-control versus impulsive choice shifts over time.

• The value of a reward is a hyperbolic function of its delay
– The value of the reward increases more sharply as the delay decreases and the reward becomes more imminent
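The hyperbolic value function behind the Ainslie-Rachlin model is not written out on these slides; a minimal sketch of the usual form, V = A / (1 + kD), with a hypothetical discounting parameter k, shows how value climbs most steeply as the delay approaches zero:

```python
def hyperbolic_value(amount, delay, k=1.0):
    """Hyperbolic discounting: value = amount / (1 + k * delay).
    k is a hypothetical discounting-rate parameter."""
    return amount / (1 + k * delay)

# Value of a 100-unit reward as it becomes more imminent (delay in arbitrary time units).
for delay in (10, 5, 2, 1, 0):
    print(delay, round(hyperbolic_value(100, delay), 1))
# 10 -> 9.1, 5 -> 16.7, 2 -> 33.3, 1 -> 50.0, 0 -> 100.0
```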

Slide 29: Which do you prefer?

• $500 now or $1,000 in two years

• $500 in four years or $2,000 in six years
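Plugging these two choices into a hyperbolic discount function with an assumed k of 1.0 per year (purely for illustration) shows the typical pattern: the smaller, sooner reward wins when it is immediate, while the larger, later reward wins when both options lie years in the future:

```python
def value(amount, delay_years, k=1.0):
    # Hyperbolic discounting with a hypothetical k of 1.0 per year.
    return amount / (1 + k * delay_years)

# Choice 1: $500 now versus $1,000 in two years
print(value(500, 0), value(1000, 2))   # 500.0 vs ~333.3 -> sooner, smaller reward preferred

# Choice 2: $500 in four years versus $2,000 in six years
print(value(500, 4), value(2000, 6))   # 100.0 vs ~285.7 -> later, larger reward preferred
```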

Slide 30: Changing the Shape of Delay

• Biological factors

• Behavioral disorders

• Age

• Drugs

• History of delayed rewards

• Availability of other reinforcers

• Chaining or setting up subgoals

Slide 31: Improving Self-Control

• Precommitment

• Self-reinforcement

• Punishment for impulsive option

Slide 32: Other Choice Situations

• Preference for variability
– Pigeons: a fixed delay has to be 3-4 secs longer to be preferred over a variable delay
– VR 60 over FR 30

Slide 33: Preference for Variable Delays

• Delay discounting
– The longer the delay, the less the value of the reinforcer
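A hedged sketch of why steep delay discounting favors variable delays (example values are mine, not the slides'): because the hyperbolic curve rises so sharply near zero delay, the occasional very short delay in a variable schedule boosts its average value above that of a fixed delay with the same mean:

```python
def hyperbolic_value(amount, delay, k=1.0):
    # Hyperbolic discounting with a hypothetical k.
    return amount / (1 + k * delay)

# Fixed 10-s delay versus a variable delay of 1 s or 19 s (same 10-s mean).
fixed = hyperbolic_value(100, 10)
variable = 0.5 * hyperbolic_value(100, 1) + 0.5 * hyperbolic_value(100, 19)

print(round(fixed, 1), round(variable, 1))  # 9.1 vs 27.5 -> the variable delay is worth more
```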

Slide 34: Tragedy of the Commons

• Freedom in a commons brings ruin to all.

• Precommitment

• Punishers more immediate than reinforcers
