
Dynamic Quantum Decision Models

Jennifer S. Trueblood
University of California, Irvine


Outline

1. Disjunction Effect

2. Comparing Quantum and Markov Models with the Prisoner's Dilemma Game


Disjunction Effect


Savage’s Sure Thing Principle

• Suppose

• when the state of the world S takes one value, you prefer action A over B

• when S takes the other value, you also prefer action A over B

• Therefore you should prefer A over B even when S is unknown

• People violate the Sure Thing Principle (Tversky & Shafir, 1992)


Disjunction Effect using Tversky & Shafir (1992) Gambling Paradigm

• Chance to play the following gamble twice:

• Even chance to win $250 or lose $100

• Condition Win:

• Subjects told ‘Suppose you won the first play’

• Result: 69% choose to gamble

• Condition Lost:

• Subjects told ‘Suppose you lost the first play’

• Result: 59% choose to gamble

• Condition Unknown:

• Subjects told: ‘Don’t know if you won or lost’

• Result: 35% choose to gamble


Failure of a 2-D Markov Model

Law of Total Probability:

p(G|U) = p(W|U)·p(G|W) + p(L|U)·p(G|L)

The law implies that p(G|U) must lie between the two known conditions:

p(G|W) = 0.69 > p(G|U) > p(G|L) = 0.59

But Tversky and Shafir (1992) found that p(G|U) = 0.35 < p(G|L) = 0.59 < p(G|W) = 0.69, violating the law of total probability
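A quick worked check with the numbers above makes the constraint explicit: whatever weight p(W|U) the unknown condition places on winning, the mixture stays between the two known conditions.

```latex
\begin{aligned}
p(G\mid U) &= p(W\mid U)\,p(G\mid W) + p(L\mid U)\,p(G\mid L)\\
           &= p(W\mid U)\cdot 0.69 + \bigl(1 - p(W\mid U)\bigr)\cdot 0.59
           \;\in\; [0.59,\ 0.69] \quad \text{for every } p(W\mid U)\in[0,1]
\end{aligned}
```

So no choice of p(W|U) can reproduce the observed value of 0.35.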


2-D Quantum Model

Law of Total Amplitude:

p(G|U) = |⟨W|U⟩⟨G|W⟩ + ⟨L|U⟩⟨G|L⟩|²

where ⟨L|U⟩ is the amplitude for transitioning to the "lose" state from the "unknown" state (and similarly for the other inner products)


Quantum Model Account of the Violation of the Sure Thing Principle

p(G|U) = |⟨W|U⟩⟨G|W⟩ + ⟨L|U⟩⟨G|L⟩|²

= |⟨W|U⟩|²·|⟨G|W⟩|² + |⟨L|U⟩|²·|⟨G|L⟩|² + Int

Int = 2·Re[⟨W|U⟩⟨G|W⟩·(⟨L|U⟩⟨G|L⟩)*]   (* denotes complex conjugation)

To account for Tversky and Shafir (1992), we require Int < 0
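A minimal numerical sketch (not from the slides) of how a relative phase between the two paths produces a negative interference term; the amplitude values and the phase of 2.0 radians below are illustrative assumptions, not fitted quantities.

```python
import numpy as np

# Amplitudes for resolving the unknown first play into "win" or "lose"
# (equal chance), and for choosing to gamble given each outcome.
a_WU = np.sqrt(0.5)
a_LU = np.sqrt(0.5)
a_GW = np.sqrt(0.69)                    # |<G|W>|^2 = 0.69, as observed
a_GL = np.sqrt(0.59) * np.exp(2.0j)     # |<G|L>|^2 = 0.59, with an arbitrary phase

# Law of total amplitude: add the path amplitudes first, then square.
p_G_unknown = abs(a_WU * a_GW + a_LU * a_GL) ** 2

# Law of total probability (the classical mixture), for comparison.
p_G_classical = abs(a_WU)**2 * abs(a_GW)**2 + abs(a_LU)**2 * abs(a_GL)**2

print(round(p_G_classical, 3), round(p_G_unknown, 3),
      round(p_G_unknown - p_G_classical, 3))
# ~0.64 vs ~0.37: the negative interference term pulls p(G|U) below both
# known conditions, in the direction of the 0.35 that Tversky & Shafir observed.
```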


Tversky and Shafir’s Intuition?

• If you win on first play, you play again because you have extra “house” money

• If you lose on first play, you play again because you need to make up for your losses

• If you don’t know, these two reasons interfere, leaving you without any reason coming to mind


Failure of 2-D Quantum Model!

• The quantum model must satisfy double stochasticity

• In particular:

• |⟨G|W⟩|² + |⟨G|L⟩|² = 1

• But Tversky & Shafir found that

• p(G|W) = 0.69 and p(G|L) = 0.59, which sum to 1.28

• This violates double stochasticity!


2-D Transition Matrix

General 2-D transition matrix

• Columns of T must sum to 1
• Rows of T do not have to sum to 1


Markov Process

• Obeys the law of total probability, but allows for a general transition matrix


Quantum Process

• Obeys the law of total amplitude, not the law of total probability. But U must transform a unit-length vector Ψ(0) into another unit-length vector Ψ(t)
• To preserve lengths, U must be unitary

$$\begin{bmatrix} \langle N|S\rangle \\ \langle G|S\rangle \end{bmatrix}
= \begin{bmatrix} \langle N|W\rangle & \langle N|L\rangle \\ \langle G|W\rangle & \langle G|L\rangle \end{bmatrix}
\cdot \begin{bmatrix} \langle W|S\rangle \\ \langle L|S\rangle \end{bmatrix}
= \begin{bmatrix} \langle N|W\rangle\langle W|S\rangle + \langle N|L\rangle\langle L|S\rangle \\ \langle G|W\rangle\langle W|S\rangle + \langle G|L\rangle\langle L|S\rangle \end{bmatrix}$$


Quantum Unitary Matrix

Unitary matrix U → transition matrix T, with T_ij = |U_ij|²

• T must be doubly stochastic: both rows and columns of T must sum to unity
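A small sketch contrasting the two kinds of transition matrices; the Markov entries are taken from the gambling data above, while the rotation angle for the unitary is an arbitrary illustrative choice.

```python
import numpy as np

# General Markov transition matrix (columns = given win / given lose,
# rows = gamble / not gamble): columns sum to 1, rows need not.
T_markov = np.array([[0.69, 0.59],
                     [0.31, 0.41]])
print(T_markov.sum(axis=0))   # [1. 1.]      columns sum to 1
print(T_markov.sum(axis=1))   # [1.28 0.72]  rows do not -> not doubly stochastic

# A quantum transition matrix is built from the squared moduli of a unitary U,
# so both its rows and columns must sum to 1 (double stochasticity).
theta = 0.3                                  # arbitrary rotation angle
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T_quantum = np.abs(U) ** 2
print(T_quantum.sum(axis=0), T_quantum.sum(axis=1))   # all ones
```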


Disjunction Effect using the Prisoner's Dilemma Game (Shafir & Tversky, 1992)


• Condition 1: You know the other defected, and now you must decide whether to defect or cooperate

• Condition 2: You know the other cooperated, and you must decide whether to defect or cooperate

• Condition 3: You do not know, and you must decide whether to defect or cooperate



Results from 4 Experiments (entries show % choosing to defect)

Study                        Known to defect   Known to cooperate   Unknown
Shafir & Tversky (1992)            97                  84              63
Croson (1999)                      67                  32              30
Li & Taplin (2002)                 83                  66              60
Busemeyer et al. (2006)            91                  84              66

Violates the law of total probability

Violates the law of double stochasticity


Another Failure: Both 2-D Models fail to explain PD Game results

• The Markov model fails because the results once again violate the law of total probability

• The quantum model fails because the results once again violate the law of double stochasticity


Compatible vs. Incompatible Measures

• The failed QP model assumes beliefs and actions are incompatible

• Previously we assumed that beliefs and actions were represented by different bases within the same 2-D vector space

• Now we need to switch to a compatible representation which requires a 4-D space.


Inference-Action State Space

4 dimensional space


Classic Events

Suppose:
Observe start at t=0 in state I1A1
Do not observe during t=1
Observe end at t=2 in state I2A2

Classic Events:
I1A1 ➝ I1A1 ➝ I2A2  or
I1A1 ➝ I2A2 ➝ I2A2  or
I1A1 ➝ I2A1 ➝ I2A2  or
I1A1 ➝ I1A2 ➝ I2A2

These 4 are the only possibilities in 2 steps; we just don’t know which is true


Quantum Events

Suppose:
Observe start at t=0 in state I1A1
Do not observe during t=1
Observe end at t=2 in state I2A2

We cannot say there are only 4 possible ways to get there; at t=1, the state is a superposition of all four; there is deeper uncertainty


Compare 4-D Markov and Quantum Models for PD game


Markov Model Assumption 1

Four basis states: {|DD⟩, |DC⟩, |CD⟩, |CC⟩}
e.g. |DC⟩ ➝ you infer that the opponent will defect but you decide to cooperate

e.g. ΨDC = initial probability that the Markov system starts in state |DC⟩

∑ᵢ Ψᵢ = 1


Initial inferences affected by prior information (Markov)

Condition 1: Known Defect

Condition 2: Known Coop

Condition 3: Unknown

ΨU = 0.5·ΨD + 0.5·ΨC


Quantum Model Assumption 1

Four basis states: {|DD⟩, |DC⟩, |CD⟩, |CC⟩}
e.g. |DC⟩ ➝ you infer that the opponent will defect but you decide to cooperate

e.g. ΨDC = initial probability amplitude that the quantum system starts in state |DC⟩

Probability = |ΨDC|²

‖Ψ‖² = 1


Initial inferences affected by prior information (Quantum)

Condition 1: Known Defect

Condition 2: Known Coop

Condition 3: Unknown

ΨU = √0.5·ΨD + √0.5·ΨC


Markov Model Assumption 2: Strategy Selection


Strategies affected by game payoffs and processing time

dΨ(t)/dt = K·Ψ(t) (Kolmogorov Forward Equation)


Intensity Matrix

K = K_A + K_B

$$K_A = \begin{bmatrix} K_{Ad} & 0 \\ 0 & K_{Ac} \end{bmatrix},
\qquad
K_{Ai} = \begin{bmatrix} -1 & \mu_i \\ 1 & -\mu_i \end{bmatrix}$$

$$K_B = \begin{bmatrix} -1 & 0 & +\gamma & 0 \\ 0 & 0 & 0 & 0 \\ +1 & 0 & -\gamma & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
+ \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & -\gamma & 0 & +1 \\ 0 & 0 & 0 & 0 \\ 0 & +\gamma & 0 & -1 \end{bmatrix}$$

µi depends on the pay-offs associated with the different actions

K_A transforms the state probabilities to favor either defection or cooperation, depending on the pay-offs

K_B implements cognitive dissonance: beliefs change to become consistent with actions
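A runnable sketch of the Markov dynamics using the intensity matrix as reconstructed above; the values of µ_d, µ_c, γ, the processing time, and the uniform initial state are illustrative assumptions, not fitted quantities.

```python
import numpy as np
from scipy.linalg import expm

mu_d, mu_c, gamma, t = 2.0, 0.5, 2.0, np.pi / 2   # illustrative parameter values

def K_A_block(mu):
    # 2x2 intensity block: each column sums to zero, as a rate matrix must.
    return np.array([[-1.0,  mu],
                     [ 1.0, -mu]])

K_A = np.block([[K_A_block(mu_d), np.zeros((2, 2))],
                [np.zeros((2, 2)), K_A_block(mu_c)]])

K_B = np.array([[-1, 0,  gamma, 0],
                [ 0, 0,  0,     0],
                [ 1, 0, -gamma, 0],
                [ 0, 0,  0,     0]], float) + \
      np.array([[0,  0,     0,  0],
                [0, -gamma, 0,  1],
                [0,  0,     0,  0],
                [0,  gamma, 0, -1]], float)

K = K_A + K_B

# The Kolmogorov forward equation dPsi/dt = K.Psi(t) has the solution
# Psi(t) = expm(K t).Psi(0), i.e. T = expm(K t) is the transition matrix.
psi0 = np.array([0.25, 0.25, 0.25, 0.25])   # unknown condition: uniform over {DD, DC, CD, CC}
phi = expm(K * t) @ psi0
p_defect = phi[0] + phi[2]                  # your own action is D in |DD> and |CD>
print(phi.round(3), round(p_defect, 3))
```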


Quantum Model Assumption 2


Strategies affected by Game Payoffs and Processing Time: dΨ(t)/dt = −i·H·Ψ(t) (Schrödinger equation)


The Hamiltonian

H = H_A + H_B

$$H_A = \begin{bmatrix} H_{Ad} & 0 \\ 0 & H_{Ac} \end{bmatrix},
\qquad
H_{Ai} = \frac{1}{\sqrt{1+\mu_i^2}} \begin{bmatrix} \mu_i & 1 \\ 1 & -\mu_i \end{bmatrix}$$

µi depends on the pay-offs associated with the different actions

$$H_B = \frac{\gamma}{\sqrt{2}} \left( \begin{bmatrix} +1 & 0 & +1 & 0 \\ 0 & 0 & 0 & 0 \\ +1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
+ \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & +1 \\ 0 & 0 & 0 & 0 \\ 0 & +1 & 0 & +1 \end{bmatrix} \right)$$

H_A transforms the state probabilities to favor either defection or cooperation, depending on the pay-offs

H_B implements cognitive dissonance: beliefs change to become consistent with actions
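A sketch that builds the Hamiltonian as reconstructed above and checks the two properties the earlier slides rely on: the evolution operator is unitary, and its squared moduli form a doubly stochastic matrix. The values of µ_d, µ_c, γ, and the processing time are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

mu_d, mu_c, gamma, t = 2.0, 0.5, 2.0, np.pi / 2    # illustrative parameter values

def H_A_block(mu):
    # 2x2 payoff block: real and symmetric, hence Hermitian.
    return (1.0 / np.sqrt(1.0 + mu**2)) * np.array([[mu, 1.0],
                                                    [1.0, -mu]])

H_A = np.block([[H_A_block(mu_d), np.zeros((2, 2))],
                [np.zeros((2, 2)), H_A_block(mu_c)]])

H_B = (gamma / np.sqrt(2)) * (np.array([[1, 0,  1, 0],
                                        [0, 0,  0, 0],
                                        [1, 0, -1, 0],
                                        [0, 0,  0, 0]], float)
                            + np.array([[0,  0, 0, 0],
                                        [0, -1, 0, 1],
                                        [0,  0, 0, 0],
                                        [0,  1, 0, 1]], float))

H = H_A + H_B                     # Hermitian, so exp(-iHt) is unitary
U = expm(-1j * t * H)

print(np.allclose(U.conj().T @ U, np.eye(4)))          # True: U is unitary
T = np.abs(U) ** 2                                     # transition probabilities
print(T.sum(axis=0).round(6), T.sum(axis=1).round(6))  # all ones: doubly stochastic
```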


Markov Model Assumption 3: Output Vector

$$T \cdot \Psi = \phi = \begin{bmatrix} \phi_{DD} \\ \phi_{DC} \\ \phi_{CD} \\ \phi_{CC} \end{bmatrix}$$

e.g. ϕDC = final probability that the Markov system ends in state |DC⟩.

L = measurement operator for the decision to defect

Probability of defection = L·ϕ


Markov Prediction

If the opponent is known to defect: L·ϕD = L·T·ΨD

If the opponent is known to cooperate: L·ϕC = L·T·ΨC

Under the unknown condition:

L·ϕU = L·T·ΨU = L·T·(p·ΨD + q·ΨC) = p·L·T·ΨD + q·L·T·ΨC = p·L·ϕD + q·L·ϕC

so the Markov prediction for the unknown condition is a weighted average of (and therefore lies between) the two known conditions.

                            Known to defect   Known to cooperate   Unknown
Busemeyer et al. (2006)           91                  84              66
Markov Model                      91                  84         between 91 and 84


Quantum Model Assumption 3: Output Vector

$$U \cdot \Psi = \phi = \begin{bmatrix} \phi_{DD} \\ \phi_{DC} \\ \phi_{CD} \\ \phi_{CC} \end{bmatrix}$$

e.g. ϕDC = final probability amplitude that the quantum system ends in state |DC⟩; Probability = |ϕDC|²

M = measurement operator for the decision to defect

Probability of defection = ‖M·ϕ‖²
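Putting the pieces together: a runnable sketch that rebuilds the same illustrative Hamiltonian as in the earlier sketch, applies U, and measures the probability of defecting under the three conditions with a projector M onto the two "defect" states. The parameter values and the known-condition initial states (equal superpositions over the decision maker's own actions) are assumptions for illustration, not fitted values.

```python
import numpy as np
from scipy.linalg import expm

mu_d, mu_c, gamma, t = 2.0, 0.5, 2.0, np.pi / 2     # illustrative parameter values

def H_A_block(mu):
    return (1.0 / np.sqrt(1.0 + mu**2)) * np.array([[mu, 1.0],
                                                    [1.0, -mu]])

H = np.block([[H_A_block(mu_d), np.zeros((2, 2))],
              [np.zeros((2, 2)), H_A_block(mu_c)]]) \
    + (gamma / np.sqrt(2)) * np.array([[ 1,  0,  1, 0],
                                       [ 0, -1,  0, 1],
                                       [ 1,  0, -1, 0],
                                       [ 0,  1,  0, 1]], float)

U = expm(-1j * t * H)                       # phi = U . Psi
M = np.diag([1.0, 0.0, 1.0, 0.0])           # projector onto the defect states |DD>, |CD>

def p_defect(psi0):
    phi = U @ psi0
    return float(np.linalg.norm(M @ phi) ** 2)   # probability of defection = ||M.phi||^2

psi_D = np.array([1, 1, 0, 0]) / np.sqrt(2)      # known to defect: spread over |DD>, |DC>
psi_C = np.array([0, 0, 1, 1]) / np.sqrt(2)      # known to cooperate: spread over |CD>, |CC>
psi_U = np.sqrt(0.5) * psi_D + np.sqrt(0.5) * psi_C   # unknown condition (as on the slide above)

pD, pC, pU = p_defect(psi_D), p_defect(psi_C), p_defect(psi_U)
print(round(pD, 3), round(pC, 3), round(pU, 3))
print(round(pU - 0.5 * (pD + pC), 3))   # a negative value reproduces the disjunction pattern
```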


Quantum Prediction

If the opponent is known to defect: ‖M·ϕD‖² = ‖M·U·ΨD‖²

If the opponent is known to cooperate: ‖M·ϕC‖² = ‖M·U·ΨC‖²

Under the unknown condition: ‖M·ϕU‖² = ‖M·U·(√0.5·ΨD + √0.5·ΨC)‖², which contains an interference (cross-product) term

                            Known to defect   Known to cooperate   Unknown
Busemeyer et al. (2006)           91                  84              66
Quantum Model                     91                  84              69


Quantum Prediction

The probability of defection under the unknown condition minus the average for the two known conditions. (Negative values indicate an interference effect.)


Thank You

• Want to learn more...


Bayesian Analysis of Individual Data


Model Complexity Issue

• Perhaps quantum probability succeeds where traditional models fail because it is more complex

• Bayesian model comparison provides a coherent method for comparing models with respect to both accuracy and parsimony


Dynamic Consistency

• Dynamic consistency: final decisions agree with planned decisions (Barkan and Busemeyer, 2003)

• Two-stage gamble

1. Forced to play stage one, but the outcome remained unknown

2. Made a plan and a final choice about stage two

• Plan:

• If you win, do you plan to gamble on stage two?

• If you lose, do you plan to gamble on stage two?

• Final decision:

• After an actual win, do you gamble on stage two?

• After an actual loss, do you now choose to gamble on stage two?


Two Stage Decision Task


Barkan And Busemeyer (2003) Results

Risk averse after a win

Risk seeking after a loss


Two Competing Models

1. Quantum Model

2. Markov model

• Reduction of the quantum model when one key parameter is set to zero


Quantum Model

• Four outcomes: W = win first gamble, L = lose first gamble, T = take second gamble, R = reject second gamble

• 4-D vector space corresponding to the four possible events: W ∧ T, W ∧ R, L ∧ T, L ∧ R

• State of the decision maker:

1. Before the first gamble: I

2. Before the second gamble: F

• From the first gamble to the second gamble: F = U · I


Unitary Transformation

• From the first gamble to the second gamble:

F = U · I, with U = exp(−i · (π/2) · (HA + HB))

• HA calculates the utilities for taking the gamble using two free parameters (loss aversion, b, and risk aversion, a)

• HB allows for changes in beliefs using one free parameter, γ

• The Markov model is a special case of the quantum model when γ = 0


Comparing Fits

• Fit both models to the dynamic consistency data:

1. Quantum

• Three parameters: a and b to determine the utilities, and γ for changing beliefs to align with actions

• R² = .82

2. Markov (γ = 0)

• R² = .78


Hierarchical Bayesian Parameter Estimation

• Used hierarchical Bayesian estimation to evaluate whether or not H0: γ = 0 holds for the quantum model

L(Di | θi) — likelihood of the data given the model parameters for person i

q(θi | π) — prior probability of the parameters for person i, dependent on the hierarchical parameters (binomial distribution)

r(π) — prior probability over the hierarchical parameters (uniform distribution on [0, 1])
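A minimal sketch of the three-level structure described above, assuming the person-level parameter is discretized onto a finite grid so that the binomial prior mentioned on the slide applies to the grid index; the grid size, data counts, and the placeholder mapping from parameter to choice probability are illustrative, not the real quantum-model likelihood.

```python
import numpy as np
from scipy.stats import binom

m = 20   # assumed discretization: theta_i takes values j/m for j = 0..m

def log_likelihood(theta_i, k_i, n_i):
    # L(D_i | theta_i): person i made k_i risky choices in n_i trials.
    # Placeholder: theta_i is used directly as the choice probability; the real
    # model would insert the quantum model's predicted probability here.
    return binom.logpmf(k_i, n_i, theta_i)

def log_prior(j_i, pi_hyper):
    # q(theta_i | pi): binomial prior over the grid index j_i, governed by the
    # group-level (hierarchical) parameter pi.
    return binom.logpmf(j_i, m, pi_hyper)

def log_hyperprior(pi_hyper):
    # r(pi): uniform prior on [0, 1].
    return 0.0 if 0.0 <= pi_hyper <= 1.0 else -np.inf

def log_joint(j_i, pi_hyper, k_i, n_i):
    # Joint log density (up to a constant) for one person plus the group level.
    theta_i = j_i / m
    return (log_likelihood(theta_i, k_i, n_i)
            + log_prior(j_i, pi_hyper)
            + log_hyperprior(pi_hyper))

print(log_joint(j_i=12, pi_hyper=0.6, k_i=7, n_i=10))   # toy numbers
```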


Distributions


Estimates of Group Level Parameters

The risk aversion hierarchical parameter is located below 0.5, indicating somewhat strong risk aversion

The loss aversion hierarchical parameter is located above 0.5, indicating higher sensitivity to losses

Busemeyer, J. R., Wang, Z., & Trueblood, J. S. (2012). Hierarchical Bayesian estimation of quantum decision model parameters. In J. R. Busemeyer et al. (Eds.), QI 2012, LNCS 7620. Berlin, Germany: Springer-Verlag.


Estimate of the Quantum Parameter

The hierarchical distribution of the quantum parameter lies below 0.5, implying that its mean value is below zero

