compneurosci.com/wiki/images/8/81/CoSMo2018_Module03_Lecture_Larry.pdf


Lecture 3: Broken Bayes

CoSMo 2018 Minneapolis, MN

Larry Maloney

Why are you so wonderful? (Well, maybe not so wonderful…)

anomalocaridid

Cambrian explosion

M. F. Land & D.-E. Nilsson (2002), Animal Eyes. Oxford.

Now

500 million years

eyes, skeletons, movement planning

David Blackwell John von Neumann

Oskar Morgenstern 1954

Abraham Wald

M. A. Girshick

Statistical Decision Theory

W = {w1, w2, …, wm}: possible states of the world

X = {x1, x2, …, xn}: possible sensory events

A = {a1, a2, …, ap}: possible actions

Three Elements of SDT

W (state of the world) → X (sensory event) → decision rule d(x) → A (action)

π(w): prior

Bayesian Decision Theory

d : X → A

A Bayesian Problem

G(a, w): gain; π(w): prior; L(w | x): likelihood (linking hypothesis)

Maximize expected gain by choice of d(.)

Despite changes in the gain function, likelihood, and prior.
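In this notation (writing the likelihood as L(w | x) = P(x | w)), a standard way to state the objective, sketched here rather than quoted from the slides, is

$$\mathrm{EG}(d) = \sum_{w \in W} \pi(w) \sum_{x \in X} P(x \mid w)\, G\big(d(x), w\big),$$

and the maximizing rule picks, for each sensory event x, the action with the highest posterior expected gain:

$$d^{*}(x) = \arg\max_{a \in A} \sum_{w \in W} P(w \mid x)\, G(a, w), \qquad P(w \mid x) \propto \pi(w)\, L(w \mid x).$$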

A Bayesian Game: speeded reaching

Expected value as function of mean movement end point (x,y):

[Figure: expected value (points per trial, roughly −30 to +30) as a function of mean movement end point, x (mm) vs. y (mm); σ = 4.83 mm; target: +100, penalty: −500]
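A minimal numerical sketch of this computation. The geometry is an assumption for illustration only (target and penalty circles of radius 9 mm whose centers are one radius apart, worth +100 and −500 points); the motor noise σ = 4.83 mm is taken from the slide.

```python
import numpy as np

# Assumed geometry (not from the slides): target and penalty circles of
# radius 9 mm, penalty circle centered 9 mm to the left of the target.
R = 9.0
TARGET_C = np.array([0.0, 0.0])
PENALTY_C = np.array([-R, 0.0])
GAIN_TARGET, GAIN_PENALTY = 100.0, -500.0
SIGMA = 4.83  # motor noise (mm), from the slide

rng = np.random.default_rng(0)

def expected_gain(aim, n=100_000):
    """Monte Carlo estimate of expected points for a mean end point `aim`."""
    hits = aim + rng.normal(scale=SIGMA, size=(n, 2))
    in_target = np.linalg.norm(hits - TARGET_C, axis=1) < R
    in_penalty = np.linalg.norm(hits - PENALTY_C, axis=1) < R
    return (GAIN_TARGET * in_target + GAIN_PENALTY * in_penalty).mean()

# Grid search over candidate mean end points along the x axis.
xs = np.linspace(-5, 15, 41)
best_x = max(xs, key=lambda x: expected_gain(np.array([x, 0.0])))
print(f"expected gain peaks at a mean end point of about {best_x:.1f} mm")
```

With this assumed geometry the peak of the expected gain sits a few millimetres to the right of the target centre, away from the penalty circle: a small loss in hit rate buys a large drop in penalty rate.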

Action selection

Our performance is close to optimal in the reaching and timing tasks considered so far.

A thought problem: what if reaching error were not isotropic (round)?


Anisotropy doesn’t affect the optimal aim point in this simple task.

?

$100 Which target would you like to try?

Add a gain function…

$100

>

Which target would you like to try?


Bayesian

[Diagram: Prior and Likelihood combine into a Posterior; the Posterior and the Gain function determine the Action and its outcome Probability]

Objective Distributions

[Diagram: Prior and Likelihood combine into a Posterior; the Posterior and the Utility function determine the Action and its outcome Probability]

Internal Representations

von Neumann & Morgenstern

We measure these


Bayesian Computation Representing motor uncertainty

True distribution

Subject’s representation

Probability density function pdf


Trommershäuser, Maloney, & Landy (2003), Spat Vis
Trommershäuser, Maloney, & Landy (2003), J Opt Soc Am A
Körding & Wolpert (2004), Nature
Najemnik & Geisler (2005), Nature
Trommershäuser, Gepshtein, Maloney, Landy, & Banks (2005), J Neurosci
Trommershäuser, Ma]s, Landy, & Maloney (2006), Exp Brain Res
Trommershäuser, Landy, & Maloney (2006), Psych Sci
Battaglia & Schrater (2007), J Neurosci
Dean, Wu, & Maloney (2007), JOV
Hudson, Maloney, & Landy (2008), PLoS Comp Biol
Faisal & Wolpert (2009), J Neurophysiol
Wei & Körding (2010), Front Comput Neurosci
…

People are Good at Motor Decisions

Movement planning: near optimal

[Diagram: Gain Function → Bayesian Computation → Optimal Choice; Gain Function → Experiment → Subject’s Choice; Same? Near optimal → Bayesian computation]

Gaussian vs. Uniform (Zhang, Daw, & Maloney, 2013)


Many tasks may simply be insensitive to systematic deviations in the internal model of uncertainty


Bayesian computation

Inferring people’s internal models of uncertainty based on their choices

[Diagram: Gain Function + Subject’s Choice → inferred internal model]

The inverse problem

Choice Task

$100

>

Which target would you like to try?

Bayesian computation

Choice Task

?

$100 Which target would you like to try?

Bayesian Computation Choice task

Choice Task

~

$100 Which target would you like to try?

Bayesian computation

>   <   ~

Zhang, Daw, Maloney (2013) PLoS CB

Bayesian Computation Experiment

Measuring subjects’ choices between targets of varying shapes and sizes allows us to infer their internal model of motor uncertainty

? Riesz-Fischer theorem, see Maloney & Mamassian, 2009


Procedure

Bayesian Computation Zhang et al (2013)

Touch the target within 400 msec

300 trials

[Trial sequence: fixation (+), target, response prompt (?); each frame 500 ms]

Which is easier to hit? 1st or 2nd?

?

Which is easier to hit?

Definitely the circle

?

Which is easier to hit?

Definitely the rectangle

?

Which is easier to hit?

10 possible rectangles

Radius of the circle was adjusted by adaptive procedures (staircase)

~
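A minimal sketch of one such adaptive procedure, a simple 1-up/1-down staircase; the starting radius, step size, trial count, and the toy observer are illustrative assumptions, not settings from the study.

```python
import random

def staircase_radius(prefers_circle, r0=0.5, step=0.05, n_trials=60):
    """1-up/1-down staircase: shrink the circle whenever it is judged easier
    to hit than the rectangle, grow it otherwise, homing in on the radius at
    which the two targets feel equally easy to hit (the ~ point)."""
    r = r0
    for _ in range(n_trials):
        if prefers_circle(r):
            r -= step            # circle judged easier -> make it smaller
        else:
            r += step            # rectangle judged easier -> make circle bigger
        r = max(r, 0.05)         # keep the radius positive
    return r

def toy_subject(r):
    """Illustrative observer whose indifference point is r = 0.30 cm."""
    return r + random.gauss(0.0, 0.03) > 0.30

print(f"estimated indifference radius: {staircase_radius(toy_subject):.2f} cm")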

Results: True error distribution

Vertically elongated, bivariate Gaussian

[Scatter plot: σ_y (cm) vs. σ_x (cm) for each subject; both axes 0 to 0.6]

Median σ_y / σ_x = 1.44

Vertically elongated

?

Results: Subjects' internal model of their own error distribution

Each subject’s internal model was fitted to the subject’s choices as a bivariate Gaussian distribution with two free parameters:

σ′_x and σ′_y

Maximum likelihood estimates
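A minimal sketch of such a fit. Two simplifications are assumed here rather than taken from Zhang, Daw, & Maloney (2013): hit probabilities are estimated by Monte Carlo, and choices are modeled as picking the target with the higher hit probability apart from a small lapse rate.

```python
import numpy as np
from scipy.optimize import minimize

def p_hit(shape, sx, sy, n=20_000):
    """Monte Carlo probability of landing inside a target centered on the
    aim point, for zero-mean Gaussian error with SDs (sx, sy).
    A fixed seed (common random numbers) keeps the objective smooth."""
    pts = np.random.default_rng(0).normal(size=(n, 2)) * [sx, sy]
    kind, *dims = shape
    if kind == "circle":                       # dims = (radius,)
        return np.mean(np.hypot(pts[:, 0], pts[:, 1]) < dims[0])
    return np.mean((np.abs(pts[:, 0]) < dims[0]) &     # ("rect", half_w, half_h)
                   (np.abs(pts[:, 1]) < dims[1]))

def neg_log_lik(log_sigmas, pairs, chose_first, lapse=0.02):
    """Assumed choice model: pick whichever target the internal model says
    is more likely to be hit, with a small lapse rate."""
    sx, sy = np.exp(log_sigmas)                # keep the SDs positive
    nll = 0.0
    for (t1, t2), c in zip(pairs, chose_first):
        first_better = p_hit(t1, sx, sy) > p_hit(t2, sx, sy)
        p_first = (1 - lapse) if first_better else lapse
        nll -= np.log(p_first if c else 1 - p_first)
    return nll

# pairs: list of (target_1, target_2) per trial, e.g. (("circle", 0.3), ("rect", 0.2, 0.5));
# chose_first: list of booleans recording the subject's choices.
# fit = minimize(neg_log_lik, np.log([0.3, 0.3]), args=(pairs, chose_first),
#                method="Nelder-Mead")
# sx_hat, sy_hat = np.exp(fit.x)   # the fitted internal model sigma'_x, sigma'_y
```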

[Scatter plot: σ′_y / σ′_x in the subjects’ model vs. the true σ_y / σ_x; both axes 0 to 1.5; legend: true distribution, subject’s model]

Summary

Zhang, Daw, Maloney (2015) Nature Neuroscience

Experiment

A 1-D version of the same task

Experiment

Touch the target within 400 msec

300 trials

(target not drawn to scale)


?

Which is easier to hit?

Definitely the Triple

Experiment

?

Which is easier to hit?

Definitely the Single

Experiment

?

Which is easier to hit?

Experiment

Results: True error distribution


How do we estimate the participants’ subjective pdf?

?

Experiment

Maximum Likelihood Fitting

The data are choices between pairs of targets → 1 (second one)

→ 0 (first one)

etc.

For any motor pdf we can simulate the choices of a model participant who carries out the experiment. What is the probability that this model participant’s choices will match the responses of a given subject? Search through a “large” family of pdfs to find the one that has the highest likelihood of producing the subject’s data.
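A minimal sketch of this search for the 1-D task. Assumptions made for illustration: each target is a union of intervals on the touch axis (e.g. the single vs. the triple), the candidate pdf is piecewise constant on a grid of bins, bins count as wholly inside or outside a target (a coarse approximation), and the model participant picks the target it is more likely to hit apart from a small lapse rate.

```python
import numpy as np

def hit_prob(heights, edges, intervals):
    """Probability mass a piecewise-constant pdf (bin `heights` over `edges`)
    assigns to a union of 1-D target `intervals`."""
    heights = np.asarray(heights)
    widths, centers = np.diff(edges), (edges[:-1] + edges[1:]) / 2
    mass = 0.0
    for lo, hi in intervals:
        in_bin = (centers > lo) & (centers < hi)   # bins treated as all-in or all-out
        mass += np.sum(heights[in_bin] * widths[in_bin])
    return mass

def choice_log_lik(heights, edges, trials, chose_first, lapse=0.02):
    """Log-likelihood of a subject's binary choices under one candidate pdf."""
    ll = 0.0
    for (t1, t2), c in zip(trials, chose_first):
        first_better = hit_prob(heights, edges, t1) > hit_prob(heights, edges, t2)
        p_first = (1 - lapse) if first_better else lapse
        ll += np.log(p_first if c else 1 - p_first)
    return ll

# Targets as unions of intervals (illustrative widths only), e.g.:
# single = [(-0.1, 0.1)]; triple = [(-0.5, -0.3), (-0.1, 0.1), (0.3, 0.5)]
# Maximize choice_log_lik over a parameterized family of candidate pdfs
# (e.g. the unimodal histograms or the U-mix family described below)
# to recover the pdf most likely to have produced the subject's data.
```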

Warm-up: so let’s try some “large” families of pdfs!

Unimodal histograms


Non-Parametric Analysis

[Histogram with 10 bins]

Important constraint: monotone decreasing from center


constrained to be unimodal
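One simple way to parameterize a histogram under this constraint for the likelihood search; the symmetry about the center assumed below is only to keep the sketch short.

```python
import numpy as np

def unimodal_histogram(raw, widths):
    """Map unconstrained parameters to bin heights that are non-increasing
    away from the central bins (the unimodality constraint), then normalize
    so the histogram integrates to 1."""
    steps = np.exp(raw)                          # positive height decrements
    half = np.cumsum(steps[::-1])[::-1]          # non-increasing, center -> edge
    heights = np.concatenate([half[::-1], half]) # mirror onto both sides (symmetry assumed)
    return heights / np.sum(heights * widths)

widths = np.full(10, 0.2)                        # 10 bins covering [-1, 1]
heights = unimodal_histogram(np.zeros(5), widths)
# `heights` can be scored with choice_log_lik above, and the raw parameters
# adjusted to maximize the likelihood of the subject's choices.
```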

So what do we find?

[Figure: fitted unimodal histograms f(x) for subjects S1–S9; x from −1 to 1]

Don’t get too excited. Remember: we constrained the distributions to be non-increasing away from the center.

“bumps”

U-mix


(uniform mixtures)

Hypothesis

Parameters: number of steps, locations of steps, heights of steps

[Sketch: step densities U1, U2, U3, U4 with increasing numbers of steps]

Nested-hypothesis tests (Mood, Graybill, & Boes, 1974)

U-mix Family
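A minimal sketch of a U-mix density and of a nested likelihood-ratio comparison between members of the family; the degrees of freedom charged per added step are an assumption, not a detail from the talk.

```python
import numpy as np
from scipy.stats import chi2

def umix_pdf(x, edges, heights):
    """A U-mix density: a step function built from non-overlapping uniforms,
    defined by bin `edges` (length k+1) and `heights` (length k) chosen so
    that np.sum(heights * np.diff(edges)) == 1."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    idx = np.searchsorted(edges, x, side="right") - 1
    inside = (idx >= 0) & (idx < len(heights))
    out = np.zeros(x.shape)
    out[inside] = np.asarray(heights)[idx[inside]]
    return out

def nested_lr_test(loglik_simple, loglik_full, extra_params, alpha=0.05):
    """Likelihood-ratio test between nested U-mix models (e.g. U1 vs U2):
    reject the simpler model when twice the log-likelihood gain exceeds the
    chi-squared criterion with `extra_params` degrees of freedom."""
    g2 = 2.0 * (loglik_full - loglik_simple)
    p = chi2.sf(g2, df=extra_params)
    return g2, p, p < alpha

# Example: a step density made of three non-overlapping uniform pieces,
# evaluated on a grid (it integrates to 1: 0.4 * (0.25 + 2.0 + 0.25) = 1).
f = umix_pdf(np.linspace(-1, 1, 5), edges=[-0.6, -0.2, 0.2, 0.6],
             heights=[0.25, 2.0, 0.25])
```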

Model Fit: Possible pdf Models

t distributions

Mixture of N Gaussians

Linear decay

Mixture of Non-overlapping Uniforms (U-Mix)

And more


[Bar chart: number of subjects (0 to 6) for each U-mix model, U1 through U4]

[Figure: subject S4’s fitted pdf under the non-parametric analysis and under the U-mix fit; x from −1 to 1]

Experiment

Subjects’ U-mix

[Figure: fitted U-mix densities f(x) for subjects S1–S9; x from −1 to 1]
