
Dec 19, 2015

Transcript
Page 1

How well can we learn what the stimulus is by looking at the neural responses?

We will discuss two approaches:

• devise and evaluate explicit algorithms for extracting a stimulus estimate

• directly quantify the relationship between stimulus and response using information theory

Decoding

Page 2

Yang Dan, UC Berkeley

Reading minds: the LGN

Page 3

Stimulus reconstruction


Page 4

Stimulus reconstruction

Page 5

Stimulus reconstruction

Page 6

Other decoding approaches

Page 7

Britten et al. ‘92: behavioral monkey data + neural responses

Two-alternative tasks

Page 8

Behavioral performance

Page 9

Discriminability: d’ = ( <r>+ - <r>- ) / σr

Predictable from neural activity?

Page 10

[Figure: overlapping response distributions p(r|-) and p(r|+), with means <r>- and <r>+ and a decision threshold z]

Decoding corresponds to comparing the test, r, to a threshold, z.

a(z) = P[ r ≥ z | - ]   false alarm rate, “size”
b(z) = P[ r ≥ z | + ]   hit rate, “power”

Signal detection theory

Find z by maximizing P[correct] = p(+) b(z) + p(-)(1 - a(z))
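A minimal numerical sketch of this maximization, assuming two Gaussian response distributions with equal widths and equal priors (all parameter values here are illustrative, not from the experiment):

```python
# Sketch: find the threshold z maximizing P[correct] for two assumed
# Gaussian response distributions p(r|+) and p(r|-).
import math

mu_minus, mu_plus, sigma = 0.0, 2.0, 1.0   # assumed response statistics
p_plus, p_minus = 0.5, 0.5                  # equal priors

def gauss_cdf(x, mu, sigma):
    """Gaussian CDF via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def p_correct(z):
    beta = 1 - gauss_cdf(z, mu_plus, sigma)    # hit rate b(z) = P[r >= z | +]
    alpha = 1 - gauss_cdf(z, mu_minus, sigma)  # false-alarm rate a(z)
    return p_plus * beta + p_minus * (1 - alpha)

# Scan candidate thresholds; for equal priors and equal widths the optimum
# sits where the two densities cross, midway between the means.
zs = [i * 0.01 for i in range(-200, 400)]
z_best = max(zs, key=p_correct)
print(z_best)  # close to (mu_minus + mu_plus) / 2 = 1.0
```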

Page 11

Summarize the performance of the test for different thresholds z.

Want b → 1, a → 0.

ROC curves

Page 12

Threshold z is the result from the first presentation. The area under the ROC curve corresponds to P[correct].

ROC curves: two-alternative forced choice
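This correspondence can be checked numerically. A sketch assuming Gaussian “+” and “-” response distributions (made-up parameters): the area under the empirical ROC curve is compared against two-alternative forced-choice accuracy, i.e. the fraction of (+, -) trial pairs in which the + response is larger.

```python
# Sketch: area under an empirical ROC curve vs. 2AFC accuracy,
# for assumed Gaussian response distributions.
import random

random.seed(0)
r_plus = [random.gauss(2.0, 1.0) for _ in range(500)]   # responses to +
r_minus = [random.gauss(0.0, 1.0) for _ in range(500)]  # responses to -

def roc_area(plus, minus):
    """Trapezoidal area under the ROC curve traced out by sweeping z."""
    pts = [(0.0, 0.0)]
    for z in sorted(plus + minus, reverse=True):
        a = sum(r >= z for r in minus) / len(minus)  # false-alarm rate a(z)
        b = sum(r >= z for r in plus) / len(plus)    # hit rate b(z)
        pts.append((a, b))
    return sum((a1 - a0) * (b0 + b1) / 2
               for (a0, b0), (a1, b1) in zip(pts, pts[1:]))

auc = roc_area(r_plus, r_minus)

# 2AFC accuracy: probability that the + trial outscores the paired - trial
p2afc = sum(rp > rm for rp, rm in zip(r_plus, r_minus)) / len(r_plus)
print(auc, p2afc)  # the two values agree up to sampling noise
```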

Page 13

The optimal test function is the likelihood ratio,

l(r) = p[r|+] / p[r|-].

(Neyman-Pearson lemma)

Note that

l(z) = (db/dz) / (da/dz) = db/da

i.e. slope of ROC curve

Is there a better test to use than r?

Page 14

If p[r|+] and p[r|-] are both Gaussian,

P[correct] = ½ erfc(-d’/2).

To interpret results as two-alternative forced choice, need simultaneous responses from “+ neuron” and from “– neuron”.

Simulate “- neuron” responses from same neuron in response to – stimulus.

Ideal observer: performs at the level given by the area under the ROC curve.

The logistic function
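The erfc relation above can be verified by simulation. A sketch assuming Gaussian “+” and “-” response distributions with equal widths (all parameters are made up):

```python
# Sketch: Monte-Carlo check that 2AFC accuracy for two assumed Gaussian
# response distributions matches 1/2 * erfc(-d'/2).
import math, random

random.seed(1)
mu_plus, mu_minus, sigma = 2.0, 0.5, 1.0
d_prime = (mu_plus - mu_minus) / sigma

analytic = 0.5 * math.erfc(-d_prime / 2)

# Simulate 2AFC trials: correct when the + response exceeds the - response
trials = 20000
correct = sum(random.gauss(mu_plus, sigma) > random.gauss(mu_minus, sigma)
              for _ in range(trials))
simulated = correct / trials
print(analytic, simulated)  # should differ only by sampling noise
```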

Page 15

Again, if p[r|-] and p[r|+] are Gaussian,and P[+] and P[-] are equal,

P[+|r] = 1 / [1 + exp( -d’ (r - <r>) / σ )].

d’ is the slope of the sigmoidal fitted to P[+|r]

More d’
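The logistic form is exact in this Gaussian case, which a short check makes concrete. Assuming two equal-width Gaussians with equal priors (parameters illustrative), the posterior P[+|r] computed directly from Bayes’ rule coincides with 1 / [1 + exp(-d’(r - <r>)/σ)]:

```python
# Sketch: Bayes posterior vs. the logistic expression, for assumed
# equal-width Gaussian response distributions and equal priors.
import math

mu_plus, mu_minus, sigma = 2.0, 0.0, 1.0
d_prime = (mu_plus - mu_minus) / sigma
r_mean = (mu_plus + mu_minus) / 2   # <r>, midpoint of the two means

def gauss_pdf(r, mu):
    # Normalization cancels in the posterior ratio, so it is omitted
    return math.exp(-((r - mu) ** 2) / (2 * sigma ** 2))

for r in [-1.0, 0.5, 1.0, 2.5]:
    bayes = gauss_pdf(r, mu_plus) / (gauss_pdf(r, mu_plus) + gauss_pdf(r, mu_minus))
    logistic = 1 / (1 + math.exp(-d_prime * (r - r_mean) / sigma))
    print(r, bayes, logistic)  # the two columns match exactly
```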

Page 16

Close correspondence between neural and behavioral performance.

Why so many neurons? Correlations limit performance.

Page 17

Penalty for an incorrect answer: L+, L-

For an observation r, what is the expected loss?

Loss+ = L+P[-|r]

Loss- = L-P[+|r]

Cut your losses: answer + when Loss+ < Loss-,

i.e. L+P[-|r] < L-P[+|r].

Using Bayes’, P[+|r] = p[r|+]P[+]/p(r); P[-|r] = p[r|-]P[-]/p(r), so answer + when

l(r) = p[r|+]/p[r|-] > L+P[-] / L-P[+].

Likelihood as loss minimization
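The resulting decision rule can be sketched numerically. The penalties, priors, and Gaussian response statistics below are all assumed values; with a heavy penalty L+ on wrongly answering “+”, stronger evidence is needed before answering “+”:

```python
# Sketch: likelihood-ratio test with asymmetric penalties, for assumed
# Gaussian response distributions.
import math

mu_plus, mu_minus, sigma = 2.0, 0.0, 1.0
P_plus, P_minus = 0.5, 0.5
L_plus, L_minus = 5.0, 1.0   # wrongly answering "+" is penalized 5x

def likelihood_ratio(r):
    # l(r) = p[r|+]/p[r|-] for equal-width Gaussians
    return math.exp((2 * r - mu_plus - mu_minus) * (mu_plus - mu_minus)
                    / (2 * sigma ** 2))

threshold = (L_plus * P_minus) / (L_minus * P_plus)

for r in [0.5, 1.0, 1.5, 2.0]:
    answer = "+" if likelihood_ratio(r) > threshold else "-"
    print(r, answer)  # only strong responses clear the raised criterion
```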

Page 18

For small stimulus differences s and s + δs, compare the likelihood ratio

l(r) = p[r|s + δs] / p[r|s]

to a threshold.

Likelihood and tuning curves

Page 19

• Population code formulation

• Methods for decoding:
  population vector
  Bayesian inference
  maximum likelihood
  maximum a posteriori

• Fisher information

Population coding

Page 20

Cricket cercal cells coding wind velocity

Page 21

Theunissen & Miller, 1991

RMS error in estimate

Population vector

Page 22

Cosine tuning:

Pop. vector:

For sufficiently large N,

is parallel to the direction of arm movement

Population coding in M1
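A sketch of population-vector decoding with cosine tuning, in the spirit of the arm-movement example. The number of neurons, preferred directions, firing rates, and noise level are all made up for illustration:

```python
# Sketch: population-vector decoding of direction from an assumed
# population of cosine-tuned neurons.
import math, random

random.seed(2)
N = 60
preferred = [2 * math.pi * a / N for a in range(N)]   # preferred angles c_a
theta = 1.1                                            # true direction (rad)
r0, rmax = 20.0, 40.0                                  # baseline and peak rates

# Cosine tuning: f_a = r0 + (rmax - r0) * cos(theta - c_a), plus noise
rates = [r0 + (rmax - r0) * math.cos(theta - c) + random.gauss(0, 2)
         for c in preferred]

# Population vector: preferred-direction unit vectors weighted by the
# rate relative to the baseline r0
vx = sum((r - r0) * math.cos(c) for r, c in zip(rates, preferred))
vy = sum((r - r0) * math.sin(c) for r, c in zip(rates, preferred))
estimate = math.atan2(vy, vx)
print(theta, estimate)  # estimate lies close to the true direction
```

For sufficiently many neurons with evenly spread preferred directions, the noise terms average out and the vector sum aligns with the movement direction.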

Page 23

The population vector is neither general nor optimal.

“Optimal”:

make use of all information in the stimulus/response distributions

Page 24

By Bayes’ law,

Bayesian inference

likelihood function

a posteriori distribution

conditional distribution

marginal distribution

prior distribution

Page 25

Introduce a cost function, L(s, sBayes); minimize the mean cost.

For least-squares cost, L(s, sBayes) = (s - sBayes)²; the solution is the conditional mean.

Bayesian estimation

Want an estimator sBayes
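A tiny discrete sketch of this fact, using an assumed posterior over three stimulus values: the posterior mean minimizes the mean squared-error cost among candidate estimates.

```python
# Sketch: for squared-error loss, the Bayes estimate is the posterior mean.
# The posterior below is an assumed p[s|r] for some observed r.
stimuli = [-1.0, 0.0, 2.0]
posterior = [0.2, 0.5, 0.3]

post_mean = sum(s * p for s, p in zip(stimuli, posterior))

def mean_cost(s_est):
    """Expected squared-error loss under the posterior."""
    return sum(p * (s - s_est) ** 2 for s, p in zip(stimuli, posterior))

candidates = [post_mean, -1.0, 0.0, 1.0, 2.0]
best = min(candidates, key=mean_cost)
print(post_mean, best)  # the posterior mean minimizes the mean cost
```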

Page 26

By Bayes’ law,

Bayesian inference

likelihood function

a posteriori distribution

Page 27

Maximum likelihood

Find maximum of P[r|s] over s

More generally, probability of the data given the “model”

“Model” = stimulus

assume parametric form for tuning curve

Page 28

By Bayes’ law,

Bayesian inference

a posteriori distribution

conditional distribution

marginal distribution

prior distribution

likelihood function

Page 29

ML: s* which maximizes p[r|s]

MAP: s* which maximizes p[s|r]

Difference is the role of the prior: differ by factor p[s]/p[r]

For cercal data:

MAP and ML

Page 30

Decoding an arbitrary continuous stimulus

Work through a specific example

• assume independence
• assume Poisson firing

Distribution: PT[k] = (λT)^k exp(-λT) / k!

Page 31

E.g. Gaussian tuning curves

Decoding an arbitrary continuous stimulus

Page 32

Assume Poisson:

Assume independent:

Need to know full P[r|s]

Population response of 11 cells with Gaussian tuning curves

Page 33

Apply ML: maximize ln P[r|s] with respect to s

Set the derivative to zero; use the fact that the sum of the tuning curves, Σa fa(s), is approximately constant.

From the Gaussianity of the tuning curves, if all σa are the same:
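This worked example can be sketched end-to-end. The 11 Gaussian tuning curves and independent Poisson firing follow the slides; the specific widths, peak rate, and true stimulus are assumptions. With equal widths, maximizing ln P[r|s] reduces to the spike-count-weighted mean of the preferred stimuli:

```python
# Sketch: ML decoding of a stimulus from 11 cells with assumed Gaussian
# tuning curves and independent Poisson spike counts.
import math, random

random.seed(3)
s_pref = [a - 5.0 for a in range(11)]   # preferred stimuli of the 11 cells
sigma_tc, rmax, T = 1.0, 30.0, 1.0      # tuning width, peak rate, time window

def f(s, s_a):
    """Gaussian tuning curve: mean rate of the cell preferring s_a."""
    return rmax * math.exp(-((s - s_a) ** 2) / (2 * sigma_tc ** 2))

def poisson(lam):
    """Knuth's method for drawing a Poisson-distributed spike count."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        p *= random.random()
        k += 1
    return k - 1

s_true = 0.3
counts = [poisson(f(s_true, sa) * T) for sa in s_pref]

# Equal widths: s_ML = sum(n_a * s_a) / sum(n_a)
s_ml = sum(n * sa for n, sa in zip(counts, s_pref)) / sum(counts)
print(s_true, s_ml)  # the estimate lands near the true stimulus
```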

Page 34

Apply MAP: maximize ln p[s|r] with respect to s

Set the derivative to zero; use the fact that the sum of the tuning curves is approximately constant.

From Gaussianity of tuning curves,

Page 35

Given this data, compare the MAP estimates obtained with a constant prior and with a prior of mean -2, variance 1.
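A sketch of the MAP computation with a Gaussian prior of mean -2, variance 1, as on this slide. The spike counts and tuning parameters below are hypothetical; for Gaussian tuning curves of width σtc, adding the prior's log-derivative to the ML condition gives one extra term in the weighted mean:

```python
# Sketch: MAP estimate with a Gaussian prior (mean -2, variance 1) for
# assumed Gaussian tuning curves and hypothetical spike counts n_a:
#   s_MAP = (sum(n_a s_a)/sigma_tc^2 + s0/sigma0^2)
#         / (sum(n_a)/sigma_tc^2     + 1/sigma0^2)
sigma_tc = 1.0
s0, sigma0 = -2.0, 1.0                           # prior mean and std
s_pref = [a - 5.0 for a in range(11)]            # preferred stimuli
counts = [0, 0, 0, 1, 12, 29, 24, 6, 0, 0, 0]    # hypothetical spike counts

num = sum(n * sa for n, sa in zip(counts, s_pref)) / sigma_tc**2 + s0 / sigma0**2
den = sum(counts) / sigma_tc**2 + 1 / sigma0**2
s_map = num / den

# Constant prior recovers the ML weighted mean for comparison
s_ml = sum(n * sa for n, sa in zip(counts, s_pref)) / sum(counts)
print(s_ml, s_map)  # the prior at -2 pulls the MAP estimate below the ML one
```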

Page 36

How good is our estimate? For stimulus s, we have an estimate sest.

Bias:

Variance:

Mean square error:

Cramér-Rao bound:

(ML is unbiased: b = b’ = 0)

Fisher information
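The equations for these quantities were images in the original slide and did not survive extraction; their standard forms, consistent with the Fisher-information slides that follow, are:

```latex
b(s) = \langle s_{\mathrm{est}} \rangle - s
\qquad \text{(bias)}

\sigma_{\mathrm{est}}^2(s) = \left\langle \left( s_{\mathrm{est}} - \langle s_{\mathrm{est}} \rangle \right)^2 \right\rangle
\qquad \text{(variance)}

\left\langle (s_{\mathrm{est}} - s)^2 \right\rangle = \sigma_{\mathrm{est}}^2(s) + b(s)^2
\qquad \text{(mean square error)}

\sigma_{\mathrm{est}}^2(s) \;\ge\; \frac{\bigl(1 + b'(s)\bigr)^2}{I_F(s)}
\qquad \text{(Cram\'er--Rao bound)}
```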

Page 37

Alternatively:

Fisher information

Quantifies local stimulus discriminability

Page 38

Fisher information for Gaussian tuning curves

For the Gaussian tuning curves w/Poisson statistics:
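For Poisson statistics the Fisher information works out to IF(s) = T Σa fa’(s)²/fa(s). A sketch computing it for an assumed dense array of Gaussian tuning curves (all parameters illustrative); for such a homogeneous array the information is roughly constant across stimuli away from the edges:

```python
# Sketch: Fisher information I_F(s) = T * sum_a f_a'(s)^2 / f_a(s)
# for an assumed dense array of Gaussian tuning curves with Poisson counts.
import math

sigma_tc, rmax, T = 1.0, 30.0, 1.0
s_pref = [a * 0.5 - 10.0 for a in range(41)]   # densely spaced preferred values

def f(s, sa):
    return rmax * math.exp(-((s - sa) ** 2) / (2 * sigma_tc ** 2))

def fisher(s):
    total = 0.0
    for sa in s_pref:
        rate = f(s, sa)
        deriv = rate * (sa - s) / sigma_tc ** 2   # f'(s) for Gaussian tuning
        total += T * deriv ** 2 / rate
    return total

for s in [-2.0, 0.0, 2.0]:
    # 1/I_F(s) is the Cramer-Rao floor on an unbiased decoder's variance
    print(s, fisher(s), 1 / fisher(s))
```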

Page 39

Do narrow or broad tuning curves produce better encodings?

Approximate:

Thus, narrow tuning curves are better.

But not in higher dimensions!

Page 40

Recall d' = mean difference/standard deviation

Can also decode and discriminate using decoded values.

Trying to discriminate s and s + Δs:

The difference in the ML estimates is Δs (unbiased); the variance of each estimate is 1/IF(s), so d’ = Δs √IF(s).

Fisher information and discrimination

Page 41

Limitations of these approaches

• Tuning curve/mean firing rate

• Correlations in the population

Page 42

The importance of correlation

Page 43

The importance of correlation

Page 44

The importance of correlation