NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Nicolelis MAL, editor. Methods for Neural Ensemble Recordings. 2nd edition. Boca Raton (FL): CRC Press; 2008.

Chapter 4 Strategies for Neural Ensemble Data Analysis for Brain–Machine Interface (BMI) Applications
Miriam Zacksenhouse and Simona Nemets

INTRODUCTION

The advance of BMIs was largely motivated by investigations of velocity encoding in single neurons during stereotypical reaching experiments. However, BMIs are designed to decode neural activity from an ensemble of neurons and direct general reaching movements. Hence, neural data analysis strategies for BMIs are required to: (1) analyze the neural activity from an ensemble of neurons, (2) account for the dynamical nature of the neural activity associated with general reaching movements, and (3) explore and exploit other relevant modulating signals.

Decoding neural activity can be performed either in a single stage or in two stages. Two-stage decoding relies on a preliminary encoding stage to determine how the neurons are tuned to the relevant biological signals. Based on the estimated tuning curves, the neural activity across an ensemble of neurons can be decoded using either a population vector, maximum likelihood estimation, or Bayesian inference (Pouget et al., 2003). The population-vector approach results in a linear relationship between the spike counts and the estimated biological signal, which can be estimated directly in a single stage using linear regression (Brown et al., 2004). This chapter focuses on single-stage decoding with linear regression, and in particular on two special challenges facing the application of linear regression to neural ensemble decoding during reaching movements (see “Movement Prediction”). First, given the dynamic nature of the decoded signal, it is necessary to include the history of the neural activity, rather than just its current spike count. Second, due to the correlation between the activities of different neurons (see “Ensemble Analysis”) and the activities at different time lags, the resulting regression problem is ill-posed and requires regularization techniques (see “Linear Regression”).

Although neural decoding can be performed in a single stage, neural encoding is still important for investigating which signals are encoded in the neural activity. For this purpose, the notion of tuning curves is generalized to characterize how the neural activity represents the spatiotemporal profile of the movement. This analysis quantifies both the spatiotemporal tuning curves and the percent variance of the neural activity that is accounted for by the movement profile (see “Neuronal Encoding and Tuning Curves”). For comparison, the percent variance in the neural activity that might be related to general neural modulations is assessed independently under the Poisson assumption (see “Neuronal Modulations”). These two complementary variance analyses provide a viable tool for quantifying the extent to which the neural code is effectively decoded, and the potential contribution of yet undecoded modulating signals.

The strategies and algorithms described in this chapter are demonstrated using the neural activity recorded from an ensemble of cortical neurons in different brain areas during a typical target-hitting experiment with pole control, as described in Carmena et al., 2003.

NEURONAL ENCODING AND TUNING CURVES

The firing rates of cortical motor neurons represent a diversity of motor, sensory, and cognitive signals, most notably the direction and speed of movement (Georgopoulos, 2000; Johnson et al., 2001; Georgopoulos et al., 1989; Paz et al., 2004). Neuronal encoding of specific movement-related signals, including movement direction and speed, has been characterized in terms of the tuning properties of the neurons (Georgopoulos et al., 1986; Ashe and Georgopoulos, 1994). Most prominently, center-out reaching experiments indicated that the firing rates of single cortical motor neurons are broadly “tuned” to the direction of movement. Tuning curves represent the firing rate as a function of the direction of movement, and are well described by a cosine function of the angle between the movement direction and the preferred direction of the neuron. Detailed investigations suggest that the activity of directionally tuned cortical motor neurons is also modulated by the speed of movement, both independently from and interactively with the direction of movement (Moran and Schwartz, 1999). Furthermore, other scalar signals, including the amplitude of movement, its accuracy, the location of the target, and the applied force, may also contribute to the firing rate modulation of cortical motor neurons (Johnson et al., 2001; Alexander and Crutcher, 1990; Georgopoulos et al., 1992; Scott, 2003).


BMI experiments can be used to further investigate (Nicolelis 2001; 2003): (1) how individual neurons represent the spatiotemporal profile of the movement during free arm movements; (2) the potential modulations by yet undecoded signals; and (3) the distribution of correlated activity across an ensemble of neurons. This section focuses on the first issue, whereas the last two are addressed in the sections “Neuronal Modulations” and “Ensemble Analysis,” respectively.

Velocity Tuning Curves

It is customary to determine the tuning of motor neurons to the velocity of movement during planar center-out reaching movements, where the direction of velocity is approximately constant during each reaching movement (Georgopoulos, 1986; Moran and Schwartz, 1999). Tuning curves that account for both the cosine-directional sensitivity and the effect of the speed of movement are of the form

4.1

N = a\,V_m \cos(\theta - \theta_{PD}) + b\,V_m + c + \varepsilon

where N is the number of spike counts during the movement, V_m and θ are the magnitude (speed) and direction of the velocity, a and θ_PD are the magnitude and preferred direction of the directional tuning, b is the magnitude of the tuning to the speed, c is the mean spike count across different directions, and ε is the modeling error. This formulation implies that the neural activity depends linearly on the x- and y-components of the velocity, V_x = V_m sin θ and V_y = V_m cos θ, as:

4.2

N = a_x V_x + a_y V_y + b\,V_m + c + \varepsilon

where a_x = a sin θ_PD and a_y = a cos θ_PD are the tuning coefficients for the components of the movement. Equation 4.2 has the form of a linear regression, so the tuning coefficients a_x and a_y can be estimated directly using linear regression between the neural activity and the velocity. The resulting coefficient of determination R²(N, V) describes the fraction of the total variance in the spike counts that is attributed to the velocity and provides a measure of the goodness of fit. Because the neural activity is highly noisy (see the section “Neuronal Modulations”), the resulting coefficients of determination are small, even if the tuning to the velocity is significant.
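As a concrete illustration, the tuning coefficients of Equation 4.2 can be estimated with ordinary least squares. The following is a minimal NumPy sketch, not code from the original chapter; the arrays n (binned spike counts), vx, vy (velocity components), and vm (speed) are hypothetical inputs of equal length.

    import numpy as np

    def fit_velocity_tuning(n, vx, vy, vm):
        """Estimate a_x, a_y, b, c of Equation 4.2 and R^2(N, V) by ordinary least squares."""
        X = np.column_stack([vx, vy, vm, np.ones_like(vx)])  # regressors [V_x, V_y, V_m, 1]
        coef, *_ = np.linalg.lstsq(X, n, rcond=None)         # [a_x, a_y, b, c]
        r2 = 1.0 - (n - X @ coef).var() / n.var()            # fraction of spike-count variance explained
        return coef, r2

With the convention a_x = a sin θ_PD and a_y = a cos θ_PD used above, the preferred direction follows as θ_PD = arctan2(a_x, a_y) and the depth of the directional tuning as sqrt(a_x² + a_y²).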

Free arm movements involve temporal patterns, which cannot be captured only by spatial features. The formulation of the directional tuning should therefore be generalized to describe the spatiotemporal features of the movement profile. The neural activity is binned (typically with 100 ms long bins), and the mean velocity in each bin is computed. The tuning of the neural activity to the velocity at a particular lag is assumed to follow the cosine tuning of Equation 4.2 (Lebedev et al., 2005) as:

4.3

N(k) = a_x(l) V_x(k+l) + a_y(l) V_y(k+l) + b(l) V_m(k+l) + c(l) + \varepsilon(l, k)

where N(k) is the spike count in the kth time-bin; V_x(k+l), V_y(k+l), and V_m(k+l) are the mean components and speed of the velocity in the (k+l)th time-bin; l is the relative lag between the velocity and the spike counts (positive or negative l corresponds to rate modulations preceding or succeeding the velocity measurement, respectively); A(l) = [a_x(l) a_y(l)] is the vector of directional tuning parameters with respect to the lagged velocity; b(l) is the speed tuning parameter; c(l) is a bias parameter; and ε(l, k) is the residual error. The coefficients of determination of the single-lag regressions R²(N, V([l])) for l = −L_1, …, L_2, and the strength of the tuning |A(l)|, can be used to evaluate the strength of tuning and determine the most significant lag, i.e., the lag between the neural activity and the velocity to which it is tuned the most.
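The single-lag regression of Equation 4.3 can simply be repeated at every lag to locate the most significant one. The sketch below is an illustrative NumPy implementation under the assumption that the spike counts n and the velocity arrays vx, vy, vm are already binned on the same 100 ms grid; all names are hypothetical.

    import numpy as np

    def scan_lags(n, vx, vy, vm, L1, L2):
        """Fit the single-lag model of Equation 4.3 at every lag l in [-L1, L2];
        return the per-lag R^2, the tuning strength |A(l)|, and the most significant lag."""
        r2, strength = [], []
        for l in range(-L1, L2 + 1):
            # pair the spike count N(k) with the velocity in bin k + l
            if l >= 0:
                nn, sl = n[:len(n) - l], slice(l, len(n))
            else:
                nn, sl = n[-l:], slice(0, len(n) + l)
            X = np.column_stack([vx[sl], vy[sl], vm[sl], np.ones(nn.size)])
            coef, *_ = np.linalg.lstsq(X, nn, rcond=None)    # [a_x(l), a_y(l), b(l), c(l)]
            r2.append(1.0 - (nn - X @ coef).var() / nn.var())
            strength.append(np.hypot(coef[0], coef[1]))      # |A(l)|
        best_lag = int(np.argmax(strength)) - L1
        return np.array(r2), np.array(strength), best_lag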

Spatiotemporal Tuning Curves

To account for the dependence of the neural activity on the velocity profile across several lags, the model is extended according to the multi-lag regression given by (Figure 4.1):


4.4

N(k) = \sum_{l=-L_1}^{L_2} a_x(l) V_x(k+l) + \sum_{l=-L_1}^{L_2} a_y(l) V_y(k+l) + \sum_{l=-L_1}^{L_2} b(l) V_m(k+l) + c + \varepsilon(k)

where L_1 and L_2 are the number of preceding and succeeding lags between the spike counts and the velocity, respectively. This model describes the tuning of the neural activity to the complete velocity profile in the surrounding window of [−L_1, L_2]. The coefficient of determination of the multi-lag regression R²(N, V([−L_1, L_2])) can be interpreted as the fraction of variance in the binned spike counts that is attributed to modulation by the spatiotemporal velocity profile. Expressed as a percentage, it is referred to as the percent velocity modulation (PVM).

It is noted that because the velocity is slowly varying (compared with the bin size), the velocities at different lags are highly correlated. Thus, the multi-lag regression analysis of Equation 4.4 and the lag-by-lag analysis of Equation 4.3 would yield different regression parameters, as demonstrated below. The multi-lag regression analysis accounts for the correlation between the velocities at different lags and describes the tuning to the complete velocity profile. Furthermore, R²(N, V([−L_1, L_2])) cannot be approximated by the sum of the individual coefficients of determination of the single-lag regressions R²(N, V([l])) for l = −L_1, …, L_2. Thus, it is necessary to perform the multi-lag regression in order to quantify the PVM.

The multi-lag regression of Equation 4.4 can be formulated in a matrix notation

4.5

\bar{N} = [\,\bar{V}_x(-L_1) \ldots \bar{V}_x(L_2)\;|\;\bar{V}_y(-L_1) \ldots \bar{V}_y(L_2)\;|\;\bar{V}_m(-L_1) \ldots \bar{V}_m(L_2)\;|\;\bar{1}\,]\,\bar{C} + \bar{\varepsilon} = X_V \bar{C} + \bar{\varepsilon}

where \bar{N} = [N(L_1+1) … N(T−L_2)]^T, and \bar{V}_k(l) = [V_k(L_1+1+l) … V_k(T−L_2+l)]^T (the index k = x, y, m indicates the x- and y-components of the velocity and its magnitude, respectively) are (T − L_1 − L_2) × 1 vectors of spike counts and velocity components, respectively, \bar{1} is a (T − L_1 − L_2) × 1 vector of 1's, and \bar{C} = [a_x(−L_1) a_x(−L_1+1) … a_x(L_2) a_y(−L_1) … a_y(L_2) b(−L_1) … b(L_2) c]^T is a vector of regression coefficients.
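A minimal sketch of how the design matrix X_V of Equation 4.5 can be assembled from the binned velocity; the function and variable names are illustrative, not taken from the original text. Rows correspond to the bins k = L_1+1, …, T−L_2, and the columns hold the lagged x, y, and speed components followed by the constant column.

    import numpy as np

    def build_velocity_design(vx, vy, vm, L1, L2):
        """Assemble X_V of Equation 4.5: lagged V_x, V_y, V_m columns plus a column of 1's."""
        T = len(vx)
        rows = T - L1 - L2                                   # number of usable bins
        cols = []
        for comp in (vx, vy, vm):
            for l in range(-L1, L2 + 1):
                cols.append(comp[L1 + l : L1 + l + rows])    # component at lag l
        cols.append(np.ones(rows))                           # bias column
        return np.column_stack(cols)

The spike-count vector aligned with these rows is n[L1 : T - L2]; regressing it on this matrix (with the regularization discussed under “Linear Regression”) yields the multi-lag tuning curve, and the resulting coefficient of determination, expressed as a percentage, is the PVM.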

The optimal least-square solution of Equation 4.5 is sensitive to measurement noise and becomes unstable when the condition number of the matrix is large (see the section “Least-Square Solution”). During typical experiments the condition number of the matrix X_V is on the order of 10^6 (Figure 4.4). Thus, proper evaluation of Equation 4.5 requires regularization methods, as detailed in the section “Regularization Methods.” The regression parameters of a typical spatiotemporal tuning curve, computed using regularization, are shown in Figure 4.2 and compared with the lag-by-lag tuning curve. The parameters a_x and a_y are shown separately as a function of the lag (top left and right, respectively), where negative lags are preceding and thus predictive, whereas positive lags are succeeding and thus reflective. The magnitude of the directional tuning, i.e., \sqrt{a_x^2(l) + a_y^2(l)}, and the estimated preferred direction (PD) at each lag, PD(l) = \tan^{-1}(a_y(l)/a_x(l)), are depicted in the bottom left and right panels, respectively. The results based on the lag-by-lag tuning analysis defined by Equation 4.3 are shown for comparison (the values of the resulting coefficients are scaled down by a factor of 5 to compensate for the smaller number of coefficients in each regression). It is evident that the lag-by-lag analysis provides only a coarse and highly smoothed estimate of the underlying multi-lag tuning curve. The estimates of the PDs are reliable only at lags where the magnitude of the directional tuning is large and, thus, the fluctuations at other lags are meaningless. For the lags in which the directional tuning is significant, the estimated PD is the same independent of the method and relatively constant across the lags.

The spatiotemporal tuning curve expressed by Equation 4.4 describes how the neural activity is modulated by the velocity profile, and the associated regression analysis quantifies the percent variance attributed to the velocity modulation (PVM).


The distributions of the PVM across an ensemble of 183 neurons recorded during a typical target-hitting experiment are depicted in Figure 4.3. Only a few neurons exhibit velocity modulations in excess of 20% of their variance. On average, the velocity profile accounts for only 3.7% of the total variance, and for half of the neurons it accounts for less than 1.8%.

Tuning to Spatiotemporal Patterns of Velocity

The regularization methods used to compute the spatiotemporal tuning curve are based on decomposing the spatiotemporal velocity profile, given by the matrix X_V, into its principal components (PCs) (see the section “Principal Component Analysis (PCA)”). The PCs of the velocity are uncorrelated linear combinations of the lagged velocity components. The first PC accounts for the maximum fraction of the variance in the velocity profile, and the succeeding PCs account, in order, for the maximum fraction of the remaining variance. Figure 4.4 shows the variance accounted for by individual PCs, and the accumulated variance accounted for by all the initial PCs. It is evident that the late PCs account for a negligible fraction of the total variance. In particular, the first 25 PCs in this example already account for 95% of the variance.
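A hedged sketch of this decomposition: the function below (an assumed helper, not from the chapter) performs SVD-based PCA of a design matrix such as X_V, returning the principal velocity-patterns, the velocity PCs, the percent variance carried by each PC, and the number of PCs needed to reach 95% of the variance.

    import numpy as np

    def velocity_pcs(X):
        """SVD-based PCA of a design matrix (columns are centered first)."""
        Xc = X - X.mean(axis=0)                              # center each column
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        pcs = U * s                                          # p_i = X v_i = sigma_i u_i
        pct_var = 100.0 * s**2 / np.sum(s**2)                # percent variance per PC
        n95 = int(np.searchsorted(np.cumsum(pct_var), 95.0)) + 1
        return Vt.T, pcs, pct_var, n95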

The weights of the linear combinations that define each of the PCs are the principal directions of the covariance matrix of the velocity profile, and represent the principal velocity-patterns (see the section “Principal Component Analysis (PCA)”). The initial principal velocity-patterns in a 1.9 s window during a typical target-hitting experiment are shown in Figure 4.5. Some of the principal velocity-patterns are easily interpreted in terms of directional velocity and acceleration: The first two principal patterns describe low-frequency directional acceleration and directional velocity, respectively. The third principal pattern describes a higher-frequency directional acceleration, with zero center velocity preceded and succeeded by negative and positive directional velocity, respectively. The fourth principal pattern describes a pause between movements in the same direction. The fifth principal pattern describes low-frequency speed, whereas the sixth describes a change in direction of movement. Other components describe higher frequencies of the velocity and acceleration profiles, including high-frequency speed (8th and 16th components) and higher-frequency acceleration (12th component).

The correlated velocity components that compose the matrix X_V (Equation 4.5) can be described alternatively by the uncorrelated velocity PCs. Each velocity PC describes the temporal evolution of the corresponding principal velocity-pattern defined above. The correlations of the spike counts with these PCs represent tuning to spatiotemporal patterns of velocity. For example, the tuning of an M1 neuron to the principal velocity-patterns depicted in Figure 4.5 is shown in Figure 4.6. This neuron seems to be tuned to low-frequency directional acceleration (1st component) and velocity (2nd component), and high-frequency speed (12th component). However, the neuron is not selectively tuned to any single spatiotemporal velocity pattern.

NEURONAL MODULATIONS

The velocity profile accounts for a fraction of the total variance in the spike counts of cortical neurons during arm movements, as suggested for example in Figure 4.3. The remainder of the variance may be attributed to either (1) the neuronal noise associated with the underlying firing activity or (2) other modulating signals, including nonlinear velocity effects. These two possible effects can be differentiated by quantifying the percent variance that is attributed to the neuronal noise, while the remaining variance is attributed to the other modulating signals.

Statistically, neural spike trains are customarily analyzed using the mathematics of point processes (Perkel and Bullock, 1968; Dayan and Abbott, 2001), where each point represents the timing of a spike. The simplest point process, the Poisson process, is memory-less, i.e., the probability of spike occurrence is independent of the history of the spike train. The simplest Poisson process, the homogeneous Poisson process, is characterized by a constant instantaneous spike rate, and thus is inadequate for describing firing rate modulations. Thus, the simplest point process that can describe rate modulations is the inhomogeneous Poisson process, which is characterized by a time-varying instantaneous spike rate that is independent of the history of the spike train (Dayan and Abbott, 2001; Snyder 1975; Johnson 1996). Inhomogeneous Poisson processes in which the instantaneous spike rate is itself a stochastic process are referred to as doubly stochastic Poisson processes (Snyder 1975; Johnson 1996). In doubly stochastic Poisson processes, the probability of spike occurrence given the instantaneous rate is described by Poisson statistics. The spike trains recorded during arm movements can be considered as realizations of doubly stochastic Poisson processes because the instantaneous rate depends on a number of biologically relevant stochastic signals. These signals include, for example, the position and velocity of the arm and the muscle forces. These multiple signals affect the instantaneous rate according to the individual tuning of each neuron.

Assuming that the spike trains are generated by doubly stochastic Poisson processes facilitates the analysis of their statistics, which are determined by two factors: stochastic changes in the instantaneous firing rate and the Poisson probability of spike occurrence.


The distribution of spike counts N_b in bins of size b is determined by the average instantaneous spike rate during the bin, Λ_b, and its statistics are related to the statistics of Λ_b according to (Snyder 1975; Appendix):

4.6

E[N_b] = E[\Lambda_b]
Var[N_b] = Var[\Lambda_b] + E[\Lambda_b] = Var[\Lambda_b] + E[N_b]

The last relationship can be interpreted as a decomposition of the total variance in the spike counts into the variance of the underlying information-bearing parameter, or rate-modulating signal, Var[Λ_b], and the variance that would occur if N_b were generated by a homogeneous Poisson process, E[N_b]. Thus, the variance of the rate-modulating signal is the excess variance of the spike counts beyond that of a homogeneous Poisson process. Furthermore, as the instantaneous rate of a homogeneous Poisson process does not vary in time, it cannot be modulated by any relevant biological signal and consequently its variance may be considered as noise. Thus, the signal-to-noise ratio (SNR) can be defined as:

4.7

SNR = \frac{Var[\Lambda_b]}{Var[noise]} = \frac{Var[N_b] - E[N_b]}{E[N_b]} = F - 1

where F is the Fano factor, defined as the ratio between the variance and the mean of the spike count (Dayan and Abbott, 2001). In addition, the percent overall modulation (POM) is defined by expressing the variance of the rate-modulating signal as a percentage of the variance in the spike counts:

4.8

POM = \frac{Var[\Lambda_b]}{Var[N_b]} \cdot 100\% = \frac{Var[N_b] - E[N_b]}{Var[N_b]} \cdot 100\% = \left(1 - \frac{1}{F}\right) \cdot 100\%

Compared to the Fano factor, the POM emphasizes the variability due to the underlying stochastic modulations and distinguishes it from the high variability inherent in the Poisson process (Zacksenhouse et al., 2007a).
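Under these definitions the POM reduces to simple moment estimates of the binned spike counts. A minimal sketch, with counts a hypothetical NumPy array of binned spike counts for one neuron:

    import numpy as np

    def percent_overall_modulation(counts):
        """POM of Equation 4.8; values near zero (or negative) are consistent with a
        homogeneous Poisson process."""
        fano = counts.var() / counts.mean()                  # Fano factor F = Var[N_b] / E[N_b]
        return (1.0 - 1.0 / fano) * 100.0

Applying the function to every neuron in the ensemble, e.g. [percent_overall_modulation(c) for c in counts_by_neuron], yields a POM distribution like the one shown in Figure 4.7 (top).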

The POM is zero for the homogeneous Poisson process and positive for the inhomogeneous Poisson process. However, if the underlying point process is not Poisson, the variance of the spike counts may be smaller than the mean, and the POM may be negative. It should be noted that negative values may also result from finite-length spike trains due to the variance of the estimate, as demonstrated in the following text.

The distribution of the POM in the ensemble of 183 neurons analyzed in Figure 4.3 is depicted in Figure 4.7 (top). The POM of the recorded neurons was positive for 83% of the neurons in this ensemble. The distribution of the POM for simulated homogeneous Poisson processes having the same mean spike counts as the recorded neurons is shown in the bottom left panel, indicating that the standard deviation of the estimated POM is σ_POM = 1.2%. The POM of the recorded neurons (Figure 4.7, top) was above 2σ_POM = 2.4% for 65% of the neurons, whereas only 6.5% of the neurons exhibited negative POM below −2σ_POM = −2.4%.

The POM provides a scale against which the PVM can be compared in order to determine the relative role of the velocity profile in modulating the firing rate. The scatter plot in Figure 4.8 depicts the correlation between the PVM and POM for the same ensemble of 183 neurons during the experiment analyzed in Figure 4.3 and Figure 4.7. The high correlation indicates that the activity of cortical neurons that exhibited larger rate modulations was, in general, better correlated with the velocity profile. As expected, the PVM is usually smaller than the POM, in agreement with the interpretation that the POM describes the percent variance attributed to overall modulations, including the velocity modulation. The slope of the linear relationship is only 0.22, suggesting that the PVM accounts for only a small fraction of the POM, and that additional signals, other than the velocity profile, modulate the neural activity (Zacksenhouse et al., 2007a).


The POM of the recorded neurons can also be compared to the POM of simulated neurons that are modulated only by the velocity profile (bottom right). The simulated neural activity is generated using inhomogeneous Poisson processes with a rate parameter derived from Equation 4.4, based on the estimated multi-lag tuning curves of the recorded neurons. When the activity is modulated only by the velocity profile, the statistics of the resulting POM distribution are similar to those of the PVM (Figure 4.3), but smaller than those for the recorded neurons. This comparison supports the above conclusion that the POM captures additional signals that modulate the neural activity.
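A hedged sketch of this control, assuming a multi-lag design matrix X_V (as in Equation 4.5) and a fitted coefficient vector C for one neuron; the simulated counts are Poisson draws around the velocity-driven rate, which is clipped at zero because the linear tuning model can produce negative values.

    import numpy as np

    def simulate_velocity_modulated_counts(X_V, C, rng=None):
        """Draw spike counts from an inhomogeneous Poisson model whose mean in each bin
        is the velocity-driven rate of the fitted multi-lag tuning curve."""
        rng = np.random.default_rng() if rng is None else rng
        rate = np.clip(X_V @ C, 0.0, None)                   # expected counts per bin
        return rng.poisson(rate)

The POM of these simulated counts reflects velocity modulation alone, analogous to the bottom-right panel of Figure 4.7.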

MOVEMENT PREDICTION

Neuronal analysis indicates that the spike counts generated by individual neurons encode the planned velocity, as detailed in the section “Neuronal Encoding and Tuning Curves.” BMI technology is based on decoding the neuronal activity and extracting the movement-related signals, and in particular the planned velocity. Several decoding techniques are possible, including the linear (Wiener) filter, the Kalman filter, and nonlinear filters. Typical BMI experiments indicate that the Kalman and nonlinear filters do not consistently outperform the linear filter (Carmena et al., 2003; Wessberg et al., 2000). Here we concentrate on describing the linear filter and its improvement using regularization methods, and demonstrate its superior performance.

The movement signal of interest, like the velocity components, can be predicted by a linear filter of the spike counts recorded from an ensemble of neurons during the preceding time lags, according to:

4.9

\hat{M}(k) = \omega_0 + \sum_{j=1}^{n} \sum_{l=-L}^{0} \omega_j(l) N_j(k+l)

where \hat{M}(k) is the predicted movement signal (e.g., the components of the velocity V_x and V_y, or the grip force), ω_0 is the bias term, and ω_j(l) is the weight given to the spike counts elicited by the jth neuron during the preceding lth lag. The filter described in Equation 4.9 is of the form of a moving average across multiple neurons, with a window determined by the number of lags L.

The bias and weights are determined from the training section of the BMI experiment, in which both the neural activity and the movement signals are recorded, using the multivariable regression given by:

4.10

M(k) = \omega_0 + \sum_{j=1}^{n} \sum_{l=1}^{L} \omega_j(l) N_j(k-l) + \varepsilon(k)

where M(k) is the recorded movement signal, and ε(k) is the residual error.

The multi-lag multineuron regression of Equation 4.10 can be formulated in a matrix notation as:

4.11

\bar{M} = [\,\bar{N}_1(1) \ldots \bar{N}_1(L)\;|\;\bar{N}_2(1) \ldots \bar{N}_2(L)\;|\;\ldots\;|\;\bar{N}_n(1) \ldots \bar{N}_n(L)\;|\;\bar{1}\,]\,\bar{W} + \bar{\varepsilon} = X_{N,L} \bar{W} + \bar{\varepsilon}

where \bar{M} = [M(L+1) … M(T)]^T and \bar{N}_j(l) = [N_j(l) … N_j(T − L + l)]^T are (T − L) × 1 vectors of the measured movement signal and the properly lagged spike counts of the jth neuron, respectively, \bar{1} is a (T − L) × 1 vector of 1's, \bar{W} = [ω_1(L) ω_1(L−1) … ω_1(1) ω_2(L) … ω_2(1) … ω_n(L) … ω_n(1) ω_0]^T is a vector of regression coefficients, and n is the number of neurons.
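A minimal sketch of the training step of Equations 4.10–4.11; counts is assumed to be a (T × n) NumPy array of binned spike counts and m the movement signal on the same grid, and all names are illustrative. Each block of L columns holds one neuron's counts at lags L, …, 1, matching the ordering of the weight vector above, and the last column is the constant term.

    import numpy as np

    def build_spike_design(counts, L):
        """Assemble X_{N,L} of Equation 4.11 from a (T, n) array of binned spike counts."""
        T, n = counts.shape
        cols = []
        for j in range(n):
            for l in range(L, 0, -1):                        # lags L, L-1, ..., 1 for neuron j
                cols.append(counts[L - l : T - l, j])        # N_j(k - l) for k = L+1, ..., T
        cols.append(np.ones(T - L))                          # bias column
        return np.column_stack(cols)

    def train_linear_filter(counts, m, L):
        """Least-squares weights of Equation 4.11 (a regularized solver can replace lstsq)."""
        X = build_spike_design(counts, L)
        W, *_ = np.linalg.lstsq(X, m[L:], rcond=None)
        return W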

The coefficient of determination of the multi-variable regression of Equation 4.11, R²(M, N([−L, 1])), describes the fraction of variance in the measured movement signal that is correlated with the linear filter and provides a measure of the fidelity of the reconstruction. However, in movement prediction we are interested in the ability to predict the movement signal from new measurements of neural activity.


Thus, the performance of the filter is assessed using testing records, which were not used to determine the filter coefficients. The quality of the prediction is evaluated on a testing record using the filter coefficients determined from a nonoverlapping training record. The fidelity of the prediction can be assessed using either (1) the coefficient of regression R(M, \hat{M}) between the predicted movement signal \hat{M} and the measured signal M during the testing record, or (2) the variance reduction VR(M, \hat{M}) given by

4.12

VR(M, \hat{M}) = 1 - \frac{\sum_{\mathrm{test}} (M_i - \hat{M}_i)^2}{\sum_{\mathrm{test}} (M_i - \bar{M})^2}
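A small sketch of Equation 4.12 on a testing record, with m_test the measured signal and m_hat the filter prediction (both hypothetical NumPy arrays):

    import numpy as np

    def variance_reduction(m_test, m_hat):
        """VR of Equation 4.12: one minus the prediction error normalized by the
        variance of the measured signal on the testing record."""
        num = np.sum((m_test - m_hat) ** 2)
        den = np.sum((m_test - m_test.mean()) ** 2)
        return 1.0 - num / den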

The optimal least-square solution of Equation 4.11 is sensitive to measurement noise and may become unstable when the condition number of the matrix is large (see the section “Least-Square Solution”). During typical BMI experiments, the condition number of the matrix X_{N,L} is on the order of 2000. The effect of different regularization parameters on the fidelity of velocity prediction, based on a 10 min training record and a 2 min testing record taken from a typical BMI experiment, is shown in Figure 4.9. It is evident that prediction can be improved by using proper regularization. The best coefficient of determination is obtained with λ = 94, resulting in R²_{λ=94}(M, \hat{M}) = 0.676 (i.e., R_{λ=94}(M, \hat{M}) = 0.82), implying that the predicted velocity (depicted in Figure 4.10, middle panel) accounts for (explains) 67% of the variance of the measured velocity (Figure 4.10, top panel). In comparison, the least square (LS) regression, corresponding to λ = 0, results in R²_{LS}(M, \hat{M}) = 0.593 (i.e., R_{LS}(M, \hat{M}) = 0.77). Thus, with proper regularization, the prediction can capture an additional 8% of the variations in the velocity.

The performance of a Kalman filter (Brown and Hwang, 1997; Wu et al., 2004) that was trained and tested on the same records of data is shown for comparison in the bottom panel of Figure 4.10. The resulting coefficient of determination of R²(M, \hat{M}) = 0.6 (i.e., R(M, \hat{M}) = 0.77) indicates that the Kalman filter performs as well as the least-squares linear regression but underperforms the regularized linear regression.

ENSEMBLE ANALYSIS

Principal Neurons

The activity of the recorded neurons may be correlated either due to common modulating signals or due to correlation in the neural noise. Ensembles of neurons with correlated activity can be identified using principal component analysis (PCA), as detailed in the section “Principal Component Analysis (PCA).” PCA transforms the sequences of normalized spike counts recorded from n neurons into an ordered set of n uncorrelated sequences, known as principal components (PCs). Each PC is a weighted linear combination of the n normalized (zero mean and unit variance) spike count sequences, and thus may be considered as the normalized activity of a principal neuron. The PCs are ordered according to their variance, with the first PC accounting for most of the variance. The associated unit-length weight vector is the first eigenvector of the covariance matrix of the neural activity, and describes the ensemble of neurons whose superposition carries most of the variance. Each subsequent PC accounts for the maximum of the remaining variance in the neural activity.

The percent variance carried by the different PCs, or principal neurons, during a typical session of a target-hitting experiment is described in Figure 4.11. The percent variance drops significantly for the first few principal components before reaching an approximately constant, nonzero, level. This structure agrees well with the assumption that the correlated signals in the spike counts are embedded in a largely uncorrelated noise. If the neural activities from different neurons are assumed to be conditionally independent, i.e., any correlations in the spike counts are attributed only to correlations in the underlying firing rates, the covariance matrix of the normalized neural activity can be decomposed as (see [A.17]):
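A hedged sketch of the computation described here: the spike counts of each neuron are normalized to zero mean and unit variance (Equation A.16), and the percent variance carried by each principal neuron is obtained from the singular values of the normalized matrix. counts is a hypothetical (T × n) array; silent neurons (zero variance) are assumed to have been removed.

    import numpy as np

    def principal_neurons(counts):
        """PCA of normalized spike counts: weight vectors of the principal neurons
        (columns of the returned matrix) and the percent variance each one carries."""
        Z = (counts - counts.mean(axis=0)) / counts.std(axis=0)  # normalized spike counts
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        pct_var = 100.0 * s**2 / np.sum(s**2)
        return Vt.T, pct_var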


4.13

\tilde{X}_N^T \tilde{X}_N = \tilde{X}_\Lambda^T \tilde{X}_\Lambda + \mathrm{Diag}\!\left(\frac{E[N_i]}{\mathrm{var}(N_i)}\right)

where \tilde{X}_N = [\tilde{N}_1 \tilde{N}_2 … \tilde{N}_n] is the matrix of the normalized spike counts (Equation A.16) of all the neurons, \tilde{X}_\Lambda is the corresponding matrix of normalized rate parameters (Equation A.18), and Diag(·) is a diagonal matrix with F_i^{-1} = E[N_i]/var(N_i), i = 1, …, n, on the diagonal. Thus, the gradually sloping level of the percent variance carried by the PCs can be attributed to the gradually varying Fano factors F_i. In contrast, the excess variance of the initial PCs above the background level reflects correlated activity, which can be attributed to common signals that contribute to rate modulations.

Ensemble Permanence

Under the above assumptions, the initial principal neurons define the neural ensembles that carry the common modulating signals. Thus, their identity, and in particular their permanence with time, reflects the dynamics of neural computation and impacts the ability to extract the relevant modulating signals (Zacksenhouse et al., 2005).

In order to evaluate the permanence of the principal neurons with time, their identity is determined using PCA on small windows of time, and compared across time. Each principal neuron is defined by a single vector v_i in the n-dimensional neural space, which describes the relative weight given to each recorded neuron. Similarly, the initial m principal neurons define an m-dimensional subspace. The permanence of the m-dimensional subspaces defined by the initial m principal neurons can be assessed using either: (1) the cosine angle between the subspaces, or (2) changes in the variance carried by these subspaces.

Specifically, let v_1(k) be the first principal neuron in the kth window, as depicted in Figure 4.12 (left panel) for the experiment analyzed in Figure 4.11. Visual inspection suggests that the same ensemble of neurons contributes to the first principal neuron along the experiment. This conclusion is quantified by the percent variance carried by v_1(k) in the lth window (top right) and the cosine angle between the first principal neurons, cos(v_1(k), v_1(l)) = v_1(k)^T v_1(l) (bottom right).
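A minimal sketch of the window-by-window comparison; counts is a hypothetical (T × n) array of binned spike counts and win the window length in bins. The sign of each first principal neuron is fixed arbitrarily, since the SVD determines it only up to a sign.

    import numpy as np

    def first_pn_permanence(counts, win):
        """First principal neuron v_1(k) of each window and the cosine angles
        v_1(k)^T v_1(l) between windows."""
        T, n = counts.shape
        v1 = []
        for start in range(0, T - win + 1, win):
            Z = counts[start:start + win]
            Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)         # normalize within the window
            _, _, Vt = np.linalg.svd(Z, full_matrices=False)
            v = Vt[0]
            v1.append(v if v.sum() >= 0 else -v)             # fix the arbitrary sign
        v1 = np.array(v1)                                    # one row per window
        return v1, v1 @ v1.T                                 # cosine angles between windows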

LINEAR REGRESSION

The regression problem of Equations 4.5 and 4.11 can be stated in vector form as:

4.14

y = Xc

where the data matrix X is a K × J matrix with K > J. For proper analysis, the data matrix includes a column of ones, which accounts for the bias term, whereas each other column is normalized to zero mean (and optionally to unit variance).

Principal Component Analysis (PCA)

The normalized data matrix X can be decomposed using singular value decomposition (SVD), as (Hansen 1997):

4.15

X = U \Sigma V^T = \sum_{i=1}^{r} \sigma_i u_i v_i^T

where U = [u_1, …, u_K] ∈ ℝ^{K×K} and V = [v_1, …, v_J] ∈ ℝ^{J×J} are orthonormal matrices, Σ = diag(σ_1, …, σ_J) ∈ ℝ^{K×J} with σ_1 ≥ σ_2 ≥ … ≥ σ_J ≥ 0, and the rank r ≤ J is the number of strictly positive singular values σ_i. Note that:


4.16

X v_i = U \Sigma V^T v_i = \sigma_i u_i
X^T u_i = V \Sigma^T U^T u_i = \sigma_i v_i

Because U^T U = I_{K×K}, the correlation matrix of the data is given by:

4.17

R = X^T X = V \Sigma^2 V^T \in \mathbb{R}^{J \times J}, \quad \text{where } \Sigma^2 = \Sigma^T \Sigma = \mathrm{diag}(\sigma_1^2, \ldots, \sigma_J^2) \in \mathbb{R}^{J \times J}

Equation 4.17 defines the principal component analysis (PCA) of the data. The vectors v_i ∈ ℝ^{J×1} are the eigenvectors of the covariance matrix of the data, with corresponding eigenvalues σ_i². The eigenvectors are referred to as principal velocity-patterns when considering the velocity profile, or principal neurons when considering neural data. The principal components (PCs) of the data are the projections of the data on the eigenvectors v_i, i = 1, …, J; using Equation 4.15, p_i = X v_i = U Σ V^T v_i = σ_i u_i. The PCs are referred to as the velocity PCs in the case of velocity data, or the activity of the principal neurons in the case of neural data. The variance of each PC is determined by the corresponding eigenvalue as var(p_i) = σ_i², and the percent variance carried by each PC is given by:

4.18

\%\,\mathrm{var}(p_i) = \frac{\sigma_i^2}{\sum_{j=1}^{J} \sigma_j^2}

Least-Square Solution

The least-square solution can be expressed in terms of the PCA (or SVD) as (Hansen 1997; Fierro et al., 1997):

4.19

c_{LS} = \sum_{i=1}^{r} \frac{u_i^T y}{\sigma_i}\, v_i

The resulting mean square error of the regression (MSE) is given by:

4.20

MSE_{LS} = MSE(c_{LS}) = \| y - X c_{LS} \|^2 = \sum_{i=r+1}^{K} (u_i^T y)^2

where the vector y was expanded in the orthonormal basis defined by the vectors u_i as:

y = \sum_{i=1}^{K} (u_i^T y)\, u_i

The LS optimal solution is highly sensitive to measurement errors or uncertainties because, from Equation 4.19, it depends on the inverse of the singular values. Small singular values can dominate the solution and magnify errors in the measurement vector y.


Thus, it is common to use regularization to stabilize the solution. Two common regularization methods are (Hansen 1997; Fierro et al., 1997): (1) truncated SVD (tSVD), and (2) Tikhonov regularization.

Regularization Methods

The truncated-LS regression is obtained by truncating the LS regression of Equation 4.19 at k ≤ r ≤ J:

4.21

c_k = \sum_{i=1}^{k} \frac{u_i^T y}{\sigma_i}\, v_i

The resulting MSE is:

4.22

MSE_k = MSE(c_k) = \| y - X c_k \|^2 = \sum_{i=k+1}^{K} (u_i^T y)^2 = MSE_{LS} + \sum_{i=k+1}^{r} (u_i^T y)^2

which increases as more terms in the LS regression are truncated.

Tikhonov regularization stabilizes the optimal LS solution by minimizing a combination of the MSE and the size of the regression vector:

\min_c \left[\, \| y - Xc \|^2 + \lambda^2 \| Lc \|^2 \,\right]

When L = I, the regularized regression vector with a given λ is (Elden 1982):

4.23

c_\lambda = \sum_{i=1}^{r} \frac{\sigma_i^2}{\sigma_i^2 + \lambda^2}\, \frac{u_i^T y}{\sigma_i}\, v_i

The resulting MSE is:

4.24

MSE_\lambda = MSE(c_\lambda) = \| y - X c_\lambda \|^2 = \sum_{i=1}^{r} \frac{\lambda^4}{(\sigma_i^2 + \lambda^2)^2} (u_i^T y)^2 + \sum_{i=r+1}^{K} (u_i^T y)^2

Because

\frac{\partial MSE_\lambda}{\partial \lambda^2} = 2\lambda^2 \sum_{i=1}^{r} \frac{\sigma_i^2}{(\sigma_i^2 + \lambda^2)^3} (u_i^T y)^2 > 0,

the MSE increases with λ and the minimum is achieved for the optimal LS solution c_{λ=0} = c_{LS}.
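Both regularized estimates follow directly from the SVD of the data matrix. A minimal sketch of Equations 4.21 and 4.23 (the Tikhonov case with L = I); X and y are the design matrix and target vector of Equation 4.14, and all singular values are assumed strictly positive.

    import numpy as np

    def tsvd_solution(X, y, k):
        """Truncated-SVD regression of Equation 4.21: keep only the k largest singular values."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

    def tikhonov_solution(X, y, lam):
        """Tikhonov-regularized regression of Equation 4.23 with L = I."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        filt = s**2 / (s**2 + lam**2)                        # filter factors sigma_i^2 / (sigma_i^2 + lambda^2)
        return Vt.T @ (filt * (U.T @ y) / s)

The truncation level k or the parameter λ is then chosen on a validation or testing record, e.g., by the coefficient of determination as in Figure 4.9.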


By varying the Tikhonov parameter λ, it is possible to trade off between robustness to uncertainties (large λ) and performance on training data (small λ).

CONCLUSIONS

The neural activity during general, free reaching movements represents the spatiotemporal profile of the movement. This representation can be captured by a multi-lag regression of the neural activity on the velocity profile. However, given the significant inter-lag correlations, this representation cannot be captured well by combining single-lag analyses. Interestingly, the neural activity is represented better in terms of the multi-lag tuning curves, which reveal how the preferred direction and depth of modulation change with the lag, rather than in terms of tuning to the principal components of the velocity profile.

Likewise, velocity prediction involves the spatiotemporal activity across an ensemble of neurons at multiple lags. The resulting high-dimensional regression problem requires regularization techniques for stabilizing the solution in the presence of neural noise and uncertainties due to the potential contributions of other modulating signals.

A critical issue for future BMI improvements is whether the neural activity encodes other relevant signals, aside from the spatiotemporal profile of the movement velocity. To assess this issue we developed a measure, termed the percent overall modulation, which quantifies the percent variance that can be attributed to neural modulations under the Poisson assumption. Comparing the percent overall modulation with the percent variance that can be attributed to the velocity profile suggests that the neural activity is modulated by additional signals, whose exact nature is still under investigation.

REFERENCES

1. Alexander GE, Crutcher MD. Neural representations of the target (goal) of visually guided arm movements in three motor areas of the monkey. J Neurophysiol. 1990;64:164–178. [PubMed: 2388063]
2. Ashe J, Georgopoulos AP. Movement parameters and neural activity in motor cortex and area 5. Cereb Cortex. 1994;4:590–600. [PubMed: 7703686]
3. Brown EN, Kass RE, Mitra PP. Multiple neural spike train data analysis: state-of-the-art and future challenges. Nat Neurosci. 2004;7(5):456–461. [PubMed: 15114358]
4. Brown RG, Hwang PYC. Introduction to random signals and applied Kalman filtering. 3rd ed. Wiley; New York: 1997.
5. Carmena JM, Lebedev MA, Crist RE, O'Doherty JE, Santucci DM, Dimitrov D, Patil PJ, Henriquez CS, Nicolelis MAL. Learning to control a brain–machine interface for reaching and grasping by primates. PLoS Biol. 2003;1:193–208. [PMC free article: PMC261882] [PubMed: 14624244]
6. Dayan P, Abbott LP. Theoretical neuroscience. MIT Press; Cambridge, MA: 2001.
7. Elden L. A weighted pseudoinverse, generalized singular values, and constrained least squares problems. BIT. 1982;22:487–502.
8. Fierro RD, Golub GH, Hansen PC, O'Leary DP. Regularization by truncated total least squares. SIAM J Sci Comput. 1997;18:1223–1241.
9. Georgopoulos AP. Neural aspects of cognitive motor control. Curr Opin Neurobiol. 2000;10:238–241. [PubMed: 10753794]
10. Georgopoulos AP, Ashe J, Smyrnis N, Taira M. The motor cortex and the coding of force. Science. 1992;256:1692–1695. [PubMed: 1609282]
11. Georgopoulos AP, Lurito JT, Petrides M, Schwartz AB, Massey JT. Mental rotation of the neuronal population vector. Science. 1989;243:234–236. [PubMed: 2911737]
12. Georgopoulos AP, Schwartz AB, Kettner RE. Neuronal population coding of movement direction. Science. 1986;233:1416–1419. [PubMed: 3749885]
13. Hansen PC. Rank-deficient and discrete ill-posed problems. SIAM; Philadelphia: 1997.
14. Johnson DH. Point process models of single neuron discharge. J Comp Neurosci. 1996;3:275–299. [PubMed: 9001973]
15. Johnson MTV, Mason CR, Ebner TJ. Central processes for the multiparametric control of arm movements in primates. Curr Opin Neurobiol. 2001;11:684–688. [PubMed: 11741018]
16. Lebedev MA, Carmena JM, O'Doherty JE, Zacksenhouse M, Henriquez CS, Principe J, Nicolelis MAL. Cortical ensemble adaptation to represent velocity of an artificial actuator controlled by a brain–machine interface. J Neurosci. 2005;25(19):4681–4693. [PubMed: 15888644]
17. Moran DW, Schwartz A. Motor cortical representation of speed and direction during reaching. J Neurophysiol. 1999;82:2676–2692. [PubMed: 10561437]
18. Nicolelis MAL. Actions from thoughts. Nature. 2001;409(18):403–407. [PubMed: 11201755]
19. Nicolelis MAL. Brain–machine interfaces to restore motor functions and probe neural circuits. Nat Rev Neurosci. 2003;4:417–422. [PubMed: 12728268]
20. Paz R, Wise SP, Vaadia E. Viewing and doing: similar cortical mechanisms for perceptual and motor learning. Trends Neurosci. 2004;27:496–503. [PubMed: 15271498]
21. Perkel DH, Bullock TH. Neural coding. Neurosci Res Progr Bull. 1968;6:221–248.
22. Pouget A, Dayan P, Zemel RS. Inference and computation with population codes. Annu Rev Neurosci. 2003;26:381–410. [PubMed: 12704222]
23. Scott SH. The role of primary motor cortex in goal directed movements: insights from neurophysiological studies on non-human primates. Curr Opin Neurobiol. 2003;13:671–677. [PubMed: 14662367]
24. Snyder DL. Random point processes. Wiley; New York: 1975.
25. Wessberg J, Stambaugh CR, Kralik JD, Beck PD, Laubach M, Chapin JK, Kim J, Biggs SJ, Srinivasan MA, Nicolelis MAL. Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature. 2000;408:361–365. [PubMed: 11099043]
26. Wu W, Shaikhouni A, Donoghue JP, Black MJ. Closed-loop neural control of cursor motion using a Kalman filter. Proc 26th Int Conf IEEE EMBS; San Francisco, CA; September 2004. [PubMed: 17271209]
27. Zacksenhouse M, Lebedev MA, Carmena JM, O'Doherty JE, Henriquez CS, Nicolelis MAL. Correlated ensemble activity increased when operating a brain machine interface (extended abstract). Comp Neuroscience (CNS-05). 2005.
28. Zacksenhouse M, Lebedev MA, Carmena JM, O'Doherty JE, Henriquez CS, Nicolelis MAL. Cortical modulations increase in early sessions with brain–machine interfaces. PLoS ONE. 2007a;2(7):e619. [PMC free article: PMC1919433] [PubMed: 17637835]

APPENDIX—DERIVATION OF THE VARIANCE RELATIONSHIP

The relationships between the statistics of the spike counts and those of the underlying stochastic information process are stated in Snyder (1975), with a proof based on the moment generating function. Here we provide a direct proof for the variance relationship, and for the cross-covariance between the spike counts elicited by two doubly stochastic Poisson processes.

Variance Relationship For Doubly Stochastic Poisson Processes

Let {N_t} be a doubly stochastic Poisson process with intensity {λ_t(x_t)}, where x_t is the underlying information process. The mean and variance of the number of spikes in bins of width b, {N_b}, are related to the mean and variance of the underlying stochastic Poisson parameter:

\Lambda_b = \int_{t_0}^{t_0 + b} \lambda_\sigma(x_\sigma)\, d\sigma

according to:

A.1

E[N_b] = E[\Lambda_b]
Var[N_b] = Var[\Lambda_b] + E[\Lambda_b] = Var[\Lambda_b] + E[N_b]

Proof

By definition:

A.2

E[N_b] = \sum_{n=0}^{\infty} n \Pr(N_b = n) \quad \text{and} \quad E[N_b^2] = \sum_{n=0}^{\infty} n^2 \Pr(N_b = n)


Using the method of conditioning (Snyder 1975, Equation 6.1 there)

A.3

\Pr(N_b = n) = E[\Pr(N_b = n \mid \Lambda_b)] = E[(n!)^{-1} \Lambda_b^n \exp(-\Lambda_b)]

where the expectation is with respect to the stochastic parameter Λ_b.

Substituting Equation A.3 in Equation A.2:

A.4

E[N_b] = \sum_{n=0}^{\infty} n\, E[(n!)^{-1} \Lambda_b^n \exp(-\Lambda_b)]

and

E[N_b^2] = \sum_{n=0}^{\infty} n^2\, E[(n!)^{-1} \Lambda_b^n \exp(-\Lambda_b)]

Because the expectation is with respect to Λ_b, it can be moved outside the summation, so the first part of Equation A.4 implies that:

A.5

E[N_b] = E\!\left[\sum_{n=0}^{\infty} n (n!)^{-1} \Lambda_b^n \exp(-\Lambda_b)\right] = E\!\left[\exp(-\Lambda_b)\, \Lambda_b \sum_{m=0}^{\infty} (m!)^{-1} \Lambda_b^m\right] = E[\Lambda_b]

where m = n − 1, and the last step is based on the equality

A.6

\sum_{m=0}^{\infty} \frac{\Lambda_b^m}{m!} = \exp(\Lambda_b)

This proves the relationship between the mean of the spike counts and the underlying rate parameter.

The second part of Equation A.4 implies that:


A.7

E[N_b^2] = E\!\left[\sum_{n=0}^{\infty} n^2 (n!)^{-1} \Lambda_b^n \exp(-\Lambda_b)\right]
= E\!\left[\exp(-\Lambda_b)\, \Lambda_b^2 \sum_{l=0}^{\infty} (l!)^{-1} \Lambda_b^l\right] + E\!\left[\exp(-\Lambda_b)\, \Lambda_b \sum_{m=0}^{\infty} (m!)^{-1} \Lambda_b^m\right]
= E[\Lambda_b^2] + E[\Lambda_b]

where l = n − 2 and m = n − 1.

Finally, the last two equations can be manipulated to derive the following relationship between the respective variances:

A.8

Var[N_b] = E[\{N_b - E[N_b]\}^2] = E[N_b^2] - E[N_b]^2
= E[\Lambda_b^2] + E[\Lambda_b] - E[\Lambda_b]^2 = E[\{\Lambda_b - E[\Lambda_b]\}^2] + E[\Lambda_b]
= Var[\Lambda_b] + E[\Lambda_b]

This completes the proof of the variance relationship stated in (A.1).
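The variance relationship can also be checked numerically. The sketch below draws a random rate Λ_b for each bin and a Poisson count around it, and compares the empirical variance of the counts with Var[Λ_b] + E[Λ_b]; the gamma distribution of the rate is an arbitrary choice made only for the illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    lam = rng.gamma(shape=2.0, scale=1.5, size=200_000)      # stochastic rate parameter Lambda_b
    counts = rng.poisson(lam)                                # doubly stochastic Poisson counts N_b

    print(counts.var())                                      # Var[N_b]
    print(lam.var() + lam.mean())                            # Var[Lambda_b] + E[Lambda_b]  (Equation A.1)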

The cross-correlation and covariance relationships between two spike trains that are generated by two doubly stochastic Poisson processes are derived next.

Cross Variance Relationship for Doubly Stochastic Poisson Processes

Let {N_1} and {N_2} be the numbers of spikes in bins of width b that were elicited by two doubly stochastic Poisson processes with underlying stochastic Poisson parameters Λ_1 and Λ_2 (the index b, indicating the bin width, is omitted for simplicity, but is the same for both processes). If the two processes are conditionally independent, the cross-covariance between the spike counts is

A.9

E[N_1 N_2] = E[\Lambda_1 \Lambda_2]
\mathrm{cov}[N_1, N_2] = \mathrm{cov}[\Lambda_1, \Lambda_2]

Proof

By definition:

A.10

E[N_1 N_2] = \sum_{n_1=0}^{\infty} \sum_{n_2=0}^{\infty} n_1 n_2 \Pr(N_1 = n_1 \;\&\; N_2 = n_2)

Using the method of conditioning

A.11

\Pr(N_1 = n_1 \;\&\; N_2 = n_2) = E[\Pr(N_1 = n_1 \;\&\; N_2 = n_2 \mid \Lambda_1, \Lambda_2)]

where the expectation is with respect to the stochastic parameters Λ_1 and Λ_2. Conditional independence implies that given the two rate parameters, the two spike counts are independent, i.e., any correlation between the two spike counts is generated only by correlation between the underlying rate parameters. Hence:


A.12

\Pr(N_1 = n_1 \;\&\; N_2 = n_2) = E[\Pr(N_1 = n_1 \mid \Lambda_1) \cdot \Pr(N_2 = n_2 \mid \Lambda_2)]
= E[(n_1!)^{-1} \Lambda_1^{n_1} \exp(-\Lambda_1)\, (n_2!)^{-1} \Lambda_2^{n_2} \exp(-\Lambda_2)]

Substituting Equation A.12 in Equation A.10, and making the change of indices m_1 = n_1 − 1 and m_2 = n_2 − 1, results in:

A.13

E[N_1 N_2] = \sum_{n_1=0}^{\infty} \sum_{n_2=0}^{\infty} n_1 n_2\, E[(n_1!)^{-1} \Lambda_1^{n_1} \exp(-\Lambda_1)\, (n_2!)^{-1} \Lambda_2^{n_2} \exp(-\Lambda_2)]
= E\!\left[\Lambda_1 \Lambda_2 \exp(-\Lambda_1) \exp(-\Lambda_2) \sum_{m_1=0}^{\infty} (m_1!)^{-1} \Lambda_1^{m_1} \sum_{m_2=0}^{\infty} (m_2!)^{-1} \Lambda_2^{m_2}\right]
= E[\Lambda_1 \Lambda_2]

Hence (using also Equation A.1),

A.14

\mathrm{cov}[N_1, N_2] = \mathrm{cov}[\Lambda_1, \Lambda_2]

In summary, Equations A.14 and A.8 imply that:

A.15

\mathrm{cov}[N_i, N_j] = \mathrm{cov}[\Lambda_i, \Lambda_j] + \delta_{ij} E[\Lambda_i]

The spike counts can be normalized to zero-mean and unit variance according to:

A.16

\tilde{N}_i = \frac{N_i - E(N_i)}{\sqrt{\mathrm{var}(N_i)}}

Thus, the covariance of the normalized spike counts is:

A.17

\mathrm{cov}[\tilde{N}_i, \tilde{N}_j] = \frac{\mathrm{cov}[\Lambda_i, \Lambda_j] + \delta_{ij} E[\Lambda_i]}{\sqrt{\mathrm{var}(\Lambda_i) + E(\Lambda_i)}\,\sqrt{\mathrm{var}(\Lambda_j) + E(\Lambda_j)}} = \mathrm{cov}[\tilde{\Lambda}_i, \tilde{\Lambda}_j] + \delta_{ij} \frac{E[N_i]}{\mathrm{var}(N_i)}

Where the normalized rate-parameter is:


A.18

\tilde{\Lambda}_i = \frac{\Lambda_i - E[\Lambda_i]}{\sqrt{\mathrm{var}(\Lambda_i) + E(\Lambda_i)}}


Figures

FIGURE 4.1

Tuning to the multi-lag velocity profile: the neural activity is related to the velocity trajectory in the surrounding window. Top panels: the x (right) and y (middle) components of the velocity, and the speed (left) in 100 ms bins. Bottom panel: the binned neural spike counts.


FIGURE 4.2

Tuning to velocity signals (upper left: V_x, upper right: V_y) and corresponding velocity tuning index (bottom left) and preferred direction (bottom right) based on lag-by-lag analysis (dashed) and the multi-lag velocity profile (using tSVD capturing 95% of the variance). Each lag is 100 ms long.


FIGURE 4.4

Percent variance accounted for by individual PCs (top) and accumulated percent variance accounted for by the initial PCs (bottom) of the movement during one session of a target-hitting experiment with pole control. The indicated 95% accumulated variance (dashed line) is accounted for by the initial 25 PCs.


FIGURE 4.3

Distribution of percent velocity modulation (PVM) across an ensemble of neurons recorded during one experimental session. PVM was computed using a multi-lag tuning curve estimated by regularization.


FIGURE 4.5

Principal patterns of velocity defined by the initial principal directions of the velocity and speed trajectories in 1.9 s windows during a typical target-hitting experiment.


FIGURE 4.6

Tuning to the spatiotemporal velocity patterns defined in Figure 4.5. Example based on the same neuron whose multi-lag tuning is depicted in Figure 4.2.


FIGURE 4.7

Distribution of percent overall modulation (POM) across an ensemble of neurons recorded during one experimental session (top). Distribution of POM for simulated neurons, simulated as homogeneous Poisson processes having the same mean rate as the recorded neurons (left), or inhomogeneous Poisson processes having the same velocity tuning as the recorded neurons (right).


FIGURE 4.8

Correlation between percent velocity modulation (PVM) and percent overall modulation (POM). Scatter plot of PVM and POM computed from an ensemble of neurons during the same experiment analyzed in Figures 4.3 and 4.7. Regression line (solid) and unit slope line (dashed) are superimposed.


FIGURE 4.9

Effect of regularization on the fidelity of movement reconstruction and prediction on training (solid) and testing (dashed) records, respectively. The fidelity of the prediction is assessed by the coefficient of determination R². The least square (LS) solution is obtained with a regularization parameter of zero.


FIGURE 4.10

Measured and predicted velocity using a linear filter with Tikhonov regularization (middle) or a Kalman filter (bottom). The coefficients of determination between the actual and predicted velocity are R² = 0.676 and R² = 0.593, respectively.


FIGURE 4.11

Distribution of variance across different neural ensembles. Percent variance attributed to the principal components of the normalized neural activity recorded in one experimental session.


FIGURE 4.12

(See color insert following page 140.) Ensemble permanence—first principal neuron during 30 s windows along one experimental session (left). The variance carried by the first principal neurons computed at one window (vertical axis) during another window (x axis) (top right), and the cosine angle between the first principal neurons from the two windows (bottom right).

Copyright © 2008, Taylor & Francis Group, LLC.

Bookshelf ID: NBK1984 PMID: 21204438