Dynamic instabilities in scalar neural field
equations with space-dependent delays
N A Venkov and S Coombes and P C Matthews
School of Mathematical Sciences, University of Nottingham, Nottingham, NG7
2RD, UK.
Abstract
In this paper we consider a class of scalar integral equations with a form of space-
dependent delay. These non-local models arise naturally when modelling neural
tissue with active axons and passive dendrites. Such systems are known to support
a dynamic (oscillatory) Turing instability of the homogeneous steady state. In this
paper we develop a weakly nonlinear analysis of the travelling and standing waves
that form beyond the point of instability. The appropriate amplitude equations
are found to be the coupled mean-field Ginzburg–Landau equations describing a
Turing–Hopf bifurcation with modulation group velocity of O(1). Importantly we
are able to obtain the coefficients of terms in the amplitude equations in terms of
integral transforms of the spatio-temporal kernels defining the neural field equation
of interest. Indeed our results cover not only models with axonal or dendritic delays
but those which are described by a more general distribution of delayed spatio-
temporal interactions. We illustrate the predictive power of this form of analysis with
comparison against direct numerical simulations, paying particular attention to the
competition between standing and travelling waves and the onset of Benjamin–Feir
instabilities.
Key words: neuronal networks, integral equations, space dependent delays,
dynamic pattern formation, travelling waves, amplitude equations.
Preprint submitted to Elsevier Science 27 April 2007
1 Introduction
The ability of neural field models to exhibit complex spatio-temporal dynamics has been
studied intensively since their introduction by Wilson and Cowan [1]. They have found
wide application in interpreting experiments in vitro e.g. electrical stimulation of slices of
neural tissue [2–5] and phenomena in vivo such as the synchronisation of cortical activity
during epileptic seizures [6] or uncovering the mechanism of geometric visual hallucina-
tions [7–9]. The sorts of dynamic behaviour that are typically observed in neural field
models include spatially and temporally periodic patterns (beyond a Turing instability)
[7,8], localised regions of activity (bumps and multi-bumps) [10,11] and travelling waves
(fronts, pulses, target waves and spirals) [12,13]. The equations describing the evolution
of the activity in the neural field typically take the form of integro-differential or integral
equations. A variety of modifications have been put forward adding various biological
mechanisms to the original model. For a recent review see [14].
It is the purpose of this paper to consider in more detail the role of axonal and den-
dritic delays in generating novel spatio-temporal patterns. Specifically, we are interested
in patterns emerging via Turing-type instabilities of the homogeneous steady state. There
are four different types of instability that generically occur, giving rise to: i) a shift to a
new uniform stationary state, ii) stationary periodic patterns, iii) uniform global (bulk)
oscillations, and iv) travelling (oscillatory) periodic patterns. We shall refer to i) and ii) as
static instabilities and iii) and iv) as dynamic. Note that in the original two-population
model (without space-dependent delays) developed by Wilson and Cowan [1] there are
two time scales and two space-scales: the different membrane time-constants for the ex-
citatory and inhibitory synapses, and the associated spatial interaction scales. However,
in order to decrease the dimensionality of the system it is common to assume that in-
hibitory synapses act much faster than the excitatory. The unequal finite spatial scales
are preserved in the form of the connectivity kernel of the new single equation (often of
Mexican hat shape), but one of the temporal scales is lost. This is the type of reduced
system we consider here, with the notable exception that we bring back another time-scale
associated with a space-dependent delay. Space-dependent delays arise naturally through
two distinct signal processing mechanisms in models of neural tissue. Axonal delays are
associated with the finite speed of action potential propagation. In models with dendrites
there is a further distributed delay associated with the arrival of input at a synapse away
from the cell-body. It is precisely the inclusion of these biological features that can give
rise to not only static, but dynamic Turing instabilities. For examples of the treatment of
truly two-population models and the possibility of oscillatory pattern formation without
space-dependent delays we refer the reader to Tass [8] and Bressloff and Cowan [9].
Axonal delays arise due to the finite speed of action potential propagation in transferring
signals between distinct points in the neural field and are modelled in the work of [15,1,16]
as simple space-dependent delays. Delays arising from the processing of incoming synaptic
inputs by passive dendritic trees may also be incorporated into neural field models, as in
the work of Bressloff [17]. In both cases it is now known that these space-dependent delays
can lead to a dynamic Turing instability of a homogeneous steady state. These were first
found in neural field models by Bressloff [17] for dendritic delays and more recently by
Hutt et al. [18] for axonal delays. Both these studies show that a combination of short
range inhibition and longer range excitation with a space-dependent delay may lead to a
dynamic instability. This choice of connectivity, which we shall call inverted Mexican hat,
is natural when considering cortical tissue and remembering that principal pyramidal cells
i) are often enveloped by a cloud of inhibitory interneurons, and ii) that long range cortical
connections are typically excitatory [19–21]. Detailed examination by linear analysis on the
relation between the connectivity shape (the balance between excitation and inhibition)
and the dominant pattern type has been done by Hutt [22]. Roxin et al. did similar work
for a model with fixed discrete delay [23].
Our main result will be to go beyond the linear analysis of Bressloff [17] and Hutt et al.
[18] and to develop amplitude equations for a one dimensional scalar neural field equation
with space-dependent delays. Although borrowing heavily from techniques in the PDE
community for the weakly nonlinear analysis of states beyond a Turing bifurcation [24–26],
our analysis is complicated by the fact that it deals with pattern forming models described
by integral equations. Amplitude equations have previously been derived in the context of
neural field models in [27–29]. Importantly we work with integral equations describing
neural fields with both axonal and dendritic processing. Although they have existed as
models for some time they are far less studied than models lacking such biologically
realistic terms. Our formulation is general and encompasses a number of such models.
When deriving the amplitude equations we also consider the effects of space-dependent
modulation and show that the appropriate equations are the mean–field Ginzburg–Landau
equations [30].
In Section 2 we introduce the class of models we shall consider and describe how they can
be written as integral equations with a particular spatio-temporal convolution structure.
Next in Section 3 we analyse the linear stability of the homogeneous steady state and de-
rive the conditions for the onset of a dynamic Turing (Turing–Hopf) instability point. In
Section 4 we derive the amplitude equations via a multiple-scales analysis. These ampli-
tude equations are analysed in Section 5 to determine the selection process for travelling
as opposed to standing waves. Moreover, we also consider Benjamin–Feir modulational
instabilities in which a periodic travelling wave (of moderate amplitude) loses energy to a
small perturbation of other waves with nearly the same frequency and wavenumber. Nu-
merical experiments are presented to illustrate and support our analysis of the amplitude
equations. In Section 6 we consider the generalisation of our approach to tackle models
with a form of spike frequency adaptation. We show that this can strongly influence the
selection process for travelling versus standing waves. Finally in Section 7 we discuss other
natural extensions of the work we have presented in this paper.
2 The model
In many continuum models for the propagation of electrical activity in neural tissue it
is assumed that the synaptic input current is a function of the pre-synaptic firing rate
function [31,15,10,32]. When considering a simple one-dimensional system of identical neu-
rons communicating through excitatory and inhibitory synaptic connections it is typical
to arrive at models of the form
u(x, t) = ∫_{−∞}^{∞} dy w(x − y) ∫_{−∞}^{t} ds η(t − s) f(u(y, s − |x − y|/v)). (1)
Here, u(x, t) is identified as the synaptic activity at position x ∈ R at time t ∈ R+. The
firing rate activity generated as a consequence of local synaptic activity is simply f(u),
where the firing rate function f is prescribed purely in terms of the properties of some
underlying single neuron model or chosen to fit experimental data. A common choice for
the firing rate function is the sigmoid
f(u) = (1 + exp(−β(u − h)))−1, (2)
with steepness parameter β > 0 and threshold h > 0. The spatial kernel w(x) = w(|x|) describes not only the anatomical connectivity of the network, but also includes the sign
of synaptic interaction. For simplicity it is also often assumed to be isotropic and homo-
geneous, as is the case here. The temporal convolution involving the kernel η(t) (η(t) = 0
Fig. 1. Examples of two neural field models with dendritic cable that we consider in this paper.
Left: All synaptic inputs impinge on the same site at a distance z0 from the soma. Right: Here
the distance of the synapse from the soma is linearly correlated with the spatial separation |x − y| between neurons.
for t < 0) represents synaptic processing of signals within the network, whilst the delayed
temporal argument to u under the spatial integral represents the delay arising from the
finite speed of signals travelling between points x and y, namely |x − y|/v, where v is the
action potential propagation velocity. In models with dendrites there is a further space-dependent
delay associated with the processing of inputs at synapses away from the cell-body. A
simple way to incorporate this in a model is to represent the dendritic tree as a single
passive cable with diffusive properties (see below).
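Model (1) can also be integrated directly, which is how comparisons with theory are made later in the paper. The following is a minimal sketch of our own, assuming an exponential synaptic kernel η(t) = α e^{−αt} (so the temporal convolution becomes the ODE u′ = α(ψ − u)), a periodic ring, and an illustrative inverted-Mexican-hat connectivity; none of the parameter values are taken from the paper.

```python
import numpy as np

# Minimal sketch of a direct simulation of model (1) on a periodic ring.
# Assumptions (not from the paper): exponential synapse eta(t) = a*exp(-a*t),
# so u' = a*(psi - u) with psi the spatial integral of delayed firing rates;
# illustrative inverted-Mexican-hat connectivity and parameter values.

beta, thr, v, a = 8.0, 0.25, 1.0, 1.0     # sigmoid slope/threshold, speed, rate
L, nx, dt, steps = 20.0, 128, 0.05, 400

x = np.linspace(0.0, L, nx, endpoint=False)
dx = x[1] - x[0]
# shortest distance between points on the ring
dist = np.abs((x[:, None] - x[None, :] + L / 2) % L - L / 2)
W = -2.0 * np.exp(-dist) + np.exp(-dist / 2.0)        # inverted Mexican hat
delay = np.rint(dist / (v * dt)).astype(int)          # |x - y|/v in time steps

def f(u):
    return 1.0 / (1.0 + np.exp(-beta * (u - thr)))

max_delay = delay.max() + 1
u = np.full(nx, 0.2)
hist = np.full((max_delay, nx), f(0.2))               # constant firing history
cols = np.arange(nx)[None, :]

for n in range(steps):
    hist[n % max_delay] = f(u)                        # store current rates
    # psi[i] = sum_j W[i,j] * f(u(x_j, t_n - dist_ij / v)) * dx
    psi = (W * hist[(n - delay) % max_delay, cols]).sum(axis=1) * dx
    u += dt * a * (psi - u)                           # eta * psi as an ODE
```

The rolling history buffer holds the most recent max_delay firing-rate profiles, so the space-dependent delay costs one integer index per pair of points rather than a full spatio-temporal convolution at each step.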
By introducing the kernel K(x, t) = w(x)δ(t − |x|/v) we can re-write (1) in the succinct
form
u = η ∗ K ⊗ f ◦ u, (3)
where we have introduced the two-dimensional convolution operation ⊗:
(K ⊗ g)(x, t) = ∫_{−∞}^{∞} ds ∫_{−∞}^{∞} dy K(x − y, t − s) g(y, s), (4)
for g = g(x, t), and the temporal convolution operation ∗:
(η ∗ h)(t) = ∫_0^{∞} ds η(s) h(t − s), (5)
for h = h(t). The form of (3) allows us to naturally generalize a model with axonal delays
to one with dendritic delays or any other type of time-dependent connectivity. For example
the model of Bressloff [17,33] incorporating a neuron with a semi-infinite dendritic cable
with potential v = v(z, t), z ∈ R+ is written
∂tv = −v/τD + D∂zzv + I(x, z, t),
I(x, z, t) = ∫_{−∞}^{∞} dy w(x − y, z) ∫_{−∞}^{∞} ds η(t − s) f(u(y, s)). (6)
Here w(x, z) is an axo-dendritic connectivity function depending not only upon cell-cell
distances |x|, but on dendritic distances z also. Assuming that there is no flow of current
back from the cell body (soma) at z = 0 to the dendrite then the neural field equation is
simply given by u(x, t) = v(x, 0, t). Hence, this dendritic model is recovered by (3) with
the choice K(x, t) = ∫_0^{∞} dz w(x, z)E(z, t), where E(z, t) = e^{−t/τD} e^{−z²/4Dt}/√(πDt) is the
Green's function of the semi-infinite cable equation (and E(z, t) = 0 for t < 0). With a
single synapse at a fixed distance z0 > 0 from the cell body w(x, z) = w(x)δ(z − z0), the
generalized connectivity function is separable, taking the form K(x, t) = w(x)E(z0, t). We
will also look at non-separable models with space-dependent dendritic delays w(x, z) =
w(x)δ(z−z0−φ|x|). In these models axons from more distant neurons arborize further up
the dendritic cable in accordance with anatomical data [34]. See Fig. 1 for an illustration.
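To make this concrete, the separable kernel can be tabulated directly from its two factors. A sketch in our own notation; τD, D, z0 and the inverted-Mexican-hat connectivity are illustrative choices, not values from the paper.

```python
import numpy as np

# Sketch: the cable Green's function E(z,t) and the separable dendritic
# kernel K(x,t) = w(x) E(z0,t).  tau_D, D, z0 and the inverted-Mexican-hat
# connectivity below are illustrative choices, not the paper's parameters.

def cable_green(z, t, tau_D=1.0, D=1.0):
    """E(z,t) = exp(-t/tau_D) exp(-z^2/(4 D t)) / sqrt(pi D t) for t > 0, else 0."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = (np.exp(-t[pos] / tau_D)
                * np.exp(-z**2 / (4.0 * D * t[pos]))
                / np.sqrt(np.pi * D * t[pos]))
    return out

def w(x):
    """Short-range inhibition, longer-range excitation (inverted Mexican hat)."""
    return -2.0 * np.exp(-np.abs(x)) + np.exp(-np.abs(x) / 2.0)

def K(x, t, z0=1.0):
    """Separable kernel for a single synapse at dendritic distance z0."""
    return w(np.asarray(x)) * cable_green(z0, t)
```

Sampling K on an (x, t) grid in this way is what a numerical scheme for (3) would convolve against; the non-separable variant replaces z0 by z0 + φ|x|, coupling the two arguments.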
In fact throughout this paper we shall consider the model as written in the form of (3), and
allow for arbitrary choices of K = K(x, t), subject to K(x, t) = K(|x|, t) and K(x, t) = 0
for t < 0, such that the classic axonal and dendritic delay models are recovered as special
cases. Later in Section 6 we shall consider further generalizations of (3) to include the
effects of neuronal modulation and adaptation.
3 Turing instability analysis
In this paper we are primarily interested in the analysis of pattern formation beyond a
Turing instability for the neural field equation given by (3). It is the dependence of K on
the pair (x, t) rather than just x, as would be the case in the absence of space-dependent
delays, that can give rise to not only static, but dynamic Turing instabilities. Here we
briefly re-visit the standard linear Turing instability analysis, along similar lines to that of
Bressloff [17] and Hutt et al. [18]. For a general review of pattern formation and pattern
forming instabilities we refer the reader to [35–38]. For a discussion of pattern formation
within the context of neural field models we suggest the articles by Ermentrout [39] and
Bressloff [40].
Let u(x, t) = ū be the spatially-homogeneous steady state of equation (3). The conditions
for the growth of inhomogeneous solutions can be obtained through simple linear stability
analysis of u. Namely, we look for instabilities to spatial perturbations of globally periodic
type eikx and determine the intrinsic wavelength 2π/k of the dominant growing mode.
From (3) the homogeneous steady state satisfies
ū = η(0)K(0, 0)f(ū), (7)
where we have introduced the following Laplace and Fourier-Laplace integral transforms:
η(λ) = ∫_0^{∞} ds η(s) e^{−λs}, K(k, λ) = ∫_{−∞}^{∞} dy ∫_0^{∞} ds K(y, s) e^{−(iky+λs)}. (8)
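For standard choices these transforms have closed forms. For an exponential synapse η(t) = α e^{−αt}, η(λ) = α/(α + λ); for the axonal-delay kernel K(x, t) = w(x)δ(t − |x|/v) the time integral collapses, leaving K(k, λ) = ∫ dy w(y) e^{−iky−λ|y|/v}. The sketch below evaluates this for an illustrative sum-of-exponentials connectivity and cross-checks the closed form against quadrature; all parameter values are assumptions, not the paper's.

```python
import numpy as np

# Sketch: the transforms (8) for eta(t) = a*exp(-a*t) and the axonal-delay
# kernel K(x,t) = w(x)*delta(t - |x|/v).  The connectivity w and all
# parameter values are illustrative stand-ins.

a, v = 1.0, 1.0

def eta_hat(lam):
    # Laplace transform of a*exp(-a*t)
    return a / (a + lam)

def K_hat(k, lam):
    # closed form of  ∫ dy w(y) exp(-i k y - lam |y|/v)
    # for w(y) = -2 exp(-|y|) + exp(-|y|/2), integrated term by term:
    # ∫ A exp(-|y|/d) exp(-iky - lam|y|/v) dy = 2 A s/(s^2 + k^2), s = 1/d + lam/v
    def term(A, decay):
        s = 1.0 / decay + lam / v
        return 2.0 * A * s / (s**2 + k**2)
    return term(-2.0, 1.0) + term(1.0, 2.0)

def K_hat_quad(k, lam, half_width=60.0, n=120001):
    # brute-force quadrature over the collapsed delta, as a cross-check
    y = np.linspace(-half_width, half_width, n)
    wy = -2.0 * np.exp(-np.abs(y)) + np.exp(-np.abs(y) / 2.0)
    integrand = wy * np.exp(-1j * k * y - lam * np.abs(y) / v)
    return np.sum(integrand) * (y[1] - y[0])
```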
We Taylor-expand the firing rate function, which is the only nonlinearity in the system,
around the steady state, f(u) = f(ū) + γ1(u − ū) + γ2(u − ū)² + . . . , to obtain the linearised
model
u − ū = γ1 η ∗ K ⊗ (u − ū). (9)
The control parameter for our bifurcation analysis is therefore γ1 = f′(ū). To obtain the
conditions for linear stability we consider solutions of the form u − ū = Re e^{λt+ikx} where
λ = ν + iω ∈ C. Although the system and its solutions are real, for conciseness we will
often work with the complex extension and restrict our attention to the real case when
this has implications for the analysis. After substitution into (9) we obtain a dispersion
relation for (k, λ) in the form L(k, λ) = 0, where
L(k, λ) = 1 − γ1η(λ)K(k, λ). (10)
For a fixed γ1 solving L(k, λ) = 0 defines a function λ(k). An instability occurs when for
the first time there are values of k at which the real part of λ is nonnegative (see Fig. 2,
left). A Turing bifurcation point is defined as the smallest value γc of the control parameter
for which there exists some kc ≠ 0 satisfying Re(λ(kc)) = 0. It is said to be static if
Im(λ(kc)) = 0 and dynamic if Im(λ(kc)) ≡ ωc ≠ 0. The dynamic instability is often
referred to as a Turing–Hopf bifurcation and generates a global pattern with wavenumber
kc, which moves coherently with a speed c = ωc/kc, i.e. as a periodic travelling wave train.
If the maximum of the dispersion curve is at k = 0 then the mode that is first excited
is another spatially uniform state. If ωc ≠ 0, we expect the emergence of a homogeneous
limit cycle with temporal frequency ωc.
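In practice λ(k) can be traced numerically by treating L(k, λ) = 0 as two real equations in (Re λ, Im λ). A sketch using SciPy's root finder; the exponential synapse, the sum-of-exponentials connectivity and all parameter values are illustrative stand-ins rather than the paper's model.

```python
import numpy as np
from scipy.optimize import fsolve

# Sketch: tracing a root lambda(k) of L(k,lam) = 1 - g1*eta(lam)*K(k,lam)
# by Newton-type iteration on (Re lam, Im lam).  The exponential synapse,
# connectivity and parameter values are illustrative stand-ins.

a, v = 1.0, 1.0

def eta_hat(lam):
    return a / (a + lam)                      # eta(t) = a*exp(-a*t)

def K_hat(k, lam):
    # ∫ dy w(y) exp(-iky - lam|y|/v) for w(y) = -2 e^{-|y|} + e^{-|y|/2}
    def term(A, decay):
        s = 1.0 / decay + lam / v
        return 2.0 * A * s / (s**2 + k**2)
    return term(-2.0, 1.0) + term(1.0, 2.0)

def dispersion_root(k, g1, guess=0.0 + 1.0j):
    """Solve L(k, lam) = 0 starting from `guess`; returns lam and residual."""
    def F(z):
        lam = z[0] + 1j * z[1]
        val = 1.0 - g1 * eta_hat(lam) * K_hat(k, lam)
        return [val.real, val.imag]
    z = fsolve(F, [guess.real, guess.imag])
    lam = z[0] + 1j * z[1]
    res = abs(1.0 - g1 * eta_hat(lam) * K_hat(k, lam))
    return lam, res
```

Sweeping k with the previous root as the next initial guess follows one branch of the dispersion curve; the critical point is where the maximum of Re λ(k) first touches zero, with ωc = Im λ(kc) at the crossing.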
For simplicity we will assume that at γ1 = γc the associated value of kc is unique (up to a
sign change). Since K is an even function of the space variable, wave-numbers will always
Fig. 2. Left: An illustration of the mechanism for a Turing instability: as γ1 is increased through
γc, a range of wavenumbers around kc becomes linearly unstable. Right: A typical dispersion
curve in the complex (ν, ω)-plane traced by a complex conjugate pair of roots as k is varied,
plotted at the critical value γ1 = γc.
come in pairs ±kc, and because the problem is real the dispersion curve has reflective
symmetry about the real axis, so the frequencies ±ωc also appear together as solutions.
Hence, the eigenspace of the linear equation will be real and at γc a complete basis is
u1 = A1 e^{i(ωct+kcx)} + A2 e^{i(ωct−kcx)} + c.c.,
where A1,2 are arbitrary complex coefficients depending only on the slow space and time
scales. We may now use the Fredholm alternative [44] to consider restrictions on the gn
that will give us equations for the amplitudes A1,2.
Since L acts on the O(1) variables x and t where the solutions ui are periodic we can
restrict our attention to the subspace of bounded functions of period 2π/kc in x and 2π/ωc
in t. Let Λ = [0, 2π/kc]× [0, 2π/ωc] be the periodicity domain and define the inner product
of two periodic functions u(x, t) and v(x, t) to be
<u, v> = (kcωc/4π²) ∫_Λ ū(x, t) v(x, t) dx dt, (26)
where the bar denotes complex conjugation. Introducing the functions η∗(t) = η(−t) and
K∗(x, t) = K(x,−t) (η and K reflected around the point t = 0), it can be shown (using
Fourier series) that the adjoint of L is the operator L∗ = I − γcη∗ ∗K∗⊗, with respect to
the inner product (26).
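With the normalisation in (26) the critical modes e^{i(ωct±kcx)} form an orthonormal pair, which is what the projections that follow rely on. A quick numerical check of this; the values of kc and ωc are arbitrary illustrative choices.

```python
import numpy as np

# Sketch: discretised version of the inner product (26) on the periodicity
# domain Lambda = [0, 2*pi/kc] x [0, 2*pi/wc], used to check that the modes
# u_pm = exp(i(wc*t +/- kc*x)) are orthonormal.  kc, wc are illustrative.

kc, wc = 2.0, 3.0
nx, nt = 256, 256
x = np.linspace(0.0, 2 * np.pi / kc, nx, endpoint=False)
t = np.linspace(0.0, 2 * np.pi / wc, nt, endpoint=False)
X, T = np.meshgrid(x, t, indexing="ij")

def inner(u, v):
    # <u,v> = (kc*wc/4*pi^2) * integral of conj(u)*v, rectangle rule on torus
    dx = x[1] - x[0]
    dt = t[1] - t[0]
    return (kc * wc / (4 * np.pi**2)) * np.sum(np.conj(u) * v) * dx * dt

u_plus = np.exp(1j * (wc * T + kc * X))
u_minus = np.exp(1j * (wc * T - kc * X))
```

Here inner(u_plus, u_plus) evaluates to 1 and inner(u_plus, u_minus) to 0 (up to rounding), since whole numbers of periods fit inside Λ and the grid omits the duplicated endpoint.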
The operator L∗ has the same kernel space as L (since the dispersion relation is invariant
under the change t → −t). From the Fredholm alternative we have, for all u ∈ ker L, that <u, gn> = <u, Lun> = <L∗u, un> = 0. Therefore we can derive equations for the
amplitudes simply by calculating the two complex projections
<e^{i(ωct±kcx)}, gn> = 0. (27)
To simplify notation we set u± = e^{i(ωct±kcx)}. Then, the scalar products (27) expand as