Lateral Inhibitory Networks: Synchrony, Edge Enhancement, and Noise Reduction

Cornelius Glackin, Liam Maguire, Liam McDaid and John Wade

Abstract— This paper investigates how layers of spiking neurons can be connected using lateral inhibition in different ways to bring about synchrony, reduce noise, and extract or enhance features. To illustrate the effects of the various connectivity regimes, spectro-temporal speech data in the form of isolated digits is employed. The speech samples are pre-processed using Lyon's Passive Ear cochlear model and then encoded into tonotopically arranged spike arrays using the BSA spiker algorithm. The spike arrays are then subjected to various lateral inhibitory connectivity regimes configured by two connectivity parameters, namely connection length and neighbourhood size. The combination of these parameters is demonstrated to produce various effects such as transient synchrony, reduction of noisy spikes, and sharpening of spectro-temporal features.

I. INTRODUCTION

It is now a quarter of a century since the famous binding-by-synchrony hypothesis was posed [1]. The hypothesis suggests that neural circuits integrate distributed information into coherent representational patterns. Physical evidence for the hypothesis was quickly found [2], yet it is only relatively recently that networks of spiking neuron models have been used to study how synchrony works [3]–[6]. From the variable binding viewpoint it is difficult to see how synchrony aids computation. Synchronous states generated in experiments thus far typically show entire populations of neurons firing in phase. If all neurons are firing at the same time, how is synchrony producing selectivity to particular stimuli?

It seems that there is still much to understand with regard to synchrony. Despite this, there have been some important discoveries. Arguably, it is now accepted that it is inhibition, not excitation, that brings about synchrony, specifically when the rise time of a synapse is longer than the duration of an action potential [3]. Additionally, synchrony is possible even with sparse connectivity [4], and can be influenced by modification of inhibitory time constants, with faster decay constants producing tighter synchrony. The view that synchrony is enhanced by time delay has also been proposed [5]: stable synchronous states exist for weak coupling provided there is a sufficient time delay, whereas such synchrony can only be achieved in the undelayed system by strong coupling [5].

Cornelius Glackin et al. are with the Intelligent Systems Research Centre, University of Ulster, Magee Campus, Derry BT48 7JL, Northern Ireland (phone: +44 28 71675249; email: [email protected]).

This research is supported under the Centre of Excellence in Intelligent Systems (CoEIS) project, funded by the Northern Ireland Integrated Development Fund and InvestNI.

Both approaches advocate an all-things-being-equal approach to the simulations, keeping parameters such as coupling strengths between excitatory and inhibitory cells uniform. Recently, a novel approach advocated modifying the level of coupling with unsupervised learning [6]. It was reported that a more useful state, which promotes self-organisation of spontaneously active neuronal circuits, exists between randomness and full epileptic synchrony [6]. It was found that spike timing-dependent plasticity (STDP) decouples neurons (breaks synchrony) by modifying synapses in a non-uniform way. Interestingly, anti-STDP (STDP with the learning window inverted) brings the neuronal circuit back to a synchronous state. Yet it is still not clear how this transitory state aids computation.

Knowing that inhibition is crucial for synchrony, perhaps by studying the biological evidence for inhibitory connectivity some clues can be found as to how synchrony (and transient synchronous states) aid computation. In studies of vision, specifically in the retina [7], lateral inhibitory connectivity is thought to produce edge/peak enhancement. An analogous process in audition is performed in the anteroventral cochlear nucleus by T-Stellate cells, which extract the profile of the sound spectrum [8]. Yet both studies are unclear as to what specific connectivity regimes produce this computational capability. Therefore this paper advocates the use of specific regimes for lateral inhibitory connectivity. Biologically plausible input data, in the form of spoken isolated digits processed using Lyon's Passive Ear cochlear model [9], will be used in the presented experiments. Two connectivity parameters will be used to specify the connectivity regime and to demonstrate how specific lateral connectivity can produce edge/peak enhancement, promote feature selection, and reduce noise.

Section II outlines the steps necessary for obtaining and pre-processing biologically realistic speech signals. Section III demonstrates how the analogue speech signals are converted to digital spike trains whilst taking care to preserve spectro-temporal information. Section IV outlines how lateral inhibitory networks can be designed to promote feature selection, produce edge/peak enhancement and reduce noise. Conclusions and a brief mention of future research directions are presented in Section V. Finally, some notes on implementation methodology are provided in Section VI.

II. SPEECH PRE-PROCESSING

The speech samples used are from the TI46 database [10], specifically the isolated digits. A subset spoken by 5 female speakers, with 10 utterances of each of the 10 digits (0 to 9), giving 500 speech samples, is selected for this work.

Pre-processing the speech in order to enhance its features, and thereby any resulting recognition, is an important undertaking. There are two main approaches to this pre-processing: computational and biologically inspired. It has long been recognised that sounds are best described in the frequency domain. Thus the computational approach begins by performing spectral analysis to decompose the speech sample into its frequency components. Typically, the short-time Fourier transform is preferred over the standard fast Fourier transform as it is more suited to time-varying signals. The frequency components are usually distributed along the Mel scale, which is linear for low frequencies and logarithmic for high frequencies, corresponding to the physical properties of the human ear. Thus the human auditory critical bands are approximated using triangular filters, distributed in a combination of linear and log positions (there are several methods), and a discrete cosine transform is used to decorrelate the features and produce Mel-Frequency Cepstrum Coefficients (MFCC) [11].
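As a rough illustration of the computational pipeline described above, the following Matlab sketch computes MFCC-style features for a single file (STFT, triangular Mel filterbank, log, DCT). The file name, window length, filter count, and number of retained coefficients are illustrative assumptions rather than values taken from this paper.

% MFCC-style feature extraction sketch (illustrative values throughout).
[s, fs] = audioread('s1-u1-d1.wav');            % hypothetical file for sample s1-u1-d1
s = s(:, 1);                                    % use a single channel
[S, f, ~] = spectrogram(s, hamming(256), 128, 256, fs);
P = abs(S).^2;                                  % short-time power spectrum
mel  = @(hz) 2595 * log10(1 + hz / 700);        % Hz -> Mel
imel = @(m)  700 * (10.^(m / 2595) - 1);        % Mel -> Hz
edges = imel(linspace(mel(0), mel(fs/2), 28));  % edges for 26 triangular filters
H = zeros(26, numel(f));
for k = 1:26                                    % triangular Mel filterbank
    rising  = (f' - edges(k))   / (edges(k+1) - edges(k));
    falling = (edges(k+2) - f') / (edges(k+2) - edges(k+1));
    H(k, :) = max(0, min(rising, falling));
end
mfcc = dct(log(H * P + eps));                   % decorrelate features with the DCT
mfcc = mfcc(1:13, :);                           % keep the first 13 coefficients per frame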

The biologically-inspired approach typically involves the use of a cochlear model to extract biologically realistic frequency information from the speech sample. Both methodologies can result in the generation of a frequency-based representation of the sound. For the computational approach this representation is referred to as a spectrogram, and for the biologically-inspired approach a cochleagram. Figure 1 shows a spectrogram generated using the Matlab Signal Processing Toolbox for speaker 1, utterance 1, digit 1 (s1-u1-d1) from the TI46 speech corpus [10]. This particular sample was chosen arbitrarily and will be used throughout this paper to illustrate the feature extraction steps.

Fig. 1. Spectrogram generated using Matlab's Signal Processing Toolbox for speaker 1, utterance 1, digit 1 (s1-u1-d1) from the TI46 speech corpus

Figure 2 shows the same sound processed using the more biologically realistic Lyon's Passive Ear (LPE) cochlea model [9]. The cochlea model is employed to approximate the frequency composition along the basilar membrane. The LPE cochlea model contains a notch filter bank which models the sensitivity of the cochlea to certain frequencies. The cochleagram shown in Figure 2 was generated using Slaney's Auditory Toolbox [12] with a sample rate of 12.5 kHz.
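A minimal sketch of this cochleagram stage is given below, assuming Slaney's Auditory Toolbox [12] is on the Matlab path and exposes a LyonPassiveEar(signal, sampleRate, decimation) function; the file name and decimation factor are illustrative assumptions.

% Cochleagram sketch using the LPE model via Slaney's Auditory Toolbox [12].
[s, fs] = audioread('s1-u1-d1.wav');           % hypothetical file for sample s1-u1-d1
s = resample(s(:, 1), 12500, fs);              % the paper uses a 12.5 kHz sample rate
decimation = 64;                               % illustrative decimation factor
coch = LyonPassiveEar(s, 12500, decimation);   % channels x frames cochleagram
imagesc(coch); axis xy;                        % display as in Figure 2
xlabel('Frame'); ylabel('Channel #'); title('Cochleagram');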

Fig. 2. Cochleagram generated using Slaney's Auditory Toolbox [12] for sample: s1-u1-d1

It can be seen by comparing Figures 1 and 2 that the output is similar. However, the ultimate aim of the pre-processing is to produce signals that can be converted into spike trains. The increased contrast of the cochleagram over the spectrogram makes it a more suitable candidate for spike encoding.

The typical MFCC methodology suffers from the limitation that the temporal nature of the speech sample is treated in an arbitrary way, with the features of the sound signal effectively averaged within each frame. Such treatment results in a fixed number of features that are ordered at arbitrary time intervals rather than attempting to preserve the temporal information. This treatment explains why the MFCC technique has been researched extensively in an attempt to improve its performance, particularly with noisy data.

III. SPIKE ENCODING

The next step is to obtain biologically realistic spike trains, and this requires a conversion from the analogue data in the cochleagram into digital spike trains. There are various ways in which this can be performed. The first would be to use the continuous values from the cochleagram to modify the current injected into a leaky integrate-and-fire (LIF) neuron. The firing of the single LIF neuron could then be used as an input to the network, with the rate of firing dependent on the many neuron parameters. An alternative method considers spike trains as digital signals, since it is thought that biological neurons communicate with one another using only the timing of individual spikes. Therefore, the generation of spike trains from the cochleagram may be considered as an analogue-to-digital conversion. There are algorithms in the literature that have been developed to convert continuous data into discrete spike timings; these are referred to as spiker algorithms [13], [14]. The two main methodologies are HSA (Hough Spiker Algorithm [13]) and BSA (Ben's Spiker Algorithm [14]). Both algorithms utilise a convolution/deconvolution filter that was optimised for encoding/decoding [13] using a genetic algorithm to minimise the error in the encoding and decoding process.

For filter specifications, impulse responses, frequency spectra and pseudo-code refer to [14]. Figure 3 shows the resulting spike trains generated using BSA.
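The Matlab sketch below applies the BSA error-comparison rule to a single cochleagram channel. The FIR filter, the threshold value, and the reuse of the coch matrix from the earlier sketch are assumptions for illustration; the optimised filter of [14] is not reproduced here.

% BSA encoding sketch for one analogue channel (illustrative filter and threshold).
x = coch(20, :);                          % one channel of the cochleagram
x = (x - min(x)) / (max(x) - min(x));     % normalise to [0, 1]
h = fir1(24, 0.1);  h = h / sum(h);       % illustrative low-pass convolution filter
threshold = 0.95;                         % illustrative BSA threshold
spikes = zeros(size(x));
for t = 1:numel(x) - numel(h)
    seg = x(t:t+numel(h)-1);
    errSpike   = sum(abs(seg - h));       % error if a spike is emitted at t
    errNoSpike = sum(abs(seg));           % error if no spike is emitted
    if errSpike <= errNoSpike - threshold
        spikes(t) = 1;
        x(t:t+numel(h)-1) = seg - h;      % subtract the filter so that decoding
    end                                   % (spikes convolved with h) tracks x
end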

Fig. 3. BSA [14] speech encoded spike input for sample: s1-u1-d1

Figure 3 illustrates how the 78 frequency channels generated using the convolution filter and BSA are converted into spike trains. It can be seen from the diagram that individual spike trains begin firing, end firing, and fire maximally at specific times. The literature describes these features of sound signals as onsets, offsets and peak firing rates respectively [15]. Onsets can be extracted from spike trains [16], [17] using spiking neurons with depressing synapses, which fire in response to the first presynaptic spike in a spike train and in doing so consume all of their synaptic resources. Offset detection is a little more complicated but can be performed using three neurons: two input neurons connecting to an output neuron, where one input neuron is inhibitory and the other is excitatory with a small delay; for further details see [17]. Onset and offset detection are fundamental to neural feature extraction, but the process of extracting onsets and offsets in particular is severely complicated by noisy spikes [17]. However, as will be demonstrated by this work, there are connectivity regimes that can be implemented to remove noisy spikes, facilitating onset and offset extraction later if required.
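As a rough illustration of the onset mechanism, the following Matlab sketch drives a single LIF-style unit through a depressing synapse; the burst timing, time constants and weights are illustrative assumptions rather than the configurations used in [16], [17].

% Onset detection sketch: a depressing synapse responds mainly to the first spike.
T = 400;  dt = 1;                         % ms
pre = zeros(1, T);  pre(100:5:300) = 1;   % toy 200 Hz burst lasting from 100 to 300 ms
R = 1;  v = 0;  tauRec = 300;  tauM = 10;  w = 3;  vth = 1;
onsets = zeros(1, T);
for t = 1:T
    I = 0;
    if pre(t) == 1
        I = w * R;                        % synaptic current scaled by remaining resource
        R = 0;                            % depressing synapse: resource consumed
    end
    R = R + dt * (1 - R) / tauRec;        % slow recovery of the synaptic resource
    v = v * (1 - dt / tauM) + I;          % leaky integration of the membrane potential
    if v >= vth
        onsets(t) = 1;  v = 0;            % output spike marks the burst onset
    end
end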

IV. LATERAL INHIBITORY NETWORKS

One way to discover when neurons are firing at the same rate is by synchrony [18]. This can be investigated with a simple experiment originally proposed by Abbott [19]. Figure 4 shows the interaction between two LIF neurons with lateral inhibitory connections, connecting to an excitatory output neuron. Each neuron in the input layer receives as input the inhibitory output of the other. Each of these neurons has its own excitatory input: one receives a fixed firing rate of 25 Hz, the other a firing rate linearly changing from 28.5 to 21.5 Hz.

As can be seen from Figure 4, when the two input neurons are firing at different frequencies, they take turns suppressing one another's output, depending on spike timing.

Fig. 4. A simple three neuron SNN (top left) used to test the ability of laterally connected neurons to produce coincidental firing and synchrony [19]. The SNN receives two input spike trains, one with a firing rate reducing from 28.5 to 21.5 Hz, the other with a constant firing rate of 25 Hz (top right). The bottom two subplots show the actual input and output spike rasters of inputs 1 and 2 (i/p 1 and i/p 2) and the resultant spike output for neurons 1, 2, and 3 respectively.

The output neuron, neuron 3, fires maximally (coincidentally) when the two input neurons are firing at the same frequency [19]. Abbott's experiment [19] thus illustrates how frequency information can be extracted from a spike train: it provides a way to detect when a spike train is firing at a particular frequency. This is particularly useful from a feature extraction point of view, as it can be used to determine when spike trains in the spike array of Figure 3 are firing at similar frequencies. In fact there is no limit to the number of spike trains that can be compared in this way; coincidence detection can be performed for any number of spike trains as long as they have 'sufficient connectivity' [4].

A. Connection Length

The output from the LPE and subsequent BSA encoding of spike trains is tonotopically arranged. Therefore, it does not necessarily make sense to associate every input neuron, and hence every sound frequency, with every other, as this disregards the tonotopic arrangement. It seems more likely that the lateral connectivity of the input layer can be described in terms of a connection length parameter. A particular connection length of c means that each input layer neuron is laterally connected to the c neurons either side of it. Figure 5 illustrates this idea: the black lines of various styles represent connection lengths between 1 and 3 for an example layer of laterally connected neurons.

Fig. 5. Neurons connected laterally as determined by a connection length parameter

In this way, a layer of N neurons can have a maximum connection length of cmax = N − 1. For the spike trains of Figure 3, a layer containing N = 78 neurons was implemented with an initial connection length of 0 (i.e. no lateral connectivity). Successive simulations were conducted with the connection length increased by 1 up to the maximum connectivity of cmax = 77. Figure 6 presents a selection of the spike outputs from these experiments.
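Under the connectivity-matrix convention described in Section VI, a given connection length c might be realised with a sketch such as the following; the variable names are assumptions rather than the authors' code.

% Lateral inhibitory connectivity for connection length c (entries of -1).
N = 78;  c = 15;
C = zeros(N);                  % C(i, j): lateral synapse from neuron i to neuron j
for i = 1:N
    for j = max(1, i - c):min(N, i + c)
        if j ~= i
            C(i, j) = -1;      % inhibit the c neurons either side of neuron i
        end
    end
end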

Fig. 6. Effects of increasing connection length on synchrony for sample: s1-u1-d1. A sample of results is shown for connection lengths of 0, 15, 30, and 77.

As can be seen from Figure 6, synchrony appears with much less than full connectivity; in fact it can be seen to begin to form when neurons are laterally connected only to tonotopically adjacent neighbourhoods with a small connection length (c = 15). Thus, even with sparse lateral inhibition, the layer of neurons in Figure 5 is shown to be capable of attaining a synchronous state, in accordance with [4].

B. Multiple Layers of Synchrony

The layer of neurons with full lateral connectivity produced synchronised spike activity from tonotopically arranged sound information. An obvious line of enquiry is to investigate what happens when the synchronous spike activity is passed through multiple layers of fully connected lateral inhibitory neurons. Figure 7 shows the input and output of 3 layers of such connectivity.

Fig. 7. Input spike trains for sample: s1-u1-d1 (top left) routed through three successive layers (L1, L2, and L3) of neurons laterally connected with inhibition.

Figure 7 demonstrates the effects that layers of synchrony have on spike input. The output from the first layer is as expected from Figure 6. However, it can be seen that iterating the lateral processing over several layers serves to successively remove more spikes at each layer. The spikes that remain are densely packed, by necessity, in order to survive this process. Conversely, areas of spikes which are not densely packed are removed. The removal of noisy spikes is a feature of this connectivity that is potentially useful, particularly if it can be implemented without the periodic removal of all spikes at regular time intervals in the output, as this represents a potentially crucial loss of information. Therefore the focus of the remainder of the paper will be to investigate how edges can be enhanced and how noisy spikes can be removed, but without the periodic removal of spikes associated with such epileptic synchrony.

C. Neighbourhood Radius

The concept of a neighbourhood in the connectivity of neurons was introduced by Kohonen with the 'winner-take-all' competitive learning algorithm. Essentially, 'winning' neurons had their weights increased along with neurons topologically close to them, i.e. in the same neighbourhood. The idea of defining neighbourhoods can also be introduced for spiking neural networks. In this paper, as already discussed, the connection length parameter defines how tonotopically far away a neuron in the same layer can be connected using lateral inhibition. Another parameter, referred to as the neighbourhood radius, can also be introduced; it specifies the range of tonotopically close neurons that are not connected laterally. Figure 8 illustrates this idea.

Fig. 8. Introducing the neighbourhood radius parameter to layers of lateral connectivity specified by the connection length parameter

In general, three distinct feature extraction properties have been observed with different connectivity parameters for randomly selected samples from the TI46 dataset: synchrony, edge enhancement, and noise reduction. Suppose one wishes to identify the most important frequency channels for a particular sample, and how the importance of these channels varies over time. In terms of the spike-encoded speech sample, this means the aim is to find the spike trains with the highest firing rates at each time interval throughout the sample. This can be achieved for a maximum connection length and a relatively smaller neighbourhood size, as illustrated by Figure 9.
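Combining both parameters, the earlier connection-length sketch might be extended as follows. The interpretation that no lateral connections are made within the neighbourhood radius is an assumed reading of Figure 8, and the values below correspond to the regime of Figure 9 (c = 77, r = 30).

% Connectivity with both connection length c and neighbourhood radius r.
N = 78;  c = 77;  r = 30;
C = zeros(N);
for i = 1:N
    for j = 1:N
        d = abs(i - j);        % tonotopic distance between neurons i and j
        if d > r && d <= c
            C(i, j) = -1;      % inhibit only outside the local neighbourhood
        end
    end
end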

Fig. 9. Maximum lateral inhibitory connectivity (connection length = 77) with a neighbourhood radius of 30 for sample: s1-u1-d1. Partial synchrony has removed some of the spikes from the resultant rasters.

D. Edge Enhancement and Noise Reduction

For the tonotopic arrangement of the spike-encoded sound channels to be meaningful, it follows that neurons that are tonotopically distant from one another should have little connectivity. In terms of the connectivity parameters already defined, a less than maximum connection length has interesting effects on small neighbourhoods, as is illustrated by Figure 10. As can be seen from the figure, the small neighbourhood radius and slightly larger connection length do not result in a synchronous state. Instead, the connectivity has served to remove noisy spikes and extract a clearer representation of the sound signal by sharpening the main contours of the spike distribution. This kind of noise reduction and edge enhancement capability has been observed in retina and cochlear nucleus cells, likely with similar regimes of connectivity.

Fig. 10. Neighbourhood radius = 5, connection length = 20 for sample: s1-u1-d1. Single and low-frequency 'noisy' spikes are removed with each successive layer of this connectivity regime.

V. CONCLUSIONS

The work presented is aimed at providing insight into the effects of modifying lateral inhibitory connectivity on a layer of LIF spiking neurons. In contrast to work in reservoir computing, where a typically random connectivity ethos is preferred, this paper represents ongoing work on attempting to quantify the effects of particular lateral connectivity regimes on neural coding. To this end, two connectivity parameters were defined that modify the connectivity in a conceptually meaningful way.

The connection length parameter dictates how far away (tonotopically) lateral neurons are connected to one another. Conversely, the neighbourhood radius parameter dictates the range of tonotopically close neurons that are not connected laterally. The interplay between the two parameters is the main focus of this paper. Initial experiments consider modifying the connection length parameter alone, in topologies where spiking neurons are sparsely laterally connected to each other with inhibitory synapses, and investigate to what extent synchrony is affected by increasing the degree of lateral connectivity (increasing connection length). In agreement with other research, it is found that synchrony results from sparse connectivity [4]. However, as connectivity increases, synchrony becomes more defined. Similarly, multiple layers of synchrony further reduce spike output and produce more pronounced synchronous states.

In later experiments the neighbourhood radius parameter and its interplay with inhibitory connectivity regimes (as dictated by the connection length parameter) are investigated. Most synchrony experiments advocate an all-things-being-equal approach where, for example, synaptic weights are kept the same in order to facilitate synchrony (and also because of a lack of a rationale for varying weights). In experiments where weights are modified [6], activity is shown to desynchronise. Similarly, adding a neighbourhood radius parameter, which dictates a local lack of connectivity for neurons, results in spikes that are not synchronised. However, with large connection lengths, neighbourhoods of neurons compete with one another, with 'winning' neighbourhoods suppressing all others. Interestingly, when the neighbourhood radius is small and the connection length parameter is only slightly larger, the connectivity serves to reduce noisy spikes whilst preserving edges in the spike rasters. Such connectivity experiments could shed some light on the similar capabilities of retina cells and cells in the cochlear nucleus, which are known to employ lateral inhibition extensively.

In summary, this work illustrates some interesting findings from experiments with various forms of spiking neuron connectivity, and in particular seeks to unravel some of the functionality of inhibitory neurons. Spiking neural network topologies that implement this kind of connectivity could have significantly improved capabilities for pattern recognition problems.

Future work will extend this research to investigate the effects of excitatory lateral connectivity within neighbourhoods, and inhibitory connectivity elsewhere. There are also plans to investigate connectivity regimes of feed-forward and feed-back inhibition. It is envisaged that this kind of connectivity will incorporate latency and provide a vehicle for addressing classification problems for temporal data and the so-called time warp problem.

VI. METHODS

LIF neurons are used throughout the work presented in this paper; they are of the standard variety and as such are not described here, see [20] for details. In every experiment presented, the absolute refractory period of the neurons is 3 ms, and the magnitude of the inhibitory weights is half that of the excitatory weights. Aside from the relative quantities used for the synaptic weights, the actual values are unimportant, as similar results can be achieved for a variety of weight values. Additionally, much of the initial work in this paper with regard to synchrony and layers of synchrony has also been conducted using conductance-based LIF neurons and LIF neurons with dynamic facilitating synapses, with similar results.

In order to investigate different connectivity schemes, a connectivity matrix C is specified in the Matlab script. Figure 11 shows a connectivity matrix and the resulting laterally connected layer of spiking neurons. A (−1) entry means that there is an inhibitory connection, and similarly a (+1) can be used to represent an excitatory connection. Note that this connectivity matrix represents lateral connections only, which is why the feed-forward excitatory synapses are not represented. The row index of the connectivity matrix identifies the neuron the synapse originates from, and the column index identifies the neuron the synapse connects to. The same approach realises the connectivity of Figure 8; in fact the connectivity matrix can be used to specify any connectivity within a multiple-layered network, but here it represents only a single layer.
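Putting these elements together, a single laterally inhibited layer might be simulated along the following lines; the spike array and the membrane constants are assumed for illustration, while the 3 ms refractory period and the 2:1 excitatory-to-inhibitory weight ratio follow the text above.

% One laterally inhibited layer of LIF neurons driven by a spike array
% 'spikes' (channels x ms) and a lateral connectivity matrix C of -1 entries.
[N, T] = size(spikes);
wExc = 2;  wInh = 1;                     % inhibitory magnitude is half the excitatory
tauM = 10;  vth = 1;  tRef = 3;          % membrane constant, threshold, 3 ms refractory
v = zeros(N, 1);  lastSpike = -Inf(N, 1);
out = zeros(N, T);
for t = 1:T
    I = wExc * spikes(:, t);                       % feed-forward excitation
    if t > 1
        I = I + wInh * (C' * out(:, t - 1));       % lateral input (C already holds -1s)
    end
    v = v * (1 - 1 / tauM) + I;                    % leaky integration with dt = 1 ms
    v(t - lastSpike <= tRef) = 0;                  % hold refractory neurons at rest
    fired = v >= vth;
    out(fired, t) = 1;                             % record output spikes
    v(fired) = 0;  lastSpike(fired) = t;           % reset and start refractory period
end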

Fig. 11. Connectivity matrix and resulting lateral inhibitory layer

REFERENCES

[1] C. von der Malsburg and W. Schneider, "A neural cocktail-party processor," Biological Cybernetics, vol. 54, pp. 29–40, 1986.

[2] C. M. Gray, P. König, A. K. Engel, and W. Singer, "Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties," Nature, vol. 338, no. 6213, pp. 334–337, 1989.

[3] C. van Vreeswijk, L. F. Abbott, and G. B. Ermentrout, "When inhibition not excitation synchronizes neural firing," Journal of Computational Neuroscience, vol. 1, no. 4, pp. 313–321, 1994.

[4] C. Börgers and N. Kopell, "Synchronization in networks of excitatory and inhibitory neurons with sparse, random connectivity," Neural Computation, vol. 15, no. 3, pp. 509–538, 2003.

[5] M. Dhamala, V. Jirsa, and M. Ding, "Enhancement of neural synchrony by time delay," Physical Review Letters, vol. 92, no. 7, pp. 074104-1–074104-6, 2004.

[6] E. V. Lubenov and A. G. Siapas, "Decoupling through synchrony in neuronal circuits with propagation delays," Neuron, vol. 58, no. 1, pp. 118–131, 2008.

[7] F. Ratliff, H. K. Hartline, and W. H. Miller, "Spatial and temporal aspects of retinal inhibitory interaction," Journal of the Optical Society of America, vol. 53, no. 1, pp. 110–120, 1963.

[8] S. Shamma, "On the role of space and time in auditory processing," Trends in Cognitive Sciences, vol. 5, no. 8, pp. 340–348, 2001.

[9] R. F. Lyon, "A computational model of filtering, detection, and compression in the cochlea," Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '82), vol. 7, 1982.

[10] M. Liberman. (1993). TI46 speech corpus, (Linguistic Data Consortium) [Online]. Available: http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC93S9

[11] L. C. W. Pols, "Spectral analysis and identification of Dutch vowels in monosyllabic words," Ph.D. dissertation, Free University, Amsterdam, The Netherlands, 1966.

[12] M. Slaney. (1998). Auditory Toolbox: A MATLAB toolbox for auditory modeling work, (Interval Research Corporation) [Online]. Available: http://web.interval.com/papers/1998-010.

[13] H. de Garis, N. E. Nawa, M. Hough, and M. Korkin, "Evolving an Optimal De/Convolution Function for the Neural Net Modules of ATR's Artificial Brain Project," Proc. International Joint Conference on Neural Networks (IJCNN '99), vol. 1, pp. 438–443, 1999.

[14] B. Schrauwen and J. Van Campenhout, "BSA, a fast and accurate spike train encoding scheme," Proc. International Joint Conference on Neural Networks, vol. 4, pp. 2825–2830, 2003.

[15] S. A. Wills. (2001). Recognising speech with biologically-plausible processors, (Hamilton Prize) [Online]. Available: http://www.inference.phy.cam.ac.uk/saw27/hamilton.pdf.

[16] L. S. Smith, "Robust sound onset detection using leaky integrate-and-fire neurons with depressing synapses," IEEE Transactions on Neural Networks, vol. 15, no. 5, pp. 1125–1134, 2004.

[17] C. Glackin, L. Maguire, and L. McDaid, "Feature Extraction from Spectro-temporal Signals using Dynamic Synapses, Recurrency, and Lateral Inhibition," Proc. IEEE World Congress on Computational Intelligence (WCCI), pp. 1–6, 2010.

[18] J. J. Hopfield and C. D. Brody, "What is a moment? Transient synchrony as a collective mechanism for spatiotemporal integration," Proc. National Academy of Sciences, vol. 98, no. 3, pp. 1282–1287, 2001.

[19] L. F. Abbott, "The timing game," Nature Neuroscience, vol. 4, no. 2, pp. 115–116, 2001.

[20] W. Gerstner and W. M. Kistler, "Spiking Neuron Models: Single Neurons, Populations, Plasticity," Cambridge University Press, 2002.
