Learning Spatio-Temporally Encoded Pattern Transformations in Structured Spiking Neural Networks [1][2]

André Grüning, Brian Gardner and Ioana Sporea
Department of Computer Science, University of Surrey, Guildford, UK

9th November 2015

Outline: Introduction · Background · Our Approach · Results · Summary

[1] http://dx.doi.org/10.6084/m9.figshare.1517779
[2] $Id: multilayerspiker.txt 1827 2015-11-09 07:39:53Z ag0015 $
Spiking neurons: real neurons communicate with each other via sequences of pulses – spikes.
1. Dendritic tree, axon and cell body of a neuron.
2. Top: spikes arrive from other neurons and the membrane potential rises. Bottom: incoming spikes on various dendrites elicit timed spike responses as the output.
3. Response of the membrane potential to incoming spikes: if the threshold θ is crossed, the membrane potential is reset to a low value and a spike is fired.
From: André Grüning and Sander Bohté. Spiking neural networks: Principles and challenges. In Proceedings of the 22nd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Brugge, 2014.
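The threshold-and-reset dynamics sketched in the figure can be illustrated with a minimal leaky integrate-and-fire neuron. This is an illustrative simplification, not the model from the slides, and all parameter values here are assumptions:

```python
def simulate_lif(spike_times, w, t_max=100.0, dt=0.1,
                 tau_m=10.0, theta=1.0, u_reset=0.0):
    """Minimal leaky integrate-and-fire neuron (illustrative only):
    each incoming spike kicks the membrane potential u up by the
    synaptic weight w; u leaks back towards rest with time constant
    tau_m; when u crosses the threshold theta, an output spike is
    fired and u is reset to u_reset."""
    steps = int(t_max / dt)
    spike_steps = {int(round(t / dt)) for t in spike_times}
    u = 0.0
    out_spikes = []
    for k in range(steps):
        u += dt * (-u / tau_m)          # leak towards resting potential 0
        if k in spike_steps:
            u += w                      # incoming spike raises the potential
        if u >= theta:                  # threshold crossing ...
            out_spikes.append(k * dt)   # ... emits a timed output spike ...
            u = u_reset                 # ... and resets the potential
    return out_spikes
```

With a few closely spaced input spikes and a sufficiently large weight the potential summates past θ and the neuron fires; with a small weight the inputs decay away before reaching threshold.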
Some learning algorithms: SpikeProp [3], ReSuMe [4], Tempotron [5], Chronotron [6], SPAN [7], Urbanczik and Senn [8], Brea et al. [9], Frémaux et al. [10], . . .
[3] S.M. Bohte, J.N. Kok, and H. La Poutré. SpikeProp: error-backpropagation in multi-layer networks of spiking neurons. Neurocomputing, 48(1–4):17–37, 2002.
[4] Filip Ponulak and Andrzej Kasiński. Supervised learning in spiking neural networks with ReSuMe: Sequence learning, classification and spike shifting. Neural Computation, 22:467–510, 2010.
[5] Robert Gütig and Haim Sompolinsky. The tempotron: a neuron that learns spike timing-based decisions. Nature Neuroscience, 9(3):420–428, 2006.
[6] Răzvan V. Florian. The chronotron: A neuron that learns to fire temporally precise spike patterns. PLoS ONE, 7(8):e40233, 2012.
[7] A. Mohemmed, S. Schliebs, and N. Kasabov. SPAN: Spike pattern association neuron for learning spatio-temporal sequences. Int. J. Neural Systems, 2011.
[8] R. Urbanczik and W. Senn. A gradient learning rule for the tempotron. Neural Computation, 21:340–352, 2009.
[9] Johanni Brea, Walter Senn, and Jean-Pascal Pfister. Matching recall and storage in sequence learning with spiking neural networks. The Journal of Neuroscience, 33(23):9565–9575, 2013.
[10] Nicolas Frémaux, Henning Sprekeler, and Wulfram Gerstner. Functional requirements for reward-modulated spike-timing-dependent plasticity. The Journal of Neuroscience, 30(40):13326–13337, 2010.
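Several of the rules listed above share a common ingredient: the weight change is driven by the difference between target and actual output spike trains, gated by recent presynaptic activity. A ReSuMe-flavoured sketch for a single synapse follows; the discrete-time form, the parameter names and all values are our simplifying assumptions, not taken from the papers cited above:

```python
import numpy as np

def resume_like_update(input_spikes, output_spikes, target_spikes,
                       t_max=100.0, dt=0.1, lr=0.01, a=0.05, tau=5.0):
    """Sketch of a ReSuMe-style supervised update for one synapse:
    the weight is potentiated at target spike times and depressed at
    actual output spike times, in proportion to a constant term a plus
    an exponentially decaying trace of recent presynaptic spikes."""
    steps = int(t_max / dt)

    def to_train(times):
        s = np.zeros(steps)
        for t in times:
            s[int(round(t / dt))] += 1.0
        return s

    s_in = to_train(input_spikes)
    err = to_train(target_spikes) - to_train(output_spikes)  # desired - actual
    trace = 0.0
    dw = 0.0
    for k in range(steps):
        trace += -dt * trace / tau + s_in[k]  # presynaptic eligibility trace
        dw += lr * err[k] * (a + trace)       # push weight towards target timing
    return dw
```

A missing output spike shortly after a presynaptic spike yields a positive weight change; a spurious output spike yields a negative one, shifting the actual spike train towards the target.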
Left: input spike train Xi (top) and its evoked postsynaptic potential Xi ∗ ε (bottom).
Middle: fluctuations of a hidden neuron's membrane potential uh relative to the firing threshold ϑ, in response to spikes from the input layer (top); the potential-dependent factor of the error back-propagated from the hidden to the input layer, [Yh (Xi ∗ ε)], together with the corresponding PSP Xi ∗ ε (bottom).
Right: membrane potential uo of an output neuron in response to hidden-layer activity, with the target spike times indicated by dotted lines (top); resulting changes of an input-to-hidden weight whi due to the learning rule (bottom).
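The quantity Xi ∗ ε appearing in the caption is the input spike train convolved with a PSP kernel ε. A sketch, assuming a common double-exponential kernel; the slides do not specify ε, and the time constants used here are purely illustrative:

```python
import numpy as np

def psp_convolution(spike_times, t_max=100.0, dt=0.1,
                    tau_m=10.0, tau_s=2.5):
    """Evoked potential X_i * eps: convolve a binary spike train with
    an assumed causal PSP kernel eps(s) = exp(-s/tau_m) - exp(-s/tau_s),
    which rises quickly and decays slowly after each input spike."""
    steps = int(t_max / dt)
    x = np.zeros(steps)
    for t in spike_times:
        x[int(round(t / dt))] = 1.0             # binary spike train X_i
    s = np.arange(steps) * dt
    eps = np.exp(-s / tau_m) - np.exp(-s / tau_s)  # causal PSP kernel
    return np.convolve(x, eps)[:steps] * dt        # X_i * eps on [0, t_max)
```

The result is zero before the first spike and positive afterwards, mirroring the bottom-left panel of the figure.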
Dependence of the performance on the number of input patterns and the network setup. Each input pattern is mapped to a unique target output spike of a single output neuron.
Left: performance as a function of the number of input patterns.
Right: number of episodes to convergence in learning.
Blue curves: hidden weights whi updated according to the learning algorithm; red curves: fixed random hidden weights (plus homoeostasis); green: single-layer network.
Dependence of the performance on the ratio of hidden to output neurons and on the number of target output spikes, for p = 50 input patterns. Each output neuron has a unique target output spike pattern.
Left: performance as a function of the ratio of hidden to output neurons.
Right: minimum ratio of hidden to output neurons required to achieve 90% performance.