On the capacity of unsupervised recursive neural networks for symbol processing
29. 8. 2006
Prof. Dr. Barbara Hammer
Computational Intelligence Group
Institute of Computer Science
Clausthal University of Technology

Nicolas Neubauer, M.Sc.
Neural Information Processing Group
Department of Electrical Engineering & Computer Science
Technische Universität Berlin
Overview
• Introduction: GSOMSD models and capacities
• Main part: Implementing deterministic push-down automata in RecSOM
• Conclusion
Unsupervised neural networks
• Clustering algorithms
– Each neuron/prototype i has a weight vector w_i
– Inputs x are mapped to a winning neuron i such that d(x, w_i) is minimal
– Training: adapt the weights to minimize some error function
• Self-Organizing Maps (SOMs): neurons arranged on a lattice
– During training, also adapt the winner's neighbours
– After training, similar inputs map to neighbouring neurons
– Variants without a fixed grid exist (e.g. neural gas)
• Defined for finite-dimensional input vectors x ∈ R^n (see the sketch below)
– Question: how to adapt these algorithms to inputs of non-fixed length, such as time series?
– E.g. time windowing, statistical analysis, ...
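To make the clustering step concrete, here is a minimal sketch of winner selection and one SOM training step; the function names, learning rate, and Gaussian neighbourhood are illustrative assumptions, not details from the talk.

```python
import numpy as np

def winner(x, W):
    """Index of the neuron whose weight vector w_i minimizes d(x, w_i)."""
    return int(np.argmin(np.linalg.norm(W - x, axis=1)))

def som_step(x, W, grid, lr=0.1, sigma=1.0):
    """Pull the winner and its lattice neighbours towards the input x."""
    i = winner(x, W)
    # Neighbourhood strength decays with lattice distance to the winner.
    h = np.exp(-np.linalg.norm(grid - grid[i], axis=1) ** 2 / (2 * sigma ** 2))
    W += lr * h[:, None] * (x - W)
    return W
```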
Recursive processing of time series
• Process each input of the time series separately
• Along with a representation C of the map's response to the previous input (the context)
– context function rep: R^N → R^r (N = number of neurons)
– C_t = rep(d_1(t−1), ..., d_N(t−1))
• Neurons respond not only to the input, but also to the context
– Neurons require additional context weights c_i
– Distance of neuron i at time step t:
d_i(t) = α·d(x_t, w_i) + β·d_r(C_t, c_i)
[Figure: recursive unfolding — each input x_1, x_2, ..., x_t is processed together with a context C_init, C_2, ..., C_t, where rep computes each context from the map's previous response]
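A minimal sketch of this recursion (not the authors' implementation): rep and the weight matrices are left as parameters, L1 distances are used as in the construction later in the talk, and the defaults α = 1, β = ¼ follow the parameters chosen there.

```python
import numpy as np

def gsomsd_distances(xs, W, C_w, rep, alpha=1.0, beta=0.25, C_init=None):
    """Process a sequence xs; return each neuron's distance after the last input.

    W   : (N, n) input weights w_i      C_w : (N, r) context weights c_i
    rep : maps the distance vector (N,) to the next context (r,)
    """
    C = np.zeros(C_w.shape[1]) if C_init is None else C_init
    d = None
    for x in xs:
        d = (alpha * np.abs(W - x).sum(axis=1)      # d(x_t, w_i), L1
             + beta * np.abs(C_w - C).sum(axis=1))  # d_r(C_t, c_i), L1
        C = rep(d)                                  # context for the next step
    return d
```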
Generalized SOM for Structured Data (GSOMSD) [Hammer, Micheli, Sperduti, Stricker, 04]
• Unsupervised recursive algorithms are instances of the GSOMSD
• Varying in
– the context function rep and the distances d, d_r
– the lattice (metric, hyperbolic, neural gas, ...)
• Example: Recursive SOM (RecSOM)
– the context stores all neurons' activations (sketched below):
r = N, rep(d_1, ..., d_N) = (exp(−d_1), ..., exp(−d_N))
– each neuron needs N context weights! (memory ~ N²)
– other models store
• properties of the winning neuron
• previous activations of single neurons only
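As a concrete instance, RecSOM's context function is just the full activation vector, which is why r = N and the context weights alone cost ~N² memory. A one-line sketch:

```python
import numpy as np

def recsom_rep(d):
    """RecSOM context: one activation exp(-d_i) per neuron, so r = N."""
    return np.exp(-d)
```

This plugs directly into the gsomsd_distances sketch above as the rep argument, with C_w of shape (N, N).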
Computational capacity
• 0111…1 ≠ 1111…1
– Ability to keep state information: equivalence to finite state automata (FSA) / regular languages
– A decaying context will eventually "forget" the leading 0
• ([(…)]) ≠ ([(…)]]
– Ability to keep stack information: equivalence to pushdown automata (PDA) / context-free languages
– A finite context cannot store the potentially unbounded stack (illustrated below)
• Ability to store at least two binary stacks: Turing machine equivalence
→ connecting context models to the Chomsky hierarchy
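As a plain illustration (not from the talk) of why the bracket language needs a stack: the matcher below must remember an unbounded number of open brackets, which no fixed finite state, and hence no decaying finite context, can do.

```python
def balanced(s):
    """True iff the brackets in s are properly nested."""
    stack = []
    pairs = {')': '(', ']': '['}
    for ch in s:
        if ch in '([':
            stack.append(ch)                # push each opener
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False                # closer without matching opener
    return not stack                        # no unmatched openers left

assert balanced('([()])')
assert not balanced('([()]]')
```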
Why capacity matters
• Explore the dynamics of the algorithms in detail
• Distinguish the power of different models
– between different contexts within the GSOMSD
(e.g., to justify the huge memory cost of RecSOM compared to other models)
– against other approaches
(e.g., supervised recurrent networks)
• For comparison, supervised recurrent networks are Turing machine equivalent
– in exponential time for sigmoidal activation functions [Kilian/Siegelmann '96]
– in polynomial time for semilinear activation functions [Siegelmann/Sontag '95]
Various recursive models and their capacity
Model                                   Context          Encoding      Lattice    Capacity
TKM [Chappell/Taylor,93]                neuron itself    input space   all        < FSA
RSOM [Koskela/Varsta/Heikkonen,98]      neuron itself    input space   all        < FSA
MSOM [Hammer/Strickert,04]              winner content   input space   all        FSA
SOMSD [Hagenbuchner/Sperduti/Tsoi,03]   winner index     index space   SOM/HSOM   FSA
RecSOM [Voegtlin,02]                    exp(all act.)    activ. space  all        ≥ PDA*

* for a winner-takes-all (WTA) semilinear context [Strickert/Hammer,05]
Overview
• Introduction: GSOMSD models and capacities
• Main part: Implementing deterministic push-down automata in RecSOM
• Conclusion
Goal: Build a Deterministic PDA
Using
• the GSOMSD recursive equation d_i(t) = α·d(x_t, w_i) + β·d_r(C_t, c_i)
• L1 distances for d and d_r (i.e., d(a, b) = |a − b|)
• parameters
– α = 1
– β = ¼
• a modified RecSOM context (sketched below):
– max(1 − d_i, 0) instead of the original exp(−d_i)
• similar overall shape
• easier to handle analytically
– additionally, winner-takes-all
• required at one place in the proof...
• makes life easier overall, however
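A sketch of this modified context function with the winner-takes-all option; the helper name is an assumption for illustration.

```python
import numpy as np

def modified_rep(d, wta=True):
    """Context max(1 - d_i, 0) in place of RecSOM's exp(-d_i)."""
    a = np.maximum(1.0 - d, 0.0)   # piecewise-linear, similar overall shape
    if wta:
        out = np.zeros_like(a)
        i = int(np.argmax(a))
        out[i] = a[i]              # keep only the winner's activation
        a = out
    return a
```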