
Autonomous Growing Neural Gas for applications with time constraint: Optimal parameter estimation

Neural Networks 32 (2012) 196–208

2012 Special Issue

José García-Rodríguez a,∗, Anastassia Angelopoulou b, Juan Manuel García-Chamizo a, Alexandra Psarrou b, Sergio Orts Escolano a, Vicente Morell Giménez a

a Department of Computing Technology, University of Alicante, Ap. 99, E03080 Alicante, Spain
b Department of Computer Science & Software Engineering (CSSE), University of Westminster, Cavendish W1W 6UW, United Kingdom

Keywords: Self-organizing models; Topology preservation; Growing Neural Gas; Delaunay triangulation; Temporal constraint

Abstract

This paper addresses the ability of self-organizing neural network models to manage real-time applications. Specifically, we introduce fAGNG (fast Autonomous Growing Neural Gas), a modified learning algorithm for the incremental model Growing Neural Gas (GNG) network. The Growing Neural Gas network, with its attributes of growth, flexibility, rapid adaptation, and excellent quality of representation of the input space, is a suitable model for real-time applications. However, under time constraints GNG fails to produce the optimal topological map for any input data set. In contrast to existing algorithms, the proposed fAGNG algorithm introduces multiple neurons per iteration. The number of neurons inserted and input data generated is controlled autonomously and dynamically, based on an a priori or online learnt model. A detailed study of the topological preservation and quality of representation depending on the neural network parameter selection has been developed to find the best alternatives to represent different linear and non-linear input spaces under time restrictions or specific quality of representation requirements.


1. Introduction

Unsupervised classification, also known as data clustering, is defined as the problem of finding homogeneous groups of data points in a given multidimensional data set. Each of these groups is called a cluster and defines a region where the density of data points is locally higher than in other regions. Another objective of unsupervised learning can be described as topology learning: given a high-dimensional data distribution, find a topological structure that closely reflects the topology of the data distribution (Furao & Hasegawa, 2006).

Neural networks have been extensively used for data clustering. Self-organizing models (Kohonen, 2001) place their neurons so that their positions within the network and the connectivity between different neurons are optimized to match the spatial distribution of activations. As a result of this optimization, existing relationships within the input space are reproduced as spatial relationships among neurons in the network. In particular, Growing Neural Gas is an incremental model able to learn the important topological relations in a given set of input vectors by means of a simple Hebbian-like learning rule (Fritzke, 1995).

∗ Corresponding author. Tel.: +34 678600796; fax: +34 965909643. E-mail addresses: [email protected] (J. García-Rodríguez), [email protected] (A. Angelopoulou), [email protected] (J.M. García-Chamizo), [email protected] (A. Psarrou), [email protected] (S. Orts Escolano), [email protected] (V. Morell Giménez).

Growing models have been widely used in recent years for different applications, mainly for clustering or topology learning. A number of papers have modified the GNG algorithm to improve it or adapt it to different applications. Fritzke presented variations of the original algorithm to deal with non-stationary distributions (Fritzke, 1997) and a semi-supervised variation (SNG) (Fritzke, 1994) combined with RBF networks. In recent years, many variations of the GNG algorithm have been proposed. Marsland, Shapiro, and Nehmzow (2002) present a variation, the Growing When Required (GWR) network, able to add nodes whenever the network does not sufficiently match the input. Furao and Hasegawa (2006) introduced an incremental learning GNG model to handle online non-stationary problems. Prudent and Ennaji (2005) proposed an incremental learning model based on the Adaptive Resonance Theory (ART) mechanism, called the Incremental GNG (IGNG), to handle semi-supervised learning. Cselényi (2005) combined an adaptive receptive field function with the GNG algorithm to handle locally dense data around neuron units. Qin and Suganthan (2004) proposed the Robust GNG (RGNG) algorithm for unsupervised clustering, adding several techniques to the original GNG algorithm to reduce its sensitivity to prototype initialization, input sequence, and outliers. Doherty, Adams, and Davey (2005) described the TreeGNG, a top-down unsupervised method that produces hierarchical classification. Sledge and Keller (2008) modified GNG to detect cluster structures that incrementally emerge.

Fateminzadeh, Lucas, and Sotanian-Zadeh (2003) have used a modified Growing Neural Gas to automatically correspond important landmark points from two related shapes, by adding a third dimension to the data points and by treating the problem of correspondence as a cluster-seeking method, adjusting the centers of points from the two corresponding shapes. Angelopoulou, Psarrou, García-Rodríguez, and Revett (2005) used the GNG to automatically obtain interest points in medical shapes and built statistical shape models.

There are many other works related to computer vision and man–machine interaction, such as: image compression (García-Rodríguez, Flórez-Revuelta, & García-Chamizo, 2007a), segmentation and representation of objects (Flórez, García, García, & Hernández, 2002a; Rivera-Rovelo, Herold-García, & Bayro-Corrochano, 2006; Wu, Liu, & Huang, 2000), object tracking (Angelopoulou, Psarrou, & García-Rodríguez, 2007; Cao & Suganthan, 2003; Frezza-Buet, 2008), recognition of gestures (Bauer, Hermann, & Villmann, 1999; Flórez, García, García, & Hernández, 2002b; García-Rodríguez, Angelopoulou, & Psarrou, 2006), or 3D reconstruction (Cretu, Petriu, & Payeur, 2005; do Rêgo, Araújo, & de Lima Neto, 2007; Holdstein & Fischer, 2008). Several applications of GNG to different fields, such as robotics (García-Rodríguez, Flórez-Revuelta, & García-Chamizo, 2007b; Marsland, Nehmzow, & Shapiro, 2000), communications (Bougrain & Alexandre, 1999), economics (Lisboa, Edisbury, & Vellido, 2000), industrial applications (Rehtanz & Leder, 2000), biology (Ogura, Iwasaki, & Sato, 2003) and medicine (Angelopoulou et al., 2005; Cheng & Zell, 2000; Cselényi, 2005; Fateminzadeh et al., 2003), have been developed in recent years.

Most applications of neural networks are related to data clustering and do not include temporal restrictions. However, growing models are also used to represent the topology of different objects, and in that case the applications often have temporal constraints.

For object topology representation and preservation applications, accelerating the learning process is very important, or even compulsory in the case of real-time applications. Moreover, the finalization condition of the GNG algorithm is commonly defined by the insertion of a predefined number of neurons. The selection of this number, together with the other parameters that constitute the new version of GNG proposed here, called fast Autonomous GNG, accelerates the learning process.

However, accelerating the algorithm involves modifying the number of input signals and the number of neurons inserted per iteration. These changes can affect the quality of representation, which may be important in applications that characterize, classify and recognize objects. For that reason, it is necessary to use measures that estimate the topology preservation of the input space (Martinetz & Schulten, 1994).

A detailed study has been conducted to define the best parameters based on the image topology, the available time and the requirements of the application. Since topology preservation measures are computationally expensive, it is not possible to calculate them online while respecting the time restrictions. In this situation, a prior selection of optimal parameters based on a previous study is helpful.

The remainder of the paper is organized as follows. Section 2 provides a detailed description of the topology learning algorithm of GNG with its fast Autonomous variant fAGNG, and discusses several topological measures to quantify topology preservation. Section 3 explains how we can apply GNG to represent 2D objects and presents a set of experimental results with structure size and time response constraints while, in Section 4, some applications with different time and quality requirements are presented before we conclude in Section 5.

2. Fast Autonomous Growing Neural Gas

From the Neural Gas model (Martinetz, Berkovich, & Schulten, 1993) and Growing Cell Structures (Fritzke, 1993), Fritzke developed the Growing Neural Gas model (Fritzke, 1995), which has no predefined topology of connections between neurons: starting from an initial number of neurons, new ones are added (Fig. 1). However, the original GNG algorithm does not deal directly with time constraints, and some modifications must be introduced in order to obtain acceptable results in a predefined time. This new version has been called fast Autonomous GNG.

2.1. Growing Neural Gas (GNG)

GNG is an unsupervised incremental clustering algorithm. Given some input distribution in R^d, GNG incrementally creates a graph, or network of nodes, where each node in the graph has a position in R^d. GNG can be used for vector quantization by finding the code-vectors in clusters; these code-vectors are represented by the reference vectors (the positions) of the GNG nodes. It can also be used for finding topological structures that closely reflect the structure of the input distribution. GNG is an adaptive algorithm in the sense that if the input distribution slowly changes over time, GNG is able to adapt, that is, to move the nodes so as to cover the new distribution.

Starting with two nodes, the algorithm constructs a graph in which nodes are considered neighbors if they are connected by an edge. The neighbor information is maintained throughout execution by a variant of competitive Hebbian learning (CHL).

The graph generated by CHL is called the ''induced Delaunay triangulation'' (Fig. 2(b)) and is a sub-graph of the Delaunay triangulation (Fig. 2(a)) corresponding to the set of nodes. The induced Delaunay triangulation optimally preserves topology in a very general sense (Martinetz, 1993). CHL is an essential component of the GNG algorithm, since it is used to direct the local adaptation of nodes and the insertion of new nodes.

The network is specified as:

– A set N of nodes (neurons). Each neuron c ∈ N has an associated reference vector w_c ∈ R^d. The reference vectors can be regarded as positions in the input space of their corresponding neurons.
– A set of edges (connections) between pairs of neurons. These connections are not weighted, and their purpose is to define the topological structure. An edge aging scheme is used to remove connections that become invalid due to the motion of the neurons during the adaptation process.

The GNG learning algorithm to adapt the network to the input manifold is as follows:

1. Start with two neurons a and b at random positions w_a and w_b in R^d.
2. Generate a random input pattern ξ according to the data distribution P(ξ). For example, if the input space is a 2D image, the input pattern ξ is the (x, y) coordinate of a point belonging to the object shape to represent. Typically, for the training of the network we generate 100–10000 input signals, depending on the complexity of the input space.
3. Find the nearest neuron (winner neuron) s_1 and the second nearest s_2.
4. Increase the age of all the edges emanating from s_1.
5. Add the squared distance between the input signal and the winner neuron to an error counter of s_1:
   $\Delta \mathrm{error}(s_1) = \|w_{s_1} - \xi\|^2.$   (1)
6. Move the winner neuron s_1 and its topological neighbors (neurons connected to s_1) towards ξ by fractions ε_w and ε_n, respectively, of the total distance:


Fig. 1. Initial, intermediate and final states of the GNG learning algorithm.

Fig. 2. (a) Delaunay triangulation, (b) Induced Delaunay triangulation.

   $\Delta w_{s_1} = \varepsilon_w (\xi - w_{s_1})$   (2)
   $\Delta w_{s_n} = \varepsilon_n (\xi - w_{s_n}).$   (3)

7. If s_1 and s_2 are connected by an edge, set the age of this edge to 0. If the edge does not exist, create it.
8. Remove the edges with an age larger than a_max. If this results in isolated neurons (without emanating edges), remove them as well.
9. Every certain number λ of input signals generated, insert a new neuron as follows:
   • Determine the neuron q with the maximum accumulated error.
   • Insert a new neuron r between q and its neighbor f with the highest accumulated error:
     $w_r = 0.5\,(w_q + w_f).$   (4)
   • Insert new edges connecting the neuron r with neurons q and f, removing the old edge between q and f.
   • Decrease the error variables of neurons q and f by multiplying them with a constant α. Initialize the error variable of r with the new value of the error variable of q.
10. Decrease all error variables by multiplying them with a constant β.
11. If the stopping criterion is not yet achieved, go to step 2.

In summary, the adaptation of the network to the input space vectors is produced in step 6. The insertion/update of connections (step 7) between the winning neuron and the second closest to the input signal provides the topological relationships between the neurons.

The elimination of connections (step 8) removes the edges that are no longer part of that topology. This is done by removing the connections between neurons that are no longer close, or that have other neurons closer to them, so that the age of these connections exceeds a threshold.

The accumulation of error (step 5) identifies those areas of the input space where it is necessary to increase the number of neurons to improve the mapping.

GNG only uses parameters that are constant in time. Further, it is not necessary to decide the number of nodes a priori, since nodes are added incrementally during execution. Insertion of new nodes ceases when a user-defined performance criterion is met (e.g., application time, error minimization) or when a maximum network size has been reached.
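To make the loop above concrete, the following is a minimal Python sketch of steps 1–11; it is an illustration, not the authors' C++ Builder implementation. `sample_input` is a hypothetical callable that implements step 2 by drawing a pattern ξ from P(ξ), and isolated-neuron removal in step 8 is omitted for brevity.

```python
import numpy as np

def gng(sample_input, max_neurons=100, lam=1000, eps_w=0.1, eps_n=0.01,
        alpha=0.5, beta=0.0005, a_max=250):
    # Step 1: start with two neurons at positions drawn from the input.
    w = [sample_input(), sample_input()]      # reference vectors w_c
    err = [0.0, 0.0]                          # accumulated errors E_c
    edges = {}                                # frozenset({i, j}) -> age
    signals = 0
    while len(w) < max_neurons:               # step 11: stopping criterion
        xi = sample_input()                   # step 2: xi ~ P(xi)
        signals += 1
        # Step 3: winner s1 and second-nearest neuron s2.
        d2 = [float(np.sum((wc - xi) ** 2)) for wc in w]
        s1, s2 = (int(i) for i in np.argsort(d2)[:2])
        # Step 4: age all edges emanating from s1.
        for e in edges:
            if s1 in e:
                edges[e] += 1
        # Step 5: accumulate the squared winner distance (Eq. (1)).
        err[s1] += d2[s1]
        # Step 6: move s1 and its neighbours towards xi (Eqs. (2)-(3)).
        w[s1] = w[s1] + eps_w * (xi - w[s1])
        for e in edges:
            if s1 in e:
                n = next(iter(e - {s1}))
                w[n] = w[n] + eps_n * (xi - w[n])
        # Step 7: refresh (or create) the edge s1-s2 with age 0.
        edges[frozenset((s1, s2))] = 0
        # Step 8: drop edges older than a_max (isolated-neuron removal
        # is omitted in this sketch).
        edges = {e: a for e, a in edges.items() if a <= a_max}
        # Step 9: every lam signals, insert a neuron between the neuron q
        # with maximum error and its worst-error neighbour f (Eq. (4)).
        if signals % lam == 0:
            q = int(np.argmax(err))
            nbrs = [next(iter(e - {q})) for e in edges if q in e]
            if nbrs:
                f = max(nbrs, key=lambda c: err[c])
                w.append(0.5 * (w[q] + w[f]))
                r = len(w) - 1
                edges.pop(frozenset((q, f)), None)
                edges[frozenset((q, r))] = edges[frozenset((f, r))] = 0
                err[q] *= alpha               # decrease errors of q and f
                err[f] *= alpha
                err.append(err[q])            # init E_r from the new E_q
        # Step 10: decay all error variables (one common reading of
        # "multiply by a constant beta" is a per-signal decay by 1 - beta).
        err = [x * (1.0 - beta) for x in err]
    return np.array(w), edges
```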

2.2. Fast Autonomous Growing Neural Gas (fAGNG)

If the goal of learning is to obtain a complete network, with all its neurons, in a predetermined time, the learning algorithm should be modified accordingly. The main factor in the learning time is the number of input signals λ generated per iteration: since new neurons are inserted at shorter intervals when λ is reduced, reducing λ shortens the time needed to complete the network.

The completion of GNG learning is usually determined by the insertion of a predefined number of neurons to reach a target size. However, if a completion condition is imposed (step 11 of the general algorithm presented in Section 2.1), in some cases there is not enough time for the network to correctly adapt to the input space, resulting in wrong connections between neurons (Fig. 3). This causes differences between the configuration of the network and the Delaunay triangulation that should have been established.

A modification of the Growing Neural Gas algorithm is introduced to adapt the network to the input space while satisfying time constraints, so that the adaptation process is determined by the number of input signals λ and the number of neurons k inserted per iteration.

In our case, there may be multiple insertions, unlike the single insertion of step 9 of the original algorithm. There are some works in this line (Cheng & Zell, 2000) where more than one neuron per iteration is inserted in order to accelerate the learning process of the network.

In our version, step 9 is repeated each iteration, inserting several neurons in the areas with the most accumulated error and creating the necessary connections.

2.2.1. Insertion of neurons

The main difference of our fast Autonomous GNG is the insertion of k neurons every λ input signals, depending on the available time. Parameters λ and k are dynamically adapted depending on the available time, based on the study presented in Section 3.

Every λ input signals generated, neurons are inserted following this process:

Repeat k times

– Determine the neuron q with the highest accumulated error:

  $q = \arg\max_{c \in A} E_c.$   (5)


Fig. 3. Incomplete adjustments for early completion of learning.

– Find the neighbor f of q with the highest accumulated error:

  $f = \arg\max_{c \in N_q} E_c.$   (6)

– Insert a new neuron r between f and q:

  $A = A \cup \{r\}$   (7)
  $w_r = \frac{w_q + w_f}{2}.$   (8)

– Insert new connections between the neuron r and neurons f and q, deleting the connection between f and q if it existed:

  $C = C \cup \{(r, q), (r, f)\}, \quad C = C - \{(q, f)\}.$   (9)

– Decrease the error of neurons f and q by a fraction α:

  $\Delta E_f = -\alpha E_f, \quad \Delta E_q = -\alpha E_q.$   (10)

– Interpolate the error of neuron r between the errors of f and q:

  $E_r = \frac{E_f + E_q}{2}.$   (11)

It is possible to compute the k neurons with the highest accumulated error and make the insertions in parallel. However, parallel insertion accumulates more errors when creating connections than iterative insertion does. Parameters k and λ are crucial to the performance of the algorithm. In the case of multiple insertion with k > 1, λ should be chosen appropriately, with high values, to minimize the errors and achieve a good quality of representation.
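As a sketch of the insertion step above (Eqs. (5)–(11)), under the same data structures assumed in the GNG sketch of Section 2.1 (a list `w` of reference vectors, a list `err` of accumulated errors and a set `edges` of connections; edge ages are omitted since new edges start at age 0), the iterative k-fold insertion could look as follows:

```python
import numpy as np

def insert_k_neurons(w, err, edges, k, alpha=0.5):
    """fAGNG insertion step: repeat the GNG insertion k times (Eqs. (5)-(11)).

    w: list of reference vectors; err: list of accumulated errors E_c;
    edges: set of frozenset({i, j}) connections.
    """
    for _ in range(k):
        # Eq. (5): neuron q with the highest accumulated error.
        q = int(np.argmax(err))
        nbrs = [next(iter(e - {q})) for e in edges if q in e]
        if not nbrs:                      # q isolated: nothing to bisect
            break
        # Eq. (6): neighbour f of q with the highest accumulated error.
        f = max(nbrs, key=lambda c: err[c])
        # Eqs. (7)-(8): add neuron r halfway between q and f.
        w.append(0.5 * (w[q] + w[f]))
        r = len(w) - 1
        # Eq. (9): rewire the edge q-f into q-r and r-f.
        edges.discard(frozenset((q, f)))
        edges.add(frozenset((q, r)))
        edges.add(frozenset((r, f)))
        # Eq. (10): decrease the errors of f and q by a fraction alpha.
        err[f] -= alpha * err[f]
        err[q] -= alpha * err[q]
        # Eq. (11): interpolate the error of r between those of f and q.
        err.append(0.5 * (err[f] + err[q]))
    return w, err, edges
```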

2.3. Topology preservation

The final result of the self-organizing or competitive learning process is closely related to the concept of Delaunay triangulation. The Voronoi region of a neuron consists of all points of the input space for which it is the winning neuron. Therefore, as a result of competitive Hebbian learning, a graph (the neural network structure) is obtained whose vertices are the neurons of the network and whose edges are the connections between them, and which represents the Delaunay triangulation of the input space corresponding to the reference vectors of the neurons in the network.

2.3.1. Topology Preserving Networks

Traditionally, it has been suggested that this triangulation, the result of competitive learning, preserves the topology of the input space. However, Martinetz and Schulten (1994) introduced a condition that restricts this property.

It is proposed that the mapping φ_w of V onto A preserves neighborhood when vectors that are close in the input space V are mapped to nearby neurons of the network A.

It is also noted that the inverse mapping preserves neighborhood if nearby neurons of A have associated reference vectors close in the input space:

$\varphi_w^{-1}: A \to V, \quad c \in A \mapsto w_c \in V.$   (12)

Combining the two definitions, a Topology Preserving Network (TPN) can be defined as a network A whose mappings φ_w and φ_w^{-1} both preserve neighborhood.

Thus, self-organizing maps, or Kohonen maps, are not TPN as has traditionally been considered, since this condition would only hold if the topology, or dimension, of the map and the input space coincide. Since the network topology is established a priori, possibly ignoring the topology of the input space, it is not possible to ensure that the mappings φ_w and φ_w^{-1} preserve neighborhood.

The Growing Cell Structures (Fritzke, 1993) are not TPN either, since the topology of the network is established a priori (triangles, tetrahedra, . . . ). However, they improve on Kohonen maps (Kohonen, 2001), due to their capacity for insertion and removal of neurons.

In the case of neural gases like Growing Neural Gas and Neural Gas, the mechanism that adjusts the network through competitive learning generates an induced Delaunay triangulation (Fig. 2(b)), a graph obtained from the Delaunay triangulation that keeps only the edges whose endpoints belong to the input space V. Martinetz and Schulten (1994) demonstrate that these models are TPN.

This capability can be used, for instance, in the application of these models to the representation of objects (Fig. 4) and their movement (Flórez et al., 2002b). In a previous comparative study (Flórez et al., 2002a) with Kohonen Maps, Growing Cell Structures and Neural Gas, it was demonstrated that Kohonen Maps and Growing Cell Structures are not topology preserving neural networks. Moreover, the original GNG is more than a hundred times faster than the NG.

2.3.2. Topology preservation measures

In applications with time constraints, the speed of learning of the Growing Neural Gas may not be enough, so the learning parameters should be adapted to accelerate learning. However, this increase in speed can lead to a poor quality of topology representation.

This section describes the most commonly used measures of topology preservation. These measures are used to estimate the impact of time and network parameters on the topology preservation of different input spaces.

The adaptation of self-organizing neural networks is often measured in terms of two parameters: (1) resolution and (2) degree of preservation of the topology of the input space.

The most widely used measure of resolution is the quantization error (Kohonen, 2001), which is expressed as:

$E = \sum_{\forall \xi \in R^d} \|w_{s_\xi} - \xi\| \cdot p(\xi)$   (13)

where s_ξ is the closest neuron to the input pattern ξ.
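As an illustration, Eq. (13) can be evaluated over a finite sample of input patterns; if the samples are already drawn from p(ξ), the sum reduces to a plain average of winner distances. This is a sketch under those assumptions, not the authors' code:

```python
import numpy as np

def quantization_error(w, xis, p=None):
    """Eq. (13): distance from each input to its winner neuron, weighted
    by p(xi); if p is None, the samples xis are assumed to be drawn from
    p already, and a plain average is returned."""
    w = np.asarray(w)
    dists = np.array([np.linalg.norm(w - xi, axis=1).min() for xi in xis])
    return float(dists.mean() if p is None else dists @ np.asarray(p))
```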


Fig. 4. Two-dimensional objects represented by a self-organizing network.

With regard to the preservation of the topology, there are several measures. Some of the most relevant are the topographic product (Bauer & Pawelzik, 1992), the topographic function (Villmann, Herrmann, & Martinetz, 1997) and the C measure (Kaski & Lagus, 1996). In addition, we introduce the use of a new measure, the topographic geodesic product (Flórez, García, García, & Hernández, 2002c), which adapts the topographic product to measure non-linear input spaces.

2.3.2.1. Topographic product. The topographic product was one of the first attempts to quantify the topology preservation of a self-organizing neural network. This measure is used to detect deviations between the dimensionality of the network and that of the input space. Folds in a network indicate that it is trying to approximate an input space of a different dimension.

This measure compares the neighborhood relationship between each pair of neurons in the network with respect to both their positions within the map (P_2(j, k)) and their reference vectors (P_1(j, k)):

$P_1(j, k) = \left( \prod_{l=1}^{k} \frac{d^V(w_j, w_{n_l^A(j)})}{d^V(w_j, w_{n_l^V(j)})} \right)^{1/k}$   (14)

$P_2(j, k) = \left( \prod_{l=1}^{k} \frac{d^A(j, n_l^A(j))}{d^A(j, n_l^V(j))} \right)^{1/k}$   (15)

where j is a neuron, w_j is its reference vector, n_l^V(j) is the l-th closest neighbor of j in the input space V with distance d^V, and n_l^A(j) is the l-th closest neuron to j in the network A with distance d^A.

Combining Eqs. (14) and (15), a measure of the topological relationship between the neuron j and its k nearest neurons is obtained:

topological relationship between the neuron j and its k nearestneurons:

P3 (j, k) =

kl=1

dV (wj, wnAl(j))

dV (wj, wnVl(j))

·dA(j, nA

l (j))dA(j, nV

l (j))

1/2l

. (16)

To extend this measure to all the neurons of the network and all possible orders of neighborhood, the topographic product P is defined as

$P = \frac{1}{N(N-1)} \sum_{j=1}^{N} \sum_{k=1}^{N-1} \log P_3(j, k).$   (17)

The topographic product indicates whether the dimension of the network is larger than (P > 0), equal to (P = 0) or smaller than (P < 0) the dimension of the input space it has adapted to.
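A direct transcription of Eqs. (14)–(17) can be sketched as follows, taking d^V as the Euclidean distance between reference vectors and d^A as a precomputed matrix of shortest-path (hop-count) distances in the network; a connected network and distinct reference vectors are assumed in this illustration:

```python
import numpy as np

def topographic_product(w, d_A):
    """Eqs. (14)-(17). w: (N, d) array of reference vectors; d_A: (N, N)
    matrix of shortest-path (hop-count) distances in the network graph.
    Assumes a connected network and distinct reference vectors."""
    w = np.asarray(w)
    N = len(w)
    d_V = np.linalg.norm(w[:, None, :] - w[None, :, :], axis=2)
    P = 0.0
    for j in range(N):
        others = [c for c in range(N) if c != j]
        nA = sorted(others, key=lambda c: d_A[j, c])   # neighbours in A
        nV = sorted(others, key=lambda c: d_V[j, c])   # neighbours in V
        log_q = 0.0
        for k in range(1, N):            # k-th neighbourhood order
            l = k - 1                    # index of the l-th neighbour
            log_q += (np.log(d_V[j, nA[l]] / d_V[j, nV[l]])
                      + np.log(d_A[j, nA[l]] / d_A[j, nV[l]]))
            P += log_q / (2 * k)         # log P3(j, k), Eq. (16)
    return P / (N * (N - 1))             # Eq. (17)
```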

2.3.2.2. Topographic function. The topology preservation of the mappings ψ_{M→A} and ψ_{A→M} is expressed, respectively, as:

$f_j(k) = \#\{\, i \mid d^{T_A}(i, j) > k;\; d^{T_M}(i, j) = 1 \,\}$   (18)

$f_j(-k) = \#\{\, i \mid d^{T_A}(i, j) = 1;\; d^{T_M}(i, j) > k \,\}$   (19)

where #{·} represents the cardinality of the set, i and j are neuron indices, k = 1, . . . , N − 1, and d^T(i, j) is the distance between two neurons in the topological spaces built from M and A, respectively.

The topographic function φ_A^M of the mapping ψ is defined as:

$\varphi_A^M(k) = \frac{1}{N} \sum_{j \in A} f_j(k), \quad k \neq 0$
$\varphi_A^M(0) = \varphi_A^M(1) + \varphi_A^M(-1).$   (20)

The explicit form of φ_A^M expresses detailed information on the magnitude of the distortions that appear on the map with respect to the input subspace M. If less detail is needed, the sign of φ = φ_A^M(1) − φ_A^M(−1) indicates whether the dimension of A is greater or less than that of M.

2.3.2.3. C measure. This measure combines resolution and topology preservation by computing, for each input signal, the length of the shortest path in the input space that goes from the signal to its nearest neuron and then on to the second-nearest neuron, through the shortest path between these two neurons within the network.

Formally, let I_i(k) be the index of the k-th neuron in the i-th path defined inside the map from I_i(0) = c(x) to I_i(K_{c'(x),i}) = c'(x), where the function I_i represents a path through the neurons of the map, and the neurons I_i(k) and I_i(k + 1) must be neighbors for k = 0, . . . , K_{c'(x),i} − 1. Using this notation, the distance d(x) is:

$d(x) = \|x - m_{c(x)}\| + \min_i \sum_{k=0}^{K_{c'(x),i}-1} \|m_{I_i(k)} - m_{I_i(k+1)}\|.$   (21)

The C measure of the map is defined as the average (denoted by E) of this distance over the input signals:

$C = E(d(x)).$   (22)
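As a sketch of Eqs. (21)–(22): the inner minimum over paths is exactly a shortest path in the network with edges weighted by their Euclidean lengths, so Dijkstra can be used (here via scipy, an assumption of this illustration):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def c_measure(w, edges, xis):
    """Eqs. (21)-(22): for each input x, winner distance plus the shortest
    network path (edges weighted by Euclidean length) from the winner c(x)
    to the second winner c'(x); C is the average over the inputs.
    Assumes a connected network."""
    w = np.asarray(w)
    n = len(w)
    rows, cols, vals = [], [], []
    for e in edges:                  # edges: iterable of frozenset({i, j})
        i, j = sorted(e)
        lij = float(np.linalg.norm(w[i] - w[j]))
        rows += [i, j]; cols += [j, i]; vals += [lij, lij]
    graph = csr_matrix((vals, (rows, cols)), shape=(n, n))
    path = dijkstra(graph, directed=False)   # all-pairs path lengths
    ds = []
    for x in xis:
        d = np.linalg.norm(w - x, axis=1)
        c1, c2 = np.argsort(d)[:2]
        ds.append(d[c1] + path[c1, c2])      # Eq. (21)
    return float(np.mean(ds))                # Eq. (22)
```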

2.3.2.4. Topographic geodesic-product. All the previous measures permit quantifying the topology preservation of the input space obtained by self-organizing neural networks. However, most of them are not suitable for non-linear input spaces, since they do not consider the topology of the input space in their calculations. To solve this problem, we modify one of the measures most used with linear spaces, the topographic product, by incorporating the geodesic distance as the measure of distance between the reference vectors of neurons. This improvement allows the extension of previous studies (Flórez et al., 2002c) with the original topographic product, which addressed the representation of different input spaces using self-organizing networks and the determination of the dimensionality that a network must have to properly adapt to a given input space.

The topographic product is calculated by taking the Euclidean distance d^V between the reference vectors of neurons, regardless of the shape of the input space, and the distance d^A as the length of the shortest path between two neurons in the graph that describes the network. In Bauer et al. (1999) it is indicated that results would be improved if the shape of the input subspace were considered when calculating the distance d^V.


Fig. 5. Geodesic distances.

Fig. 6. Non linear (left) and linear (right) input subspaces adaptation.

Using this idea, the geodesic distance (DG) (Aaron, 2006) is defined as the minimum length of a path joining the two reference vectors of neurons within the input subspace (Fig. 5(a)). If it is not possible to establish a path between them, the distance is d^V = ∞.

However, the calculation of the geodesic distance has a high computational cost. Therefore, various heuristic approaches that use the neural network already obtained can be considered in order to simplify the calculation:

• DG2: Compute the shortest path between each pair of neurons, adding up the lengths of the edges that connect them. This approximation can be obtained, for example, with the Dijkstra algorithm (Fig. 5(b)) and is similar to that presented in Villmann, Merényi, and Hammer (2003).

• DG3: Extend the original graph by connecting every pair of neurons whose connecting segment lies entirely within the input subspace, and apply the previous approximation to the network so obtained (Fig. 5(c)).
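Under the DG2 heuristic, the topographic geodesic product can be sketched by replacing the Euclidean d^V in the topographic product sketch of Section 2.3.2.1 with the all-pairs network path lengths (for example, the `path` matrix computed in the C-measure sketch above); this pairing of the sketches is our illustration, not the authors' code:

```python
import numpy as np

def geodesic_topographic_product(d_A, path):
    """Topographic product with d^V replaced by the DG2 geodesic estimate.

    d_A: (N, N) hop-count distances in the network (as in the sketch of
    Section 2.3.2.1); path: (N, N) all-pairs shortest-path lengths over
    edges weighted by Euclidean length (the DG2 approximation)."""
    N = len(d_A)
    P = 0.0
    for j in range(N):
        others = [c for c in range(N) if c != j]
        nA = sorted(others, key=lambda c: d_A[j, c])
        nV = sorted(others, key=lambda c: path[j, c])
        log_q = 0.0
        for k in range(1, N):
            l = k - 1
            log_q += (np.log(path[j, nA[l]] / path[j, nV[l]])
                      + np.log(d_A[j, nA[l]] / d_A[j, nV[l]]))
            P += log_q / (2 * k)
    return P / (N * (N - 1))
```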

An example that illustrates the problem of measuring the topology preservation of a non-linear subspace is shown in Fig. 6. Since the topographic product does not consider the subspace topology, the results are identical for both images in Fig. 6 (P = −29.92 × 10⁻³), making it impossible to identify which of the two adaptations is better. Using the geodesic distance DG3 in the calculation of the topographic product for Fig. 6 (left) yields a value close to 0 (P = 0.46 × 10⁻³), indicating proper preservation of the topology. For Fig. 6 (right), however, the same value as with the original topographic product is obtained (P = −29.92 × 10⁻³), indicating poor adaptation to the input subspace.

3. Representation of objects with fast Autonomous GNG. Topology preservation under temporal restrictions

The ability of neural gases to preserve the topology is evaluated in this work through the representation of 2D objects. Identifying the points of the image that belong to objects allows the network to adapt its structure to this input subspace, obtaining an induced Delaunay triangulation of the object.

Let O = [A_G, A_V] be an object defined by a geometric appearance A_G and a visual appearance A_V. The geometric appearance A_G is given by morphologic parameters (local deformations) and positional parameters (translation, rotation and scale):

$A_G = [G_M, G_P].$   (23)

The visual appearance A_V is given by a set of object characteristics such as color, texture or brightness, among others.

3.1. Representation of 2D objects with fAGNG

Given a support domain S ⊆ R², an image intensity function I(x, y) ∈ R such that I : S → [0, I_max], and an object O, its standard potential field ψ_T(x, y) = f_T(I(x, y)) is the transformation ψ_T : S → [0, 1] which associates to each point (x, y) ∈ S the degree of compliance of its associated intensity I(x, y) with the visual property T of the object O.

Consider:

• The space of input signals as the set of points in the image:

  $V = S, \quad \xi = (x, y) \in S.$   (24)

• The probability density function given by the standard potential field obtained at each point of the image:

  $p(\xi) = p(x, y) = \psi_T(x, y).$   (25)

Learning takes place following the fast Autonomous GNG algorithm. During this process, a neural network is obtained that preserves the topology of the object O with respect to a certain feature T. That is, from the visual appearance A_V of the object, an approximation to its geometric appearance A_G is obtained.
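As an illustration of Eqs. (24)–(25), input signals can be drawn from an image with probability proportional to the standard potential field by rejection sampling; `psi` is a hypothetical 2D array holding ψ_T(x, y) ∈ [0, 1], and the resulting sampler can serve as the input generator of the GNG/fAGNG sketches of Section 2:

```python
import numpy as np

def sample_from_potential_field(psi, rng=None):
    """Draw xi = (x, y) with probability proportional to psi[x, y]
    (Eqs. (24)-(25)) by rejection sampling; psi is a 2D array with
    values in [0, 1] holding the standard potential field."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = psi.shape
    while True:
        x, y = int(rng.integers(0, h)), int(rng.integers(0, w))
        if rng.random() < psi[x, y]:
            return np.array([x, y], dtype=float)
```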

Henceforth we call the result the Topology Preserving Graph, TPG = ⟨A, C⟩, defined by a set of vertices (neurons) A and a set of edges C that connect them, preserving the topology of the object from the considered standard potential field.

Fig. 7 shows an overview of the system used to obtain the TPG of an object from a scene. We observe that different TPGs can be obtained from different features T of objects without changing the learning algorithm of the neural gases; it is only necessary to define a different potential field.


Fig. 7. System description to obtain the Topology Preserving Graph of an object.


Different potential fields ψ_T(x, y) can produce different structures in the network. Fig. 8 (left) represents the topology of a 2D object, while Fig. 8 (right) represents its contour.

3.2. Quality of representation of the fAGNG

Some of the faster alternatives violate the preservation of the topology of the input space (Fig. 9). Therefore, different measures of topology preservation are used to evaluate the correctness of the various adaptations over time.

A study has been conducted with new measures of topology preservation that extend previous studies (García-Rodríguez et al., 2011) to non-linear input spaces, taking input spaces with different topologies and obtaining the time spent and the quality of representation of the different options presented. Some parameters have been fixed (ε₁ = 0.1, ε₂ = 0.01, α = 0.5, β = 0.0005, a_max = 250), based on our previous experience representing different objects in images (García-Rodríguez, 2009), while changing the number of input signals λ and the number of neurons k inserted per iteration. The different alternatives are denoted GNG_{λ,k}, where λ is the number of input signals and k the number of neurons inserted per iteration. All experiments were performed on a 2.4 GHz Pentium IV platform, and C++ Builder was used to code and compile the algorithms.

The different parameters have been chosen, based on our previous experience, to illustrate their impact on applications where the interest lies in the topological representation of the input space; in this case, the representation of objects present in 2D images. Different applications, such as clustering, require a different a priori study of the adequate parameters in order to achieve the expected results.

The study was carried out for the input spaces presented in Fig. 10, using the measures outlined in the previous section. For reasons of brevity, and because the results are similar in all cases, this section presents only those obtained with one of the non-linear spaces (the ring).

Fig. 8. Different adaptations of the neural gas to the same object.


Fig. 9. Final adjustments depending on the number of neurons inserted by iteration: 1 (a), 2 (b), 5 (c) and 10 (d).

Fig. 10. Bi-dimensional objects used in the study.

Fig. 11. Time spent in learning depending on the number of neurons.

Figs. 11–14 present only the most representative curves, to improve the readability of the graphs. However, experiments have been conducted with all the combinations of k = 1, 2, 5, 7 and 9 neurons and λ = 1000, 2500, 5000 and 10 000 input signals.

3.2.1. Study depending on the number of neurons

Fig. 11 shows the learning time for the different variants throughout the learning process, with the condition of network finalization once 100 neurons have been inserted.

As expected, the fastest options, those that insert 100 neurons in the least time, are the ones that insert a larger number of neurons k per iteration with a smaller λ. At the other extreme is the combination that inserts a single neuron per iteration with the highest λ, in our case 10 000.

Fig. 12 shows the topology preservation of the various options depending on the number of neurons inserted, progressing along the adaptive process of the network. This study is of interest when there is no time restriction limiting the adaptive process, but there is a maximum possible network size, in this case set at 100 neurons.

In the early stages of the adaptation process, the networks try to represent the input space quickly, so the topology preservation fluctuates considerably. From a small number of neurons onwards, it stabilizes. However, topology preservation remains worst throughout the whole process for the most rapid options, given that incorrect connections are established in the network.

For all the measures, the number of neurons from which topology preservation can be considered stable and performance acceptable can be set between 20 and 40 neurons. The exact value depends on the measure and is defined by reference to the stability of the variants shown in Fig. 12, after eliminating the fastest and slowest variants. The point of stability is taken as the one from which the measure does not vary by more than 5% with respect to the final preservation value obtained after inserting all the neurons of the network.

Establishing the minimum number of neurons that maintains a valid topology preservation is of great importance. By inserting a small number of neurons with an acceptable preservation, smaller representations are obtained and the object's shape can be fitted more quickly. This allows subsequent tasks to be performed more easily.

3.2.2. Study depending on the available time

Fig. 13 shows the number of neurons that the network contains over time (sampled every 0.1 s) for all variants, ending the learning process at 1 s and without restricting the number of neurons to be inserted.


Fig. 12. Topology preservation depending on the number of neurons.

Obviously, inserting more neurons per iteration, or reducing the number of input signals per iteration, allows larger networks in less time. The variant with the most neurons inserted in one second is the one with k = 9 neurons inserted and λ = 1000 input signals per iteration.

Fig. 14 shows the topology preservation of the various options in terms of time, as the adaptive process progresses. As in the prior section, combinations that insert several neurons every smaller number of input signals permit inserting a larger number of neurons in the time available, reaching 500 neurons in a second.

Fig. 13. Number of neurons inserted in 1 s.

The differences in topology preservation between the different options, measured with the topographic geodesic product, are not too significant. However, when a large number of neurons is inserted, topology preservation is lost, because the number of input signals λ per iteration is insufficient to correctly update the reference vectors of all those neurons.

The topographic function, however, does show differences in topology preservation, indicating that, at the expense of having more neurons, the networks contain incorrect connections. Although the maximum edge age a_max also contributes to this effect, it has been kept fixed so that it does not influence the different parameterizations of λ and k.

The time at which most of the variants reach an acceptable error and, therefore, a good topology preservation has also been determined. In this study, it occurs at around 0.6 s, as shown in the graphs of Fig. 14; in the fast variants it naturally occurs earlier. The two slowest and the two fastest variants have been eliminated. The point of stability is taken as the one from which the measure does not vary by more than 5% with respect to the preservation value at the final time established.

3.3. Applications of fAGNG under temporal constraint

The fast Autonomous GNG can be used in applications that operate under time constraints, obtaining a solution of a specified quality within a given period of time.

Since it is not possible to calculate the topology preservation measures for each object in real time, due to their high computational cost, the values are taken a priori from those obtained in the previous studies (Figs. 11–14).

From these results, we can determine the correct parameterization of the fAGNG to obtain the representation with the expected best quality for a particular object as a function of time.

Moreover, from the previous studies we can determine, for each type of object, the optimal number of neurons, that is, the minimum network size beyond which there are no significant improvements in quality. Smaller representations permit faster handling of subsequent tasks. We can also determine which parameterization values allow the fastest adaptation with an appropriate quality of representation, which is very valuable in case the adaptation process has to be halted by an interruption.

In cases where learning may be interrupted by the application requirements, it is important to insert the minimum number of neurons needed to achieve a minimum quality in a limited time.

Table 1 shows the topology preservation and the time spent by the different variants in the case where the restriction is the insertion of a certain number of neurons (100 in our case).


Fig. 14. Topology preservation depending on the available time.

In this case, the input space used for the study is the hand. The study was conducted over five runs per variant, and the average was taken.

Faster variants obtain worse topology preservation. In general, the network converges quickly to represent the input space. Therefore, if the adaptation process needs to be interrupted, it is still possible to preserve the topology of the input space, because with very few neurons the quality of representation is already sufficient.

Table 1
Topology preservation and temporal cost, ordered by learning time.

Variant      Time (s)  Quantization error  Topographic geodesic product  Topographic function  C measure
GNG1000,7    0.06      63.92               0.0118                        −0.28                 0.159
GNG1000,9    0.06      66.60               0.0132                        −0.40                 0.179
GNG1000,5    0.10      64.08               0.0138                        −0.54                 0.158
GNG2500,9    0.14      63.87               0.0117                        −0.50                 0.158
GNG2500,7    0.16      64.58               0.0111                        −0.44                 0.166
GNG2500,5    0.22      61.01               0.0093                        −0.44                 0.139
GNG1000,2    0.24      61.32               0.0099                        −0.42                 0.141
GNG5000,9    0.27      65.69               0.0098                        −0.44                 0.159
GNG5000,7    0.32      61.32               0.0094                        −0.42                 0.148
GNG5000,5    0.48      62.24               0.0051                        −0.40                 0.149
GNG1000,1    0.48      61.91               0.0074                        −0.20                 0.140
GNG10000,9   0.50      60.10               0.0073                        −0.22                 0.140
GNG2500,2    0.54      61.40               0.0075                        −0.22                 0.143
GNG10000,7   0.65      61.83               0.0075                        −0.30                 0.146
GNG10000,5   0.90      58.91               0.0070                        −0.20                 0.144
GNG2500,1    1.12      61.06               0.0067                        −0.16                 0.140
GNG5000,2    1.13      58.98               0.0072                        −0.10                 0.142
GNG5000,1    2.17      57.76               0.0040                        −0.10                 0.198
GNG10000,2   2.21      59.80               0.0068                        −0.10                 0.151
GNG10000,1   4.25      56.80               0.0056                        −0.08                 0.141

The study presented in Table 1 is useful because it allows choosing the appropriate parameters in various situations. For example, when:

• it is necessary to obtain a representation in a given time and with an expected quality, failing otherwise;
• an expected quality is requested, and the system is queried for the minimum time needed to obtain results;
• solutions are needed in which the system can be interrupted at different times while always providing an expected quality, thus requiring a topology preservation curve whose error decreases quickly from the early stages of learning.

These cases, and many others, find answers in the tables, and further studies can supplement them with new types of homogeneous or heterogeneous input spaces.
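For instance, the first use case reduces to a lookup over Table 1: among the variants that fit the time budget, pick the one whose topographic geodesic product is closest to zero. The sketch below reproduces only a few rows of Table 1:

```python
# Sketch of the first use case: choose the (lambda, k) variant from Table 1
# that fits a time budget with the best (closest-to-zero) topographic
# geodesic product. Only a subset of the rows of Table 1 is reproduced.
TABLE1 = [  # (lambda, k, time_s, topographic_geodesic_product)
    (1000, 7, 0.06, 0.0118), (1000, 5, 0.10, 0.0138),
    (2500, 5, 0.22, 0.0093), (5000, 5, 0.48, 0.0051),
    (2500, 1, 1.12, 0.0067), (10000, 1, 4.25, 0.0056),
]

def best_variant(time_budget_s):
    feasible = [row for row in TABLE1 if row[2] <= time_budget_s]
    if not feasible:
        return None                          # no variant meets the budget
    return min(feasible, key=lambda row: abs(row[3]))

print(best_variant(0.5))  # -> (5000, 5, 0.48, 0.0051)
```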

4. Applications

This section presents the application of the previous studies to different fields, including applications with time restrictions, such as visual surveillance systems, and applications with quality of representation requirements, such as human–machine interaction.

Previous work (Angelopoulou et al., 2005; García-Rodríguez, 2009; García-Rodríguez et al., 2007a, 2007b) has been used to demonstrate the utility of an a priori study with different parameterizations of the neural network. However, the best way to obtain the most accurate values for the different parameters is to run the experiments and train the system with the images or image sequences to be represented, when possible. In both cases, the system is able to work close to video rate, obtaining a good quality of representation for this kind of application. The experiments were developed on a Pentium IV 2.4 GHz processor, and the C++ Builder environment was used to code the algorithms.

4.1. Applications with real-time constraints

To test our fAGNG, we have chosen a visual surveillance application. In this case, the main goal is to represent the people that appear in the image and to follow their trajectories, based on the centroid of the representation, in order to detect anomalous behaviors.

Fig. 15. Trajectories obtained with fAGNG applied to visual surveillance images.

Fig. 16. fAGNG based hand gesture representation system with integrated segmentation.

Fig. 15 shows a visual surveillance system with a real-time constraint, where the representation obtained with fAGNG describes the entities in a sequence of images taken from the CAVIAR database (Fisher, 2004) with a size of 250 × 250 pixels. The system uses the neural network to segment, represent and track entities in the image. The number of neurons used to represent the entities of interest in the image was set to 100.

In this case, the parameters chosen are k = 5 and λ = 1000, which permit the system to work fast and to obtain an acceptable quality of representation for this application. The time to adapt the fAGNG to the persons to be tracked is about 50 ms per frame. The standard GNG, with k = 1 and λ = 10 000, takes more than 1 s per frame to obtain the representation. Versions with higher k permit the system to work even faster, but the quality of representation is not good enough to maintain the tracking. Inserting a higher number of neurons would increase the time spent representing and tracking entities in frames.

4.2. Applications with quality of representation requirements

In this case, we use a gesture recognition application for human–machine interaction. The objective is to represent the hand with an acceptable quality of representation, in order to distinguish among different gestures that have a specific meaning for the system.

Fig. 16 presents the representation of different gestures in images of 200 × 160 pixels with fAGNG. The parameters are chosen based on the quality of representation instead of the available time. For this reason, we established k = 1 and λ = 5000 to obtain a low error with a representation that keeps the topological structure of the objects that appear in the images. The minimum number of neurons needed for a good quality of representation is 100, since a higher number of neurons does not improve the quantization error, while reducing the number of neurons increases it. As can be observed in Table 1, the topographic geodesic product is the lowest for the parameters selected.


5. Conclusions

In this paper, we have proposed a modification of the Growing Neural Gas learning algorithm with the objective of satisfying temporal constraints in the adaptation of the neural network to an input space with a good quality of representation, so that its adaptation process is determined by the number λ of input signals and the number k of neurons inserted per iteration.

A detailed study has been conducted to estimate the optimal parameters that maintain a good quality of representation within the available time. For this study, different quality of representation measures have been used to determine the topology preservation of the representations obtained. From the experimental results, some conclusions are drawn for further use of the accelerated model to support applications that operate under time constraints:

• The study permits choosing the best alternative depending on the type of object that our application manages and the quality of representation required.
• From the experimentation, an a priori selection of the minimum number of neurons that should compose the network can be extracted, so that, by inserting a small number of neurons with an acceptable preservation, the maps are obtained more quickly and are easier to manage.

The study has been applied to different applications with temporal constraints and quality of representation requirements, with satisfactory results.

As a drawback, most of the quality measures are impossible to calculate in real time, and the study should be carried out off-line for the kinds of objects to be managed. Also, for a high number of neurons it is difficult to obtain representations of good quality in a limited time.

Further work will include the implementation of the fast Autonomous GNG and of the topology preservation measures used in the study on Graphics Processing Units (GPUs) to accelerate their calculation.

Acknowledgments

This work was partially supported by the University of Alicanteproject GRE09-16 and Valencia Government project GV/2011/034.

References

Aaron, A. (2006). Graph-based normalization and whitening for non-linear data analysis. Neural Networks, 19, 864–876.

Angelopoulou, A., Psarrou, A., & García-Rodríguez, J. (2007). Robust modeling and tracking of non-rigid objects using Active-GNG. In ICCV 2007 (pp. 1–7).

Angelopoulou, A., Psarrou, A., García-Rodríguez, J., & Revett, K. (2005). Automatic landmarking of 2D medical shapes using the growing neural gas network. In ICCV 2005 workshop CVBIA (pp. 210–219).

Bauer, H.-U., Hermann, M., & Villmann, T. (1999). Neural maps and topographic vector quantization. Neural Networks, 12(4–5), 659–676.

Bauer, H.-U., & Pawelzik, K. (1992). Quantifying the neighborhood preservation of self-organising feature maps. IEEE Transactions on Neural Networks, 3(4), 570–578.

Bougrain, L., & Alexandre, F. (1999). Unsupervised connectionist clustering algorithms for a better supervised prediction: application to a radio communication problem. In Proceedings of the international joint conference on neural networks: Vol. 28 (pp. 381–391).

Cao, X., & Suganthan, P. N. (2003). Video shot motion characterization based on hierarchical overlapped Growing Neural Gas networks. Multimedia Systems, 9, 378–385.

Cheng, G., & Zell, A. (2000). Double growing neural gas for disease diagnosis. In Proceedings of artificial neural networks in medicine and biology conference (pp. 309–314).

Cretu, A. M., Petriu, E. M., & Payeur, P. (2005). Evaluation of growing neural gas for selective 3D scanning. In Proceedings of IEEE international workshop on robotics and sensors environments.

Cselényi, Z. (2005). Mapping the dimensionality, density and topology of data: the growing adaptive neural gas. Computer Methods and Programs in Biomedicine, 78, 141–156.

Doherty, K., Adams, R., & Davey, N. (2005). TreeGNG—hierarchical topological clustering. In ESANN (pp. 19–24).

do Rêgo, R. L., Araújo, A. F., & de Lima Neto, F. B. (2007). Growing self-organising maps for surface reconstruction from unstructured point clouds. In Proceedings of IEEE international joint conference on artificial neural networks (pp. 1900–1905).

Fateminzadeh, E., Lucas, C., & Sotanian-Zadeh, H. (2003). Automatic landmark extraction from image data using modified growing neural gas network. IEEE Transactions on Information Technology in Biomedicine, 77–85.

Fisher, R. B. (2004). PETS04 surveillance ground truth data set. In Proc. sixth IEEE int. work. on performance evaluation of tracking and surveillance (pp. 1–5).

Flórez, F., García, J. M., García, J., & Hernández, A. (2002a). Representation of 2D objects with a topology preserving network. In Proceedings of the 2nd international workshop on pattern recognition in information systems (pp. 267–276). ICEIS Press.

Flórez, F., García, J. M., García, J., & Hernández, A. (2002b). Hand gesture recognition following the dynamics of a topology-preserving network. In Proc. of the 5th IEEE intern. conference on automatic face and gesture recognition (pp. 318–323). IEEE, Inc.

Flórez, F., García, J. M., García, J., & Hernández, A. (2002c). Geodesic topographic product: an improvement to measure topology preservation of self-organising neural networks. Lecture Notes in Artificial Intelligence, 3315, 841–850.

Frezza-Buet, H. (2008). Following non-stationary distributions by controlling the vector quantization accuracy of a growing neural gas network. Neurocomputing, 71, 1191–1202.

Fritzke, B. (1993). Growing cell structures—a self-organising network for unsupervised and supervised learning. Technical report TR-93-026. Berkeley, California: International Computer Science Institute.

Fritzke, B. (1994). Fast learning with incremental RBF networks. Neural Processing Letters, 1(1), 2–5.

Fritzke, B. (1995). A growing neural gas network learns topologies. In Advances in neural information processing systems: Vol. 7. Cambridge, Mass: MIT Press.

Fritzke, B. (1997). A self-organizing network that can follow non-stationary distributions. In Proc. of the international conference on artificial neural networks: Vol. 97 (pp. 613–618). Springer.

Furao, S., & Hasegawa, O. (2006). An incremental network for on-line unsupervised classification and topology learning. Neural Networks, 19, 90–106.

García-Rodríguez, J. (2009). Self-organizing neural network model to represent objects and their movement in realistic scenes. Ph.D. thesis. University of Alicante.

García-Rodríguez, J., Angelopoulou, A., García-Chamizo, J. M., Psarrou, A., Orts-Escolano, S., & Morell-Giménez, V. (2011). Fast Autonomous Growing Neural Gas. In Proceedings of the 2011 international joint conference on neural networks (pp. 725–732).

García-Rodríguez, J., Angelopoulou, A., & Psarrou, A. (2006). Growing neural gas (GNG): a soft competitive learning method for 2D hand modeling. IEICE Transactions, 89-D(7), 2124–2131.

García-Rodríguez, J., Flórez-Revuelta, F., & García-Chamizo, J. M. (2007a). Image compression using growing neural gas. In Proceedings of international joint conference on artificial neural networks (pp. 366–370).

García-Rodríguez, J., Flórez-Revuelta, F., & García-Chamizo, J. M. (2007b). Learning topologic maps with growing neural gas. Lecture Notes in Artificial Intelligence, 4693(2), 468–476.

Holdstein, Y., & Fischer, A. (2008). Three-dimensional surface reconstruction using meshing growing neural gas (MGNG). Visual Computer, 24, 295–302. doi:10.1007/s00371-007-0202-z.

Kaski, S., & Lagus, K. (1996). Comparing self-organising maps. In International conference on artificial neural networks: Vol. 1112 (pp. 809–814). Springer.

Kohonen, T. (2001). Self-organising maps. Springer-Verlag.

Lisboa, P. J., Edisbury, B., & Vellido, A. (2000). Business applications of neural networks: the state-of-the-art of real-world applications. World Scientific.

Marsland, S., Nehmzow, U., & Shapiro, J. (2000). A real-time novelty detector for a mobile robot. In EUREL advanced robotics conference.

Marsland, S., Shapiro, J., & Nehmzow, U. (2002). A self-organizing network that grows when required. Neural Networks, 15, 1041–1058.

Martinetz, T. (1993). Competitive Hebbian learning rule forms perfectly topology preserving maps. In Proceedings of ICANN.

Martinetz, T., Berkovich, S. G., & Schulten, K. J. (1993). Neural-gas network for vector quantization and its application to time-series prediction. IEEE Transactions on Neural Networks, 4(4), 558–569.

Martinetz, T., & Schulten, K. (1994). Topology representing networks. Neural Networks, 7(3), 507–522.

Ogura, T., Iwasaki, V., & Sato, C. (2003). Topology representing network enables highly accurate classification of protein images taken by cryo-electron microscope without masking. Journal of Structural Biology, 143, 185–200.

Prudent, Y., & Ennaji, A. (2005). An incremental growing neural gas learns topologies. In Proceedings of international joint conference on neural networks (pp. 1211–1216).

Qin, A. K., & Suganthan, P. N. (2004). Robust growing neural gas algorithm with application in cluster analysis. Neural Networks, 17, 1135–1148.

Rehtanz, C., & Leder, C. (2000). Stability assessment of electric power systems using growing neural gas and self-organising maps. In Proceedings of ESANN (pp. 401–406).

Rivera-Rovelo, J., Herold-García, S., & Bayro-Corrochano, E. (2006). Object segmentation using growing neural gas and generalized gradient vector flow in the geometric algebra framework. In CIARP (pp. 306–315).


Sledge, I. J., & Keller, J. M. (2008). Growing neural gas for temporal clustering. In ICPR (pp. 1–4).

Villmann, T., Herrmann, M., & Martinetz, T. (1997). Topology preservation in self-organising feature maps: exact definition and measurement. IEEE Transactions on Neural Networks, 8(2), 256–266.

Villmann, T., Merényi, E., & Hammer, B. (2003). Neural maps in remote sensing image analysis. Neural Networks, 16, 389–403.

Wu, Y., Liu, Q., & Huang, T. S. (2000). An adaptive self-organising color segmentation algorithm with application to robust real-time human hand localization. In Proceedings of the IEEE Asian conference on computer vision (pp. 1106–1111).