
Medical Hypotheses (2007) 68, 1399–1405

http://intl.elsevierhealth.com/journals/mehy

Could dynamic attractors explain associative prosopagnosia?

Ali Zifan a,1, Shahriar Gharibzadeh a,*, Mohammad Hassan Moradi b

a Neuromuscular Systems Laboratory, Faculty of Biomedical Engineering, Amirkabir University of Technology, Tehran 15875-4413, Iran
b Biological Signal Processing Laboratory, Faculty of Biomedical Engineering, Amirkabir University of Technology, Tehran 15875-4413, Iran

Received 21 June 2006; accepted 28 June 2006

Summary Prosopagnosia is one of the many forms of visual associative agnosia, in which familiar faces lose their distinctive association. In the case of prosopagnosia, the ability to recognize familiar faces is lost, due to lesions in the medial occipitotemporal region. In "associative" prosopagnosia, the perceptual system seems adequate to allow for recognition, yet recognition cannot take place. Our hypothesis is that a possible cause of associative prosopagnosia might be the occurrence of dynamic attractors in the brain's auto-associative circuits. We present a biologically plausible model composed of two stages: pre-processing and face recognition. In the first stage, the face image is passed through Gabor filters, which model the kind of visual processing carried out by the simple and complex cells of the primary visual cortex of higher mammals, and the resulting features are fed into a pseudo-inverse associative neural network for the recognition task. Next, we damage the network by reducing self-connections below a certain threshold in order to create dynamic attractors and hence hinder the network's ability to recognize familiar faces (faces already learned). Results obtained from simulations show that the resulting network responses are very similar to those of associative prosopagnosic patients. We conclude that the problems concerning associative prosopagnosia may partly be explained through the concept of dynamic attractors. Although there is no known cure for prosopagnosia, we believe that the focus of any treatment should be to help the individual with prosopagnosia develop compensatory strategies for remembering faces. Adults with prosopagnosia as a result of stroke or brain trauma can be retrained to use other clues to identify faces, and a cure for children born with prosopagnosia might eventually rely on reinforcement techniques that reward them for paying attention to faces during early childhood.
Reinforcement learning from examples of patterns to be classified, using habituation and association, would create lower-dimensional local basins in the brain, which would form a global attractor landscape with one basin for each face. These local basins would eventually constitute dynamical memories that solve difficult problems in classifying and recognizing faces.

© 2006 Elsevier Ltd. All rights reserved.

0306-9877/$ - see front matter © 2006 Elsevier Ltd. All rights reserved. doi:10.1016/j.mehy.2006.06.056

* Corresponding author. Tel.: +98 21 6454 2369; fax: +98 21 6649 5655.

E-mail address: [email protected] (S. Gharibzadeh).
1 Fax: +98 21 88368581.

Introduction

Prosopagnosia is a rare disorder of face perception in which the ability to recognize faces is impaired, although the ability to recognize objects may be relatively intact. The term is composed of the Greek words "prosopon" (the face) and "agnosia" (not recognizing), and the disorder usually appears to result from a brain injury or neurological illness affecting specific areas of the brain [1-3], although more recently cases of congenital or developmental prosopagnosia have also been reported (see [4] for references). Patients with prosopagnosia are generally able to make fine visual discriminations. For example, they are able to read. They can also categorize a visual stimulus as a face. But when prevented from using voice, clothes, gait, or specific attributes such as glasses, ear-rings or moustaches, they cannot identify a familiar person, and in some cases even themselves.

Figure 1 The face recognition model.

It has been proposed that there may be several subtypes of prosopagnosia [5,6]. Two broad categories are apperceptive and associative prosopagnosia [5]. In apperceptive prosopagnosia, the chief deficit is held to be an impairment in generating an adequate percept of the face, such that it cannot be matched to stored representations of previously seen faces. In "associative" prosopagnosia, the perceptual system seems adequate to allow for recognition, yet recognition cannot take place. Prosopagnosia of the "associative" type is generally caused by bilateral damage in the inferior occipital and temporal visual cortices [inferior components of Brodmann areas (BA) 18 and 19], as well as damage to areas in the posterior temporal region (BA 37) [7]. In associative prosopagnosia, the process of generating the percept is intact but the percept cannot be matched to stored representations.

The study of prosopagnosia has been crucial in the development of theories of face perception. Because prosopagnosia is not a unitary disorder (i.e., different people may show different types and levels of impairment), it has been argued that face perception involves a number of stages, each of which can be damaged separately [8]. This is reflected not just in the amount of impairment displayed but also in the qualitative differences in impairment that a person with prosopagnosia may present with. This sort of evidence has been crucial in supporting the theory that there may be a specific face perception system in the brain.

Memories are thought to be attractor states of neuronal representations. A dynamical system is defined by a vector field of the tendencies of the system to change at every state of the system. With the passage of time, the system passes through a succession of states after a given initial state is specified. This path is called a trajectory in the state space. A portrait is the collection of all such trajectories. Portraits reveal certain features. For example, all nearby trajectories may depart from a given point or cycle (called a point or periodic repellor). Similarly, all nearby trajectories may approach a point or a cycle (a point or periodic attractor). Saddle points and cycles are points or cycles which some trajectories approach and from which others depart. If there is more than one attractor in a portrait, basins comprise the trajectories going to each, and trajectories not tending to any attractor are separatrices (often forming the boundaries between basins). In general, the expansion of an attractor requires more energy reserved to it; increased forces increase the magnitude of the vectors of the vector field. Conversely, the shrinkage of an attractor is accompanied by a lessening of the energy requirements of the system. From these considerations, we believe that a probable cause of prosopagnosia in a dynamic system such as the human brain is the shrinkage of attractors: associative memories in the brain lose their ability to converge into fixed stable basins and instead fall into dynamic attractors. The model will be discussed next.

The model

The face recognition model is shown in Fig. 1. The model is composed of two stages: pre-processing and face recognition. In the first stage, Gabor features at different orientations and scales are extracted from the face image, and these features are next fed into a pseudo-inverse associative neural network for the recognition task.

Next, we will get into the details of each of the above subsystems and state the reasons behind their usage in our model.

Pre-processing stage

In this stage we extract features from the image using Gabor filters. The details will be discussed next.

Gabor features

In this stage, we extract Gabor features from the input image. The Gabor wavelet resembles a sinusoid restricted by a Gaussian function, which may be tuned to a particular orientation and frequency, and is similar to the observed receptive fields of simple cells in primary visual cortex [9]. The shape of the receptive fields of these cells and their organization are the results of unsupervised visual learning during the development of the visual system in the first few months of life [10]. These features exhibit some invariance to background, translation, distortion, and size [11].

We combine the responses of several filters with different orientations, as shown in Fig. 2. The basic wavelet form is:

G(\vec{k}, \vec{x}) = \exp(i\,\vec{k}\cdot\vec{x})\, \exp\left(-\frac{k^2\,\vec{x}\cdot\vec{x}}{2\sigma^2}\right),   (1)

where

\vec{k} = k(\cos\phi, \sin\phi).

Again as in [11], for each of the eight orientations and wavelengths, we convolve G(\vec{k}, \vec{x}) with the input image I(\vec{x}):

(W I)(\vec{k}, \vec{x}_0) = \int G_{\vec{k}}(\vec{x}_0 - \vec{x})\, I(\vec{x})\, d^2x.   (2)

The response values are then normalized across scales:

(T I)(\vec{k}, \vec{x}_0) = \frac{\left|(W I)(\vec{k}, \vec{x}_0)\right|}{\int \left|(W I)(\vec{k}, \vec{x})\right|\, d^2x\, d\phi}.   (3)

Figure 2 Gabor filters used in the pre-processing stage.

In our implementation, 40 filters (5 frequencies and 8 orientations) compose the filter bank. Hence, the process results in a vector of 40 complex values at each point of an image. We convolve the image windows with the Gabor filters; equivalently, in the frequency domain each window is multiplied by the Gabor filters.
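As a concrete sketch, the filter bank of Eq. (1) can be sampled directly in NumPy. The kernel half-width, the choice sigma = pi, and the half-octave frequency spacing below are our assumptions, since the paper does not specify them:

```python
import numpy as np

def gabor_kernel(k, phi, half=16, sigma=np.pi):
    """Sample the Gabor wavelet of Eq. (1) on a square grid:
    G(k, x) = exp(i k.x) * exp(-k^2 (x.x) / (2 sigma^2)),
    with wave vector k = k (cos phi, sin phi)."""
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    kx, ky = k * np.cos(phi), k * np.sin(phi)
    carrier = np.exp(1j * (kx * xs + ky * ys))               # complex sinusoid
    envelope = np.exp(-k**2 * (xs**2 + ys**2) / (2 * sigma**2))
    return carrier * envelope

# 5 frequencies x 8 orientations = 40 filters, as described in the text.
frequencies = [np.pi / 2 ** (v / 2) for v in range(5)]       # assumed spacing
orientations = [np.pi * u / 8 for u in range(8)]
bank = [gabor_kernel(k, phi) for k in frequencies for phi in orientations]
```

Convolving an image with each kernel then yields the 40 complex responses per pixel mentioned above.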

Face recognition stage

Face recognition, although a trivial task for the human brain, has proved extremely difficult to imitate artificially. Face recognition involves comparing an image with a database of stored faces in order to identify the individual in that input image.

Neural network theory

Neural nets are essentially networks of simple neural processors, arranged and interconnected in parallel. Neural networks are based on our current level of knowledge of the human brain, and attract interest from both engineers, who can use neural nets to solve a wide range of problems, and scientists, who can use them to help further our understanding of the human brain.

Neural nets can be used to construct systems that are able to classify data into a given set or class: in the case of face detection, a set of images containing one or more faces and a set of images that contain no faces.



Pseudo-inverse neural networks

Auto-associative neural networks are intensively used for pattern recognition problems. What makes these networks attractive is (1) their cooperative computation, which provides not only massive parallelism but also a great degree of robustness, (2) their analogy to biological nervous systems, and (3) fast learning and evaluation (for further details see [12]).

The self-organizing feature of these networks is attributed to their ability to converge from an arbitrary state to a stable state, which we will call a static attractor. There is an "energy" associated with the states of an associative memory network. Each attractor is a minimum, i.e. a point lower than its immediate surroundings, in this energy landscape. If the retrieval process starts from a corrupted memory, then it starts at a high energy and, like a ball in a real landscape, rolls down to a minimum, hopefully the right one. Under certain conditions [14], the network may not converge to a stable state but to a cycle of a number of states. These cycles are referred to as dynamic attractors.

A state can be a vector of pixel intensities or a vector of facial features, with faces stored as static attractors, as in face recognition [13]. The number of static attractors determines the capacity of the network, which is the number of prototypes a network of size N is able to recognize. It is known that when the number of attractors exceeds a certain boundary, the network may lose the ability to converge to the desired attractors.

The network consists of N mutually interconnected two-state neurons y_i \in \{-1, +1\}. The evolution of the network in time is determined by a synchronous update rule:

y_i(t+1) = \operatorname{sgn}[s_i(t) - b_i] =
\begin{cases} +1, & \text{if } s_i(t) > b_i, \\ -1, & \text{otherwise,} \end{cases}
\qquad
s_i(t) = \sum_{j=1}^{N} W_{ij}\, y_j(t),   (4)

where s_i(t) is the postconnection potential and b_i is the threshold of neuron i. The w_ij are interconnection weights, which are assumed to be symmetric, i.e. w_ij = w_ji, with the self-connections w_ii not necessarily being zero.

In vector form, the update rule can be rewritten as:

Y(t+1) = \operatorname{sgn}[S(t) - B], \qquad S(t) = W\,Y(t).   (5)

The vector Y(t) is referred to as the state of the network and the N x N matrix W is the weight matrix. The weight matrix is calculated from the requirement that a set of prototypes V_1, \ldots, V_M be the static attractors of the network.
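The synchronous dynamics of Eq. (5), and the difference between settling into a static attractor and cycling through a dynamic one, can be sketched as follows. This is a minimal hypothetical implementation; the cycle-detection bookkeeping is ours, not the paper's:

```python
import numpy as np

def run_network(W, y0, b=None, max_steps=100):
    """Iterate the synchronous update Y(t+1) = sgn[W Y(t) - B] of Eq. (5).
    Returns ('fixed', state) when a static attractor is reached, or
    ('cycle', states) when the trajectory revisits an earlier state,
    i.e. when it has fallen into a dynamic attractor."""
    b = np.zeros(len(y0)) if b is None else b
    y = np.asarray(y0, dtype=float)
    seen = {tuple(y): 0}                     # state -> first visit time
    history = [y]
    for t in range(1, max_steps + 1):
        s = W @ y                            # postconnection potentials
        y = np.where(s - b > 0, 1.0, -1.0)   # sgn, with sgn(0) taken as -1
        key = tuple(y)
        if key in seen:
            if seen[key] == t - 1:           # state reproduced itself
                return 'fixed', y
            return 'cycle', history[seen[key]:]   # the periodic orbit
        seen[key] = t
        history.append(y)
    return 'none', y                         # no attractor within max_steps
```

For example, with W the identity the state is immediately fixed, whereas W = [[0, -1], [-1, 0]] started from (+1, +1) oscillates between (+1, +1) and (-1, -1), a period-two dynamic attractor.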

It is known that decreasing the self-connections increases the direct attraction radius of the network, but it is also known that it increases the number of dynamic attractors [14].

We begin with a common pattern classification setting, where we have a number of pattern classes. For a specific class, suppose we have N prototypes {x_1, \ldots, x_N}. A prototype is predefined as a vector in an I-dimensional space. In the case of a face, x_i can be a vector formed by concatenating the rows of an image of suitable size, or a feature vector such as the features resulting from Gabor filtering. We want to construct a projection operator W for the corresponding class such that any prototype in it can be represented as a projection onto the subspace spanned by {x_1, \ldots, x_N}. That is,

x_n = W x_n, \qquad n = 1, \ldots, N.   (6)

For face recognition, an associative memory model will enable us to combine multiple prototypes belonging to the same person in an appropriate way to infer a new image of the person.

There is another type of linear associative memory known as the pseudo-inverse or generalized-inverse memory [12,15,16]. Given a prototype matrix, the estimate of the memory matrix is given by

W = X X^{+},   (7)

where X is an I x N matrix in which the kth column is equal to x_k and X^{+} is the pseudo-inverse matrix of X, i.e. X^{+} = (X^{T}X)^{-1}X^{T}. Kohonen showed that such an auto-associative memory can be used to store images of human faces and reconstruct the original faces when features have been omitted or degraded [15]. Eq. (7) is the solution of the following least-squares problem:

J(W) = \min \lVert X - WX \rVert.   (8)

The pseudo-inverse memory provides better performance than similar memory models [16].
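A minimal NumPy sketch of Eqs. (6)-(8); the sizes and the random prototypes are illustrative, not taken from the paper:

```python
import numpy as np

# Pseudo-inverse associative memory of Eq. (7): W = X X+ projects any
# vector onto the subspace spanned by the prototype columns of X, so
# every stored prototype satisfies W x_n = x_n (Eq. (6)).
rng = np.random.default_rng(0)
I_dim, N = 20, 3                        # illustrative sizes only
X = rng.standard_normal((I_dim, N))     # columns are prototypes x_1..x_N
W = X @ np.linalg.pinv(X)               # pinv gives (X^T X)^-1 X^T when
                                        # X has full column rank, as here
assert np.allclose(W @ X, X)            # prototypes reproduced exactly
assert np.allclose(W @ W, W)            # W is a projection operator
```

Using `np.linalg.pinv` rather than forming (X^T X)^{-1} explicitly is the numerically safer route when prototypes are nearly collinear.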

Associative memory models can be efficiently applied to face recognition. If a memory matrix W^{(k)} is constructed for the kth person, a query face y can be classified into one of the C face classes based on a distance measure of how far y is from each class. The distance can simply be the Euclidean distance

d(y, y_k) = \lVert y - y_k \rVert = \lVert y - W^{(k)} y \rVert, \qquad k = 1, \ldots, C.   (9)

The face represented by y is classified as belonging to the class W^{(k)} represented by x^{(k)}_1, \ldots, x^{(k)}_N for which the distance d(y, y_k) is minimal.
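The classification rule of Eq. (9) amounts to a few lines. The two single-prototype classes below are hypothetical toy data, used only to exercise the rule:

```python
import numpy as np

def classify(y, memories):
    """Assign query y to the class whose memory matrix W^(k) reconstructs
    it best, i.e. with minimal d = ||y - W^(k) y||, as in Eq. (9)."""
    dists = [np.linalg.norm(y - Wk @ y) for Wk in memories]
    return int(np.argmin(dists)), dists

# Toy illustration with two single-prototype classes (hypothetical data):
x0 = np.array([1.0, 0.0, 0.0])
x1 = np.array([0.0, 1.0, 1.0])
memories = [np.outer(x, x) / (x @ x) for x in (x0, x1)]  # W = x x+ per class
label, _ = classify(np.array([0.9, 0.1, 0.0]), memories)
```

A query close to a class's prototype subspace is reconstructed almost unchanged by that class's W^(k), so its residual distance is smallest.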


Simulation and results

In order to test the described model, we gathered face images from the FERET database [17]. We used a target set with 90 images of 30 different classes (3 images per class), and a query set of 120 images (1 image per class). Eye locations are included in the FERET database for all the images used. The pre-processing then consists of centering and scaling the images so that the position of the eyes is kept in the same relative place.

First, Gabor features were extracted by convolving the image with the 40 Gabor filters. These matrices are concatenated to form a matrix of complex numbers. In order to reduce the computation time, we reduced the matrix size using the median of each 3 x 3 block of pixels (although there are other ways to reduce the matrix size; see [18]).

Table 1 Mean recognition rates using different numbers of training images per class (N), averaged over 20 different training sets (small numbers correspond to the standard deviations)

Training images    N = 2 (%)    N = 3 (%)
Recognition        85.65        86.74
Deviation          9.3          8.3

Figure 3 Simulation results. (a) Original image. Retrieved images: (b) D = 0.15; (c) D = 0.09; and (d) D = 0.08.
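One possible reading of this reduction step is block-wise median pooling; this is our interpretation of the 3 x 3 median operation, since the paper gives no further detail:

```python
import numpy as np

def median_pool(a, block=3):
    """Shrink a 2-D feature map by taking the median of each
    block x block tile, as used here to reduce the Gabor matrix.
    Rows/columns that do not fill a complete tile are discarded
    (our assumption; the paper does not specify border handling)."""
    h = (a.shape[0] // block) * block
    w = (a.shape[1] // block) * block
    tiles = a[:h, :w].reshape(h // block, block, w // block, block)
    return np.median(tiles, axis=(1, 3))
```

Each 3 x 3 tile collapses to one value, cutting both dimensions of the feature matrix by a factor of three.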

Next, in order to test the retrieval capability of the network, the network was trained with the Gabor features obtained in the pre-processing stage, and we implemented the cascade recognition system. Table 1 shows the simulation results obtained with the proposed system for two and three training images per class.

Modeling associative prosopagnosia

As mentioned in the previous section, decreasing the self-connections between two thresholds increases the direct attraction radius of the network, but it is also known that it increases the number of dynamic attractors. So, in order to model a healthy subject in the first simulation, we reduced the self-connections to 0.15 times their original value, i.e.

w_{ii} = D \cdot w_{ii}, \qquad \text{where } D = 0.15.   (10)

The result of this simulation for one of the subjects in the query set is shown in Fig. 3(b). As was observed in similar simulations, scaling the self-connections by D = 0.15 increased the ability of the associative network to recognize faces.
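The damage operation of Eq. (10) is a one-line modification of the weight matrix; the matrix values below are illustrative, not from the paper:

```python
import numpy as np

def damage_self_connections(W, D):
    """Apply Eq. (10): scale only the diagonal self-connections by D,
    leaving all cross-connections untouched."""
    W = W.copy()
    np.fill_diagonal(W, D * np.diag(W))
    return W

# Illustrative values (not from the paper): D = 0.15 models the healthy
# network; choosing D below 0.1 produces the damaged, prosopagnosic one.
W_intact = np.array([[1.0, 0.3], [0.3, 1.0]])
W_healthy = damage_self_connections(W_intact, 0.15)
W_damaged = damage_self_connections(W_intact, 0.08)
```

Only the diagonal changes, so the stored cross-correlations between features are preserved while the basins of attraction reshape.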

In the second simulation, in order to model associative prosopagnosia, we chose D to be below 0.1. The results of one of the simulations for the same subject are shown in Fig. 3(c) and (d). As is clearly seen, decreasing D below 0.1 is undesirable, as this results in the occurrence of dynamic attractors, thus reducing the attraction radius, and the network's ability to recognize faces decreases significantly. This response is very similar to that of prosopagnosic patients, who likewise fail to recognize faces.

The corresponding Euclidean distances of this subject to a particular class of images learnt by the network are shown in Fig. 4. Fig. 4(a) shows a minimum at index 8, indicating that subject No. 8 is recognized as matching the probe face. But as can be seen from Fig. 4(b), for D < 0.1 the network fails to match a specific subject to the probe image: three or more subjects match the probe face (Nos. 9, 12 and 15), so the network fails to recognize the face. Hence, it can be observed that decreasing D below 0.1 is undesirable, as it results in the occurrence of numerous dynamic attractors, thus reducing the attraction radius. In other words, from a neural point of view, the ability of the attractor to pull the image into its basin of attraction is reduced, and the network enters a cycle of states and cannot settle in a stable state. Hence, scaling the self-connections by a factor of 0.15


Figure 4 Illustration of the Euclidean distances calculated from the probe image of Fig. 3 and the images already learned by the network. (a) D = 0.15; (b) D < 0.1.


resulted in the expansion of the attractor. Conversely, reducing D below 0.1 resulted in the shrinkage of the attractor.

Discussion

Prosopagnosia is classically defined as an inability to recognize the faces of people known to the patient on the basis of visual perception, despite the absence of low-level visual impairments or cognitive alterations such as mental confusion or amnesia, and with a preserved ability to recognize people through other cues: voice, or other visual traits such as gait, size, clothes, or even facial features (moustache, scar, blemish) or accessories (ear-rings, eyeglasses).

Complexity theory tells us that dynamic complex systems can settle into any of many possible different steady states, depending upon the values of the system variables. These steady states are called "attractor basins" or, more simply, "attractors". Changing the variables associated with a system can cause it to switch from one attractor basin (steady state) to another (steady or dynamic), and we did just that in order to model associative prosopagnosia. We proposed a biologically plausible model composed of two stages: pre-processing and face recognition. In the first stage, Gabor features are extracted from an image, and these features are next fed into a pseudo-inverse associative neural network for the recognition task. Next, by reducing the self-connections below a certain threshold, we created dynamic attractors in the network and hence hindered the network's ability to recognize faces. The results are very similar to observations from subjects suffering from associative prosopagnosia. The simulation results show that the network is a good model of associative prosopagnosia: as self-connections are reduced below a certain threshold, the network's ability to recognize faces decreases dramatically. We strongly believe that even if it is impossible, as yet, to competently describe the processes in the brain, it is possible to discuss some of its properties from the conjectures evolving from dynamical systems theory, even though there is as yet only meager experimental support.

Regardless of the fact that dynamic attractors may occur in associative networks, these attractors can be predicted and avoided [14]. This might suggest that people suffering from prosopagnosia cannot avoid these attractors because of severe damage to their face processing areas. Although there is no known cure for prosopagnosia, we believe that the focus of any treatment should be to help the individual with prosopagnosia develop compensatory strategies for remembering faces. Adults with prosopagnosia as a result of stroke or brain trauma can be retrained to use other clues to identify faces; this would expand the basins of other attractors related to a particular individual's face. And a cure for children born with prosopagnosia might be by means of reinforcement techniques that reward them for paying attention to faces during early childhood. Reinforcement learning from examples of patterns to be classified, using habituation and association, would create lower-dimensional local basins in the brain, which would form a global attractor landscape with one basin for each face. These new local basins would eventually constitute dynamical memories that solve difficult problems in classifying and recognizing faces. Furthermore, new local basins would be added quickly from very few examples without the loss of existing basins. The new attractor landscape would provide alternative static attractors in place of the dynamic ones.

References

[1] Benton A. The neuropsychology of facial recognition. Am Psychol 1980;35(2):176-86.

[2] De Renzi E, Perani D, Carlesimo A, Silveri MC, Fazio F. Prosopagnosia can be associated with damage confined to the right hemisphere - an MRI and PET study and a review of the literature. Neuropsychology 1994;3(8):893-902.

[3] Damasio AR. Prosopagnosia. Trends Neurosci 1985;8:132-5.

[4] Duchaine BC, Yovel G, Butterworth EJ, Nakayama K. Prosopagnosia as impairment to face-specific mechanisms: elimination of the alternative hypotheses in a developmental case. Cogn Neuropsychol 2006;23(5):714-47.

[5] De Renzi E, Faglioni P, Grossi D, Nichelli P. Apperceptive and associative forms of prosopagnosia. Cortex 1991;27:213-21.

[6] Damasio AR, Tranel D, Damasio H. Face agnosia and the neural substrates of memory. Ann Rev Neurosci 1990;13:89-109.

[7] Posamentier MT, Abdi H. Processing faces and facial expressions. Neuropsychol Rev 2003;13(3).

[8] Young AW, Newcombe F, de Haan EHF, Small M, Hay DC. Dissociable deficits after brain injury. In: Young AW, editor. Face and mind. Oxford: Oxford University Press; 1998.

[9] Jones JP, Palmer LA. An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex. J Neurophysiol 1987;58(6):1233-58.

[10] Wilson HR, Levi D, Maffei L, Rovamo J, DeValois R. The perception of form: retina to striate cortex. In: Visual perception: the neurophysiological foundations. Academic Press; 1990.

[11] Buhmann J, Lades M, von der Malsburg C. Size and distortion invariant object recognition by hierarchical graph matching. In: IJCNN international joint conference on neural networks, vol. 2; 1990. p. 411-6.

[12] Haykin S. Neural networks: a comprehensive foundation. New York: Macmillan; 1995.

[13] Gorodnichy DO, Armstrong WW, Li X. Adaptive logic networks for facial feature detection. In: Proceedings of ICIAP; 1997.

[14] Gorodnichy DO, Reznik AM. Increasing attraction of pseudo-inverse auto-associative networks. Neural Process Lett 1997;5(2).

[15] Kohonen T. Correlation matrix memories. IEEE Trans Comput 1972;21:353-9.

[16] Haykin S. Neural networks: a comprehensive foundation. Macmillan College Publishing Company; 1995.

[17] Phillips PJ, Wechsler H, Huang J, Rauss P. The FERET database and evaluation procedure for face recognition algorithms. Image Vision Comput J 1998;16(5):295-306.

[18] Fodor IK. A survey of dimension reduction techniques. Center for Applied Scientific Computing, Lawrence Livermore National Laboratory; 2002.