
Brain Research 1348 (2010) 95–104

available at www.sciencedirect.com
www.elsevier.com/locate/brainres

Research Report

Reading sadness beyond human faces

Mariam Chammat⁎, Aurélie Foucher, Jacqueline Nadel, Stéphanie Dubal
CNRS USR 3246, CHU Pitié-Salpêtrière, Paris, France

ARTICLE INFO

⁎ Corresponding author. CNRS USR 3246, Pavillon Clérambault, CHU Pitié-Salpêtrière, 47, Bd de l'Hôpital, 75013, Paris, France. Fax: +331153790770.

E-mail address: [email protected] (M. Chammat).

0006-8993/$ – see front matter © 2010 Elsevier B.V. All rights reserved. doi:10.1016/j.brainres.2010.05.051

ABSTRACT

Article history:
Accepted 17 May 2010
Available online 26 May 2010

Human faces are the main emotion displayers. Knowing that emotional compared to neutral stimuli elicit enlarged ERP components at the perceptual level, one may wonder whether this has led to an emotional facilitation bias toward human faces. To contribute to this question, we measured the P1 and N170 components of the ERPs elicited by human facial compared to artificial stimuli, namely non-humanoid robots. Fifteen healthy young adults were shown sad and neutral, upright and inverted expressions of human versus robotic displays. An increase in P1 amplitude in response to sad displays compared to neutral ones evidenced an early perceptual amplification for sadness information. P1 and N170 latencies were delayed in response to robotic stimuli compared to human ones, while N170 amplitude was not affected by media. Inverted human stimuli elicited a longer latency of P1 and a larger N170 amplitude while inverted robotic stimuli did not. As a whole, our results show that emotion facilitation is not biased to human faces but rather extends to non-human displays, thus suggesting our capacity to read emotion beyond faces.

© 2010 Elsevier B.V. All rights reserved.

Keywords: ERP; P1; N170; Inversion; Faces; Non-face-like stimuli

1. Introduction

Emotion recognition and face processing are highly intertwined. With faces being the main emotion displayers, the human brain seems to be geared with an elaborate circuitry for their in-depth processing. Indeed, a facial expression can indicate the other's state of mind but can also convey intentionality, thus guiding and regulating social interaction. Beyond the emotional expression portrayed on a face, one also processes a face in search of its identity, cues about attractiveness, approachability, trustworthiness and any other basic classification such as gender or age (Adolphs, 2002). Yet as intricate as these analyses may seem, it remains that most of this information is encoded very rapidly and at early stages of visual processing.

As presented in Bruce and Young's (1986) facial recognition model, expression analysis and face identity are processed separately. It has further been shown that expression processing takes about 100 ms post-stimulus to occur (Krolak-Salmon et al., 2001) while facial identity seems to take place at around 170 ms (Bentin et al., 1996; George et al., 1996). Neurophysiologic studies on monkeys have shown that face-selective neurons located in the temporal cortex showed enhanced responses to faces with positive or negative emotional expressions as compared to neutral facial expressions (Sugase et al., 1999). Likewise, brain imaging studies in humans have reported increased activations in the visual cortex (Vuilleumier et al., 2004) in response to emotional faces as opposed to neutral ones. Studies exploring the processing of facial emotional expressions have primarily focused on the category of human facial stimuli (Batty & Taylor, 2003; Rossion et al., 1999). A few studies have portrayed human facial expressions as schematic line drawings, emoticons or smileys and have shown these to also elicit emotional perceptual activations (Eger et al., 2003; Boucsein et al., 2001). Schematic faces are particular in that they exclude additional information related to personal identity while remaining evocative of the human per se. By maintaining the most invariant information that we can extract from an expressive face, schematic faces are simplified representations of the human category (Krombholz et al., 2007).

With the fast development and incorporation of artificial intelligence in our daily interactions (Dautenhahn, 2007), we seem to be developing an intricate interface with a novel category of partner: the robot category. Robotic technologies are increasingly integrated in our daily lives for household use (Collins et al., 2005; Ishiguro et al., 2003) and, in an aim to improve their functioning in the human environment, some of these robots, in particular the pet robots, are designed to communicate ‘emotional states’ (Arbib & Fellous, 2004; Breazeal, 2002; Canamero & Fredslund, 2001; Canamero & Gaussier, 2005; Kozima et al., 2004). Manifestly, the emergence of an interaction with artificial intelligence poses a new series of questions as to how our brains perceive robots. While neuroimaging studies have approached this question from a cognitive point of view (Ge & Han, 2008), more elemental answers are needed regarding the lower perceptual levels of encoding of robotic stimuli (Schiano et al., 2000). It can be further asked whether perceptual modulations for emotion occur in response to artificial stimuli, such as non-humanoid robots made of machine-like arrangements of salient metallic parts, nails and cables (Dubal et al., 2010).

Numerous studies have now found a preferential and faster detection of emotional stimuli compared to non-emotional stimuli (Ohman et al., 2001). ERP studies focusing on components as early as 100 ms show an increase in the activation of visual cortex to emotional cues (Keil et al., 2002; Batty and Taylor, 2003; Eger et al., 2003; Pizzagalli et al., 1999; Pourtois et al., 2005; Streit et al., 2003). The sensitivity of the P1 component of ERPs to emotional cues was underpinned when compared with the face-specific component occurring at 170 ms (N170). Indeed, a majority of studies found the face-sensitive N170 to be unaffected by facial emotional expression (Holmes et al., 2003; Ashley et al., 2004; Balconi & Lucchiari, 2005) while other paradigms point to an increase in N170 amplitude for emotional expressions (Campanella et al., 2002; Batty and Taylor, 2003; Stekelenburg and de Gelder, 2004).

Independently of emotional modulations of the N170, it has been shown that information about the affective meaning of visual stimuli can be extracted rapidly, around 100 ms, at the latency of the P1 component (Eimer & Holmes, 2002; Kawasaki et al., 2001; Pizzagalli et al., 2003). This occurs before perceptual analysis differentiating faces from other objects is fully completed in posterior visual cortex (Bentin et al., 1996). This hypothesis is in line with the idea of a double pathway selective of information type. Indeed, along with a slow cortical pathway responsible for relaying complex stimuli, there might reside a sub-cortical pathway for the fast travel of information. The processing of affective stimuli would proceed via the superior colliculus and pulvinar (LeDoux, 1996; Freese & Amaral, 2005; Sah et al., 2003). The anatomical projections from the amygdala directly onto visual cortical regions have long been suspected to play a role in the enhancement of visual activity (LeDoux, 1996; Amaral et al., 2003). Indirect evidence comes from an fMRI study that shows an attenuation of this amplification in the absence of the amygdala (Vuilleumier et al., 2004). An earlier PET study by Morris et al. (1998) has also shown a significant correlation between the enhancement of the fusiform responses to fearful faces and the magnitude of amygdala activation by fearful and happy faces. Even though the exact mechanism of visual enhancement for emotional stimuli is subject to debate, it remains that early modulations are clearly observable and measurable at the perceptual level. This modulation of early sensory responses has been extensively studied with the use of fearful expressions (Eimer & Holmes, 2002; Holmes et al., 2003; Pourtois et al., 2005). The effect of other emotional expressions on perceptual encoding, and especially that of sadness, has been underrepresented.

Sadness is important to explore since it seems to be a pivotal component of negative symptomatology expression in emotional disorders (Dubal & Viaud-Delmon, 2008). Compared to other emotional expressions, sadness seems to be highly social whereby it signals a need for help or an invitation to sympathy (Nadel & Muir, 2005; Panksepp, 1998). While disgust, for example, is simply informative of a punctual state of reaction, sadness seems to indicate a more complex affective state that calls for bonding. A rapid response to displays of sad expressions in conspecifics is of notable evolutionary benefit to one's survival. Neurophysiologically, the processing of sad expressions has been tightly linked to amygdala activations (Blair et al., 1999; Phelps & Anderson, 1997; Lennox et al., 2004; Goldin et al., 2005). Batty and Taylor (2003) have shown in a comparative study of the six basic human facial expressions that sadness affects early stages of perceptual encoding, resulting in activations at around 100 ms post-stimulus.

In order to explore the temporal dynamics of sadness perception in artificial stimuli, we used different non-humanoid robotic designs. A previous study conducted by our team used images of these robotic stimuli to compare the temporal dynamics of perception of happiness on humans versus robots (Dubal et al., 2010). In an emotional judgment task, this experiment on happiness showed similar modulations of P1 by happy and neutral expressions for robot and human conditions. A later and lower N170 was elicited by robotic compared to human stimuli.

To study the general temporal dynamics of sadness processing, we compared the ERPs evoked by eight sets of stimuli: pictures of sad versus neutral human expressions and sad versus neutral robotic expressions on the one hand, but also sad versus neutral inverted human expressions and sad versus neutral inverted robotic expressions on the other hand (Fig. 1). In light of our team's previous findings, on the one hand, we expected P1 in response to sad robots to be broader than that to neutral robotic displays. On the other hand, we expected robotic stimuli to generate a delayed N170 with lower amplitude than the response to human faces.

Our design included a condition of inversion. Inverted stimuli prevent habituation and serve as a control for P1 modulations by ensuring that the observed P1 amplitude modulations are indeed due to detected emotion rather than to low-level physical characteristics of the stimuli. Face inversion effect (FIE) is also a hallmark of the specificity of face perception compared to object perception. It was shown that inversion causes an increase and delay of the N170 (Bentin et al., 1996; Rossion et al., 1999) and that these effects are specific for faces or larger than for other objects (Rossion et al., 2000). We tested whether FIE occurs to inverted robotic stimuli.

2. Results

2.1. Behavioral data

2.1.1. Accuracy

Participants correctly identified around 88.75% of the expressions regardless of the media support (F(1, 14)=4.382, p=.55) or expression (F(1, 14)=2.54, p=.134) (Table 1). A significant inversion effect (F(1, 14)=13.47, p=.003) was accompanied by a significant media × inversion interaction (F(1,14)=11.74, p=.004) as well as a media × inversion × emotion interaction (F(1,14)=.89, p=.029).

The 2×2 comparisons showed that the inversion effect was significant for the human neutral (F(1, 14)=2.51, p=.02), human sad (F(1, 14)=4.65, p=.000) and robot sad (F(1, 14)=4.65, p=.000) conditions but not the robot neutral one (F(1,14)=3.57, p=.07). Inversion decreased identification of emotional expression in all conditions but the neutral robot one.

2.1.2. Reaction times

Mean reaction times (RTs) were affected by media (F(1,14)=8.06, p=.013). Reaction times to humans (693±140 ms) were shorter than those to robots (751±162 ms) (Fig. 2).

Mean RTs were affected by emotion (F(1,14)=6.04, p=.02), with shorter responses to sad (726±166 ms) than to neutral stimuli (762±164 ms), and were also affected by inversion (F(1,14)=69.66, p=.000). RTs to upright images (728±153 ms) were shorter than to inverted ones (768±154 ms).

The interaction between inversion and emotion almost reached significance (F(1,14)=4.34, p=.059). The emotional effect on RT was higher for upright (F(1,14)=10.19, p=.012) than for inverted faces (F(1,14)=3.3, p=.08). For inverted stimuli, RTs in response to sad faces were 31 ms faster than those to neutral faces, while for upright faces responses to sad faces were 41 ms faster than those to neutral ones.

2.2. Evoked potentials

2.2.1. P1 component

P1 amplitude was higher to sad than to neutral expressions (F(1,14)=6.87, p=.020) (Figs. 3e–h). P1 amplitude was not affected by human versus robotic media (F(1,14)=0.05, p=.827). An inversion effect was observed (F(1,14)=9.93, p=.009) whereby inverted faces led to higher P1 amplitude than upright faces (4.52±3.50 and 4.08±3.53 μV, respectively) (Figs. 3g and h). There was no interaction between inversion and media (F(1,14)=0.07, p=.790), inversion and emotion (F(1,14)=1.15, p=.301) or media and emotion.

Table 1 – Mean percent accuracy of expression recognition. The table indicates mean percent accuracy of recognition rates of emotional expression according to media, inversion and expression, in addition to standard errors of the mean.

                              Recognition accuracy (%)   S.E.M.
Human   Upright    Neutral    93.43                      3.14
                   Sad        93.78                      1.57
        Inverted   Neutral    88.73                      3.85
                   Sad        88.31                      2.52
Robot   Upright    Neutral    80.74                      4.36
                   Sad        92.62                      1.90
        Inverted   Neutral    83.25                      3.83
                   Sad        89.15                      2.22

A significant interaction between inversion × media × hemisphere and electrode (F(6,11)=6.09, p=.009) supported the various P1 distributions across conditions. In Fig. 3, one may observe that P1 distribution fields are more posterior in inverted than upright conditions and that this may be more evident on the left than on the right hemisphere in human conditions.

P1 latency was not affected by emotional expression (F(1,14)=0.62, p=.446). P1 was however shorter in response to human than to robotic displays (F(1,14)=6.07, p=.027) (Figs. 3a and b). Mean latency was respectively 102.2±12.5 ms for human and 104.4±14.8 ms for robot.

An inversion effect was also observed (F(1,14)=20.36, p=.000) along with a significant media × inversion interaction (F(1,14)=13.18, p=.003), evidencing that the delay in latencies to inverted faces was only seen for humans (F(1,14)=27.13, p=.000) as opposed to robots (F(1,14)=0.08, p=.93). Latency to upright human faces was 99.4±11.5 ms versus 105±12.9 ms for inverted ones. This delay was not seen for robotic stimuli (104.4±14.4 ms for upright versus 104.4±15.29 ms for inverted) (Fig. 3).

2.2.2. N170 component

Emotion did not affect N170 amplitude or latency (F(1,14)=0.47, p=.83 and F(1,14)=0.79, p=.387, respectively).

Media impacted on N170 latency (F(1,14)=41.51, p=.000). N170 to human faces occurred earlier than those to robotic stimuli (161.6±10.7 versus 167.2±12.7 ms, respectively; Figs. 4a, b, d, and e). N170 latency was also significantly affected by inversion (F(1,14)=14.834, p=.002).

Upright faces caused a mean latency of 163.1±13.4 ms and inverted ones caused a mean latency of 165.58±13.28 ms.

There was a close-to-significance media effect on N170 amplitude (F(1,14)=14.35, p=.056), an inversion effect (F(1,14)=30.32, p=.000) and an inversion × media interaction which post hoc tests showed to be due to a significant inversion effect in response to human stimuli (F(1,14)=45.14, p=.000) but not in response to robotic stimuli (F(1,14)=1.97, p=.18). N170 amplitude was larger in response to inverted than to upright human stimuli (Figs. 4c and f). We also found a hemisphere effect on both N170 amplitude and latency (F(1,14)=6.732, p=.021, and F(1,14)=10.752, p=.005, respectively).

3. Discussion

Using evoked potentials, we investigated the neural correlates and temporal dynamics of brain activity evoked by the human and robotic expression of sadness. Following our previous study on happiness, this experiment focuses on the early occipital component P1 and the temporo-occipital component N170. We compared the modulations of these components according to emotion, media and inversion.

Fig. 1 – Examples of upright and inverted human and robotic stimuli. The neutral facial expression (left of each category) and sad facial expressions (right in each category) are in accordance with the FACS.

We found P1 amplitude to be significantly modulated by emotional expression. Emotional stimuli caused an increase in P1 amplitude and a decrease in reaction time regardless of the media portraying it. The response to robotic displays as opposed to human faces led to longer P1 and N170 latencies and longer reaction times. As for inversion, an increase in P1 amplitude was seen for inverted faces when compared to upright ones. The delay in P1 and N170 latencies and the increase in N170 amplitude only occurred to human inverted faces and not to robotic inverted displays. Reaction times to upright were faster than those to inverted ones.

The current results suggest that emotional facilitation is not specific to human emotional expression. Reaction times were shorter to human and robotic sad stimuli alike compared to neutral stimuli. This finding goes along with the behavioral facilitation tendency in response to emotional content in complex visual scenes (Ohman et al., 2001; Miltner et al., 2004), which has been hypothesized as stemming from amygdala modulations. Studies showing behavioral facilitation in response to emotional expression often focus on threat-related stimuli such as fearful or angry faces (Purkis and Lipp, 2007; Schupp et al., 2004). Some recent studies have extended this effect for positive stimuli such as happy faces (Becker and Detweiler-Bedell, 2009; Holmes et al., 2009). Our previous study showed emotional facilitation to both happy human and robotic stimuli. Here we show emotional facilitation to sadness: to our knowledge, no prior study has focused on the behavioral emotional facilitation to sad emotional expressions.

Sad human and robotic stimuli also modulated P1 amplitude. Findings in the literature show P1 modulation to wide-ranging emotional stimuli. In addition to faces, emotionally arousing stimuli such as body expressions (Meeren et al., 2005) or complex visual scenes (Carretie et al., 2004) also elicit an enhanced P1 component. Such perceptual enhancement has been found in response to positive as well as to negative eliciting stimuli. Batty and Taylor (2003) generalized this effect to the facial expressions of fear, disgust, anger, surprise, happiness and sadness as compared to neutrality on human faces. Our experiment extends P1 emotional modulation beyond human faces.

In addition to emotion, P1 is increasingly shown to be influenced by attention processes (Pessoa et al., 2005; Vuilleumier et al., 2004; Schupp et al., 2007). Whether the increase in P1 amplitude was modulated by emotion alone or by additive attention effects, our result shows no difference between human and robotic stimuli.

Fig. 2 – Mean reaction times to sad versus neutral stimuli according to media. RT differences were significant according to emotion, inversion and media. Error bars indicate S.E.M.

As compared to human faces, robotic stimuli caused a delay in P1 and N170 latency but also in reaction times in general. This delay can be accounted for by the novelty of robotic patterns as compared to human faces. With the occipital and inferior temporal cortices localized as sources of face-specific components (Wong et al., 2008), it is likely that human stimuli are processed faster than robotic ones. The delays can therefore correspond to an increase in neuronal firing in the visual pathway (Mace et al., 2005) for the processing of our novel robotic stimuli.

Fig. 3 – Emotion effects on P1. (a–d) Topographies of each category were taken at the latency of maximum P1 amplitude. (e, f) For each of the upright and inverted, human and robotic images, the P1 to neutral and sad images are displayed at electrodes on which they culminated.


While the N170 is consensually considered to reflect the structural encoding of faces (Bentin et al., 1996), it is less clear whether it also encodes the emotional expressions depicted by those faces. Emotional effects on the N170 are in fact highly contradictory. Some studies have shown that N170 amplitude is sensitive to emotional expression (Mikhailova & Bogomolova, 1999; Sokolov & Boucsein, 2000; Krombholz et al., 2007). A majority of studies, however, pointed to no effect of emotion on the N170 (Carretie and Iglesias, 1995; Eimer & Holmes, 2002; Holmes et al., 2003; Ashley et al., 2004; Herrmann et al., 2002; Eimer & Holmes, 2007; Krolak-Salmon et al., 2001; Balconi & Lucchiari, 2005; Dubal et al., 2010). Morel et al. (2008), who studied possible effects of repetition on the emotional modulation of the N170, showed a repetition enhancement effect selective to neutral faces only. Discrepancies regarding the emotional modulation of the N170 might be due to variations in experimental designs and to differences between stimuli which still need to be clearly identified.

Inverted faces, on the other hand, seem to be classically considered a hallmark of face perception. The face inversion effect (FIE) is one of the most robust findings in the face literature (Rossion & Jacques, 2008). This finding shows that inversion does not only increase and delay the N170 (Bentin et al., 1996; Rossion et al., 1999) but that these effects are specific for faces and larger than for other objects (Rossion et al., 2000). The N170 delay was tested and proved robust across conditions of highly familiar objects such as cars or words (Rossion et al., 2003) or the same novel objects after expertise training (Rossion et al., 2002; Busey & Vanderkolk, 2005) but was consistently found to be larger for human faces. Along with this finding, our results showed a significant inversion effect on the N170 for human faces but not for robotic stimuli.

Face inversion has been repeatedly shown to disrupt both holistic processing and relational configuration processing (Hole et al., 1999), suggesting that inverted faces are rather processed based on their components. Case reports of prosopagnosic patients show that their processing of faces is not affected by inversion, with some being even better at recognizing inverted than upright faces (de Gelder, 2000; Farah et al., 1995). The disruption of holistic and relational configuration seems to have a much higher impact on human faces than on robotic stimuli. By definition, a robot is an object that is made out of a lithe compilation of other objects. Therefore, apart from an approximate display of its elements, we lack a solid representation of what a robot should look like. The representation we have of a human face, on the other hand, is a solid universal one. Perhaps inverting a human face creates a bigger representational conflict with its upright counterpart than inverting a robotic one does.

In addition to affecting the N170 latency and amplitude, holistic and relational disruption caused a delay in P1 latency to inverted human stimuli. This effect has been documented in numerous studies (Linkenkaer-Hansen et al., 1998; Itier and Taylor, 2002; Rossion & Jacques, 2008).

A review of the literature shows that only a few studies reported an effect of inversion on P1. Rossion et al. (1999), who first found no effect of inversion on P1 amplitude, later found amplitude to be larger in response to inverted faces compared to upright ones (Rossion et al., 2000). A recent study investigating the differential neural responses to humans versus humanoid robotic bodies reported higher P1 amplitude for the inverted as opposed to the upright condition (Hirai and Hiraki, 2007). Another study by Linkenkaer-Hansen et al. (1998) confirmed this finding. Our results are indeed in line with these findings, showing that the amplitude of the P1 was higher for inverted faces.

Moreover, in our experiment, there was an emotional modulation of inverted stimuli as well. Inverted conditions are generally used to impair emotion recognition (Pourtois et al., 2004; McKelvie, 1995; Calder et al., 2000; Durand et al., 2007). The effect we found might therefore be due to the perception of sad mouths, which when inverted would depict "smiles". Smiles are indeed considered to be one of the most important diagnostic features for the recognition or identification of happiness expressions (Adolphs, 2002; Schyns et al., 2007; Leppanen & Hietanen, 2007).

4. Conclusion

This study shows that robotic emotional stimuli elicit an emotional behavioral facilitation since they shorten reaction time compared to their non-emotional counterparts. Modulation of the P1 component by emotional stimuli complements this finding. Detecting sad expressions was faster than detecting neutral expressions, regardless of the media. Our robotic stimuli are complex but non-humanoid patterns eliciting a delayed N170 compared to human stimuli: this points to the fact that they are not readily recognized as face-like. We verified this hypothesis by using the face inversion effect. As expected, inverted human stimuli elicited a longer latency of P1 and a larger N170 amplitude, while inverted robotic stimuli did not. With the remarkable finding that an emotional response is still elicited by robotic non-biological stimuli, this study emphasizes our expertise at emotion reading. A key function of emotion is to communicate simplified but high-impact information (Arbib & Fellous, 2004). Schematic patterns may well fulfill this adaptive function. The continued reading of facial expressions is of evolutionary benefit to one's survival. This capacity is notable in its own right but is remarkable in that it extends to reading emotions beyond human faces.

5. Procedure

During the experiment, participants were asked to identify whether pictures displayed a neutral or an emotional expression and to press a choice button with either the left or the right hand. The left and right arrangement of the response buttons was counterbalanced across subjects. Subjects were asked to react as quickly as possible while avoiding incorrect responses.

The participants completed a practice block of 12 trials, followed by three blocks of 184 experimental trials. Each block comprised three male human faces, three robotic faces and two emotional expressions (neutral, sad), as well as three inverted human faces and three inverted robot faces, presented in a randomized order. Thus, there were 24 different picture stimuli for a total of 552 trials.

Each trial began with the presentation of a picture stimulus for 500 ms at the centre of the screen, followed by a fixation cross lasting 1000 to 2400 ms.
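As a rough illustration of this design, the Python sketch below builds a randomized list of 552 trials (24 pictures, assumed here to repeat 23 times each) split into three blocks of 184, each with a 500-ms stimulus and a uniformly jittered 1000–2400 ms fixation. The condition labels, the equal-repetition assumption and the uniform jitter are illustrative assumptions, not a description of the authors' presentation software.

import random

# Hypothetical labels for the 24 pictures: 2 media x 2 emotions x 2 orientations x 3 exemplars
stimuli = [
    {"media": m, "emotion": e, "orientation": o, "exemplar": x}
    for m in ("human", "robot")
    for e in ("neutral", "sad")
    for o in ("upright", "inverted")
    for x in (1, 2, 3)
]
assert len(stimuli) == 24

# 24 pictures x 23 repetitions = 552 trials, shuffled and cut into 3 blocks of 184
trial_list = [dict(s) for s in stimuli for _ in range(23)]
random.shuffle(trial_list)
for trial in trial_list:
    trial["stim_ms"] = 500                              # picture shown for 500 ms
    trial["fixation_ms"] = random.randint(1000, 2400)   # jittered fixation cross
blocks = [trial_list[i * 184:(i + 1) * 184] for i in range(3)]

print(len(trial_list), [len(b) for b in blocks])        # 552 [184, 184, 184]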

Fig. 4 – Media and inversion effects on N170. (a–d) Topographies of each category were taken at the latency of maximum N170 amplitude. (e, f) For each of the neutral and sad, human and robotic images, the N170 to upright and sad inverted images are displayed at electrodes on which they culminated.

5.1. Behavioral data analysis

Accuracy, by means of percentage of correct identification of emotional expression, and response time were analyzed separately by a 2×2×2 repeated measures analysis of variance (ANOVA), with inversion (upright, inverted), media (robot, human) and emotion (sad, neutral) as repeated measures.
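A minimal sketch of such a 2×2×2 repeated-measures ANOVA, assuming the per-subject cell means have been exported to a long-format table (the file and column names are hypothetical); statsmodels' AnovaRM is used here and reports uncorrected p-values.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One row per subject x inversion x media x emotion cell, holding that cell's mean
# reaction time (ms) and accuracy (%). File and column names are hypothetical.
df = pd.read_csv("behavioral_cell_means.csv")

for dv in ("rt_ms", "accuracy_pct"):
    fit = AnovaRM(
        data=df,
        depvar=dv,
        subject="subject",
        within=["inversion", "media", "emotion"],
    ).fit()
    print(dv)
    print(fit.anova_table)   # F value, numerator/denominator df and p for each effect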

5.2. Electrophysiological recording and analysis

Electroencephalograms (EEGs) were acquired from 62 Ag/AgCl active electrodes (Brain Products) positioned according to the extended 10-20 system. Impedances were kept below 5 kΩ. The horizontal electrooculogram (EOG) was acquired using a bipolar pair of electrodes positioned at the external canthi. The vertical EOG was monitored with supra- and infra-orbital electrodes.

The EEG and EOG signals were sampled at 500 Hz, filtered online with a band pass of 0.1–100 Hz and offline low-pass filtered at 30 Hz using a zero phase shift digital filter. Eye blinks were removed offline and all signals with amplitudes exceeding ±100 μV in any given epoch were discarded. Data were epoched from −100 ms to 800 ms after stimulus onset, corrected for baseline over the 100-ms window pre-stimulus, transformed to a common average reference and averaged for each individual in each of the eight experimental conditions.
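This pipeline maps onto standard EEG tooling; below is a sketch with MNE-Python under assumed file names and event codes. MNE's amplitude rejection is peak-to-peak, so a 200 µV peak-to-peak threshold is used as an approximation of the ±100 µV criterion, and blink handling is reduced to this rejection step rather than the authors' exact procedure.

import mne

# Hypothetical BrainVision recording; the online 0.1-100 Hz band-pass is assumed
# to have been applied at acquisition time.
raw = mne.io.read_raw_brainvision("subject01.vhdr", preload=True)
raw.filter(l_freq=None, h_freq=30.0, phase="zero")   # offline zero-phase 30 Hz low-pass

events, _ = mne.events_from_annotations(raw)
event_id = {                                          # hypothetical trigger codes
    "human/upright/neutral": 1, "human/upright/sad": 2,
    "human/inverted/neutral": 3, "human/inverted/sad": 4,
    "robot/upright/neutral": 5, "robot/upright/sad": 6,
    "robot/inverted/neutral": 7, "robot/inverted/sad": 8,
}

epochs = mne.Epochs(
    raw, events, event_id,
    tmin=-0.1, tmax=0.8,            # -100 to 800 ms around stimulus onset
    baseline=(-0.1, 0.0),           # 100-ms pre-stimulus baseline correction
    reject=dict(eeg=200e-6),        # drop epochs exceeding 200 uV peak-to-peak
    preload=True,
)
epochs.set_eeg_reference("average") # common average reference

# One average (Evoked) per experimental condition, per participant
evokeds = {cond: epochs[cond].average() for cond in event_id}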

Peak amplitudes were analyzed at 14 posterior parieto-occipital electrode sites for the P1 (PO9, P7, P3, O1, P5, PO7, PO3, P4, P6, P8, PO4, PO8, PO10, O2) and at 8 sites for the N170, namely PO9, P7, P5, PO7, P6, P8, PO8 and PO10.

P1 peak latency was measured by locating the most positive deflection from stimulus onset within the latency window of 80 to 120 ms. The N170 was identified in grand average waveforms as the maximum voltage area within the latency window of 130 to 170 ms.
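Assuming the per-condition Evoked objects from the preprocessing sketch above, the peak measures can be read out as follows: the P1 is taken as the most positive deflection between 80 and 120 ms and the N170 as the most negative deflection between 130 and 170 ms over the stated electrode sites. The numpy search below is only an illustration of the measurement, not the authors' analysis software.

import numpy as np

P1_SITES = ["PO9", "P7", "P3", "O1", "P5", "PO7", "PO3",
            "P4", "P6", "P8", "PO4", "PO8", "PO10", "O2"]
N170_SITES = ["PO9", "P7", "P5", "PO7", "P6", "P8", "PO8", "PO10"]

def peak_measure(evoked, picks, tmin, tmax, polarity):
    """Latency (ms) and amplitude (uV) of the extreme deflection in a time window,
    taken on the electrode where it culminates. polarity=+1 for P1, -1 for N170."""
    ev = evoked.copy().pick(picks).crop(tmin, tmax)
    flipped = ev.data * polarity                   # make the sought peak a maximum
    ch_idx, t_idx = np.unravel_index(np.argmax(flipped), flipped.shape)
    return ev.times[t_idx] * 1e3, ev.data[ch_idx, t_idx] * 1e6

# Example for one condition, using `evokeds` from the preprocessing sketch
p1_lat, p1_amp = peak_measure(evokeds["human/upright/sad"], P1_SITES, 0.08, 0.12, +1)
n170_lat, n170_amp = peak_measure(evokeds["human/upright/sad"], N170_SITES, 0.13, 0.17, -1)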

For P1 peaks, amplitude and latency were analyzed separately by a 2×2×2×2×7 repeated measures analysis of variance (ANOVA), with inversion (upright/inverted), media (robot/human), emotion (sad/neutral), hemisphere (right/left) and electrodes (7 levels) as intra-subject factors.

For N170 peaks, on the other hand, amplitude and latency were also analyzed separately by a 2×2×2×2×4 repeated measures analysis of variance (ANOVA), with inversion (upright/inverted), media (robot/human), emotion (sad/neutral), hemisphere (right/left) and electrodes (4 levels) as intra-subject factors. A Greenhouse–Geisser correction was applied to p-values associated with multiple-df repeated measures.
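The Greenhouse–Geisser correction mentioned here rescales both degrees of freedom of an F test by Box's epsilon, which can be estimated from the subjects × levels matrix of a multi-level factor (for example, the 7-level electrode factor). The sketch below is a generic illustration of that computation on hypothetical data, not the authors' statistical package.

import numpy as np
from scipy.stats import f as f_dist

def gg_epsilon(Y):
    """Box's epsilon-hat for one repeated-measures factor.
    Y is an (n_subjects x k_levels) array; the result lies in [1/(k-1), 1]."""
    k = Y.shape[1]
    S = np.cov(Y, rowvar=False)                       # k x k covariance across levels
    # Orthonormal contrasts spanning the space orthogonal to the grand mean
    C = np.linalg.qr(np.eye(k) - 1.0 / k)[0][:, :k - 1].T
    M = C @ S @ C.T
    return np.trace(M) ** 2 / ((k - 1) * np.trace(M @ M))

def gg_pvalue(F, df1, df2, eps):
    """p-value of an F statistic with Greenhouse-Geisser corrected dfs."""
    return f_dist.sf(F, df1 * eps, df2 * eps)

# Example: a 15-subjects x 7-electrodes matrix of P1 amplitudes (hypothetical data)
rng = np.random.default_rng(0)
amplitudes = rng.normal(4.0, 1.0, size=(15, 7))
eps = gg_epsilon(amplitudes)
print(eps, gg_pvalue(F=3.1, df1=6, df2=84, eps=eps))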

6. Experimental procedures

6.1. Participants

Fourteen right-handed and one left-handed participants took part in the experiment (7 women and 8 men, mean age 20±2.9 years). All had normal or corrected-to-normal vision. All participants provided informed consent to take part in the experiment, which had been previously approved by the local ethics committee.

6.2. Stimuli and design

Six different pictures from 3 different male human faces with sad or neutral expressions and 6 pictures from 3 different robotic heads with sad or neutral expressions were selected (Fig. 1). These same pictures were inverted, resulting in a total of 24 pictures.

Human expressions were drawn from Ekman and Friesen's (1976) classical set of prototypical facial expressions for 2 men, and we added a homemade picture of a younger male actor displaying sad and neutral expressions according to the standards of the Facial Action Coding System (FACS) elaborated by Ekman and Friesen (1976). The recognition of the actor's expressions was compared to the recognition of Ekman and Friesen's pictures of facial affect in 20 young adults. An ANOVA showed no differences between the recognition of Ekman's emotional expressions (M=4.85, SD=.366) and that of the actor's expressions (M=4.85, SD=.489) (for detailed results, see Nadel et al., 2006), thus allowing us to assert the prototypicality of our actor's expressions. The 6 robotic expressions were created following the FACS criteria. The robotic heads were non-humanoid ones but gave a reasonably believable version of a face, though they had no chin, no cheeks and no nose, which prevented creating AU6, the orbicularis oculi action present in the Duchenne smile (Soussignan, 2002). Each emotional expression was validated by FACS-certified researchers and asserted by Oster (FACS expert). The robotic expressions were easily recognizable by a group of 19 adults (Nadel et al., 2006). All 16 pictures were black and white. They were equalized for global luminance. Pictures were presented in the centre of the computer screen and subtended a vertical visual angle of 7°.

Before running the experiment, a pre-test design was used whereby 19 subjects were asked to identify whether pictures displayed a neutral or an emotional expression. This pre-test allowed us to certify that all pictures were recognized as expected: the recognition rate, all stimuli included, was 93.125%. An ANOVA on recognition rates showed no effect of media (F(1,18)=0.437, p=.517), emotion (F(1,18)=0.000, p=1.000) or stimulus (F(1,18)=1.4141, p=.264), nor any interaction between the variables (all p-values n.s.).

Acknowledgments

This research was funded by the Sixth Framework Program of the European Union, IST-045169, Feelix Growing.

We are grateful to R. Soussignan, FACS certified, and to H. Oster, FACS expert, for their selection of human and robotic expressions according to FACS criteria. We thank P. Canet and M. Simon, Emotion Centre, for their help in designing robotic expressions. We also thank G. Rautureau for software design and Pr. Jouvent for constructive support.

REFERENCES

Adolphs, R., 2002. Recognizing emotion from facial expressions: psychological and neurological mechanisms. Behav. Cogn. Neurosci. Rev. 1, 21–62.


Amaral, D.G., Behniea, H., Kelly, J.L., 2003. Topographic organization of projections from the amygdala to the visual cortex in the macaque monkey. Neuroscience 118, 1099–1120.

Arbib, M.A., Fellous, J.M., 2004. Emotions: from brain to robot. Trends Cogn. Sci. 8, 554–561.

Ashley, V., Vuilleumier, P., Swick, D., 2004. Time course and specificity of event-related potentials to emotional expressions. NeuroReport 15, 211–216.

Balconi, M., Lucchiari, C., 2005. In the face of emotions: event-related potentials in supraliminal and subliminal face expression recognition. Genet. Soc. Gen. Psychol. Monogr. 131, 41–69.

Batty, M., Taylor, M.J., 2003. Early processing of the six basic facial emotional expressions. Brain Res. Cogn. Brain Res. 17, 613–620.

Becker, M.W., Detweiler-Bedell, B.Q.J., 2009. Early detection and avoidance of threatening faces during passive viewing. Exp. Psychol. 9, 1–8.

Bentin, S., McCarthy, G., Perez, E., Puce, A., Allison, T., 1996. Electrophysiological studies of face perception in humans. J. Cogn. Neurosci. 8, 551–565.

Blair, R.J., Morris, J.S., Frith, C.D., Perrett, D.I., Dolan, R.J., 1999. Dissociable neural responses to facial expressions of sadness and anger. Brain 12, 883–893.

Boucsein, W., Schaefer, F., Sokolov, E.N., Schröder, C., Furedy, J., 2001. The color-vision approach to emotional space: cortical evoked potential data. Integr. Psychol. Behav. Sci. 36, 137–153.

Breazeal, C., 2002. Designing sociable robots. MIT Press, Cambridge, MA.

Bruce, V., Young, A., 1986. Understanding face recognition. Br. J. Psychol. 77, 305–327.

Busey, T.A., Vanderkolk, J.R., 2005. Behavioral and electrophysiological evidence for configural processing in fingerprint experts. Vision Res. 45, 431–448.

Calder, A.J., Young, A.W., Keane, J., Dean, M., 2000. Configural information in facial expression perception. J. Exp. Psychol. Hum. Percept. Perform. 26, 527–551.

Campanella, S., Quinet, P., Bruyer, R., Crommelinck, M., Guerit, J.M., 2002. Categorical perception of happiness and fear facial expressions: an ERP study. J. Cogn. Neurosci. 14, 210–227.

Canamero, L., Fredslund, J., 2001. I show you that I like you—can you read it on my face? IEEE Trans. Syst. Man Cybern., Part A 31, 454–459.

Canamero, L., Gaussier, P., 2005. Emotion understanding: robots as tools and models. In: Nadel, J., Muir, D. (Eds.), Emotional Development. Oxford University Press, New York, pp. 235–258.

Carretie, L., Iglesias, J., 1995. An ERP study on the specificity of facial expression processing. Int. J. Psychophysiol. 19, 183–192.

Carretie, L., Hinojosa, J.A., Martin-Loeches, M., Mercado, F., Tapia, M., 2004. Automatic attention to emotional stimuli: neural correlates. Hum. Brain Mapp. 22, 290–299.

Collins, S., Ruina, A., Tedrake, R., Wisse, M., 2005. Efficient bipedal robots based on passive-dynamic walkers. Science 307, 1082–1085.

de Gelder, B., 2000. Neuroscience. More to seeing than meets the eye. Science 289, 1148–1149.

Dautenhahn, K., 2007. Socially intelligent robots: dimensions of human–robot interaction. Philos. Trans. R. Soc., B 362, 679–704.

Dubal, S., Viaud-Delmon, I., 2008. Magical ideation and hyperacusis. Cortex 44, 1379–1386.

Dubal, S., Foucher, A., Jouvent, R., Nadel, J., 2010. Human brain spots emotion in non-humanoid robots. Soc. Cogn. Affect. Neurosci. nsq019v1–nsq019.

Durand, K., Gallay, M., Seigneuric, A., Robichon, F., Baudouin, J.Y., 2007. The development of facial emotion recognition: the role of configural information. J. Exp. Child Psychol. 97, 14–27.

Eger, E., Jedynak, A., Iwaki, T., Skrandies, W., 2003. Rapid extraction of emotional expression: evidence from evoked potential fields during brief presentation of face stimuli. Neuropsychologia 41, 808–817.

Eimer, M., Holmes, A., 2002. An ERP study on the time course of emotional face processing. Neuroreport 13, 427–431.

Eimer, M., Holmes, A., 2007. Event-related brain potential correlates of emotional face processing. Neuropsychologia 45, 15–31.

Ekman, P., Friesen, W.V., 1976. Pictures of facial affect. Consulting Psychologist Press, Palo Alto.

Farah, M.J., Tanaka, J.W., Drain, H.M., 1995. What causes the face inversion effect? J. Exp. Psychol. Hum. Percept. Perform. 21, 628–634.

Freese, J.L., Amaral, D.G., 2005. The organization of projections from the amygdala to visual cortical areas TE and V1 in the macaque monkey. J. Comp. Neurol. 486, 295–317.

Ge, J., Han, S., 2008. Distinct neurocognitive strategies for comprehensions of human and artificial intelligence. PLoS ONE 3, e2797.

George, N., Evans, J., Fiori, N., Davidoff, J., Renault, B., 1996. Brain events related to normal and moderately scrambled faces. Brain Res. Cogn. Brain Res. 4, 65–76.

Goldin, P.R., Hutcherson, C.A., Ochsner, K.N., Glover, G.H., Gabrieli, J.D., Gross, J.J., 2005. The neural bases of amusement and sadness: a comparison of block contrast and subject-specific emotion intensity regression approaches. Neuroimage 27, 26–36.

Herrmann, M.J., Aranda, D., Ellgring, H., Mueller, T.J., Strik, W.K., Heidrich, A., Fallgatter, A.J., 2002. Face-specific event-related potential in humans is independent from face expression. Int. J. Psychophysiol. 4, 241–244.

Hirai, M., Hiraki, K., 2007. Differential neural responses to humans versus robots: an event-related potential study. Brain Res. 1165, 105–115.

Holmes, A., Vuilleumier, P., Eimer, M., 2003. The processing of emotional facial expression is gated by spatial attention: evidence from event-related brain potentials. Brain Res. Cogn. Brain Res. 16, 174–184.

Holmes, A., Bradley, B.P., Kragh Nielsen, M., Mogg, K., 2009. Attentional selectivity for emotional faces: evidence from human electrophysiology. Psychophysiology 46 (1), 62–68.

Hole, G.J., George, P.A., Dunsmore, V., 1999. Evidence for holistic processing of faces viewed as photographic negatives. Perception 28, 341–359.

Ishiguro, H., Ono, T., Imai, M., Kanda, T., 2003. Development of an interactive humanoid robot "Robovie"—an interdisciplinary approach. In: Jarvis, R.A., Zelinsky, A. (Eds.), Rob. Res. Springer-Verlag, New York, pp. 179–191.

Itier, R.J., Taylor, M.J., 2002. Inversion and contrast polarity reversal affect both encoding and recognition processes of unfamiliar faces: a repetition study using ERPs. Neuroimage 15, 353–372.

Kawasaki, H., Kaufman, O., Damasio, H., Damasio, A.R., Granner, M., Bakken, H., Hori, T., Howard, M.A., Adolphs, R., 2001. Single-neuron responses to emotional visual stimuli recorded in human ventral prefrontal cortex. Nat. Neurosci. 4, 15–16.

Keil, A., Bradley, M.M., Hauk, O., Rockstroh, B., Elbert, T., Lang, P.J., 2002. Large-scale neural correlates of affective picture processing. Psychophysiology 39, 641–649.

Kozima, H., Nakagawa, C., Yano, H., 2004. Can a robot empathize with people? Artif. Life Rob. 8, 83–88.

Krolak-Salmon, P., Fischer, C., Vighetto, A., Mauguiere, F., 2001. Processing of facial emotional expression: spatio-temporal data as assessed by scalp event-related potentials. Eur. J. Neurosci. 13, 987–994.

Krombholz, A., Schaefer, F., Boucsein, W., 2007. Modification of N170 by different emotional expression of schematic faces. Biol. Psychol. 76, 156–162.


Leppanen, J.M., Hietanen, J.K., 2007. Is there more to a happy face than just a big smile? Visual Cognition 15, 468–490.

Lennox, B.R., Jacob, R., Calder, A.J., Lupson, V., Bullmore, E.T., 2004. Behavioural and neurocognitive responses to sad facial affect are attenuated in patients with mania. Psychol. Med. 34, 795–802.

LeDoux, J., 1996. Emotional networks and motor control: a fearful view. Prog. Brain Res. 107, 437–446.

Linkenkaer-Hansen, K., Palva, J.M., Sams, M., Hietanen, J.K., Aronen, H.J., Ilmoniemi, R.J., 1998. Face-selective processing in human extrastriate cortex around 120 ms after stimulus onset revealed by magneto- and electroencephalography. Neurosci. Lett. 253, 147–150.

Mace, M.J., Thorpe, S.J., Fabre-Thorpe, M., 2005. Rapid categorization of achromatic natural scenes: how robust at very low contrasts? Eur. J. Neurosci. 21, 2007–2018.

McKelvie, S.J., 1995. Emotional expression in upside-down faces: evidence for configurational and componential processing. Br. J. Soc. Psychol. 34, 325–334.

Meeren, H.K., van Heijnsbergen, C.C., de Gelder, B., 2005. Rapid perceptual integration of facial expression and emotional body language. Proc. Natl. Acad. Sci. U.S.A. 102, 16518–16523.

Mikhailova, E.G., Bogomolova, I.V., 1999. The evoked cortical activity of the cerebral hemispheres in man during the active and passive perception of face expression. Zh. Vyssh. Nerv. Deiat. Im. I. P. Pavlova 49, 566–575.

Miltner, W.H., Krieschel, S., Hecht, H., Trippe, R., Weiss, T., 2004. Eye movements and behavioral responses to threatening and nonthreatening stimuli during visual search in phobic and nonphobic subjects. Emotion 4, 323–339.

Morel, S., Ponz, A., Mercier, M., Vuilleumier, P., George, N., 2008. EEG–MEG evidence for early differential repetition effects for fearful, happy and neutral faces. Brain Res. 1254, 84–98.

Morris, J.S., Ohman, A., Dolan, R.J., 1998. Conscious and unconscious emotional learning in the human amygdala. Nature 393, 467–470.

Nadel, J., Muir, D., 2005. Emotional development. Oxford University Press, Oxford.

Nadel, J., Simon, M., Canet, P., Soussignan, R., Blancard, P., Canamero, L., 2006. Human response to an expressive robot. Epigenetic Rob. 06, 79–86.

Ohman, A., Lundqvist, D., Esteves, F., 2001. The face in the crowd revisited: a threat advantage with schematic stimuli. J. Pers. Soc. Psychol. 80, 381–396.

Panksepp, J., 1998. Affective Neuroscience: The Foundations of Human and Animal Emotions. Oxford University Press, New York.

Pessoa, L., Japee, S., Ungerleider, L.G., 2005. Visual awareness and the detection of fearful faces. Emotion 5, 243–247.

Phelps, E.A., Anderson, A.K., 1997. Emotional memory: what does the amygdala do? Curr. Biol. 7, 311–314.

Pizzagalli, D., Regard, M., Lehmann, D., 1999. Rapid emotional face processing in the human right and left brain hemispheres: an ERP study. Neuroreport 10, 2691–2698.

Pizzagalli, D.A., Greischar, L.L., Davidson, R.J., 2003. Spatio-temporal dynamics of brain mechanisms in aversive classical conditioning: high-density event-related potential and brain electrical tomography analyses. Neuropsychologia 41, 184–194.

Pourtois, G., Grandjean, D., Sander, D., Vuilleumier, P., 2004. Electrophysiological correlates of rapid spatial orienting towards fearful faces. Cereb. Cortex 14, 619–633.

Pourtois, G., Thut, G., Grave de Peralta, R., Michel, C., Vuilleumier, P., 2005. Two electrophysiological stages of spatial orienting towards fearful faces: early temporo-parietal activation preceding gain control in extrastriate visual cortex. Neuroimage 26, 149–163.

Purkis, H.M., Lipp, O.V., 2007. Automatic attention does not equal automatic fear: preferential attention without implicit valence. Emotion 7, 314–323.

Rossion, B., Jacques, C., 2008. Does physical interstimulus variance account for early electrophysiological face sensitive responses in the human brain? Ten lessons on the N170. Neuroimage 39, 1959–1979.

Rossion, B., Delvenne, J.F., Debatisse, D., Goffaux, V., Bruyer, R., Crommelinck, M., Guerit, J.M., 1999. Spatio-temporal localization of the face inversion effect: an event-related potentials study. Biol. Psychol. 50, 173–189.

Rossion, B., Gauthier, I., Tarr, M.J., Despland, P., Bruyer, R., Linotte, S., Crommelinck, M., 2000. The N170 occipito-temporal component is delayed and enhanced to inverted faces but not to inverted objects: an electrophysiological account of face-specific processes in the human brain. Neuroreport 11, 69–74.

Rossion, B., Curran, T., Gauthier, I., 2002. A defense of the subordinate-level expertise account for the N170 component. Cognition 85, 189–196.

Rossion, B., Joyce, C.A., Cottrell, G.W., Tarr, M.J., 2003. Early lateralization and orientation tuning for face, word, and object processing in the visual cortex. Neuroimage 20, 1609–1624.

Sah, P., Faber, E.S., Lopez De Armentia, M., Power, J., 2003. The amygdaloid complex: anatomy and physiology. Physiol. Rev. 83, 803–834.

Schiano, D.J., Ehrlich, S., Rahardja, K., Sheridan, K., 2000. Measuring and modeling facial affect. Behav. Res. Methods Instrum. Comput. 32, 505–514.

Schupp, H.T., Öhman, A., Junghöfer, M., Weike, A.I., Stockburger, J., Hamm, A.O., 2004. The facilitated processing of threatening faces: an ERP analysis. Emotion 4, 189–200.

Schupp, H.T., et al., 2007. Selective visual attention to emotion. J. Neurosci. 27, 1082–1089.

Schyns, P.G., Petro, L.S., Smith, M.L., 2007. Dynamics of visual information integration in the brain for categorizing facial expressions. Curr. Biol. 17, 1580–1585.

Stekelenburg, J.J., de Gelder, B., 2004. The neural correlates of perceiving human bodies: an ERP study on the body-inversion effect. Neuroreport 15, 777–780.

Sokolov, E.N., Boucsein, W., 2000. A psychophysiological model of emotional space. Integr. Physiol. Behav. Sci. 35, 81–119.

Soussignan, R., 2002. Duchenne smile, emotional experience, and autonomic reactivity: a test of the facial feedback hypothesis. Emotion 2, 52–74.

Streit, M., Dammers, J., Simsek-Kraues, S., Brinkmeyer, J., Wölwer, W., Ioannides, A., 2003. Time course of regional brain activations during facial emotion recognition in humans. Neurosci. Lett. 342, 101–104.

Sugase, Y., Yamane, S., Ueno, S., Kawano, K., 1999. Global and fine information coded by single neurons in the temporal visual cortex. Nature 400, 869–873.

Vuilleumier, P., Richardson, M.P., Armony, J.L., Driver, J., Dolan, R.J., 2004. Distant influences of amygdala lesion on visual cortical activation during emotional face processing. Nat. Neurosci. 7, 1271–1278.

Wong, J.H., Peterson, M.S., Thompson, J.C., 2008. Visual working memory capacity for objects from different categories: a face-specific maintenance effect. Cognition 108, 719–731.