
Emotion Recognition in Autism Spectrum Disorder: Does Stylization Help?

Marc Spicker 1   Diana Arellano 2   Ulrich Schaller 3   Reinhold Rauh 3   Volker Helzle 2   Oliver Deussen 1

1 University of Konstanz, Germany   2 Filmakademie Baden-Württemberg, Germany

3 University of Freiburg, Germany

Figure 1: The emotion happiness shown by a female virtual character rendered realistically (left) and in different stylized variants.

Abstract

We investigate the effect that stylized facial expressions have on the perception and categorization of emotions by participants with high-functioning Autism Spectrum Disorder (ASD) in contrast to two control samples: one with Attention-Deficit/Hyperactivity Disorder (ADHD), and one with neurotypically developed peers (NTD). Real-time Non-Photorealistic Rendering (NPR) techniques with different levels of abstraction are applied to stylize two animated virtual characters performing expressions for six basic emotions. Our results show that the accuracy rates of the ASD group were unaffected by the NPR styles and reached about the same performance as for the characters with realistic-looking appearance. This effect, however, was not seen in the ADHD and NTD groups.

Keywords: facial animation, emotion recognition, non-photorealistic rendering, autism spectrum disorder

Concepts: • Computing methodologies → Animation; Non-photorealistic rendering; • Applied computing → Psychology

1 Introduction

The ability to perceive emotions and other affective traits from human faces is considered the result of a “social training” that occurs from early childhood and develops during adolescence and adulthood. However, individuals with Autism Spectrum Disorders (ASD) present difficulties not only in judging someone’s emotions from their facial expressions [Kennedy and Adolphs 2012], but also in face processing in general [Harms et al. 2010]. There is little consensus regarding the causes of these impairments [Uljarevic and Hamilton 2013][Lozier et al. 2014], but there are indications that an atypical local-oriented strategy while processing faces might be one reason [Behrmann et al. 2006][Deruelle et al. 2008].

In this paper we present a study that investigates the differences in perception and categorization of abstracted emotional facial expressions in virtual characters among children and adolescents with ASD, their Neuro-Typically Developed (NTD) peers, and those with Attention-Deficit/Hyperactivity Disorder (ADHD). We decided to use facial expressions with a variable amount of detail, and therefore a varying information load [Kyprianidis and Kang 2011], to assess the impact on emotion recognition of the atypical face processing in individuals with ASD, in comparison with their NTD and ADHD peers. Thus we formulated two hypotheses:

a) For stimuli with realistic-looking appearance, the NTD group should have higher accuracy rates (percentage of correctly identified emotions) than the ADHD group, which in turn should be higher than those of the ASD group.

b) For all styles (realistic-looking and NPR rendered stimuli), we presume that stylization and abstraction affect accuracy rates in ASD differently than in ADHD and NTD, due to an atypical face processing in ASD that should be sensitive to features of the facial stimuli.

For this study, we carried out the NPR-DECT test, an adaptation of the “Dynamic Emotion Categorization Test” (DECT) [Rauh and Schaller 2009]. In contrast to the DECT, which only provided realistic-looking emotional facial expressions, the NPR-DECT relies on Non-Photorealistic Rendering (NPR) algorithms to generate stylized facial expressions. Its real-time capabilities allow interactive parametrization (e.g. speed, intensity, or abstraction style), enabling the experimenters to manipulate the stimuli. Moreover, it opens possibilities for future interactive applications (e.g. head and gaze animation controlled by the user’s gaze data, or stylization methods for real video streams).

The results showed that the NTD group performed clearly better than the ASD group, even when categorizing abstracted emotions. An interesting observation is the lack of preference for any NPR style by participants in the ASD group. This is consistent with other recent research, which showed that different levels of detail in avatars elicited roughly equivalent verbal and non-verbal social behavior in children with ASD [Carter et al. 2016].


2 Background

Previous studies have suggested that individuals with ASD process faces and decode facial expressions differently than NTD individuals [Harms et al. 2010]. Two theoretical frameworks have addressed these differences. One is Weak Central Coherence (WCC), which postulates a deficit in global processing and a detail-focused cognitive style [Happe and Frith 2006]. The other is the Enhanced Perceptual Functioning theory (EPF), an alternative to WCC that re-asserts the principle of locally oriented and low-level perception, without making assumptions about the quality and quantity of global processing [Mottron et al. 2006]. Rosset et al. [2008] compared the processing of emotional expressions of real faces, human cartoon faces and non-human cartoon faces in children with ASD to two groups of NTD children, finding that those with ASD relied on a local-oriented strategy. This atypical local strategy has also been reported by Behrmann et al. [2006] and Deruelle et al. [2008]. Pelphrey et al. [2002] analyzed visual scan paths of five high-functioning ASD and five NTD adult males as they viewed photographs of human facial expressions. They found erratic and disorganized scan paths in the ASD participants, and noticed that they spent most of the time viewing non-feature areas of the faces and a smaller percentage of time examining core features such as the nose, mouth, and, especially, the eyes. These findings have also been supported by other works ([Kirchner et al. 2011], [Valla et al. 2013]).

Virtual Characters

A number of studies have confirmed the advantages of using virtual characters (VC) in research on autism. The DECT test [Rauh and Schaller 2009] assessed the feasibility of using real-time animations of realistic-looking virtual characters by contrasting them with videos of human actors performing emotional facial expressions. It showed that the recognition rates using recorded humans or VCs were highly correlated. Tartaro and Cassell [2008] suggested that collaborative narratives with virtual peers may offer a structured setting for using contingent discourse. Georgescu et al. [2014] reviewed previous research done with virtual characters and virtual reality environments (VRE). They concluded that these provide great value for experimental paradigms in social cognition, specifically those related to non-verbal behavior and perception in participants with high-functioning autism, because they allow researchers to grasp the full extent of the social world in a controlled manner.

Stylization

Regarding visual representation, recent studies with autistic children have used VCs as tutors or play companions, either with cartoony styles (e.g. [Alcorn et al. 2011], [Alves et al. 2013], [Serret et al. 2014]) or more realistic-looking ones (e.g. [Baron-Cohen et al. 2009], [Milne et al. 2011]). To better understand the differences in style representation and perception of VCs, other studies have sought to identify the elements that produce a more appealing, realistic or believable VC. For instance, Grawemeyer et al. [2012] developed a pedagogical agent with the aid of four young people with ASD, resulting in 2D characters drawn with thin lines. McDonnell et al. [2012] investigated the effects of typical rendering styles in CG productions. Their results showed that participants, all NTDs, were so focused on the given task that the characters’ appearance was not noticed. Hyde et al. [2013] studied how rendering style and facial motion affect impressions of character personality. They found that slightly exaggerated realistic-looking characters appeared more likeable and intelligent than slightly exaggerated cartoon characters. Zell et al. [2015] observed that shape in stylized characters is the main contributor to perceived realism, whereas material mainly affects perceived appeal, attractiveness and eeriness. Regarding the intensity of facial expressions (anger, happiness, sadness, surprise and neutral), shape was the main factor, while material had no significant influence. Makarainen et al. [2015] studied the effects of realism and magnitude of facial expressions, showing that, contrary to what was expected, there are cases where stimuli rated high in strangeness produced a positive emotional response. More recently, Carter et al. [2016] found that changing the visual complexity of avatars (video, computer-generated, and cartoon) did not significantly affect any social interaction behaviors of children with ASD.

ADHD

As far as we could observe, there is a gap between studies of facial categorization with participants with ASD in comparison to other clinical groups like ADHD, and studies of facial perception of stylized characters with NTD participants. We included the ADHD group as a clinical comparison group because previous studies tend to indicate that these children have difficulties with facial recognition compared to NTDs, particularly in emotion recognition [Collin et al. 2013], and (slightly) better recognition rates than the ASD group [Bora and Pantelis 2015]. The ADHD group also served as a control group to confirm that the pattern of impairments is specific to the ASD group, and not due to psychopathology in general.

3 Visual Abstraction

Realistic representations, in which the highest level of detail is generally preferred, usually contain more information than is necessary to transmit the intended message. Therefore, artists typically remove details and use abstraction for a more effective visual communication [Kyprianidis and Kang 2011]. Abstraction is a continuous process leading from the depiction of a specific thing to a more general concept, as depicted in Figure 2.

Figure 2: Different levels of abstraction for the representation of a specific thing (a person), describing an abstract concept (a face and an emotion) [McCloud 1994].

Non-Photorealistic Rendering (NPR) achieves a variation in the level of abstraction, adapting images to “focus the viewer’s attention” [Gooch and Gooch 2001]. It attempts to model the physical properties of an artistic medium such as watercolor, oil, or pen and ink. Research on NPR therefore aims at the automatic creation of artistic images with a previously defined communication goal. We use the abstraction property to reduce the information load when looking at the characters’ faces, conveying the presented information more efficiently. The chosen styles meet the following criteria:

• Large visual distance between each style to cover a large portion of the NPR spectrum.

• Intrinsic possibility to vary the level of abstraction.

• Run at interactive frame rates and are temporally coherent.

Each algorithm was implemented as a multi-pass rendering process in OpenGL to ensure real-time capability. Table 1 lists the complete parameter set used for each algorithm and abstraction level in the experiments. For a more detailed description of the parameters and their usage, please refer to the original contributors. We used a resolution of 1280 × 1024 throughout our experiments. Figure 3 shows the virtual characters used in the experiment rendered with a realistic appearance: Hank (elderly male) and Nikita (young female).

3.1 Image Abstraction

This style is inspired by paintings and painting-like images, where the absence of fine-grained texture details and an increased sharpness of edges are two relevant visual characteristics [Papari et al. 2007]. To reproduce the style, our algorithm is based on Kyprianidis et al. [2009] and Kyprianidis and Kang [2011], which use directional information from the input image to guide a filtering that removes small details but keeps region boundaries intact. A generalization of flow-based, anisotropy-guided Kuwahara filtering is used to blur similar regions, while leaving local shapes and structures of the input intact. First, a rotationally symmetric derivative filter is used to approximate the local structure, the structure tensor. In the next step, these tensors are Gaussian smoothed to avoid artefacts due to noise or very small features. The eigenvectors of the smoothed tensors are used to determine the local orientations and anisotropy. As a last step, an ellipse is defined whose major axes are aligned to the local eigenvectors and scaled according to the eigenvalues. This ellipse is divided into 8 radial regions, and the colors of each region are summed up and then weighted by a Gaussian kernel within the elliptical area. The final output is the summed-up result over all 8 regions. Two parameters give the user control over the abstraction of the algorithm. One is the filter radius, which controls the size of features to be filtered out: choosing a large radius blurs out larger features. The second parameter controls the sharpness of the output by serving as the exponent in the weighting calculation of the elliptic subregions. Figure 4 shows the image abstraction.
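
A heavily simplified sketch of the sector-based filtering described above may make the pipeline easier to follow. For brevity the structure-tensor stage is omitted, so the sectors here are circular rather than anisotropic ellipses; the parameters r and q mirror the roles of the filter radius and sharpness exponent listed for Image Abstraction in Table 1. This is an illustrative Python/NumPy sketch assuming a float RGB image in [0, 1], not the authors' multi-pass OpenGL implementation, and it is far too slow for real-time use.

```python
import numpy as np

def sector_kuwahara(img, r=6, q=6, sectors=8):
    """Simplified sector-based Kuwahara filter with circular sectors."""
    h, w, c = img.shape
    out = img.copy()  # border pixels keep their original color
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    dist = np.hypot(ys, xs)
    angle = np.arctan2(ys, xs) % (2 * np.pi)
    sector_id = ((angle / (2 * np.pi)) * sectors).astype(int) % sectors
    gauss = np.exp(-dist ** 2 / (2 * (r / 2.0) ** 2)) * (dist <= r)
    for y in range(r, h - r):
        for x in range(r, w - r):
            win = img[y - r:y + r + 1, x - r:x + r + 1]
            num, den = np.zeros(c), 0.0
            for s in range(sectors):
                wgt = gauss * (sector_id == s)
                wsum = wgt.sum()
                mean = (win * wgt[..., None]).sum(axis=(0, 1)) / wsum
                var = (((win - mean) ** 2) * wgt[..., None]).sum() / wsum
                # Sharpness exponent q: low-variance (flat) sectors dominate,
                # which blurs similar regions but preserves boundaries.
                alpha = 1.0 / (1.0 + var ** (q / 2.0))
                num += alpha * mean
                den += alpha
            out[y, x] = num / den
    return out
```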

3.2 Coherent Line Drawing (CLD)

Line drawing effectively conveys shapes and outlines to the viewer with simple primitives: lines. Despite the vast amount of computer vision algorithms for line extraction and edge detection, many of them suffer from noisy inputs and produce incoherent outputs, such as line fragments. We use the algorithm of Kang et al. [2007], which creates coherent and artistic-looking lines. The idea behind it, which is closely related to line integral convolution, is to use the directional information of the local image structure to guide an anisotropic difference-of-Gaussians (DoG) filtering. The directional information is extracted from the input image by creating an edge tangent flow field. The DoG filtering collects filter responses along the flow to determine the local edge strength, creating coherent lines and suppressing noise. Different levels of abstraction are achieved by varying the DoG filter kernel size, the amount of blur during the edge tangent field construction, and the integration length along the flow. Figure 5 shows results of the coherent line drawing algorithm.
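
The sketch below condenses this pipeline; it is an illustration, not the full algorithm of Kang et al. In particular the edge tangent flow is crudely approximated by Gaussian-smoothing the gradient field, and the smooth thresholding of the original is reduced to a hard one. The names sigma1, sigma3 and tau loosely mirror the CLD parameters in Table 1.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def coherent_lines(gray, sigma1=1.0, sigma3=4.0, tau=0.98, steps=5):
    """Flow-guided DoG line extraction, loosely after Kang et al. [2007]."""
    # 1. Crude edge tangent flow: smooth the gradient field and rotate it
    #    by 90 degrees so vectors run along edges rather than across them.
    gy, gx = np.gradient(gray)
    tx = gaussian_filter(-gy, sigma3)
    ty = gaussian_filter(gx, sigma3)
    norm = np.hypot(tx, ty) + 1e-8
    tx, ty = tx / norm, ty / norm
    # 2. DoG response (isotropic approximation of the cross-flow filter);
    #    tau weights the surround Gaussian.
    dog = gaussian_filter(gray, sigma1) - tau * gaussian_filter(gray, 1.6 * sigma1)
    # 3. Accumulate the response along the flow to suppress noise and
    #    connect fragments into coherent lines (cf. line integral convolution).
    h, w = gray.shape
    py, px = np.mgrid[0:h, 0:w].astype(float)
    acc = np.zeros_like(gray)
    for _ in range(steps):
        acc += map_coordinates(dog, [py, px], order=1)
        sy = map_coordinates(ty, [py, px], order=1)
        sx = map_coordinates(tx, [py, px], order=1)
        py, px = py + sy, px + sx
    # 4. Negative accumulated response becomes a black line on white.
    return np.where(acc < 0, 0.0, 1.0)
```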

Figure 3: The male (Hank) and female (Nikita) characters used for the experiments in a neutral pose and rendered realistically.

Figure 4: Image Abstraction (left: medium, right: high abstraction) applied on a male and female character.

3.3 Pencil Drawing

Pencil drawing is one of the most fundamental techniques in visual arts to abstract human perception of natural scenes. By converting input scenes into just lines and shading, a great number of details are removed, while the object boundaries and the plasticity of the rendered objects are kept by preserving their shading. We based our implementation on the work of Lu et al. [2012], which combines two common steps in the human drawing process: pencil sketching to express the general structure of the scene, and tone drawing for shading. The method begins by computing the image’s gradients as a simple forward difference and clustering them according to discrete slopes with eight different angles. The gradient direction with maximal magnitude for each pixel is determined, and a kernel with this direction is used to create the pencil drawing line. The tonal value is estimated from the input image by converting it into a grayscale image and applying an empirically obtained heuristic model defined by three components. The tonal value can either be used directly as brightness to create a grayscale image, or as the luminance value in the YUV color space for colorized results, providing a simple means of abstraction. In addition, the length of the created lines in the pencil sketch can be controlled by the directional kernel size. Figure 6 shows the results of the pencil drawing.
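
A compact sketch of the two stages described above follows. The directional classification and stroke convolution follow the idea of Lu et al. [2012]; the three-component tone model is replaced here by a simple gamma-like remapping, so this is an illustrative approximation only. The default kernel_size is also much smaller than the 200/120 values in Table 1, which refer to the authors' implementation at full stimulus resolution.

```python
import numpy as np
from scipy.ndimage import convolve

def line_kernel(size, angle):
    """A one-pixel-wide line through the kernel center at a given angle."""
    k = np.zeros((size, size))
    c = size // 2
    for t in np.linspace(-c, c, 4 * size):
        y = int(round(c + t * np.sin(angle)))
        x = int(round(c + t * np.cos(angle)))
        if 0 <= y < size and 0 <= x < size:
            k[y, x] = 1.0
    return k / k.sum()

def pencil_sketch(gray, kernel_size=21, directions=8):
    """Sketch layer: classify gradients into 8 directions, draw strokes."""
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    kernels = [line_kernel(kernel_size, i * np.pi / directions)
               for i in range(directions)]
    # Clustering step: each pixel's gradient magnitude is assigned to the
    # direction whose line kernel responds most strongly.
    responses = np.stack([convolve(mag, k) for k in kernels])
    best = responses.argmax(axis=0)
    # Drawing step: convolving each direction's magnitudes with its kernel
    # elongates the classified gradients into pencil strokes.
    stroke = np.zeros_like(gray)
    for i, k in enumerate(kernels):
        stroke += convolve(np.where(best == i, mag, 0.0), k)
    stroke = 1.0 - stroke / (stroke.max() + 1e-8)  # dark strokes on white
    tone = gray ** 1.5                             # crude stand-in tone layer
    return stroke * tone                           # combine sketch and tone
```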

Style                   Param.            Medium Abs.    High Abs.
Coherent Line Drawing   r                 5              8
                        τ                 0.98           0.97
                        σ1                1              0.7
                        σ3                4              6
Pencil Drawing          Monochrome        no             no
                        Kernel Size       200            120
                        W1 / W2 / W3      11 / 37 / 52   11 / 37 / 52
Image Abstraction       r                 6              10
                        q                 6              16
Watercolor              Noise Scale       2.85           2.85
                        Noise Intensity   45             80

Table 1: Parameters used for the rendering algorithms and their abstraction levels.


Figure 5: Coherent Line Drawing (left: medium, right: high abstraction) applied on a male and female character.

3.4 Watercolor

Watercolor painting is an artistic style that creates the effect of water-dissolved colors on paper or similar surfaces. Two main styles of watercolor painting can be differentiated: wet-on-wet, where colors are added onto colors that have not yet dried on the medium, and wet-on-dry, where the underlying color is already dry. The prominent features of real watercolor images, such as brilliant colors, subtle variation of color saturation, and visibility of the underlying medium, are the result of a complex interaction between pigments, water and the support medium [Bousseau et al. 2006]. We decided to use this style not only for the abstraction it provides, but also because it is used by many individuals with ASD to express themselves through painting, or during therapies that introduce artistic elements [Tataroglu 2013]. Our algorithm is based on the work of Luft et al. [2006], whose approach simplifies the visual complexity and imitates the natural effects of watercolor. The effect of painting wet-on-wet is imitated by varying the pigment granulation according to the underlying paper structure, affecting the water flowing process. This process is simulated by modifying the input image according to a noise-based displacement vector. The level of abstraction of the watercolor algorithm is controlled by varying the noise intensity. Figure 7 shows the results of the watercolor algorithm.
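
A minimal sketch of the noise-based displacement described above is shown below, assuming a smoothed Gaussian noise field; it is not Luft et al.'s implementation, and the noise_scale/noise_intensity arguments only loosely correspond to the Table 1 values, whose units belong to the authors' system.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def watercolor_displace(img, noise_scale=2.85, noise_intensity=0.02, seed=0):
    """Resample the image through a smooth noise-based displacement field."""
    h, w = img.shape[:2]
    rng = np.random.default_rng(seed)
    # Smooth random vector field: the blur radius controls the granulation
    # size, its amplitude (a fraction of image width) the abstraction level.
    dx = gaussian_filter(rng.standard_normal((h, w)), noise_scale)
    dy = gaussian_filter(rng.standard_normal((h, w)), noise_scale)
    amp = noise_intensity * w
    dx *= amp / (np.abs(dx).max() + 1e-8)
    dy *= amp / (np.abs(dy).max() + 1e-8)
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    out = np.empty_like(img)
    for c in range(img.shape[2]):   # displace each color channel alike
        out[..., c] = map_coordinates(img[..., c], [ys + dy, xs + dx],
                                      order=1, mode='nearest')
    return out
```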

4 Experiment

A total of 62 participants (all male; ASD is diagnosed about four times more often in males than in females according to the American Psychiatric Association [2013]) with an age range between 13.0 and 18.0 years and an IQ of at least 85 took part in the experiment. 24 of them were diagnosed with ADHD, 16 with high-functioning ASD, and 22 were neurotypically developed (NTD) peers. This last group was recruited from local schools and showed no indication of any psychiatric disorder according to screening questionnaires. All subjects were diagnosed according to ICD-10 criteria for ADHD (F90.0, F90.1) and ASD (F84.0, F84.1, F84.5), respectively. In particular, all patients with ASD were diagnosed based on the “gold standard”, the Autism Diagnostic Observation Schedule (ADOS-G) and the Autism Diagnostic Interview-Revised (ADI-R). All participants had normal or corrected-to-normal vision and were compensated

Figure 6: Pencil Drawing (left: medium, right: high abstraction) applied on a male and female character.

Parameter     #   Values
Character     2   Male, Female
Emotion       6   Anger, Disgust, Fear, Happiness, Sadness, Surprise
Style         5   Original, Image Abstraction, Coherent Line Drawing, Pencil Drawing, Watercolor
Abstraction   2   Medium, High

Table 2: Parameter space and values used in the experiments.

for their time. The study protocol was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments. To examine the ability of emotion categorization in dynamic facial stimuli, we conducted the NPR-DECT, which included stylized representations of the characters. The test was developed in the Frapper framework1, which provides an interface in which psychologists or experimenters can interactively guide participants through the test sessions. The stimuli consisted of 108 animations compliant with the Facial Action Coding System (FACS) [Ekman et al. 2002], combining 2 characters (male and female, see Figure 3), 6 basic emotions (anger, disgust, fear, happiness, sadness, and surprise), 5 styles (original, image abstraction, coherent line drawing, pencil drawing, watercolor), and 2 abstraction levels (medium and high). For the original, or realistic-looking style, no abstraction levels exist. The 2 characters were rigged using the Adaptable Facial Setup (AFS) [Helzle et al. 2004], a tool-set that relies on a complete motion-capture-based library of deformations, generating high-quality, natural and non-linear deformations. Once the characters were finished, they were exported into Frapper. The facial movements are described in terms of Action Units (AUs), providing a parameterizable way of creating animated facial expressions, which could then be translated to any other character with a similar FACS-based rig. In order to keep the number of animations within a reasonable range, the intensity of emotions and the speed of animation were kept constant across all animations in this experiment. Table 2 gives an overview of each parameter and the associated possible values.
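
The stimulus count follows directly from Table 2, since the original style has no abstraction levels: 2 characters x 6 emotions x (1 + 4 x 2) style variants = 108. The short sketch below enumerates the parameter space and confirms the total; it is purely illustrative bookkeeping.

```python
from itertools import product

characters = ["Hank", "Nikita"]
emotions = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
npr_styles = ["Image Abstraction", "Coherent Line Drawing",
              "Pencil Drawing", "Watercolor"]
# The realistic original has no abstraction levels; each NPR style has two.
styles = [("Original", None)] + list(product(npr_styles, ["medium", "high"]))

stimuli = [(c, e, s, a) for c, e, (s, a) in product(characters, emotions, styles)]
assert len(stimuli) == 108   # 2 x 6 x (1 + 4 x 2)
```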

The NPR-DECT had an average duration of 25 minutes. One trial consisted of a fixation cross presented for 0.5s, followed by a character’s animation displaying a basic emotion rendered in one of the

1 research.animationsinstitut.de/frapper/


Figure 7: Watercolors (left: medium, right: high abstraction) applied on a male and female character.

distinct styles. Once the animation was finished, a response screen with the six possible basic emotions was shown until the participant pressed a key on the numeric keypad corresponding to an emotion name (e.g. 1-happiness, 2-sadness, 3-anger, 7-fear, 8-disgust and 9-surprise). Trials were presented in a pseudo-randomized order, and the association between numeric keys and emotion names was counterbalanced across participants to avoid order effects. Additionally, the participants were asked to rate the recognizability and likeability of the different combinations of character, rendering style, and abstraction level on a scale from 1 (very good) to 7 (very bad).

In the following we will refer to the different rendering styles by their abbreviations: Original with realistic appearance (Ori.), Image Abstraction (IA), Coherent Line Drawing (CLD), Pencil Drawing (PD), Watercolor (W). For the abstraction levels we will use (M) for medium and (H) for high abstraction.
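
One way to realize this randomization is sketched below. The text does not specify the exact counterbalancing scheme, so a per-participant random permutation of the key-to-emotion assignment is assumed here; the function and seed handling are illustrative.

```python
import random

KEYS = [1, 2, 3, 7, 8, 9]   # numeric-keypad keys used for responses
EMOTIONS = ["happiness", "sadness", "anger", "fear", "disgust", "surprise"]

def session_plan(stimuli, participant_seed):
    """Per-participant trial order and emotion-to-key assignment."""
    rng = random.Random(participant_seed)   # reproducible per participant
    trials = list(stimuli)
    rng.shuffle(trials)                     # pseudo-randomized trial order
    keys = KEYS[:]
    rng.shuffle(keys)                       # counterbalanced key mapping
    key_map = dict(zip(keys, EMOTIONS))
    return trials, key_map
```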

5 Results

5.1 NPR Style and Level of Abstraction


Figure 8: Accuracy rates for NPR styles with varying abstraction (M: medium, H: high). Aggregated: emotions, characters. Error bars represent 95% CI.

Concerning accuracy rates, the main analysis consisted of a 9x3 MANOVA with repeated measurements with the factors style parameter (NPR style and level of abstraction) and diagnostic group. The interaction between the factors did not reach significance (F(16, 104) = 1.24, p = .250), whereas both main effects turned out to be significant (style parameter: F(8, 52) = 3.35, p = .004; diagnostic group: F(2, 59) = 7.84, p = .001). In the follow-up analyses, MANOVAs with repeated measurements for each diagnostic group revealed a significant main effect of style parameter for the NTD and ADHD groups (F(8, 14) = 3.28, p = .025; and F(8, 16) = 2.78, p = .039), but not for the ASD group (F(8, 8) = 1.14, p = .427). In sum, an “abstracted faces effect” showed up in the NTD and ADHD groups, but not in the ASD group, which showed an indifference to the stylizations (hypothesis b). Accuracy rates aggregated across emotions and virtual characters are displayed in Figure 8. It can be observed that each NPR rendering style had a lower accuracy rate than the realistic-looking counterpart (Ori.). For the ASD group, accuracy rates were not affected by rendering styles and stayed around their total mean of 55.4%. This effect is not seen in the NTD group, and to some extent not in the ADHD group either. It is also interesting to note that increasing the abstraction from medium to high has an overall negative effect on the accuracy for the Image Abstraction and Coherent Line Drawing styles, no effect for the Watercolor style, and a positive effect only for the Pencil Drawing. The accuracy rankings between the medium and the high abstraction remained the same for the NTD group, but changed for the other two groups with no obvious pattern. For the ADHD and ASD groups, the only style that performed well throughout was Watercolor. The overall accuracy rates of the NTD group are above those of the two other groups for all styles and abstraction intensities, as formulated in our first hypothesis.
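
The descriptive aggregation behind Figure 8 can be sketched as follows: mean accuracy per diagnostic group and style parameter with a 95% CI using a normal approximation across participants. The column names are assumptions about the trial log, and this snippet does not reproduce the repeated-measures MANOVA reported above.

```python
import pandas as pd

def accuracy_by_style(df):
    """Mean accuracy and 95% CI per group and style, as plotted in Figure 8.

    Expects one row per trial with (assumed) columns: 'participant',
    'group' (NTD/ADHD/ASD), 'style' (Ori., IA (M), ..., W (H)) and
    'correct' (0/1).
    """
    # First average within participants, so the CI reflects
    # between-participant variability rather than trial counts.
    per_subject = (df.groupby(["group", "style", "participant"])["correct"]
                     .mean()
                     .reset_index())
    def ci95(x):
        return 1.96 * x.std(ddof=1) / len(x) ** 0.5
    return (per_subject.groupby(["group", "style"])["correct"]
                       .agg(mean="mean", ci95=ci95))
```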

5.2 Emotions

Figure 9 shows accuracy rates for the 6 basic emotions used in the experiments, for each of the diagnostic groups and NPR styles, to investigate possible relationships. Characters and abstraction levels are aggregated in this case. Looking at the rank ordering of the emotions, it can be seen that happiness has the highest rank with above 90% accuracy, followed by surprise (~80%). Sadness, disgust and anger are always in the mid-ranks and change positions between the test groups (~40-66%). Fear recognition was the lowest in all conditions, with an accuracy near chance level (~10-19%). In particular, for the emotion disgust the ASD group outperformed the ADHD group in all rendering styles. The opposite occurred for sadness in all rendering styles. Leaving out fear due to its low accuracy rates, there is no consistent trend of better performance of one group over another for the remaining emotions, apart from the expected good performance of the NTD group (first hypothesis) and the overall slightly better performance of ADHD over ASD for the realistic stimuli. Regarding the NPR styles, accuracy ratings for the different emotions show that none of them performed consistently better or worse at conveying a single emotion. Styles that perform well for a group (e.g. Watercolor) do so for different emotions, and the accuracy primarily depends on the shown emotion, indicating that the corresponding facial animation outweighs the importance of the NPR style.

5.3 Characters

Figure 10 depicts the accuracy rates for the male and female characters for the different diagnostic groups. The attributes NPR style, abstraction level and emotion are aggregated. It can be seen that the NTD group has the best performance for both characters (first and second hypothesis), while the other two groups have a similar, but worse, performance. For both the male and female character the performance difference is about the same (~14.5%). Regardless of the group, using the female character results in a higher accuracy.


Figure 9: Accuracy rates for each of the 6 emotions and their respective rendering styles. Aggregated: characters and abstraction levels. Error bars represent 95% CI.

Figure 10: Accuracy rates for the male and female character depending on test group. Aggregated: emotions, NPR styles, abstraction levels. Error bars represent 95% CI.

5.4 Recognizability and Likeability

Table 3 shows the results of participants rating the recognizability and likeability of each NPR style, abstraction level, and virtual character (VC): male (M) and female (F). Ratings range from 1 (very good) to 7 (very bad). Concerning recognizability, a 9x2x3 MANOVA with repeated measurements revealed a tendency for the interaction of style parameter (NPR style and abstraction level) and character (F(8, 51) = 1.96, p = .072), and a significant main effect of style parameter (F(8, 51) = 12.93, p < .0001). Regarding likeability, the measurements again revealed a significant interaction of style parameter and character (F(8, 51) = 2.44, p = .026), and a significant main effect of style parameter (F(8, 51) = 9.20, p < .0001). This indicates that both recognizability and likeability ratings were reliably influenced by the style parameter and, additionally, by the underlying character. To elucidate the impact of the different NPR styles and the two levels of abstraction, an additional 4x2x2x3 MANOVA with repeated measurements was conducted. For recognizability, it revealed a significant interaction of NPR style and character (F(3, 56) = 3.26, p = .028), and also significant main effects of NPR style (F(3, 56) = 27.06, p < .0001) and level of abstraction (F(1, 58) = 38.57, p < .0001). For likeability, significant interactions were found between character and NPR style (F(3, 56) = 3.64, p = .018) and between character and level of abstraction (F(1, 58) = 8.38, p = .005). Significant main effects were obtained for NPR style (F(3, 56) = 17.71, p < .0001) and level of abstraction (F(1, 58) = 425.19, p < .0001). The resultant pattern corroborates and refines the findings from the MANOVA above, pointing out that recognizability and likeability were influenced by NPR style and level of abstraction (higher levels led to lower recognizability ratings), and both were differentially affected by the NPR style depending on the VC. Interestingly, membership in a diagnostic group seems to have no influence on the ratings.

6 Conclusion

We investigated the differences between neuro-typically developed (NTD) children and adolescents and those with ADHD and high-functioning ASD when categorizing stylized emotional facial expressions. Our first hypothesis was corroborated by the higher accuracy rates of the NTD subjects over those with ADHD and ASD. Our second hypothesis was that stylization and abstraction of visual emotion stimuli affect accuracy rates in ASD differently than in ADHD and NTD. In this regard, we obtained valuable insights into the general perception of stylized emotion representations. Analyzing each group separately, the ASD group did not show significant preferences for any NPR style over the realistic one. This could be understood as a lacking “abstracted faces effect”, which indicates the atypical processing of facial emotional stimuli in ASD. For the ADHD group, the medium abstracted CLD and IA styles obtained higher accuracy rates; however, this was not consistent across all emotions, and there was no style for which the ADHD group matched or outperformed the NTD group, in line with our first hypothesis. Surprisingly, the reduction of details in the face due to the stylization did not improve accuracy for the ASD group.

An explanation for these results is provided by Zell et al. [2015], who found that blurring a realistic texture does not significantly reduce realism. This is demonstrated by the IA style, which contributed to higher accuracy rates, at least in the NTD group. Similarly, they observed that material stylization reduces the perceived intensity of expressions, explaining why emotions other than happiness were more difficult to categorize. Our findings regarding the low accuracy rates for categorization of fear, and to some extent for sadness and disgust, are consistent with previous works that suggested that these


Recognizability

Style     VC   NTD    ADHD   ASD    Total
Ori.      M    2.6    2.23   2.46   2.41
          F    2.93   2.18   2.25   2.39
IA (M)    M    2.4    2.41   2.71   2.52
          F    2.47   2.41   2.33   2.39
IA (H)    M    2.67   2.82   2.75   2.75
          F    3      2.68   2.96   2.87
CLD (M)   M    3.4    4.09   3.63   3.74
          F    3.4    4.27   3.25   3.66
CLD (H)   M    3.53   4.86   4.29   4.31
          F    4      4.45   3.21   3.85
PD (M)    M    3.4    4      3.83   3.79
          F    4.33   4      4.17   4.15
PD (H)    M    3.67   4      3.96   3.9
          F    4.73   4.27   4.67   4.54
W (M)     M    3.87   3.68   3.79   3.77
          F    3.6    3.27   3.42   3.41
W (H)     M    4      4.45   4.33   4.3
          F    4.2    4.14   4.13   4.15

Likeability

Style     VC   NTD    ADHD   ASD    Total
Ori.      M    2.87   2.41   2.63   2.61
          F    2.87   2.18   2.29   2.39
IA (M)    M    2.8    2.68   3.13   2.89
          F    2.87   2.64   2.67   2.7
IA (H)    M    3.4    3.05   2.79   3.03
          F    3.33   3.18   3.17   3.21
CLD (M)   M    4      3.95   4      3.98
          F    3.8    3.68   3.42   3.61
CLD (H)   M    3.8    4.45   4.42   4.28
          F    4.33   4      3.29   3.8
PD (M)    M    3.53   3.86   3.92   3.8
          F    4      3.45   4.33   3.93
PD (H)    M    3.67   3.91   3.71   3.77
          F    4.73   4      4.75   4.48
W (M)     M    3.93   3.86   4.17   4
          F    4.2    3.64   3.54   3.74
W (H)     M    4.2    4.55   4.63   4.49
          F    4.07   4.41   4.17   4.23

Table 3: The participants rated recognizability (top) and likeability (bottom) of the rendering styles and virtual characters (VC) from 1 (very good) to 7 (very bad).

emotions are the most challenging to recognize ([Harms et al. 2010], [Alves et al. 2013]). Pelphrey et al. [2002] also observed a deficit among autistic individuals in the identification of fear, and in general in tasks involving recognition of very simple or basic emotions. The recognizability-likeability questionnaire shed light on the perception of the NPR styles among all groups, showing that pencil-based styles (CLD and PD) are harder to recognize and not as likeable as the realistic or more painterly (Watercolor) ones. One conclusion that might be drawn from this is that pencil-based styles enhance, rather than reduce, the level of detail in the face. The better recognition of emotions with the female character is also consistent with former research, where emotions were better recognized on female actors overall ([Battocchi et al. 2005], [Zibrek et al. 2013]). Another factor is the age difference between the characters, with accentuated wrinkles and furrows in the elderly character when using pencil-based styles.

The obtained data is of great value as it delineates guidelines for our future work. First, we need to improve the NPR algorithms to achieve a more uniform level of abstraction that can be compared among the styles. Secondly, the animations should be artistically enhanced to accentuate the emotional meaning (e.g. add head or shoulder movement). We are also creating new characters that are the counterparts of the existing ones (an elderly female and a younger male) to better assess the influence of age and gender on both the NPR abstractions and emotion categorization. Finally, further studies need to be done to detect finer differences in the way the clinical groups perceive and categorize emotions. The fact that no significant difference was found among styles does not necessarily show that the participants were unaware of them. A hypothesis could be that the different stimuli (emotional expression, NPR style, age difference) blocked or dampened the impression of the facial expression, and thus the categorization of the emotions. This would be consistent with the view that NTD individuals have a more holistic processing of faces, while those with ASD tend to have a fragmentary perception of faces, relying more on local facial elements than NTD [Deruelle et al. 2008]. In this sense, future experiments will refine the level of abstraction by selecting particular regions of the face (e.g. eye region, mouth region) based on eye-tracking information.

Acknowledgements

We would like to thank the German Research Foundation (DFG) for financial support within the projects AR 892/1-1, DE 620/18-1, and RA 764/4-1.

References

ALCORN, A., PAIN, H., RAJENDRAN, G., SMITH, T., LEMON, O., PORAYSKA-POMSTA, K., FOSTER, M. E., AVRAMIDES, K., FRAUENBERGER, C., AND BERNARDINI, S. 2011. Social communication between virtual characters and children with autism. In 15th Int. Conf. on Artificial Intelligence in Education.

ALVES, S., MARQUES, A., QUEIROS, C., AND ORVALHO, V. 2013. Lifeisgame prototype: A serious game about emotions for children with autism spectrum disorders. PsychNology Journal 11, 3, 191–211.

AMERICAN PSYCHIATRIC ASSOCIATION. 2013. Diagnostic and statistical manual of mental disorders: DSM-5 (5th ed.). Arlington, VA: American Psychiatric Publishing.

BARON-COHEN, S., GOLAN, O., AND ASHWIN, E. 2009. Can emotion recognition be taught to children with autism spectrum conditions? Philosophical Transactions B, 364, 3567–3574.

BATTOCCHI, A., PIANESI, F., AND GOREN-BAR, D. 2005. A first evaluation study of a database of kinetic facial expressions (DaFEx). In Proceedings of ICMI 05, ACM, 214–221.

BEHRMANN, M., THOMAS, C., AND HUMPHREYS, K. 2006. Seeing it differently: visual processing in autism. Trends in Cognitive Sciences 10, 6, 258–264.

BORA, E., AND PANTELIS, C. 2015. Meta-analysis of social cognition in attention-deficit/hyperactivity disorder (ADHD): comparison with healthy controls and autistic spectrum disorder. Psychological Medicine 46, 4, 699–716.

BOUSSEAU, A., KAPLAN, M., THOLLOT, J., AND SILLION, F. 2006. Interactive watercolor rendering with temporal coherence and abstraction. In Proceedings of NPAR 06, ACM.

CARTER, E., HYDE, J., WILLIAMS, D., AND HODGINS, J. 2016. Investigating the influence of avatar facial characteristics on the social behaviors of children with autism. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.

COLLIN, L., BINDRA, J., RAJU, M., GILLBERG, C., AND MINNIS, H. 2013. Facial emotion recognition in child psychiatry: a systematic review. Research in Developmental Disabilities 34, 5, 1505–1520.

DERUELLE, C., RONDAN, C., SALLE-COLLEMICHE, X., BASTARD-ROSSET, D., AND DA FONSECA, D. 2008. Attention to low- and high-spatial frequencies in categorizing facial identities, emotions and gender in children with autism. Brain and Cognition 66, 2, 115–123.

EKMAN, P., FRIESEN, W., AND HAGER, J. 2002. The Facial Action Coding System. Weidenfeld & Nicolson, London, UK.

GEORGESCU, A. L., KUZMANOVIC, B., ROTH, D., BENTE, G., AND VOGELEY, K. 2014. The use of virtual characters to assess and train non-verbal communication in high-functioning autism. Frontiers in Human Neuroscience 8, 807.

GOOCH, B., AND GOOCH, A. 2001. Non-Photorealistic Rendering. A.K. Peters.

GRAWEMEYER, B., JOHNSON, H., BROSNAN, M., ASHWIN, E., AND BENTON, L. 2012. Developing an embodied pedagogical agent with and for young people with autism spectrum disorder. In Intelligent Tutoring Systems, Springer, LNCS, 262–267.

HAPPE, F., AND FRITH, U. 2006. The weak coherence account: Detail-focused cognitive style in autism spectrum disorders. J Autism Dev Disord 36, 1, 5–25.

HARMS, M. B., MARTIN, A., AND WALLACE, G. L. 2010. Facial emotion recognition in autism spectrum disorders: A review of behavioral and neuroimaging studies. Neuropsychol Rev, 20, 290–322.

HELZLE, V., BIEHN, C., SCHLOMER, T., AND LINNER, F. 2004. Adaptable setup for performance driven facial animation. In ACM SIGGRAPH 2004 Sketches, 54.

HYDE, J., CARTER, E. J., KIESLER, S., AND HODGINS, J. K. 2013. Perceptual effects of damped and exaggerated facial motion in animated characters. In FG, IEEE Computer Society, 1–6.

KANG, H., LEE, S., AND CHUI, C. K. 2007. Coherent line drawing. In Proceedings of NPAR 07, ACM, 43–50.

KENNEDY, D., AND ADOLPHS, R. 2012. Perception of emotions from facial expressions in high-functioning adults with autism. Neuropsychologia.

KIRCHNER, J. C., HATRI, A., HEEKEREN, H. R., AND DZIOBEK, I. 2011. Autistic symptomatology, face processing abilities, and eye fixation patterns. J Autism Dev Disord, 41, 158–167.

KYPRIANIDIS, J. E., AND KANG, H. 2011. Image and video abstraction by coherence-enhancing filtering. Computer Graphics Forum 30, 2, 593–602.

KYPRIANIDIS, J. E., KANG, H., AND DOLLNER, J. 2009. Image and video abstraction by anisotropic Kuwahara filtering. Computer Graphics Forum 28, 7, 1955–1963.

LOZIER, L. M., VANMETER, J. W., AND MARSH, A. 2014. Impairments in facial affect recognition associated with autism spectrum disorders: A meta-analysis. Development and Psychopathology 26, 4 Pt 1, 933–945.

LU, C., XU, L., AND JIA, J. 2012. Combining sketch and tone for pencil drawing production. In Proceedings of NPAR 12, 65–73.

LUFT, T., AND DEUSSEN, O. 2006. Real-time watercolor illustrations of plants using a blurred depth test. In Proc. of NPAR 06, 11–20.

MAKARAINEN, M., KATSYRI, J., FORGER, K., AND TAKALA, T. 2015. The funcanny valley: A study of positive emotional reactions to strangeness. In Proceedings of the 19th International Academic Mindtrek Conference, ACM, 175–181.

MCCLOUD, S. 1994. Understanding Comics: The Invisible Art. HarperPerennial.

MCDONNELL, R., BREIDT, M., AND BULTHOFF, H. H. 2012. Render me real? Investigating the effect of render style on the perception of animated virtual humans. ACM Trans Graph 31, 4.

MILNE, M., LUERSSEN, M., LEWIS, T., LEIBBRANDT, R., AND POWERS, D. 2011. Designing and evaluating interactive agents as social skills tutors for children with autism spectrum disorder. In Conversational Agents and Natural Language Interaction: Techniques and Effective Practices. IGI Global, 23–48.

MOTTRON, L., DAWSON, M., SOULIERES, I., HUBERT, B., AND BURACK, J. 2006. Enhanced perceptual functioning in autism: An update, and eight principles of autistic perception. J Autism Dev Disord 36, 1, 27–43.

PAPARI, G., PETKOV, N., AND CAMPISI, P. 2007. Artistic edge and corner enhancing smoothing. IEEE Trans Image Process 16, 10, 2449–2462.

PELPHREY, K. A., SASSON, N. J., REZNICK, J. S., PAUL, G., GOLDMAN, B. D., AND PIVEN, J. 2002. Visual scanning of faces in autism. J Autism Dev Disord 32, 4, 249–261.

RAUH, R., AND SCHALLER, U. 2009. Categorical perception of emotional facial expressions in video clips with natural and artificial actors: A pilot study. Tech. rep., University of Freiburg.

ROSSET, D. B., RONDAN, C., DA FONSECA, D., SANTOS, A., ASSOULINE, B., AND DERUELLE, C. 2008. Typical emotion processing for cartoon but not for real faces in children with autistic spectrum disorders. J Autism Dev Disord 38, 5, 919–925.

SERRET, S., HUN, S., IAKIMOVA, G., LOZADA, J., ANASTASSOVA, M., SANTOS, A., VESPERINI, S., AND ASKENAZY, F. 2014. Facing the challenge of teaching emotions to individuals with low- and high-functioning autism using a new serious game: a pilot study. Molecular Autism 5, 37.

TARTARO, A., AND CASSELL, J. 2008. Playing with virtual peers: Bootstrapping contingent discourse in children with autism. In Proceedings of the 8th International Conference for the Learning Sciences, ICLS'08, 382–389.

TATAROGLU, E. 2013. A special experience in art education: Autism and painting. EJBSS 1, 11, 63–68.

ULJAREVIC, M., AND HAMILTON, A. 2013. Recognition of emotions in autism: a formal meta-analysis. J Autism Dev Disord 43, 7, 1517–1526.

VALLA, J. M., MAENDEL, J. W., GANZEL, B. L., BARSKY, A. R., AND BELMONTE, M. K. 2013. Autistic trait interactions underlie sex-dependent facial recognition abilities in the normal population. Frontiers in Psychology 4, 286.

ZELL, E., ALIAGA, C., JARABO, A., ZIBREK, K., GUTIERREZ, D., MCDONNELL, R., AND BOTSCH, M. 2015. To stylize or not to stylize? The effect of shape and material stylization on the perception of computer-generated faces. ACM Trans. Graph. 34, 6, 184:1–184:12.

ZIBREK, K., HOYET, L., RUHLAND, K., AND MCDONNELL, R. 2013. Evaluating the effect of emotion on gender recognition in virtual humans. In Proceedings of the ACM Symposium on Applied Perception, ACM, SAP '13, 45–49.

